Mark componentstatus as deprecated #93570
Conversation
/cc @smarterclayton
/priority important-longterm
/hold for api-review
@liggitt when will we be able to remove it? (if at all?)
/retest
I'm not sure (a core/v1 API predating the deprecation policy, with no replacement, is sort of unprecedented), but communicating the lack of continued development and the known inconsistent behavior of what is currently there is important.
A Kubernetes provider is not required to follow the configuration necessary to expose this data, nor is this required for all distributions to implement. We have explicitly stated that we would not put this in conformance because conformant distributions may choose not to expose it. Component status should be removed.
/approve
I will leave the hold for another reviewer.
thanks!
/approve
If reversing data flow is a problem re [...] then are we to expect a deprecation of [...]?
YES. (Actually this is really hard, because those are much-used features. The current "plan" is to add the notion of subresources to the aggregator, and then move "model breaking" subresources, such as those, out of the main apiserver into a separate binary, ideally also adding a redirect rather than a proxy mode, so that actual traffic doesn't have to go through the apiserver at all. This isn't staffed or being worked on right now, but if it's something someone wants to work on, come talk to SIG API Machinery...)
Hi, any ideas for an alternative method to check status?
Any example of this would be greatly appreciated. ;-)
edit: Found an etcd check example at https://kubernetes.io/docs/reference/using-api/health-checks/, but it only seems to work starting with Kubernetes 1.20.
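For reference, a minimal sketch of the health-check endpoints that doc describes, assuming `kubectl` access to the cluster (the `exclude` parameter is the part that needs a recent release, per the 1.20 note above):

```sh
# Verbose readiness check of the API server: lists the individual
# checks, including etcd
kubectl get --raw='/readyz?verbose'

# Liveness check that skips the etcd check
kubectl get --raw='/livez?exclude=etcd'
```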
`ComponentStatus` was deprecated a while ago and will be removed at some point: kubernetes/kubernetes#93570. We might add an alternative later.
Previously, the kube_apiserver_controlplane used ComponentStatus to report control plane components' liveness. This has been deprecated in [Kubernetes 1.19](kubernetes/kubernetes#93570) and will be removed at some point in the future. To remediate that, we're following the recommendation in the deprecation notice to use the components' own health check endpoints.
Previously, the kube_apiserver_controlplane used ComponentStatus to report control plane components' liveness. This has been deprecated in [Kubernetes 1.19](kubernetes/kubernetes#93570) and will be removed at some point in the future. To remediate that, we're following the recommendation in the deprecation notice to use the API Server's health endpoint instead. This change also removes the `component` tag in this service check, as it no longer reports separate components, only the API server itself. Per-component service checks will eventually be available through the kube_controller_manager and kube_scheduler checks themselves.
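As a rough sketch of what querying the components' own health check endpoints can look like: the ports below are the current upstream defaults (10257 for kube-controller-manager, 10259 for kube-scheduler) and are an assumption about your deployment:

```sh
# Run on a control plane node; -k because these components
# serve self-signed certificates by default
curl -k https://localhost:10257/healthz   # kube-controller-manager
curl -k https://localhost:10259/healthz   # kube-scheduler
```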
Now componentstatus is nearly at its end. If you got here from Google and you are a Rancher user, you can use the Rancher management API to ask for the status:

```sh
$ kubectl get clusters.management.cattle.io <your-cluster-id> -o json | jq '.status.componentStatuses[] | .name,.conditions[].message'
"controller-manager"
"ok"
"etcd-0"
"{\"health\":\"true\"}"
"etcd-1"
"{\"health\":\"true\"}"
"etcd-2"
"{\"health\":\"true\"}"
"scheduler"
"ok"
```

Or with plain curl:

```sh
$ curl -s -H "Content-Type: application/json" -H "authorization: Bearer <token>" https://<rancher-server>/k8s/clusters/local/apis/management.cattle.io/v3/clusters/<your-cluster-id> | jq '.status.componentStatuses[] | .name,.conditions[].message'
"controller-manager"
"ok"
"etcd-0"
"{\"health\":\"true\"}"
"etcd-1"
"{\"health\":\"true\"}"
"etcd-2"
"{\"health\":\"true\"}"
"scheduler"
"ok"
```
To avoid the warning:

```
W0503 01:07:59.564568 2724335 warnings.go:70] v1 ComponentStatus is deprecated in v1.19+
```

Maybe it is possible to silence the warning, but this resource is not needed now, so let's skip it. [1] kubernetes/kubernetes#93570
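If the call itself still has to run and only the log noise is the problem, one workaround (a sketch, not what the change above does) is to rely on the fact that kubectl prints the deprecation warning to stderr:

```sh
# The warning goes to stderr; the resource listing stays on stdout
kubectl get componentstatuses 2>/dev/null
```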
What type of PR is this?
/kind api-change
/kind deprecation
What this PR does / why we need it:
xref kubernetes/enhancements#553 (comment) and #19570
The current state of this API is problematic: it requires reversing the actual data flow (the API server must call out to its clients), and it is not functional across deployment topologies.
Leaving it in place attracts new attempts to add to it (#74643, #82247) and leads to confusion or bug reports for deployments that do not match its topology assumptions (#93342, #93472). It should be clearly marked as deprecated.
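For context, this is roughly what hitting the deprecated API looks like on a 1.19+ cluster; the exact STATUS and MESSAGE values are illustrative and depend on the topology assumptions discussed above:

```sh
$ kubectl get componentstatuses
Warning: v1 ComponentStatus is deprecated in v1.19+
NAME                 STATUS    MESSAGE   ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   ok
```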
Does this PR introduce a user-facing change?:
/cc @deads2k @lavalamp @neolit123
/sig api-machinery cluster-lifecycle