Make all etcd commands use ETCDCTL_API 3 #5998
Conversation
Hi @EppO. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
"Error: unknown command "cluster-health" /ok-to-test |
https://etcd.io/docs/v3.3.12/upgrades/upgrade_3_4/ cluster-health is an API v2 command; it no longer exists under API v3, which is the default in etcd 3.4.
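For illustration, a minimal before/after of the health check (the v3 command is the closest equivalent I'm aware of):

# API v2 (the default through etcd 3.3): the old health check
ETCDCTL_API=2 etcdctl cluster-health
# API v3 (the default in etcd 3.4): cluster-health is gone; query every
# member from the cluster member list instead
ETCDCTL_API=3 etcdctl endpoint --cluster health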
etcd v3.4.x uses etcd API v3 by default; I need to add support for that alongside the existing code using v2
my plan is to use ETCDCTL_API=3 for all etcdctl commands
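Roughly along these lines; just a sketch, using etcdctl's standard environment variables and the cert paths from the examples below:

# force API v3 (and the TLS material) for every etcdctl call that follows;
# sketch only, paths are the ones Kubespray deploys in the examples below
export ETCDCTL_API=3
export ETCDCTL_CACERT=/etc/kubernetes/ssl/etcd/ca.crt
export ETCDCTL_CERT=/etc/kubernetes/ssl/etcd/server.crt
export ETCDCTL_KEY=/etc/kubernetes/ssl/etcd/server.key
etcdctl endpoint --cluster health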
Missed your reply yesterday, but we ended up saying the same thing :)
Maybe use:
[root@master01 ~]$ ETCDCTL_API=3 etcdctl --cacert /etc/kubernetes/ssl/etcd/ca.crt --cert /etc/kubernetes/ssl/etcd/server.crt --key /etc/kubernetes/ssl/etcd/server.key endpoint --cluster health
https://10.1.2.11:2379 is healthy: successfully committed proposal: took = 22.245747ms
https://10.1.2.10:2379 is healthy: successfully committed proposal: took = 22.403949ms
https://10.1.2.12:2379 is healthy: successfully committed proposal: took = 23.057453ms
[root@master01 ~]$ ETCDCTL_API=3 etcdctl --cacert /etc/kubernetes/ssl/etcd/ca.crt --cert /etc/kubernetes/ssl/etcd/server.crt --key /etc/kubernetes/ssl/etcd/server.key endpoint --cluster status
https://10.1.2.12:2379, 93fb71900fda3e90, 3.4.3, 4.8 MB, false, false, 7, 703702, 703702,
https://10.1.2.10:2379, ba78c505e26a6ec4, 3.4.3, 4.9 MB, false, false, 7, 703702, 703702,
https://10.1.2.11:2379, ee92d3874288ea8f, 3.4.3, 4.8 MB, true, false, 7, 703702, 703702,
[root@master01 ~]$ ETCDCTL_API=3 etcdctl --cacert /etc/kubernetes/ssl/etcd/ca.crt --cert /etc/kubernetes/ssl/etcd/server.crt --key /etc/kubernetes/ssl/etcd/server.key endpoint --cluster status -w table
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| ENDPOINT | ID | VERSION | DB SIZE | IS LEADER | IS LEARNER | RAFT TERM | RAFT INDEX | RAFT APPLIED INDEX | ERRORS |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
| https://10.1.2.12:2379 | 93fb71900fda3e90 | 3.4.3 | 4.8 MB | false | false | 7 | 704652 | 704652 | |
| https://10.1.2.10:2379 | ba78c505e26a6ec4 | 3.4.3 | 4.9 MB | false | false | 7 | 704652 | 704652 | |
| https://10.1.2.11:2379 | ee92d3874288ea8f | 3.4.3 | 4.8 MB | true | false | 7 | 704652 | 704652 | |
+---------------------------+------------------+---------+---------+-----------+------------+-----------+------------+--------------------+--------+
Actually, in terms of parsing, I think JSON output would be best:
[root@master01 ~]$ ETCDCTL_API=3 etcdctl --cacert /etc/kubernetes/ssl/etcd/ca.crt --cert /etc/kubernetes/ssl/etcd/server.crt --key /etc/kubernetes/ssl/etcd/server.key endpoint --cluster health -w json
[{"endpoint":"https://10.1.2.10:2379","health":true,"took":"20.651634ms"},{"endpoint":"https://10.1.2.11:2379","health":true,"took":"21.428639ms"},{"endpoint":"https://10.1.2.12:2379","health":true,"took":"21.314939ms"}] |
When an etcd member is down, there is an error message we can grep for on stderr:
[root@master01 ~]$ etcdctl --endpoints="https://10.1.2.12:2379,https://10.1.2.10:2379,https://10.1.2.11:2379" endpoint --cluster health
{"level":"warn","ts":"2020-04-23T08:55:42.138-0700","caller":"clientv3/retry_interceptor.go:61","msg":"retrying of unary invoker failed","target":"endpoint://client-94557385-ce08-4d26-96bb-4ea6ac5814ec/10.1.2.12:2379","attempt":0,"error":"rpc error: code = DeadlineExceeded desc = context deadline exceeded"}
https://10.1.2.10:2379 is healthy: successfully committed proposal: took = 14.872284ms
https://10.1.2.11:2379 is healthy: successfully committed proposal: took = 16.285592ms
https://10.1.2.12:2379 is unhealthy: failed to commit proposal: context deadline exceeded
Error: unhealthy cluster
Bottom line: an easy solution would be to grep stderr for "is unhealthy" and reverse the existing logic, which looks for a healthy cluster, not an unhealthy one.
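Something along these lines (a sketch; etcdctl also exits non-zero on an unhealthy cluster, so checking the exit code alone would work too):

# fail fast when any member is unhealthy; 2>&1 folds the stderr warnings
# into the stream being grepped (sketch only)
if ETCDCTL_API=3 etcdctl \
    --cacert /etc/kubernetes/ssl/etcd/ca.crt \
    --cert /etc/kubernetes/ssl/etcd/server.crt \
    --key /etc/kubernetes/ssl/etcd/server.key \
    endpoint --cluster health 2>&1 | grep -q 'is unhealthy'; then
  echo "etcd cluster is unhealthy" >&2
  exit 1
fi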
There is a lot of logic at different spots that involves etcdctl. I upgraded everything to API v3 besides the Canal network plugin, which I'm not sure is using the in-cluster etcd instance. I'm not very familiar with Canal, so I'd need some guidance here on whether it uses the same etcd as Kubernetes. Troubleshooting the CI job to understand the error right now.
Since this is still being worked on, I'm marking this for v2.14, but it could also be backported into v2.13.1.
Didn't have time to troubleshoot; there is a lot of "magic" involved with etcd bootstrapping, and I couldn't figure out what is wrong with the host/docker deployment so far. The kubeadm deployment works, though.
The HA recovery test cases are failing. At least we're in Deploy-part3 now.
(force-pushed from 760ea9c to 15c3271)
CI is now passing 🥳
/assign @Miouge1
Well done!
Looks good to me. You can remove the
/cc @LuckySB
/approve |
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: EppO, LuckySB, Miouge1
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/lgtm
* 'master' of https://github.com/kubernetes-sigs/kubespray:
  Upgrade etcd to 3.4.3 (kubernetes-sigs#5998)
  add audit webhook support (kubernetes-sigs#6317)
What type of PR is this?
/kind feature
What this PR does / why we need it:
Kubespray should follow kubeadm's default images list unless there are show-stopper bugs that haven't been fixed upstream.
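For reference, kubeadm's defaults (including the etcd image) can be listed directly; the release number here is just illustrative:

# print the images kubeadm would pull for a given Kubernetes release,
# including the default etcd version (release number is illustrative)
kubeadm config images list --kubernetes-version v1.18.0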
Which issue(s) this PR fixes:
Special notes for your reviewer:
Upgrading an existing cluster from etcd 3.3.x to 3.4.x should be alright in a rolling-update fashion.
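A rough sketch of the per-member verification during such a rolling update (member addresses taken from the examples above):

# upgrade one member at a time; verify it rejoined healthy before moving on
for node in 10.1.2.10 10.1.2.11 10.1.2.12; do
  # ... upgrade the etcd binary/container on $node (omitted) ...
  ETCDCTL_API=3 etcdctl \
    --cacert /etc/kubernetes/ssl/etcd/ca.crt \
    --cert /etc/kubernetes/ssl/etcd/server.crt \
    --key /etc/kubernetes/ssl/etcd/server.key \
    --endpoints "https://${node}:2379" endpoint health || exit 1
done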
Does this PR introduce a user-facing change?: