Mark KEP-2568 as implementable. #2654
Hi @vinayakankugoyal. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/cc @randomvariable @fabriziopandini
do we have more comments on this KEP before we mark it as "implementable"?
/cc @PushkarJ
any more comments from you as a SIG Security representative?
i will review this myself again, tomorrow.
@neolit123: GitHub didn't allow me to request PR reviews from the following users: pushkarj. Note that only kubernetes members and repo collaborators can review this PR, and authors cannot review their own PRs. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/milestone v1.22
looks like the PRR file is mandatory as part of CI now?
so this is something that has to be resolved with the PRR owners - e.g. whether we are going to skip it for some KEPs.
at this point github makes this difficult to review, so i will just go through the file and copy / comment.
going back to this. do you see a risk in existing users having a breakage due to this change and depending on their seccomp settings?
i would extend the sentence with "to avoid vulnerabilities."
given this is the best option that we picked (3), how are we going to handle running kubeadm init/join/upgrade in case the set of users / groups already exists on the node? with respect to cleanup, do we expect kubeadm reset to remove these users / groups? i'd say yes. cleanup on "reset" would work for "init/join", but "upgrade" does not have a command that can reverse its state; it's idempotent. i think we might want to have a section in the doc that covers what we do after a set of users / groups already exists on the node - covering how we re-use them and clean them up. you could call this "Reusing user/groups and cleanup" under "Design details". but first you can tell me what you think about this.
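To make the cleanup idea concrete, here is a minimal sketch of how a reset-time cleanup could be structured. The user/group names and the use of `userdel`/`groupdel` are illustrative assumptions, not something the KEP prescribes:

```go
package main

import "fmt"

// Hypothetical names for the per-component service accounts that
// init/join would create; the real names are up to the implementation.
var users = []string{"kubeadm-etcd", "kubeadm-kas", "kubeadm-kcm", "kubeadm-ks"}
var groups = []string{"kubeadm-etcd", "kubeadm-kas", "kubeadm-kcm", "kubeadm-ks"}

// cleanupCommands builds the commands that "kubeadm reset" could run to
// remove the users and groups created during init/join. Building the
// command list separately from executing it keeps the logic testable.
func cleanupCommands(users, groups []string) []string {
	var cmds []string
	for _, u := range users {
		cmds = append(cmds, fmt.Sprintf("userdel %s", u))
	}
	for _, g := range groups {
		cmds = append(cmds, fmt.Sprintf("groupdel %s", g))
	}
	return cmds
}

func main() {
	for _, c := range cleanupCommands(users, groups) {
		fmt.Println(c)
	}
}
```

Since upgrade has no reverse command, reset would remain the single place where these removals happen.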
i would add "hidden behind a feature-gate, until it graduates to GA". it will be feature-gated only until GA. once the FG is dropped it becomes locked to true.
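For context, kubeadm feature gates are toggled through the ClusterConfiguration. Assuming the gate for this KEP ends up named `RootlessControlPlane` (the name here is an assumption for illustration), enabling it during alpha might look like:

```yaml
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
featureGates:
  # Hypothetical gate name; defaults to false until GA, after which the
  # gate is dropped and the behavior becomes locked to true.
  RootlessControlPlane: true
```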
we should indicate that kubeadm would take exclusive ownership of these user/group names and throw errors if it encounters existing names that are not in the expected ID range or that have unexpected permissions.
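The exclusive-ownership check could be sketched along these lines. The reserved ID range (64000-64099) is an illustrative assumption, as is the function shape:

```go
package main

import "fmt"

// Illustrative bounds; the actual reserved ID range is an
// implementation choice for the KEP.
const minID, maxID = 64000, 64099

// validateExisting returns an error when a user bearing one of the
// names kubeadm owns already exists with an ID outside the reserved
// range, since kubeadm takes exclusive ownership of those names and
// must not silently reuse an account it did not create.
func validateExisting(name string, uid int) error {
	if uid < minID || uid > maxID {
		return fmt.Errorf("user %q exists with uid %d outside the reserved range [%d, %d]; refusing to reuse it",
			name, uid, minID, maxID)
	}
	return nil
}

func main() {
	fmt.Println(validateExisting("kubeadm-etcd", 1000))  // out of range: error
	fmt.Println(validateExisting("kubeadm-etcd", 64001)) // in range: no error
}
```

A similar check would presumably also cover group IDs and the permissions on any files owned by these accounts.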
we normally add e2e tests during alpha, not sure i've mentioned that on the other PRs.
we can rephrase this to:
Yes, definitely we should reuse them.
Agnostic on this one. I don't think it's a deal breaker if the users/groups hang around, which probably answers the upgrade question.
No there isn't. The control-plane has had that setting in kube-up for a long time and we have not seen any issues. I'd be extremely surprised if the control-plane components started failing because of a seccompProfile of runtime/default.
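For reference, the setting under discussion is the pod-level `seccompProfile`. A trimmed, illustrative static-pod fragment (names and image tag are placeholders) would look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  securityContext:
    # Use the container runtime's default seccomp profile, the same
    # default kube-up has long applied to the control plane.
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: kube-apiserver
    image: registry.k8s.io/kube-apiserver:v1.22.0
```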
I'm supportive of getting this implementable in 1.22 as soon as pending comments are addressed.
as pointed out, reset does clean all artifacts that kubeadm init/join left on the machine. so we might want to add that for consistency.
Updated, thanks!
Updated, thanks!
Updated, thanks!
Updated, thanks!
Updated, thanks!
I added 2 sections: one for reuse and one for cleanup. Let me know how those look. Thanks!
thank you for the updates. found one minor typo; the rest seems good to me. once this gets approved and LGTM-ed, we will let it sit for a few more days and apply lazy consensus on it - i.e. if nobody else has comments we will merge it as is. i'm proposing the lazy consensus to apply after EOD Tuesday next week. /approve
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: neolit123, vinayakankugoyal The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Yeah that seems reasonable to me. What time zone will we follow?
please squash the commits to 1; you can drop my authorship too.
we usually follow PST, as that is what k8s does. i've notified the SIG mailing list in case more people want to have a look:
/hold cancel
We are now past the deadline. Anyone with the ability to LGTM please do so, so that we can merge. Thanks in advance!
@fabriziopandini - since you are the other approver of the KEP do you want to LGTM?
/lgtm
Sorry, I'm late to this one...
No worries! Thank you for taking a look.