pid limiting documentation #23929
Conversation
Deploy preview for kubernetes-io-vnext-staging processing. Building with commit 7b7ed6b https://app.netlify.com/sites/kubernetes-io-vnext-staging/deploys/5f63963c9331a6000839477b
What does this mean? I think it's about imposing a resource limit on Pods, where the resource is the number of host system PIDs that the Pod can use. PIDs are always isolated between Pods. I don't think it's possible for two Pods in the same namespace hierarchy (i.e., both running on one Node with a Linux namespaces-and-cgroups container runtime) to have the same host system PID; that just doesn't make sense.
Thanks for this PR - here are some suggestions on the wording.
- derekwaynecarr
title: PID Limiting
content_type: concept
weight: 10
I recommend setting a higher value, as there are other documents in the policy section that should come first.
I should have read contributing.md. I just copied it from another file. Let me take a look.
This is the doc I found: https://v1-16.docs.kubernetes.io/docs/contribute/style/content-organization/#page-order
I updated the weight for more articles. Was that the right decision, or should I just assign 40 to the new page?
This is isolation from an attack perspective. I agree it's worth re-phrasing. Looking...
Force-pushed from 5a336e4 to 95631e3
@sftim thank you very much for the review and rewrites!
Force-pushed from 95631e3 to 94af350
@tengqm thank you for pointing out all the uncertainties in the language. PTAL - I rephrased.
LGTM other than some nits.
Force-pushed from 94af350 to 13699bc
@kubernetes/sig-node-pr-reviews Hello 👋, 1.20 docs lead here. Can we get a tech review?
A bit more feedback. IMO it'd be fine to merge without addressing that; we can pick those nits up later.
{{< feature-state for_k8s_version="v1.20" state="stable" >}}

Kubernetes allows you to limit the number of process IDs (PIDs) that a {{< glossary_tooltip term_id="Pod" text="Pod" >}} can use.
What do you think about adding a heading for the content starting at line 13? The first heading I see is `Node PID limits`.
I just followed the pattern from other files. I think the title will become an H1.
{{< note >}}
On certain Linux installations, the operating system sets the PIDs limit to a low default,
such as `32768`. Consider raising the value of `/proc/sys/kernel/pid_max`.
Is this referring to something like the kernel's cgroup `pids` controller documentation?

> In order to use the pids controller, set the maximum number of tasks in pids.max (this is not available in the root cgroup for obvious reasons). The number of processes currently in the cgroup is given by pids.current.
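For concreteness, here is a minimal shell sketch of inspecting those cgroup files on a node, assuming a cgroup v1 hierarchy mounted at `/sys/fs/cgroup`; the `kubepods` path and `<pod-uid>` placeholder are illustrative, since the actual slice name depends on the cgroup driver and the pod's UID:

```bash
# Inspect the cgroup v1 "pids" controller for a pod's cgroup.
# The path below is illustrative, not a literal path on every node.
POD_CGROUP=/sys/fs/cgroup/pids/kubepods/pod<pod-uid>

cat "$POD_CGROUP/pids.max"      # configured limit, or "max" if unlimited
cat "$POD_CGROUP/pids.current"  # number of processes currently in the cgroup
```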
It's this tip:
https://www.cyberciti.biz/tips/howto-linux-increase-pid-limits.html
Nothing to do with Kubernetes per se, but relevant if, like most people, you're using Linux nodes.
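As a concrete illustration of that tip, a minimal sketch for checking and raising the system-wide PID limit on a Linux node; the `262144` value and the sysctl.d file name are just examples:

```bash
# Check the current system-wide PID limit (often 32768 by default).
cat /proc/sys/kernel/pid_max

# Raise it on the running system; 262144 is an example value.
sudo sysctl -w kernel.pid_max=262144

# Persist the setting across reboots.
echo 'kernel.pid_max = 262144' | sudo tee /etc/sysctl.d/90-pid-max.conf
```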
Is this something you want to add to the note, or replace it with?
You can configure a kubelet to limit the number of PIDs a given pod can consume.
For example, if your node's host OS is set to use a maximum of `262144` PIDs and
expect to host less than `250` pods, one can give each pod a budget of `1000`
PIDs to prevent using up that node's overall number of available PIDs. If the
What do you think about this edit?
For example, if your node's host OS is set to use a maximum of 262144 PIDs and
you expect to host less than 250 pods, give each pod a budget of 1000
PIDs to prevent using up that node's overall number of available PIDs.
I think that is fine.
It's an operator choice to over-commit or under-commit PIDs.
The main goal with PID limiting per pod is to stop a simple fork bomb, as we don't require pods to request PIDs as a resource, in favor of letting an admin set a per-pod max.
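To make that per-pod budget concrete, here is a minimal sketch of setting the `1000`-PID budget on a kubelet; the config file path is an assumption, and in practice you would typically set the equivalent `podPidsLimit` field in the KubeletConfiguration file rather than pass the flag:

```bash
# Cap every pod on this node at 1000 PIDs.
# The flag form is shown for brevity; the equivalent
# KubeletConfiguration field is `podPidsLimit: 1000`.
# /var/lib/kubelet/config.yaml is an assumed path.
kubelet --config=/var/lib/kubelet/config.yaml --pod-max-pids=1000
```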
Should there be a note that on Windows these flags have no effect? I'm not sure what the best practice is here.
@SergeyKanzhelev it's hard to keep up with the delta between Linux and Windows nodes, but we could put a piece of text that says PID limiting only applies to kubelets running on Linux hosts.
LGTM
Relevant to #23906
I think it fixes the issue, correct? I can add it to the description for auto-closing the issue on PR merge.
This PR targets the v1.20 release. I would hold off on closing #23906 given that the master branch (and live documentation) will not include documentation for this (existing) feature until the v1.20 release. Once this merges, though, we can drop the priority of #23906.
@annajung happy for this to land in dev-1.20?
Yes, that sounds good to me
@annajung: You must be a member of the kubernetes/website-milestone-maintainers GitHub team to set the milestone. If you believe you should be able to issue the /milestone command, please contact your Website milestone maintainers and have them propose you as an additional delegate for this responsibility.
/milestone 1.20
/assign @annajung
https://5f63963c9331a6000839477b--kubernetes-io-vnext-staging.netlify.app/docs/reference/command-line-tools-reference/feature-gates/ also needs updating, to mark the related feature gates as graduated. If you change that, I think this will be good to go.
{{< note >}}
Before Kubernetes version 1.20, PID resource limiting for Pods required enabling
the [feature gate](/docs/reference/command-line-tools-reference/feature-gates/)
I guess it has been enabled by default since 1.14:

| Feature | Default | Stage | Since |
|---|---|---|---|
| `SupportPodPidsLimit` | `true` | Beta | 1.14 |
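For context, a minimal sketch of what enabling this looked like on a pre-v1.20 kubelet; since the gate was beta and on by default from v1.14, setting it explicitly was usually redundant:

```bash
# Explicitly enable per-pod PID limiting on a pre-v1.20 kubelet.
# SupportPodPidsLimit was already on by default from v1.14 onward.
kubelet --feature-gates=SupportPodPidsLimit=true --pod-max-pids=1000
```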
Looks good to me, thanks!
LGTM label has been added. Git tree hash: 2fa56c7877086b8d0a56541e0b8ce078f9bab5ac
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: irvifa
Quickly combined info about PID limiting. It is a follow-up from kubernetes/kubernetes#94140 (comment).
It also requires PR #23883, as it adds the description of the feature flag that this PR refers to.
Related: #23906 (the issue can be closed once this branch is merged back to master).
/sig node
/assign @sftim
/assign @derekwaynecarr