PidLimits promoted to GA #18
Conversation
Thanks! Let's hold this until the feature PR merges. This also means we have to release a new minor version of this library and re-vendor it in k/k.
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: neolit123, SergeyKanzhelev

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/kind feature
@neolit123: The label(s) could not be applied.

In response to this:

> /kind feature

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The PR in k/k got merged. I wonder if I can cancel the hold on a self-authored PR... /hold cancel
@SergeyKanzhelev do you happen to know if there are other planned features from SIG Node related to this repository?
I don't think so, after briefly looking through the code. Even for this change, cutting a release is not critical, since it is not a breaking change. So perhaps this can wait in case something else comes up.
OK, since this library's "release process" is a bit bottlenecked on my availability, I'd prefer we cut the release now and re-vendor in k/k right away to unblock tests if needed(?). We can cut more releases here later for 1.20 if needed. (A sketch of what cutting a release involves follows below.)
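For context, "cutting a release" for a small Go library like this one usually amounts to pushing a new semver tag that Go modules can resolve. A minimal sketch, assuming the release is cut from the default branch; the version number is illustrative, not the actual release:

```sh
# Make sure the release is cut from an up-to-date default branch.
git checkout master
git pull origin master

# Tag a new minor version (v1.2.0 is illustrative only).
git tag -a v1.2.0 -m "Release v1.2.0: PidLimits promoted to GA"

# Push the tag so Go modules can fetch the new version.
git push origin v1.2.0
```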
Here is the new release:
Would you be able to send the PR to re-vendor? Here is an example commit showing how we update the vendor:
It uses the script:
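The re-vendor step in kubernetes/kubernetes is scripted via hack/pin-dependency.sh and hack/update-vendor.sh. A minimal sketch of the usual flow; the module path and version below are illustrative, since the actual links referenced above are not preserved in this transcript:

```sh
# From the root of a kubernetes/kubernetes checkout:

# Pin the dependency to the newly tagged version.
# (Module path and version are illustrative, not this library's actual path.)
./hack/pin-dependency.sh k8s.io/example-library v1.2.0

# Regenerate the vendor/ tree and the go.mod/go.sum files.
./hack/update-vendor.sh

# Commit the result on a branch and open the re-vendor PR.
git checkout -b revendor-example-library
git add .
git commit -m "vendor: update k8s.io/example-library to v1.2.0"
```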
Yes, thank you for the detailed instructions!
Follow-up from kubernetes/kubernetes#94140