KEP-2021: Support scaling HPA to/from zero pods for object/external metrics #2022
base: master
Conversation
Welcome @johanneswuerbach!
Hi @johanneswuerbach. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Force-pushed from aeff3ee to d09c208.
Enhancements Lead here, some notes on the current PR:
The dir structure should be: keps/sig-autoscaling/2021-scale-from-zero/ (note: we put the issue number in the title now).
The current file 20200926-scale-from-zero.md should be renamed to README.md, and a kep.yaml should also be added.
More detail can be found here: https://github.com/kubernetes/enhancements/tree/master/keps/NNNN-kep-template
Best,
Kirsten
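For reference, a minimal sketch of the layout and kep.yaml Kirsten describes, based on the template linked above; the field values here are illustrative, not taken from this PR:

```yaml
# Expected layout:
#   keps/sig-autoscaling/2021-scale-from-zero/
#     README.md   (the former 20200926-scale-from-zero.md)
#     kep.yaml
#
# A minimal kep.yaml sketch:
title: Scale from Zero
kep-number: 2021
authors:
  - "@johanneswuerbach"
owning-sig: sig-autoscaling
status: provisional
creation-date: "2020-09-26"
```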
@kikisdeliveryservice thank you, folder structure fixed.
/assign @gjtempleton
I hoped to get the current state documented first, before we start talking about how we could move towards beta. Does that make sense?
I think that's a good plan for now.
@johanneswuerbach @gjtempleton how's this going so far? Can I be of any help?
@jeffreybrowning yes. I assumed we could merge this KEP as-is to document the current state and then iterate on it towards beta. I'll try to present it at the next sig-autoscaling meeting to get some input and discuss next steps, but if you have any feedback I'm happy to incorporate it already.
@johanneswuerbach missed the autoscaling meeting today -- do you have clarity on next steps?
Me too. I assumed those are bi-weekly -- or are the meetings held on-demand?
In all honesty, it would have been my first one -- the work you started on this feature for beta has encouraged me to get involved and help you push this through. It would really help us use HPA with an async job queue to scale down to 0 workers when no tasks are being processed.
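As a concrete illustration of the use case above, a minimal sketch of an HPA that can scale a worker Deployment to zero on an external queue-depth metric, assuming the alpha HPAScaleToZero feature gate is enabled; the metric name, labels, and threshold are hypothetical:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: queue-worker
  minReplicas: 0        # only accepted when the HPAScaleToZero feature gate is on
  maxReplicas: 10
  metrics:
    - type: External
      external:
        metric:
          name: queue_messages_ready   # hypothetical external metric
          selector:
            matchLabels:
              queue: tasks
        target:
          type: AverageValue
          averageValue: "30"           # target backlog per pod
```

With no messages in the queue, the HPA can take the Deployment down to zero replicas; when the external metric rises again, it scales back up. Scaling to zero requires at least one object or external metric, which is exactly what this KEP covers.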
Hey, the meetings are held weekly at 14:00 UTC; you can see more info, including a link to the agenda, if you want to raise it here. I've raised this at a previous meeting, so the community's already aware this work has started, but it would be good to have some more in-depth discussion.
Thanks, I got confused by the mention of biweekly here (https://github.com/kubernetes/community/tree/master/sig-autoscaling#meetings) and assumed the calendar invite was wrong. Will ask upfront next time :-)
Pinging here. How's this enhancement coming along? What are the next steps?
Pinging back before the holidays hit. What are the next steps?
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Anything we, the community, can do to help move this along?
@johnbelamaric could you have a look again, as the author has addressed your comments?
Shouldn't someone re-request a review from @johnbelamaric? Since he's still in the […]
/cc
Has there been any update on this?
/assign @johnbelamaric
Anything I could do here to move this forward?
We finally grew weary of waiting for this and migrated from HPA to KEDA. So far it's worked out pretty well: we've already cut our AWS bill by ~30% (for the EKS cluster nodes), with more to come.
Hi @noahlz! KEDA maintainer here. I would love to hear more about this if you have time. Could you please reach out (email on my GH account, Slack, LinkedIn, whatever) or tell me how I can connect with you?
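For readers comparing the two approaches, a rough sketch of the equivalent scale-to-zero setup with a KEDA ScaledObject; the names, queue URL, and threshold are illustrative, not from @noahlz's setup:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: queue-worker
spec:
  scaleTargetRef:
    name: queue-worker      # the Deployment KEDA manages
  minReplicaCount: 0        # KEDA supports scale-to-zero without a feature gate
  maxReplicaCount: 10
  triggers:
    - type: aws-sqs-queue
      metadata:
        queueURL: https://sqs.us-east-1.amazonaws.com/123456789012/tasks  # illustrative
        queueLength: "30"
        awsRegion: us-east-1
```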
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
It has been nearly 18 months since the KEP was updated after the previous review. I know these comments don't really help, but an update on the progress (or otherwise) of this would be great.
The Kubernetes project currently lacks enough contributors to adequately respond to all PRs. This bot triages PRs according to the following rules:
- After 90d of inactivity, lifecycle/stale is applied
- After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
- After 30d of inactivity since lifecycle/rotten was applied, the PR is closed

You can:
- Mark this PR as fresh with /remove-lifecycle stale
- Close this PR with /close
- Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
/remove-lifecycle stale
Can anyone tell me why this pull request is not getting reviewed?
Would really love to see this feature flag reach beta or GA :)
Enhancement issue: #2021
Rendered version: https://github.com/johanneswuerbach/enhancements/blob/kep-2021-alpha/keps/sig-autoscaling/2021-scale-from-zero/README.md