Implement TemplateNodeInfo for magnum cloudprovider #6890
base: master
Conversation
Welcome @b0e!
Hi @b0e. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
b0e force-pushed from 804b9fc to b3fd828.
@b0e, you have to sign the CLA before the PR can be reviewed.
To check EasyCLA: /easycla
Hi @BigDarkClown @x13n, Fixes #6018. Thank you!
Cloud provider specific changes should be reviewed by dedicated OWNERS. @tghartland, can you take a look? /assign tghartland
@x13n: GitHub didn't allow me to assign the following users: tghartland. Note that only kubernetes members with read permissions, repo collaborators, and people who have commented on this issue/PR can be assigned. Additionally, issues/PRs can only have 10 assignees at the same time. In response to this:
> /assign tghartland
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@tghartland, would you please take a look at this?
Hi, I think I was away for a while when this came in and I must have missed it when I got back. Sorry for the long delay, I'll take a look at it this week.
I had some problems getting a working test environment going, but I've tested scaling up from 0, and it works: it gives the expected number of nodes for the resources of the pending pods and the nodegroup flavor size. While scaling down I saw some odd issues (the master nodegroup going into a bad state), though it might just be more problems with my test environment. I want to take another look at that, but I am travelling from today until Tuesday, so it will have to wait. The diff looks fine; I don't see anything in there that would be causing the scale-down issues, but I need the time to double-check.
Thanks for the quick response. We've been using the patchset for quite a while now and haven't encountered problems with the master NG. But to be fair, we don't use the main Heat stack for scaling and always create another NG. But yeah, Heat does some weird stuff sometimes.
What we have encountered, though, is that on scale-down one node is removed at a time rather than in a batch.
I've managed to get my devstack environment set to the stable Magnum version, and that doesn't have the master nodegroup issue I saw when scaling down. About your point on batching, the core autoscaler makes multiple requests for single nodes when doing a scale-down, and Magnum is able to handle those requests in parallel; I see it deleting both nodes at the same time.
In the first implementation of the magnum provider, it did wait and try to batch those up into one request to OpenStack, but that was before the Magnum resize API was available. Once Magnum could handle multiple requests at the same time, it was a lot easier to let it do so, as the batching code in the provider was very awkward.
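For readers curious what that looks like in code, here is a minimal sketch of the "forward each request to Magnum" approach, not the provider's actual implementation; the magnumClient interface, its ResizeNodeGroup method, and the struct fields are hypothetical stand-ins for the gophercloud calls the real provider makes.

```go
package magnum

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
)

// magnumClient is a hypothetical wrapper around the Magnum resize API;
// the real provider goes through gophercloud.
type magnumClient interface {
	// ResizeNodeGroup resizes a nodegroup to newSize, optionally naming
	// specific nodes to remove.
	ResizeNodeGroup(nodeGroup string, newSize int, nodesToRemove []string) error
}

type magnumNodeGroup struct {
	client     magnumClient
	name       string
	targetSize int
}

// DeleteNodes forwards each scale-down request straight to Magnum instead of
// batching locally. The core autoscaler usually asks for one node at a time,
// and Magnum can process the resulting resize calls in parallel.
func (ng *magnumNodeGroup) DeleteNodes(nodes []*apiv1.Node) error {
	names := make([]string, 0, len(nodes))
	for _, node := range nodes {
		names = append(names, node.Name)
	}
	newSize := ng.targetSize - len(names)
	if err := ng.client.ResizeNodeGroup(ng.name, newSize, names); err != nil {
		return fmt.Errorf("failed to resize nodegroup %s: %v", ng.name, err)
	}
	ng.targetSize = newSize
	return nil
}
```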
Thanks for your contribution! /lgtm
@tghartland: changing LGTM is restricted to collaborators. In response to this:
> /lgtm
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: b0e, tghartland
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment. Approvers can cancel approval by writing /approve cancel in a comment.
@x13n, the CI bot never likes me. Could you set this PR as ready to merge?
/ok-to-test
giving
@tghartland, IMO it is because you are not in the k8s org. cc @x13n, please approve this PR.
@b0e, please resolve the merge conflict.
b0e force-pushed from b3fd828 to a249ca9.
New changes are detected. LGTM label has been removed.
cc @tghartland. Thanks!
As the owner of the Magnum cloud provider in CA, it would be good if you could join the k8s org (become a member) so that you can easily approve PRs related to Magnum.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Implement TemplateNodeInfo() for the cluster-autoscaler magnum cloud provider.
Which issue(s) this PR fixes:
Haven't found any.
Special notes for your reviewer:
Scaling up a nodegroup from zero didn't work because TemplateNodeInfo was not implemented.
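To illustrate why this matters: with no running node to copy, the autoscaler needs a template node built from the nodegroup's flavor to estimate how many nodes a scale-up from zero requires. Below is a rough sketch of the general shape of such an implementation, building on the hypothetical magnumNodeGroup above; getFlavor, the pod capacity of 110, and the exact import paths are assumptions and not necessarily what this PR does.

```go
package magnum

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/api/resource"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	schedulerframework "k8s.io/kubernetes/pkg/scheduler/framework"
)

// flavorSpec holds the resources of the nodegroup's flavor; in the real
// provider these values would come from the OpenStack flavor.
type flavorSpec struct {
	VCPUs  int64 // number of vCPUs
	RAMMiB int64 // memory in MiB
}

// TemplateNodeInfo builds a template node describing what a fresh node in
// this nodegroup would look like, so the autoscaler can plan a scale-up from
// zero even though no real node exists to copy from.
func (ng *magnumNodeGroup) TemplateNodeInfo() (*schedulerframework.NodeInfo, error) {
	flavor, err := ng.getFlavor() // hypothetical helper that looks up the nodegroup's flavor
	if err != nil {
		return nil, fmt.Errorf("failed to get flavor for nodegroup %s: %v", ng.name, err)
	}

	node := apiv1.Node{
		ObjectMeta: metav1.ObjectMeta{
			// Placeholder name; the real node will be created by Magnum/Heat.
			Name:   fmt.Sprintf("%s-template", ng.name),
			Labels: map[string]string{},
		},
		Status: apiv1.NodeStatus{
			Capacity: apiv1.ResourceList{
				apiv1.ResourceCPU:    *resource.NewQuantity(flavor.VCPUs, resource.DecimalSI),
				apiv1.ResourceMemory: *resource.NewQuantity(flavor.RAMMiB*1024*1024, resource.DecimalSI),
				apiv1.ResourcePods:   *resource.NewQuantity(110, resource.DecimalSI), // assumed default max pods
			},
			Conditions: []apiv1.NodeCondition{
				{Type: apiv1.NodeReady, Status: apiv1.ConditionTrue},
			},
		},
	}
	node.Status.Allocatable = node.Status.Capacity

	nodeInfo := schedulerframework.NewNodeInfo()
	nodeInfo.SetNode(&node)
	return nodeInfo, nil
}
```

The key point is that Capacity and Allocatable are derived from the flavor rather than copied from an existing node, which is what makes scale-from-zero estimation possible.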
Does this PR introduce a user-facing change?
NONE
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: