✨ Add fetchConfig support for all provider types #893
Conversation
Since the mapping is on resource name, you cannot have multiple providers sharing the same name with their own fetchConfig.
✅ Deploy Preview for kubernetes-sigs-cluster-api-operator ready!
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing `/approve` in a comment.
Hi @WesleyKlop. Thanks for your PR. I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/honk
Hello, here are my values:
And my rendering:
A quick-fix proposal?
Let me know :)
@morganBlanloeil fixed it! :)
Thanks! But I don't think this solution always works. For example, if you have kubeadm for both the control plane and bootstrap, Helm won't know which key to use. Shouldn't the configuration be nested under the provider? Like:
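The example in the comment above was lost in extraction; a hypothetical sketch of what nesting fetchConfig under each provider type might look like follows. The keys, provider names, and URLs here are illustrative only, not the chart's actual schema:

```yaml
# Hypothetical values.yaml: fetchConfig nested per provider type,
# so two providers sharing a name (e.g. "k3s") cannot collide.
controlPlane:
  k3s:
    fetchConfig:
      url: https://example.com/k3s/control-plane-components.yaml  # illustrative URL
bootstrap:
  k3s:
    fetchConfig:
      url: https://example.com/k3s/bootstrap-components.yaml  # illustrative URL
```

Because the resource type is part of the key path, each provider entry carries its own fetchConfig unambiguously.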
/test ?
@furkatgofurov7: The following commands are available to trigger required jobs:
The following commands are available to trigger optional jobs:
/test pull-cluster-api-operator-e2e-main |
This is already part of #780 and has been pending review for a while. |
You are correct. I have already implemented something that would work as part of #780. See this commit. I also took the liberty of marking the current partial implementation as deprecated. |
Yes, I already proposed several solutions in the OP.
I did not know; that was not clear to me from the title. I don't mind which implementation gets merged, but if I can do anything to help move either this PR or yours forward, let me know.
I just found out that there is an even more featureful PR pending review in #832. I've requested that the author add `fetchConfig` support there. Looks like both you and I might get to close our PRs @WesleyKlop 😅
@rbjorklin, both #780 and #832 have been open for quite a while, and I don't really see a reason to close your PR at least. #832 is a fairly breaking change for users, meaning they might be slow to adopt it once it releases. We also don't know how long it will take to be released, and I see no reason to halt all other improvements while waiting for it. In the meantime, either or both of your additions (or mine) could be merged and released, giving users new features while the bigger changes proposed in #832 are pending.
That's fair. My PR got approved yesterday so the maintainers could just merge that and ask the author of #832 to rebase. |
Hey, #780 is merged now. Can we close this PR? |
PR needs rebase.
Nice! Yeah sure I'll close it. |
What this PR does / why we need it:
It is currently not possible to specify fetchConfig for all resource types; only infrastructure and ipam types support it. This PR adds support for all provider types.
There is one issue, though: the k3s/cluster-api-k3s project, for example, provides both control-plane and bootstrap providers under the same name. Since the fetchConfig mapping does not take the resource type into account, collisions can occur.
I did not want to change how the fetchConfig value works too much before discussing it, but I had several ideas on how to solve this issue:
Changing how fetchConfig works could also be considered out of scope for this PR, in which case we could follow up on it in a separate issue.
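To illustrate the collision described above: with a flat fetchConfig map keyed only by provider name, a project that ships both a bootstrap and a control-plane provider under the same name can only be given a single entry. The sketch below uses illustrative keys and URLs, not the chart's actual schema:

```yaml
# Hypothetical flat mapping keyed only by provider name:
# both the bootstrap and the control-plane provider are called "k3s",
# so it is ambiguous which provider this entry applies to,
# and a second "k3s" key would simply override the first.
fetchConfig:
  k3s:
    url: https://example.com/k3s-components.yaml  # illustrative URL
```

Keying on (type, name) pairs, or nesting fetchConfig under each provider entry, would avoid this ambiguity.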
Which issue(s) this PR fixes (optional, in `fixes #<issue number>(, fixes #<issue_number>, ...)` format; will close the issue(s) when the PR gets merged):
Fixes #834