@WesleyKlop WesleyKlop commented Sep 20, 2025

What this PR does / why we need it:

It is currently not possible to specify fetchConfig for all resource types; only the infrastructure and ipam types support it. This PR adds support for all provider types.

There is one issue, though: the k3s/cluster-api-k3s project, for example, provides both a control-plane and a bootstrap provider under the same name. Since the fetchConfig mapping does not take the resource type into account, collisions can occur.

I did not want to change how the fetchConfig value works too much before discussing it, but I have several ideas for solving this issue:

  1. Prefix names with their type. So you could have something like:
    fetchConfig:
      # Having <provider> as a key and the <name> under it would be fine by me too.
      controlPlane/k3s:
        url: ''
      bootstrap/k3s:
        url: ''
  2. Move fetchConfig values to their provider definition. In which case your config could look like:
    bootstrap:
      k3s:
        fetchConfig:
          url: ''
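
To make the collision concrete, here is a hypothetical values snippet (illustrative only, assuming the current flat name-keyed mapping) for a user running both k3s providers:

```
# Current scheme: fetchConfig is keyed by provider name only.
# The bootstrap and control-plane providers are both named "k3s",
# so this single entry cannot target just one of them.
fetchConfig:
  k3s:
    url: ''
```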

Changing how fetchConfig works could also be considered out of scope for this PR; in that case we could follow up on it in a separate issue.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #834

Since the mapping is on resource name, you cannot have multiple
providers sharing the same name with their own fetchConfig.

netlify bot commented Sep 20, 2025

Deploy Preview for kubernetes-sigs-cluster-api-operator ready!

🔨 Latest commit: 0322ca7
🔍 Latest deploy log: https://app.netlify.com/projects/kubernetes-sigs-cluster-api-operator/deploys/68d1a0e32e73ef00084cd8fd
😎 Deploy Preview: https://deploy-preview-893--kubernetes-sigs-cluster-api-operator.netlify.app

@k8s-ci-robot k8s-ci-robot added the cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. label Sep 20, 2025
@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
Once this PR has been reviewed and has the lgtm label, please assign alexander-demicev for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Sep 20, 2025
@k8s-ci-robot
Contributor

Hi @WesleyKlop. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@k8s-ci-robot k8s-ci-robot added the size/M Denotes a PR that changes 30-99 lines, ignoring generated files. label Sep 20, 2025
@WesleyKlop
Author

/honk

@k8s-ci-robot
Contributor

@WesleyKlop:
goose image

In response to this:

/honk

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@morganBlanloeil

Hello,
I tested this change (which, by the way, we are very interested in), but when I enter a fetchConfig for the Helm add-on, the rendering is incorrect.

My values:

fetchConfig:
  helm:
    selector:
      matchLabels:
        provider-components: helm-addon
addon:
  helm:
    createNamespace: false
    version: v0.3.2

And my rendering:

apiVersion: operator.cluster.x-k8s.io/v1alpha2
kind: AddonProvider
metadata:
  name: helm
  namespace: helm-addon-system
  annotations:
    "helm.sh/hook": "post-install,post-upgrade"
    "helm.sh/hook-weight": "2"
    "argocd.argoproj.io/sync-wave": "2"
spec:
  version: v0.3.2
  fetchConfig:
      selector: map[matchLabels:map[provider-components:helm-addon]]

A proposed quick fix?

{{- if and (kindIs "map" $.Values.fetchConfig) (hasKey $.Values.fetchConfig $addonName) }}
  fetchConfig:
    {{ toYaml (index $.Values.fetchConfig $addonName) | nindent 4 | trim }}
{{- end }}
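
As a side note on why the bad rendering looks like Go syntax: Helm charts are Go text/template under the hood, so interpolating a map value directly (without piping it through toYaml) falls back to fmt's default formatting. A minimal standalone Go sketch reproducing the effect (the render helper is illustrative, not chart code):

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render mimics what a Helm template does when a map is interpolated
// without toYaml: Go's text/template stringifies the value with fmt's
// default %v verb, producing Go map syntax instead of YAML.
func render(selector map[string]any) string {
	tmpl := template.Must(template.New("demo").Parse("selector: {{ .Selector }}"))
	var buf bytes.Buffer
	_ = tmpl.Execute(&buf, map[string]any{"Selector": selector})
	return buf.String()
}

func main() {
	out := render(map[string]any{
		"matchLabels": map[string]string{"provider-components": "helm-addon"},
	})
	fmt.Println(out)
	// Prints: selector: map[matchLabels:map[provider-components:helm-addon]]
}
```

This is exactly the `map[matchLabels:map[...]]` string in the broken rendering above, which is why the quick fix pipes the value through toYaml before indenting.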

let me know :)

@WesleyKlop
Author

@morganBlanloeil fixed it! :)

@morganBlanloeil

Thanks! But I don't think this solution always works. For example, if you have kubeadm for control plane & bootstrap, Helm won't know which key to use.

Shouldn't the configuration be under the provider? Like

bootstrap:
  k3s:
    fetchConfig:
      url: ''

@furkatgofurov7
Member

/test ?

@k8s-ci-robot
Contributor

@furkatgofurov7: The following commands are available to trigger required jobs:

/test pull-cluster-api-operator-build-main
/test pull-cluster-api-operator-e2e-main
/test pull-cluster-api-operator-make-main
/test pull-cluster-api-operator-test-main
/test pull-cluster-api-operator-verify-main

The following commands are available to trigger optional jobs:

/test pull-cluster-api-operator-apidiff-main

Use /test all to run all jobs.

In response to this:

/test ?

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@furkatgofurov7
Member

/test pull-cluster-api-operator-e2e-main


@rbjorklin
Contributor

This is already part of #780 and has been pending review for a while.

@rbjorklin
Contributor

rbjorklin commented Sep 23, 2025

Thanks! But I don't think this solution always works. For example, if you have kubeadm for control plane & bootstrap, Helm won't know which key to use.

Shouldn't the configuration be under the provider? Like

bootstrap:
  k3s:
    fetchConfig:
      url: ''

You are correct. I have already implemented something that would work as part of #780. See this commit.

I also took the liberty of marking the current partial implementation as deprecated.

@WesleyKlop
Author

Thanks! But I don't think this solution always works. For example, if you have kubeadm for control plane & bootstrap, Helm won't know which key to use.

Shouldn't the configuration be under the provider? Like


bootstrap:
  k3s:
    fetchConfig:
      url: ''

Yes, I already proposed several solutions in the OP.

@WesleyKlop
Author

This is already part of #780 and has been pending review for a while.

I did not know; that was not clear to me from the title.

I don't mind which implementation gets merged, but if I can do anything to help move either this PR or yours forward, let me know.

@rbjorklin
Contributor

I just found out that there is an even more featureful PR pending review in #832. I've requested that the author add fetchConfig support there.

Looks like both you and I might get to close our PRs @WesleyKlop 😅

@WesleyKlop
Author

@rbjorklin, both #780 and #832 have been open for quite a while, and I do not really see a reason to close your PR, at least. #832 is quite a breaking change for users, meaning they might be slow to adopt it when it releases. We also do not know how long it will take before it is released, and I do not see why we would need to halt all other improvements waiting for it.

In the meantime, either or both of your (or my) additions could be merged and released, giving users new features while waiting for the bigger changes proposed in #832.

@rbjorklin
Contributor

@rbjorklin, both #780 and #832 have been open for quite a while and I do not really see a reason to close at least your PR.

That's fair. My PR got approved yesterday so the maintainers could just merge that and ask the author of #832 to rebase.

@furkatgofurov7
Member

Hey,

#780 is merged now. Can we close this PR?

@k8s-ci-robot k8s-ci-robot added the needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. label Sep 25, 2025
@k8s-ci-robot
Contributor

PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.

@WesleyKlop
Author

Nice! Yeah sure I'll close it.

@WesleyKlop WesleyKlop closed this Sep 26, 2025
@WesleyKlop WesleyKlop deleted the main branch September 26, 2025 05:02
Successfully merging this pull request may close these issues.

Support for FetchConfig on all provider types in Helm Chart
5 participants