
[cabpk] Add Support for Component Config configuration files #1584

Closed
fabriziopandini opened this issue Aug 23, 2019 · 24 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@fabriziopandini
Member

/kind feature

Describe the solution you'd like
SIG Cluster Lifecycle (SCL) is pushing the adoption of component config objects, and kubeadm already accepts KubeletConfiguration and KubeProxyConfiguration as part of its config file.
I would like to be able to use component config objects with CABPK as well.

Anything else you would like to add:
The first possible approach to achieve this is to add the KubeletConfiguration and KubeProxyConfiguration types to the KubeadmConfig object, but this comes with both pros (e.g. OpenAPI validation) and cons (e.g. more dependencies and more dependency management).

So, I would also like to put on the table some alternative approaches, based on the assumption that CABPK only needs to pass component configs through to kubeadm (no logic based on the component config content is required). If this assumption holds, component configs can basically be treated as "special" Files, so we could consider whether to (a rough sketch of both options is included below):

  • Extend the current Files type by adding a ComponentConfig bool that signals to CABPK that the file should be appended to the kubeadm config file
  • Add a new []ComponentConfigFile field to the KubeadmConfig object, giving us more explicit semantics around component configs

/cc @chuckha @detiber
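
To make the two shapes concrete, here is a rough, hypothetical sketch of how each option could surface on a KubeadmConfig spec; the componentConfig and componentConfigFiles field names are illustrative only, not an agreed API:

    # Option 1: mark an entry in the existing files list as a component config
    spec:
      files:
        - path: /run/kubeadm/kube-proxy-configuration.yaml
          componentConfig: true    # hypothetical bool: tells CABPK to append this file to the kubeadm config
          content: |
            apiVersion: kubeproxy.config.k8s.io/v1alpha1
            kind: KubeProxyConfiguration
            mode: ipvs
    ---
    # Option 2: a dedicated list with more explicit semantics
    spec:
      componentConfigFiles:        # hypothetical field
        - content: |
            apiVersion: kubeproxy.config.k8s.io/v1alpha1
            kind: KubeProxyConfiguration
            mode: ipvs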

@detiber
Member

detiber commented Aug 23, 2019

I'm torn on this: if we just pass configs through without any type of validation, then we risk hard-to-debug errors.

That said, even if we do add basic validation, we still wouldn't have all the validation that is done on the backend and could still end up with hard-to-debug errors...

The question then becomes: just how easy do we want to make it for users to shoot themselves in the foot?

@chuckha
Contributor

chuckha commented Aug 28, 2019

/priority important-longterm
/milestone Next

@ncdc ncdc changed the title Add Support for Component Config configuration files [cabpk] Add Support for Component Config configuration files Oct 16, 2019
@ncdc ncdc transferred this issue from kubernetes-retired/cluster-api-bootstrap-provider-kubeadm Oct 16, 2019
@k8s-ci-robot k8s-ci-robot added the kind/feature Categorizes issue or PR as related to a new feature. label Oct 16, 2019
@chuckha
Contributor

chuckha commented Oct 24, 2019

I'm OK with big foot-guns for now, especially given the work to surface kubeadm init logs into the Machine, which should mitigate the hard-to-debug validation concerns. But even before that lands, I don't see a problem with adding this functionality as outlined by @fabriziopandini.

I'm a little wary of the bool signaling a ComponentConfig... I think the original issue is outlining two possible approaches?

If so, I'm in favor of ComponentConfigs and having them be appended to the kubeadm config.

/milestone v0.3.0
/priority important-longterm

@k8s-ci-robot k8s-ci-robot added this to the v0.3.0 milestone Oct 24, 2019
@k8s-ci-robot k8s-ci-robot added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label Oct 24, 2019
@akutz
Contributor

akutz commented Nov 13, 2019

@ncdc
Contributor

ncdc commented Dec 20, 2019

@fabriziopandini do you see this as a hard requirement for v1a3?

@fabriziopandini
Member Author

@ncdc As far as I know the current API for CABPK is enough for most use cases, so IMO this is backlog material and can be moved to the next milestone.

@ncdc ncdc added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Jan 2, 2020
@ncdc ncdc modified the milestones: v0.3.0, Next Jan 2, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 1, 2020
@vincepri
Member

vincepri commented Apr 1, 2020

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 1, 2020
@vincepri
Member

/area ux

@Arvinderpal
Contributor

Is there a documented workaround for this? For example, if I want to specify a custom KubeProxyConfiguration?

@sb464f

sb464f commented Jun 5, 2020

We are looking for this feature to enable ipvs mode in KubeProxyConfiguration.

@sb464f

sb464f commented Jun 5, 2020

Workaround:
Add the script to preKubeadmCommands:

    preKubeadmCommands:
      - /tmp/generate-kube-proxy.sh

and create the script via files:

    files:
      - path: /tmp/generate-kube-proxy.sh
        permissions: "0700"
        owner: root:root
        content: |
          #!/bin/bash
          # append a KubeProxyConfiguration document to every kubeadm config file staged in /tmp
          for i in $(ls /tmp | grep kubeadm); do
              cat <<EOF>> /tmp/$i
          ---
          kind: KubeProxyConfiguration
          apiVersion: kubeproxy.config.k8s.io/v1alpha1
          mode: ipvs
          EOF
          done

@Arvinderpal
Contributor

Trying to understand how onerous it would be to add KubeletConfiguration and KubeProxyConfiguration to KubeadmConfig -- the first possible approach noted by @fabriziopandini.

Looking at the code, it appears that kubeadm itself does no validation on either KubeletConfiguration or KubeProxyConfiguration. kubeadm has the framework for component-specific validation, but currently it does nothing:
https://github.com/kubernetes/kubernetes/blob/a054010d032b301e495d1a421f53b9a37a0a0109/cmd/kubeadm/app/componentconfigs/configset.go#L244

What kubeadm does provide are default values for certain fields when none are specified. Here is the defaulting for kube-proxy:
https://github.com/kubernetes/kubernetes/blob/master/cmd/kubeadm/app/componentconfigs/kubeproxy.go#L86

Beyond KubeletConfiguration and KubeProxyConfiguration, I don't see any other xyzConfiguration types, as all the other core Kubernetes components have their fields split across the init/join/cluster configurations.

CABPK could take a similar approach to kubeadm -- treat these two configurations as pure pass-through, without validation. We would document this, of course, and possibly provide some troubleshooting steps in case there are failures (e.g. go look at the cloud-config logs on machine-x).

In any case, it would be great to prioritize this, as people who are building more complex clusters will definitely want to tweak kube-proxy and kubelet settings.
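
To illustrate the pass-through idea, the end result would simply be extra YAML documents appended to the kubeadm config file that CABPK already generates, roughly like this (values are examples only):

    apiVersion: kubeadm.k8s.io/v1beta2
    kind: ClusterConfiguration
    kubernetesVersion: v1.18.2
    ---
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    ---
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    mode: ipvs

kubeadm already knows how to split such a multi-document config file and upload each component config for the matching component.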

@cprivitere
Member

Here's a user story for you, in case it helps motivate folks: we need to edit the kube-proxy config to bind metrics to 0.0.0.0 instead of 127.0.0.1, so that Prometheus can scrape metrics from kube-proxy on Cluster API provisioned clusters.
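
For reference, that change boils down to a three-line KubeProxyConfiguration document (10249 is kube-proxy's default metrics port) that currently has no first-class home in KubeadmConfig:

    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    kind: KubeProxyConfiguration
    metricsBindAddress: "0.0.0.0:10249"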

@cprivitere
Member

cprivitere commented Jul 24, 2020

One other workaround a co-worker came up with for editing the kube-proxy config is to simply update the kube-proxy configmap:

So apply a configmap named "kube-proxy" in the kube-system namespace:

data:
  config.conf: |-
    apiVersion: kubeproxy.config.k8s.io/v1alpha1
    bindAddress: 0.0.0.0
    clientConnection:
      acceptContentTypes: ""
      burst: 0
      contentType: ""
      kubeconfig: /var/lib/kube-proxy/kubeconfig.conf
      qps: 0
    clusterCIDR: 10.96.0.0/16
    configSyncPeriod: 0s
    conntrack:
      maxPerCore: null
      min: null
      tcpCloseWaitTimeout: null
      tcpEstablishedTimeout: null
    detectLocalMode: ""
    enableProfiling: false
    healthzBindAddress: ""
    hostnameOverride: ""
    iptables:
      masqueradeAll: false
      masqueradeBit: null
      minSyncPeriod: 0s
      syncPeriod: 0s
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 0s
      tcpFinTimeout: 0s
      tcpTimeout: 0s
      udpTimeout: 0s
    kind: KubeProxyConfiguration
    metricsBindAddress: "0.0.0.0:10249"
    mode: ""
    nodePortAddresses: null
    oomScoreAdj: null
    portRange: ""
    showHiddenMetricsForVersion: ""
    udpIdleTimeout: 0s
    winkernel:
      enableDSR: false
      networkName: ""
      sourceVip: ""
<etc....>

Is there a major downside to just editing the configmap? If nothing else I hope this at least gives folks another option for configuring kube-proxy in the interim till this is implemented.
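
For completeness, the data: block above sits inside the kube-proxy ConfigMap that kubeadm generates, whose overall shape is roughly:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: kube-proxy
      namespace: kube-system
    data:
      config.conf: |-
        apiVersion: kubeproxy.config.k8s.io/v1alpha1
        kind: KubeProxyConfiguration
        metricsBindAddress: "0.0.0.0:10249"
        # ...the rest of the generated config.conf content continues here...

Note that kube-proxy only reads its configuration at startup, so the kube-proxy pods need to be restarted for an edited ConfigMap to take effect.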

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 22, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 21, 2020
@MPV

MPV commented Nov 23, 2020

One other workaround a co-worker came up with for editing the kube-proxy config is to simply update the kube-proxy configmap:

So apply a configmap named "kube-proxy" in the kube-system namespace:

<etc....>

Is there a major downside to just editing the configmap? If nothing else I hope this at least gives folks another option for configuring kube-proxy in the interim till this is implemented.

I guess one downside of editing the configmap for kube-proxy is that it duplicates the configuration of the cluster, for example things like clusterCIDR: 10.96.0.0/16.

I very much think it's still desirable to allow this to be configurable as part of the KubeadmConfig.

I'm also having this use case (where I want to monitor kube-proxy using Prometheus):

Here's a user story for you, in case it helps motivate folks: we need to edit the kube-proxy config to bind metrics to 0.0.0.0 instead of 127.0.0.1, so that Prometheus can scrape metrics from kube-proxy on Cluster API provisioned clusters.

@vincepri
Member

/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Nov 30, 2020
@vincepri
Member

vincepri commented Feb 8, 2021

/remove-lifecycle frozen

@fabriziopandini What's the status of this issue? Could this be tackled as part of the node agent?

@k8s-ci-robot k8s-ci-robot removed the lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. label Feb 8, 2021
@fabriziopandini
Member Author

I have no strong opinions here.
There are use cases for this, but at the same time it isn't a priority and it keeps getting pushed down the list.
I'm fine with keeping this on the radar, or closing it and eventually re-opening it if there is a stronger ask or more capacity to tackle it.

@neolit123
Member

linking to the upstream kubelet issue about migrating flags to config:
kubernetes/kubernetes#86843

@fabriziopandini
Member Author

/close
In favour of #4464, given that it provides a more specific use case for KubeletConfiguration at least.
Eventually we can create a similar issue for kube-proxy as well, but given that the two configurations are managed in very different ways in a running cluster, let's keep the issues separated.

@k8s-ci-robot
Contributor

@fabriziopandini: Closing this issue.

In response to this:

/close
In favour of #4464, given that it provides a more specific use case for KubeletConfiguration at least.
Eventually we can create a similar issue for kube-proxy as well, but given that the two configurations are managed in very different ways in a running cluster, let's keep the issues separated.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
