
--extra-config does not take effect on an existing cluster #8242

Closed
tstromberg opened this issue May 21, 2020 · 9 comments · Fixed by #9634
Labels
good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. kind/bug Categorizes issue or PR as related to a bug. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@tstromberg
Contributor

For example, if you specify --extra-config kubelet.kube-api-qps=5 --extra-config controller-manager.kube-api-qps=5, and run sudo ps -afe | grep controller, you'll see that the new arguments do not appear.
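The check above can be sketched end to end (flag values as in the example; reaching the node's process table via minikube ssh assumes a VM driver):

```shell
# Restart an existing cluster with extra component flags.
minikube start \
  --extra-config=kubelet.kube-api-qps=5 \
  --extra-config=controller-manager.kube-api-qps=5

# On an affected cluster the flag never reaches the running process:
minikube ssh -- "sudo ps -afe | grep kube-controller-manager" | grep kube-api-qps \
  || echo "kube-api-qps not applied"
```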

@tstromberg tstromberg added kind/bug Categorizes issue or PR as related to a bug. priority/backlog Higher priority than priority/awaiting-more-evidence. labels May 26, 2020
@jmazzitelli

jmazzitelli commented Jun 1, 2020

This is especially painful when trying to integrate OpenID Connect. If you start with a clean cluster, you first have to install an OpenID Connect implementation (like Dex), and then restart with the --extra-config settings that define the OIDC options (the ones here). Because of this bug, that won't work: you have to start the cluster with the OIDC settings already in place, before the OpenID Connect components even exist. And since Dex is installed directly in minikube and acts as the issuer, one of the settings (oidc-issuer-url) needs a URL pointing at the minikube IP, which I can't determine until minikube has started and I run minikube ip - but by then it's too late, because this bug prevents a shutdown and restart with the extra config containing that URL from taking effect.
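The OIDC settings in question are kube-apiserver flags, passed through minikube with the apiserver component prefix. A sketch of the restart that this bug blocks (the issuer URL, client ID, and claim below are placeholder values, not a working configuration):

```shell
# Hypothetical values: dex.example.test would resolve to the minikube IP.
minikube start \
  --extra-config=apiserver.oidc-issuer-url=https://dex.example.test:32000 \
  --extra-config=apiserver.oidc-client-id=kubernetes \
  --extra-config=apiserver.oidc-username-claim=email
```

Because of this bug, running this against an already-created cluster has no effect.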

I'm seeing this in minikube 1.11.0.

@franck102

Is this even working for new clusters? I installed a fresh minikube 1.11.0 on OSX, created a new cluster with this command:

minikube --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0 start

and the bind-address for controller-manager is unaffected:

> kc describe -n kube-system pod kube-controller-manager-minikube
... Command: kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 ...

As a side note figuring out controller-manager.address from the documentation is not trivial (and maybe that is what I got wrong?), the reference APIs mentioned in the section about extra-config have the Address attribute with a capital A, on a sub-element of the struct (Generic GenericControllerManagerConfiguration).
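For what it's worth, the extra-config keys appear to map to the component's command-line flags rather than to the struct field names in the reference API, and recent kube-controller-manager versions deprecate --address in favor of --bind-address. So the intended key may be bind-address; this is a guess, not verified:

```shell
minikube start --extra-config=controller-manager.bind-address=0.0.0.0
```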

@medyagh
Member

medyagh commented Jun 8, 2020

Is this even working for new clusters? I installed a fresh minikube 1.11.0 on OSX, created a new cluster with this command:

minikube --extra-config=kubelet.authentication-token-webhook=true --extra-config=kubelet.authorization-mode=Webhook --extra-config=scheduler.address=0.0.0.0 --extra-config=controller-manager.address=0.0.0.0 start

and the bind-address for controller-manager is unaffected:

> kc describe -n kube-system pod kube-controller-manager-minikube
... Command: kube-controller-manager --authentication-kubeconfig=/etc/kubernetes/controller-manager.conf --authorization-kubeconfig=/etc/kubernetes/controller-manager.conf --bind-address=127.0.0.1 ...

As a side note figuring out controller-manager.address from the documentation is not trivial (and maybe that is what I got wrong?), the reference APIs mentioned in the section about extra-config have the Address attribute with a capital A, on a sub-element of the struct (Generic GenericControllerManagerConfiguration).

do you mind sharing what driver you were using? Currently minikube does not support remote access to the cluster (I noticed you were trying to pass 0.0.0.0). With VM drivers you might be able to do it at your own risk, but with the Docker/podman drivers we explicitly set the listen IP to local, so you won't be able to override that.

Given that, we could still provide a better message telling the user that remote access to minikube is not supported.

Also, we could really use some help documenting the --extra-config flag. I would be happy to review any PR that adds docs for it.

@medyagh medyagh added the help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. label Jun 8, 2020
@franck102

I am using VirtualBox on OSX.
The "remote" access is simply to let Prometheus read cAdvisor metrics, as described here:

https://github.com/coreos/prometheus-operator/blob/master/Documentation/troubleshooting.md#prometheus-kubelet-metrics-server-returned-http-status-403-forbidden

Franck

@Harkishen-Singh
Contributor

Harkishen-Singh commented Jun 19, 2020

I'd like to take this up. Working on it.

@Harkishen-Singh
Contributor

Well, I think it doesn't work even for a new cluster. I tried the commands mentioned in the issue while creating a new cluster, and that too gave no sign of the supplied api-server and controller-manager flags.

@sendrex

sendrex commented Aug 13, 2020

If there's missing data, please let me know and I'll update the issue.

As requested by @priyawadhwa in #8979, I'm showing here everything I could gather from my situation related to this problem. I always seem to hit this problem even after a clean install of this environment.

Environment:

Debian 10.5, installed via netinst ISO inside a VirtualBox VM.

root@minikube:~# uname -a
Linux minikube 4.19.0-10-amd64 #1 SMP Debian 4.19.132-1 (2020-07-24) x86_64 GNU/Linux

root@minikube:~# env # Some values have been removed as they're not relevant for this issue
SHELL=/bin/bash
PWD=/root
XDG_SESSION_TYPE=tty
HOME=/root
LANG=es_ES.UTF-8
TERM=xterm-256color
SHLVL=2
PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games
SSH_TTY=/dev/pts/1
_=/usr/bin/env

root@minikube:~# cat install_script.sh # How I installed everything
# Install Docker (latest version)
curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
apt-get update
apt-get install -y docker-ce docker-ce-cli containerd.io

# Install kubectl (latest version, no hypervisor)
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list
apt-get update
apt-get install -y kubectl

# Install Minikube (latest version)
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
cp minikube /usr/local/bin/
rm minikube

Versions of Docker, Minikube and kubectl:

root@minikube:~# docker version
Client: Docker Engine - Community
 Version:           19.03.12
 API version:       1.40
 Go version:        go1.13.10
 Git commit:        48a66213fe
 Built:             Mon Jun 22 15:45:50 2020
 OS/Arch:           linux/amd64
 Experimental:      false

Server: Docker Engine - Community
 Engine:
  Version:          19.03.12
  API version:      1.40 (minimum version 1.12)
  Go version:       go1.13.10
  Git commit:       48a66213fe
  Built:            Mon Jun 22 15:44:21 2020
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.2.13
  GitCommit:        7ad184331fa3e55e52b890ea95e65ba581ae3429
 runc:
  Version:          1.0.0-rc10
  GitCommit:        dc9208a3303feef5b3839f4323d9beb36df0a9dd
 docker-init:
  Version:          0.18.0
  GitCommit:        fec3683

root@minikube:~# minikube version
minikube version: v1.12.2
commit: be7c19d391302656d27f1f213657d925c4e1cfc2-dirty

root@minikube:~# kubectl version
Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:58:53Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.3", GitCommit:"2e7996e3e2712684bc73f0dec0200d64eec7fe40", GitTreeState:"clean", BuildDate:"2020-05-20T12:43:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

Steps to reproduce the issue:

  1. Start Minikube with --vm-driver=none and (through --extra-config) kubeadm init --skip-phases=addon/kube-proxy, as described in the official docs. This works as expected the first time. Output is saved as minikube-start-ok.txt.
root@minikube:~# minikube start --vm-driver=none --extra-config=kubeadm.skip-phases=addon/kube-proxy --alsologtostderr
  2. Confirm that kube-proxy hasn't been created. This is also expected.
root@minikube:~# kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS             RESTARTS   AGE
kube-system   coredns-66bff467f8-24knd           0/1     Running            0          3m22s
kube-system   etcd-minikube                      1/1     Running            0          3m21s
kube-system   kube-apiserver-minikube            1/1     Running            0          3m21s
kube-system   kube-controller-manager-minikube   1/1     Running            0          3m21s
kube-system   kube-scheduler-minikube            1/1     Running            0          3m21s
kube-system   storage-provisioner                0/1     CrashLoopBackOff   3          3m28s
  • Run minikube logs and save that output as minikube-logs-ok.txt.
root@minikube:~# minikube logs
  3. Stop Minikube.
root@minikube:~# minikube stop
✋  Stopping node "minikube"  ...
🛑  1 nodes stopped.
  4. Repeat step 1. This time it doesn't work as expected: kube-proxy is created. Output is saved as minikube-start-failed.txt.
root@minikube:~# minikube start --vm-driver=none --extra-config=kubeadm.skip-phases=addon/kube-proxy --alsologtostderr
  5. Repeat step 2. The output is not as expected, because kube-proxy should never have been created (see step 2).
root@minikube:~# kubectl get pods -A
NAMESPACE     NAME                               READY   STATUS    RESTARTS   AGE
kube-system   coredns-66bff467f8-24knd           1/1     Running   1          18m
kube-system   etcd-minikube                      1/1     Running   1          18m
kube-system   kube-apiserver-minikube            1/1     Running   1          18m
kube-system   kube-controller-manager-minikube   1/1     Running   1          18m
kube-system   kube-proxy-tbj29                   1/1     Running   0          2m22s
kube-system   kube-scheduler-minikube            1/1     Running   1          18m
kube-system   storage-provisioner                1/1     Running   8          19m
  • Run minikube logs and save that output as minikube-logs-failed.txt.
root@minikube:~# minikube logs

More details:

  • Behaviour is the same from steps 3 to 5 after running rm -rf ~/.minikube/cache.
  • Behaviour is the same from steps 1 to 5 after running minikube delete.
  • Steps 1 to 3 should behave the same, consistently.

Full output of failed command:

  • No failed command, but unexpected behaviour.


@tstromberg tstromberg added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels Aug 31, 2020
@tstromberg tstromberg added the good first issue Denotes an issue ready for a new contributor, according to the "help wanted" guidelines. label Oct 27, 2020
@tstromberg
Contributor Author

tstromberg commented Oct 27, 2020

If anyone is interested in fixing this, let me know, and I would be happy to help them.

My recommendation on how to get started is:

  • First check that the new options are being written to $HOME/.minikube/profiles/minikube/config.json. My best guess is that this isn't happening for some reason, perhaps trying to preserve the behavior of the previous configuration without applying the new configuration. The field is called ExtraOptions.

  • Inspect the output of minikube start --alsologtostderr -v=1 for hints. For instance, there's code in kubeadm that is supposed to detect that the configuration has changed and reset the cluster.

  • Search the code for any special handling of ExtraOptions - it only has 15 mentions, so it should be easy to see which might be wrong: https://github.com/kubernetes/minikube/search?l=Go&q=ExtraOptions
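The first bullet can be checked directly against the profile file. The sketch below fabricates a minimal config.json (the entry shape — Component/Key/Value under ExtraOptions — follows minikube's config types, but the sample content is made up so the snippet is self-contained) and prints the persisted options:

```shell
# Real profiles live at $HOME/.minikube/profiles/<name>/config.json;
# a fabricated sample is written here for illustration.
mkdir -p /tmp/minikube-demo
cat > /tmp/minikube-demo/config.json <<'EOF'
{
  "Name": "minikube",
  "KubernetesConfig": {
    "ExtraOptions": [
      {"Component": "kubelet", "Key": "kube-api-qps", "Value": "5"}
    ]
  }
}
EOF

# Print each persisted option as component.key=value. On an affected cluster,
# options passed to a later `minikube start` never show up in this list.
python3 - <<'EOF'
import json
cfg = json.load(open("/tmp/minikube-demo/config.json"))
for opt in cfg["KubernetesConfig"]["ExtraOptions"]:
    print("%s.%s=%s" % (opt["Component"], opt["Key"], opt["Value"]))
EOF
```

For the sample above this prints kubelet.kube-api-qps=5; an option that was silently dropped would simply be absent.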

@jot-hub

jot-hub commented Oct 28, 2020

Hi @tstromberg, thank you for the hints! I quickly checked, and as per bee6815#diff-0e864ab4025634664724909a47c34fbcae246ad52307eaaaa58153f0b256a8b4L381, the extra-config is not copied over - is there a specific reason for that?
cc @medyagh
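If that diff is the culprit, the fix is conceptually a merge: keep the existing profile's ExtraOptions and overlay whatever the new start passes. A self-contained sketch of that merge on fabricated JSON (field names follow minikube's config shape; everything else is illustrative, not the actual fix):

```shell
cat > /tmp/existing.json <<'EOF'
{"ExtraOptions": [{"Component": "kubelet", "Key": "kube-api-qps", "Value": "5"}]}
EOF
cat > /tmp/newflags.json <<'EOF'
{"ExtraOptions": [{"Component": "controller-manager", "Key": "kube-api-qps", "Value": "5"}]}
EOF

# Overlay new options on the existing ones, keyed by (component, key).
python3 - <<'EOF'
import json
old = json.load(open("/tmp/existing.json"))["ExtraOptions"]
new = json.load(open("/tmp/newflags.json"))["ExtraOptions"]
merged = {(o["Component"], o["Key"]): o for o in old}
for o in new:
    merged[(o["Component"], o["Key"])] = o  # new values win on conflict
out = sorted(merged.values(), key=lambda o: (o["Component"], o["Key"]))
json.dump({"ExtraOptions": out}, open("/tmp/merged.json", "w"))
for o in out:
    print("%s.%s=%s" % (o["Component"], o["Key"], o["Value"]))
EOF
```

This prints controller-manager.kube-api-qps=5 followed by kubelet.kube-api-qps=5: both the pre-existing option and the newly passed one survive.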
