
Pods not started after minikube restart (minikube v1.25.X, docker driver) #13503

Closed
neptoon opened this issue Jan 27, 2022 · 12 comments · Fixed by #13506

neptoon commented Jan 27, 2022

What Happened?

With minikube 1.25.0/1.25.1 and the docker driver, deployments/pods no longer exist after minikube stop and minikube start:

vagrant@minikube-test:~$ ./minikube start --driver=docker
* minikube v1.25.1 on Ubuntu 21.04
* Using the docker driver based on user configuration
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Downloading Kubernetes v1.23.1 preload ...
    > preloaded-images-k8s-v16-v1...: 504.42 MiB / 504.42 MiB  100.00% 45.22 Mi
* Creating docker container (CPUs=2, Memory=2900MB) ...
* Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
  - kubelet.housekeeping-interval=5m
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default


vagrant@minikube-test:~$ kubectl create deployment nginx --image=nginx
deployment.apps/nginx created


vagrant@minikube-test:~$ kubectl get pods
NAME                     READY   STATUS    RESTARTS   AGE
nginx-85b98978db-87hl9   1/1     Running   0          19s


vagrant@minikube-test:~$ ./minikube stop
* Stopping node "minikube"  ...
* Powering off "minikube" via SSH ...
* 1 node stopped.


vagrant@minikube-test:~$ ./minikube start
* minikube v1.25.1 on Ubuntu 21.04
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* Pulling base image ...
* Restarting existing docker container for "minikube" ...
* Preparing Kubernetes v1.23.1 on Docker 20.10.12 ...
  - kubelet.housekeeping-interval=5m
  - Generating certificates and keys ...
  - Booting up control plane ...
  - Configuring RBAC rules ...
* Verifying Kubernetes components...
  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default


vagrant@minikube-test:~$ kubectl get pods
No resources found in default namespace.
vagrant@minikube-test:~$ 

This problem does not occur with version 1.24.0.

Attach the log file

log.txt

Operating System

Ubuntu

Driver

Docker


cdayjr commented Jan 27, 2022

Noticing this myself on a macOS host.

@jwandrews

Also experiencing this on a macOS host with the docker driver.


miriSch commented Jan 28, 2022

Namespaces are also gone after minikube stop and minikube start (noticed on Ubuntu).

@gektor0856

Confirming the problem on macOS 12.1 with minikube 1.25.1; reinstalling minikube does not solve it.

@jwandrews

Another observation: all resources in the default namespace are gone (ConfigMaps, Secrets, etc.).

@gektor0856

@jwandrews it seems minikube recreates the cluster's components on stop and start instead of restoring the previous state.


z-yan commented Jan 28, 2022

Namespaces are gone after minikube stop and minikube start.

docker --version:

Docker version 20.10.12, build e91ed57

minikube version:

minikube version: v1.25.1
commit: 3e64b11ed75e56e4898ea85f96b2e4af0301f43d

OS: macOS 11.6.2 on M1


jwandrews commented Jan 28, 2022

@gektor0856 yeah, I think that is the issue at hand. Pre-1.25.x, minikube could stop/start with no problem and would 'remember' the previous cluster state (deployments, ConfigMaps, Secrets, namespaces, etc.). So somehow, for whatever reason, the previous state is forgotten between a stop/start of minikube.

Interestingly, it does remember which addons you had running. In the case of something like registry-creds, though, because of the 'forgotten' Secrets, the pod won't start, since it's looking for secrets/config that aren't there.

@Guillaume-Mayer

Persistent volumes are also gone after minikube stop and minikube start.

spowelljr (member) commented Jan 31, 2022

This was introduced by my PR #13121 while trying to fix something else, but it ended up causing more headaches than it solved. I currently have a PR up that reverts the change: #13506.

Here are links to binaries you can use to test the change:
Intel Mac: https://storage.googleapis.com/minikube-builds/13506/minikube-darwin-amd64
M1 Mac: https://storage.googleapis.com/minikube-builds/13506/minikube-darwin-arm64
Linux: https://storage.googleapis.com/minikube-builds/13506/minikube-linux-amd64
Windows: https://storage.googleapis.com/minikube-builds/13506/minikube-windows-amd64.exe

Sorry for the inconvenience
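For anyone who wants to verify the fix, downloading and exercising one of the builds above can be sketched like this (Linux example; the filename is an arbitrary choice, and you should pick the URL matching your platform):

```shell
# Fetch the PR #13506 test build and make it executable
curl -Lo minikube-13506 https://storage.googleapis.com/minikube-builds/13506/minikube-linux-amd64
chmod +x minikube-13506

# Re-run the original scenario with the patched binary
./minikube-13506 start --driver=docker
kubectl create deployment nginx --image=nginx
./minikube-13506 stop
./minikube-13506 start

# With the fix, the nginx pod should still be listed
kubectl get pods
```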


Guillaume-Mayer commented Feb 1, 2022

@spowelljr I tried with your Linux binary and it worked for me, good work!

neptoon (author) commented Feb 1, 2022

@spowelljr Your fixed binary works for me as well. Thanks a lot.
