Conflict between docker config and docker service with ssh driver #16231

Open
aishwarya-senthilkumar opened this issue Apr 4, 2023 · 11 comments
Labels
co/generic-driver
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/stale: Denotes an issue or PR has remained open with no activity and has become stale.
priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

@aishwarya-senthilkumar

Unable to start, stop or delete minikube

minikube start --driver=ssh --ssh-ip-address=10.186.72.184 --native-ssh=false --service-cluster-ip-range='192.168.115.0/24' --extra-config=kubeadm.pod-network-cidr='192.168.114.0/24' --extra-config=kubeadm.pod-network-cidr=192.168.120.0/22 --cni=bridge --extra-config=kubelet.resolv-conf=/run/systemd/resolve/resolv.conf
😄  minikube v1.29.0 on Darwin 12.3.1
    ▪ KUBECONFIG=/Users/sai/.kube/config
✨  Using the ssh driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🔥  Deleting "minikube" in ssh ...
🤦  StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
😿  Failed to start ssh bare metal machine. Running "minikube delete" may fix it: creating host: create host timed out in 360.000000 seconds

❌  Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds
💡  Suggestion: Try 'minikube delete', and disable any conflicting VPN or firewall software
🍿  Related issue: https://github.com/kubernetes/minikube/issues/7072

sai@saiEMD6M ~ % minikube stop
✋  Stopping node "minikube"  ...
✋  Stopping node "minikube"  ...
✋  Stopping node "minikube"  ...
✋  Stopping node "minikube"  ...
✋  Stopping node "minikube"  ...
✋  Stopping node "minikube"  ...

❌  Exiting due to GUEST_STOP_TIMEOUT: Temporary Error: stop: containers: docker: docker ps -a --filter=name=k8s_ --format=: Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?


╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                                                       │
│    😿  If the above advice does not help, please let us know:                                                         │
│    👉  https://github.com/kubernetes/minikube/issues/new/choose                                                       │
│                                                                                                                       │
│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
│    Please also attach the following file to the GitHub issue:                                                         │
│    - /var/folders/n1/7_bs1z291n9dx8vy62dg873h0000gq/T/minikube_stop_bc7f49f349e822641d09c5fb48f410196b822884_0.log    │
│                                                                                                                       │
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
sai@saiEMD6M ~ % minikube delete
🔄  Uninstalling Kubernetes v1.25.3 using kubeadm ...
E0404 10:17:19.593080   13318 delete.go:516] unpause failed: list paused: docker: docker ps --filter status=paused --filter=name=k8s_ --format={{.ID}}: Process exited with status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
💣  error deleting profile "minikube": failed to delete cluster: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.25.3:$PATH" kubeadm reset --cri-socket /var/run/cri-dockerd.sock --force": Process exited with status 127
stdout:

stderr:
env: ‘kubeadm’: No such file or directory

Details:

OS: macOS Monterey
Version 12.3.1 (21E258)

sai@saiEMD6M ~ % minikube version
minikube version: v1.28.0
commit: 986b1ebd987211ed16f8cc10aed7d2c42fc8392f
sai@saiEMD6M ~ % docker version
Client:
 Cloud integration: v1.0.29
 Version:           20.10.22
 API version:       1.41
 Go version:        go1.18.9
 Git commit:        3a2c30b
 Built:             Thu Dec 15 22:28:41 2022
 OS/Arch:           darwin/amd64
 Context:           default
 Experimental:      true

Server: Docker Desktop 4.16.2 (95914)
 Engine:
  Version:          20.10.22
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.9
  Git commit:       42c8b31
  Built:            Thu Dec 15 22:26:14 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.14
  GitCommit:        9ba4b250366a5ddde94bb7c9d1def331423aa323
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0

I managed to log in to my VM; the docker service is down:

root@minikube:~# systemctl status docker
× docker.service - Docker Application Container Engine
     Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
    Drop-In: /etc/systemd/system/docker.service.d
             └─10-machine.conf
     Active: failed (Result: exit-code) since Mon 2023-04-03 21:00:01 PDT; 52min ago
TriggeredBy: × docker.socket
       Docs: https://docs.docker.com
    Process: 158216 ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --storage-driver overlay2 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=ssh --insecure-r>
   Main PID: 158216 (code=exited, status=1/FAILURE)
        CPU: 89ms

Apr 03 21:00:01 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Apr 03 21:00:01 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 03 21:00:01 minikube systemd[1]: docker.service: Start request repeated too quickly.
Apr 03 21:00:01 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 03 21:00:01 minikube systemd[1]: Failed to start Docker Application Container Engine.

Is there something wrong with my minikube setup? or is it something simple I’m missing?

@aishwarya-senthilkumar
Author

root@minikube:~# sudo journalctl -eu docker
Apr 04 00:06:52 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 04 00:06:52 minikube dockerd[168313]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: overlay2)
Apr 04 00:06:52 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 04 00:06:52 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 04 00:06:52 minikube systemd[1]: Failed to start Docker Application Container Engine.
Apr 04 00:06:52 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Apr 04 00:06:52 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 04 00:06:52 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 04 00:06:52 minikube dockerd[168324]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: overlay2)
Apr 04 00:06:52 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 04 00:06:52 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 04 00:06:52 minikube systemd[1]: Failed to start Docker Application Container Engine.
Apr 04 00:06:52 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Apr 04 00:06:52 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 04 00:06:52 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 04 00:06:52 minikube dockerd[168335]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: overlay2)
Apr 04 00:06:52 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 04 00:06:52 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 04 00:06:52 minikube systemd[1]: Failed to start Docker Application Container Engine.
Apr 04 00:06:52 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Apr 04 00:06:52 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 04 00:06:52 minikube systemd[1]: docker.service: Start request repeated too quickly.
Apr 04 00:06:52 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 04 00:06:52 minikube systemd[1]: Failed to start Docker Application Container Engine.
Apr 04 00:08:51 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 04 00:08:51 minikube dockerd[168374]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: overlay2)
Apr 04 00:08:51 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 04 00:08:51 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 04 00:08:51 minikube systemd[1]: Failed to start Docker Application Container Engine.
Apr 04 00:08:51 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 1.
Apr 04 00:08:51 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 04 00:08:51 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 04 00:08:51 minikube dockerd[168384]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: overlay2)
Apr 04 00:08:51 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 04 00:08:51 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 04 00:08:51 minikube systemd[1]: Failed to start Docker Application Container Engine.
Apr 04 00:08:51 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
Apr 04 00:08:51 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 04 00:08:51 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 04 00:08:51 minikube dockerd[168393]: unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: overlay2)
Apr 04 00:08:51 minikube systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
Apr 04 00:08:51 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 04 00:08:51 minikube systemd[1]: Failed to start Docker Application Container Engine.
Apr 04 00:08:52 minikube systemd[1]: docker.service: Scheduled restart job, restart counter is at 3.
Apr 04 00:08:52 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 04 00:08:52 minikube systemd[1]: docker.service: Start request repeated too quickly.
Apr 04 00:08:52 minikube systemd[1]: docker.service: Failed with result 'exit-code'.
Apr 04 00:08:52 minikube systemd[1]: Failed to start Docker Application Container Engine.
root@minikube:~#

@aishwarya-senthilkumar
Author

aishwarya-senthilkumar commented Apr 4, 2023

Issue:

unable to configure the Docker daemon with file /etc/docker/daemon.json: the following directives are specified both as a flag and in the configuration file: storage-driver: (from flag: overlay2, from file: overlay2)

There is a directive in /etc/docker/daemon.json that duplicates the one passed via ExecStart in /etc/systemd/system/docker.service.d/10-machine.conf (storage-driver).
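
For illustration, the duplication can be confirmed inside the VM like this (the example file contents in the comments are assumptions, not copied from this machine):

# storage-driver appears in the daemon config file, e.g. { "storage-driver": "overlay2" }
cat /etc/docker/daemon.json
# ...and again as a flag on the dockerd command line in the systemd drop-in
grep storage-driver /etc/systemd/system/docker.service.d/10-machine.conf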

Solution:

Inside the VM:
1. Remove --storage-driver overlay2 from /etc/systemd/system/docker.service.d/10-machine.conf
2. Run sudo systemctl daemon-reload
3. Run sudo systemctl restart docker

Then try running the same minikube start --driver=ssh ... on your machine (commands sketched below).
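
A minimal sketch of the workaround as commands run inside the VM (the sed pattern assumes the flag appears exactly as in the ExecStart line shown above; otherwise edit the drop-in by hand):

# remove the duplicated flag from the systemd drop-in, reload unit files, restart docker
sudo sed -i 's/ --storage-driver overlay2//' /etc/systemd/system/docker.service.d/10-machine.conf
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo systemctl status docker    # should now report active (running)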

@aishwarya-senthilkumar aishwarya-senthilkumar changed the title Failed to start host: creating host: create host timed out in 360.000000 seconds Unable to start, stop or delete minikube Apr 4, 2023
@aishwarya-senthilkumar aishwarya-senthilkumar changed the title Unable to start, stop or delete minikube Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds Unable to start, stop or delete minikube Apr 4, 2023
@aishwarya-senthilkumar aishwarya-senthilkumar changed the title Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds Unable to start, stop or delete minikube Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds Exiting due to GUEST_STOP_TIMEOUT: Temporary Error: stop: containers: docker: docker ps -a --filter=name=k8s_ --format=: Process exited with status 1 Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? Unable to start, stop or delete minikube Apr 4, 2023
@afbjorklund
Collaborator

The "ssh" driver is for advanced users, it seems there might be some conflict between the minikube docker config and what was installed on the VM before. I would suggest using another driver, unless your workaround is enough for you.

@afbjorklund afbjorklund added the co/generic-driver, kind/bug, and priority/awaiting-more-evidence labels Apr 4, 2023
@afbjorklund afbjorklund changed the title Exiting due to DRV_CREATE_TIMEOUT: Failed to start host: creating host: create host timed out in 360.000000 seconds Exiting due to GUEST_STOP_TIMEOUT: Temporary Error: stop: containers: docker: docker ps -a --filter=name=k8s_ --format=: Process exited with status 1 Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running? Unable to start, stop or delete minikube Conflict between docker config and docker service with ssh driver Apr 4, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Jul 3, 2023
@vaibhav2107
Member

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale label Aug 3, 2023
@oxeye-mher

I have the same issue with minikube v1.32.0. Unfortunately, the workaround does not work, because minikube start rewrites the /etc/systemd/system/docker.service.d/10-machine.conf file and re-adds --storage-driver overlay2 after I remove it.

Any idea how to make it work?
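
Not a confirmed fix, but since the drop-in keeps getting regenerated, one untested idea is to remove the duplicated key from the other side of the conflict, /etc/docker/daemon.json, inside the VM (this sketch assumes jq is installed there; editing the file by hand works just as well):

# untested idea: drop storage-driver from daemon.json so only the flag-side setting remains
sudo jq 'del(."storage-driver")' /etc/docker/daemon.json > /tmp/daemon.json
sudo mv /tmp/daemon.json /etc/docker/daemon.json
sudo systemctl restart docker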

@chronicc

This issue is a real problem when using the ssh driver. It does not arise from a wrong setup of the target machine but is reintroduced by minikube every time minikube start is run.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Apr 26, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label May 26, 2024
@chronicc

chronicc commented Jun 5, 2024

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten label Jun 5, 2024
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale label Sep 3, 2024