
minikube delete should not get stuck, if Docker Desktop is stuck or responding too slow #12846

Open
medyagh opened this issue Nov 2, 2021 · 12 comments
Labels
  • help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
  • kind/improvement: Categorizes issue or PR as related to improving upon a current feature.
  • lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
  • priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@medyagh (Member) commented Nov 2, 2021

It took so long that I Ctrl-C'd it, and I realized Docker Desktop itself was taking a SUPER long time to respond:

$ time minikube delete


^C

real    0m39.681s
user    0m0.375s
sys     0m0.165s
^C

I re-ran it with --alsologtostderr

$ time minikube delete --alsologtostderr
I1102 11:27:20.694115   66916 out.go:298] Setting OutFile to fd 1 ...
I1102 11:27:20.694247   66916 out.go:350] isatty.IsTerminal(1) = true
I1102 11:27:20.694250   66916 out.go:311] Setting ErrFile to fd 2...
I1102 11:27:20.694254   66916 out.go:350] isatty.IsTerminal(2) = true
I1102 11:27:20.694324   66916 root.go:313] Updating PATH: /Users/medya/.minikube/bin
I1102 11:27:20.694585   66916 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
W1102 11:27:43.406112   66916 cli_runner.go:162] docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}} returned with exit code 1
I1102 11:27:43.406162   66916 cli_runner.go:168] Completed: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}: (22.711776417s)
I1102 11:27:43.406745   66916 config.go:177] Loaded profile config "minikube": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
I1102 11:27:43.406906   66916 config.go:177] Loaded profile config "minikube": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
I1102 11:27:43.406915   66916 delete.go:229] DeleteProfiles
I1102 11:27:43.406920   66916 delete.go:257] Deleting minikube
I1102 11:27:43.406926   66916 delete.go:262] minikube configuration: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.27-1633027942-12081@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.2 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.22.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0}
I1102 11:27:43.407169   66916 host.go:66] Checking if "minikube" exists ...
I1102 11:27:43.407780   66916 ssh_runner.go:152] Run: systemctl --version
I1102 11:27:43.407844   66916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W1102 11:27:59.875588   66916 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I1102 11:27:59.875644   66916 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: (16.467888459s)
I1102 11:27:59.875821   66916 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[]}
I1102 11:27:59.875936   66916 ssh_runner.go:152] Run: sudo crictl ps -a --quiet
I1102 11:27:59.876013   66916 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
W1102 11:28:18.069656   66916 cli_runner.go:162] docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube returned with exit code 1
I1102 11:28:18.069698   66916 cli_runner.go:168] Completed: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: (18.193831084s)
W1102 11:28:18.069869   66916 delete.go:267] failed to unpause minikube : list paused: crictl list: NewSession: new client: new client: Error creating new ssh host from driver: Error getting ssh port for driver: get ssh host-port: get port 22 for "minikube": docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube: exit status 1
stdout:


stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I1102 11:28:18.087389   66916 out.go:177] 🔥  Deleting "minikube" in docker ...
🔥  Deleting "minikube" in docker ...
I1102 11:28:18.087562   66916 delete.go:48] deleting possible leftovers for minikube (driver=docker) ...
I1102 11:28:18.088752   66916 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}
W1102 11:28:43.421355   66916 cli_runner.go:162] docker ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}} returned with exit code 1
I1102 11:28:43.421382   66916 cli_runner.go:168] Completed: docker ps -a --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Names}}: (25.332889666s)
I1102 11:28:43.421420   66916 volumes.go:79] trying to delete all docker volumes with label name.minikube.sigs.k8s.io=minikube
I1102 11:28:43.421502   66916 cli_runner.go:115] Run: docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}
W1102 11:29:04.470971   66916 cli_runner.go:162] docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}} returned with exit code 1
I1102 11:29:04.471011   66916 cli_runner.go:168] Completed: docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: (21.049685667s)
W1102 11:29:04.471073   66916 delete.go:64] error deleting volumes (might be okay).
To see the list of volumes run: 'docker volume ls'
:[listing volumes by label "name.minikube.sigs.k8s.io=minikube": docker volume ls --filter label=name.minikube.sigs.k8s.io=minikube --format {{.Name}}: exit status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
]
I1102 11:29:04.471245   66916 cli_runner.go:115] Run: docker network ls --filter=label=created_by.minikube.sigs.k8s.io --format {{.Name}}
W1102 11:29:22.728639   66916 cli_runner.go:162] docker network ls --filter=label=created_by.minikube.sigs.k8s.io --format {{.Name}} returned with exit code 1
I1102 11:29:22.728674   66916 cli_runner.go:168] Completed: docker network ls --filter=label=created_by.minikube.sigs.k8s.io --format {{.Name}}: (18.257580708s)
W1102 11:29:22.728724   66916 delete.go:69] error deleting leftover networks (might be okay).
To see the list of networks: 'docker network ls'
:[list all volume: docker network ls --filter=label=created_by.minikube.sigs.k8s.io --format {{.Name}}: exit status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
]
I1102 11:29:22.728771   66916 volumes.go:101] trying to prune all docker volumes with label name.minikube.sigs.k8s.io=minikube
I1102 11:29:22.728878   66916 cli_runner.go:115] Run: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube
W1102 11:29:39.066160   66916 cli_runner.go:162] docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube returned with exit code 1
I1102 11:29:39.066204   66916 cli_runner.go:168] Completed: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: (16.33744525s)
W1102 11:29:39.066271   66916 delete.go:79] error pruning volume (might be okay):
[prune volume by label name.minikube.sigs.k8s.io=minikube: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=minikube: exit status 1
stdout:

stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
]
I1102 11:29:39.066566   66916 config.go:177] Loaded profile config "minikube": Driver=docker, ContainerRuntime=containerd, KubernetesVersion=v1.22.2
I1102 11:29:39.067167   66916 cli_runner.go:115] Run: docker container inspect minikube --format={{.State.Status}}
W1102 11:29:57.314205   66916 out.go:242] ❗  Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 18.246988833s
❗  Executing "docker container inspect minikube --format={{.State.Status}}" took an unusually long time: 18.246988833s
W1102 11:29:57.314301   66916 out.go:242] 💡  Restarting the docker service may improve performance.
💡  Restarting the docker service may improve performance.
W1102 11:29:57.314317   66916 cli_runner.go:162] docker container inspect minikube --format={{.State.Status}} returned with exit code 1
I1102 11:29:57.314343   66916 cli_runner.go:168] Completed: docker container inspect minikube --format={{.State.Status}}: (18.246988833s)
I1102 11:29:57.314411   66916 delete.go:82] Unable to get host status for minikube, assuming it has already been deleted: state: unknown state "minikube": docker container inspect minikube --format={{.State.Status}}: exit status 1
stdout:


stderr:
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
I1102 11:29:57.332722   66916 out.go:177] 🔥  Removing /Users/medya/.minikube/machines/minikube ...
🔥  Removing /Users/medya/.minikube/machines/minikube ...
I1102 11:29:57.344163   66916 lock.go:36] WriteFile acquiring /Users/medya/.kube/config: {Name:mk9fd218cbc52506c8b67871ae522c88260d21af Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I1102 11:29:57.363009   66916 out.go:177] 💀  Removed all traces of the "minikube" cluster.
💀  Removed all traces of the "minikube" cluster.

real    2m40.381s
user    0m1.359s
sys     0m0.601s
...

(and it was still going; I've pasted only about half of the output)

The right thing to do is for minikube to inform the user that Docker Desktop is taking a long time (if a call takes more than, say, 10 seconds) and suggest that they either keep waiting or restart Docker.
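For illustration, a minimal Go sketch of that behavior (the helper name, 10-second threshold, and messages are assumptions, not minikube's actual cli_runner code):

```go
// Illustrative sketch: run a docker CLI call and warn the user if it is
// still running after a threshold, instead of hanging silently.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// runWithSlowWarning executes cmd, printing a warning if it has not
// finished within warnAfter. The command itself is never killed.
func runWithSlowWarning(cmd *exec.Cmd, warnAfter time.Duration) error {
	done := make(chan error, 1)
	go func() { done <- cmd.Run() }()

	select {
	case err := <-done:
		return err
	case <-time.After(warnAfter):
		fmt.Printf("❗  %q is taking longer than %s; Docker Desktop may be slow or hung.\n", cmd.String(), warnAfter)
		fmt.Println("💡  You can keep waiting, or restart the docker service.")
		return <-done // keep waiting for the command to finish
	}
}

func main() {
	if err := runWithSlowWarning(exec.Command("docker", "ps", "-a"), 10*time.Second); err != nil {
		fmt.Println("command failed:", err)
	}
}
```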

@medyagh medyagh added kind/improvement Categorizes issue or PR as related to improving upon a current feature. help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. labels Nov 2, 2021
@medyagh medyagh changed the title minikube delete should inform user in case of Docker service taking SUPER long time minikube delete should not get stuck, if Docker Desktop is stuck or responding too slow Nov 2, 2021
@yayaha (Contributor) commented Nov 2, 2021

/assign

@medyagh (Member, Author) commented Nov 2, 2021

At the least, if we get the "Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" error, we should out.Warn the user that this might take a long time and tell them we are stuck on the Docker side, so they aren't left waiting with no feedback.
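A minimal sketch of that idea (the helper and warning messages are illustrative, not minikube's out.Warn API; only the daemon-down string is taken from the logs above):

```go
// Illustrative sketch: capture stderr from a docker CLI call and, when the
// well-known daemon-down message appears, warn the user immediately.
package main

import (
	"bytes"
	"fmt"
	"os/exec"
	"strings"
)

const daemonDownMsg = "Cannot connect to the Docker daemon"

func runDocker(args ...string) error {
	var stderr bytes.Buffer
	cmd := exec.Command("docker", args...)
	cmd.Stderr = &stderr

	err := cmd.Run()
	if err != nil && strings.Contains(stderr.String(), daemonDownMsg) {
		// Surface the root cause so the user is not left waiting blind.
		fmt.Println("❗  The Docker daemon is not responding; remaining steps may be slow or fail.")
		fmt.Println("💡  Consider restarting Docker Desktop, or keep waiting.")
	}
	return err
}

func main() {
	_ = runDocker("ps", "-a")
}
```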

@spowelljr spowelljr added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Nov 3, 2021
@spowelljr spowelljr added this to the 1.25.0-candidate milestone Nov 3, 2021
@yayaha (Contributor) commented Nov 9, 2021

I guess we can always check the existence of /var/run/docker.sock before running each docker command. But I'm not sure if this is sufficient since it's specific to docker.

OTOH, there is already functionality to warn the user about a slow command: https://github.com/kubernetes/minikube/blob/master/pkg/drivers/kic/oci/cli_runner.go#L148-L152. I could just enable it by default. The caveat is that enabling the warning also enables the timeout. Alternatively, since warnSlow doesn't seem to be used by anything yet, we could repurpose warnSlow as killSlow (off by default) and then enable the warning by default. WDYT?
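A minimal pre-flight sketch of the socket check mentioned above (illustrative only; it dials the socket rather than merely checking that the file exists, so a stale socket left by a dead daemon is also caught; Unix-only):

```go
// Illustrative pre-flight sketch: before issuing docker CLI calls, dial the
// Docker socket with a short timeout to verify the daemon is reachable.
package main

import (
	"fmt"
	"net"
	"time"
)

func dockerSocketReachable(path string, timeout time.Duration) bool {
	conn, err := net.DialTimeout("unix", path, timeout)
	if err != nil {
		return false
	}
	conn.Close()
	return true
}

func main() {
	if !dockerSocketReachable("/var/run/docker.sock", 2*time.Second) {
		fmt.Println("❗  /var/run/docker.sock is not reachable; is the Docker daemon running?")
	}
}
```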

@medyagh (Member, Author) commented Dec 13, 2021

We could do it in a variety of ways! I'd be up for discussion if you make a PR with a POC, and I believe it would be a cheap call to check whether the driver is Docker in the oci CLI runner, which already gets the ociBin passed to it.
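One such way, sketched with a hard per-call deadline along the lines of the killSlow idea above (the 30-second timeout and helper name are hypothetical, not minikube's actual implementation):

```go
// Illustrative sketch: give each docker CLI call a hard deadline via context
// so `minikube delete` cannot block indefinitely on a hung Docker Desktop.
package main

import (
	"context"
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func runDockerWithDeadline(timeout time.Duration, args ...string) ([]byte, error) {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()

	out, err := exec.CommandContext(ctx, "docker", args...).CombinedOutput()
	if errors.Is(ctx.Err(), context.DeadlineExceeded) {
		return out, fmt.Errorf("docker %v timed out after %s; Docker Desktop may be hung", args, timeout)
	}
	return out, err
}

func main() {
	if out, err := runDockerWithDeadline(30*time.Second, "ps", "-a"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Printf("%s", out)
	}
}
```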

@medyagh (Member, Author) commented Jan 31, 2022

@yayaha are you still working on this?

@yayaha (Contributor) commented Feb 1, 2022

Hi @medyagh, sorry for the delay. No, I'm not working on this anymore.

@yayaha (Contributor) commented Feb 1, 2022

/unassign

@spowelljr spowelljr self-assigned this Feb 2, 2022
@spowelljr spowelljr removed their assignment Feb 23, 2022
@spowelljr (Member) commented

Hi @chungjin, this is an issue you can work on if you're interested.

@chungjin (Contributor) commented

/assign

@chungjin (Contributor) commented

Hi @medyagh, when I run it locally I never face this issue. Do you know how often we may hit this situation? Maybe it depends on the Docker version?

If it is rare, maybe we don't need to deal with it?

If we must handle it, maybe we can save the stderr to a tmp file and analyze the log, but I'm not sure about the overhead of that.

@spowelljr spowelljr added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels May 4, 2022
@spowelljr spowelljr modified the milestones: 1.26.0, 1.27.0-candidate Jun 24, 2022
@k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 22, 2022
@k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 22, 2022
@k8s-ci-robot k8s-ci-robot added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Oct 22, 2022
@spowelljr spowelljr modified the milestones: 1.27.0-previous, 1.29.0 Nov 28, 2022
@spowelljr spowelljr modified the milestones: 1.31.0, 1.32.0 Jul 19, 2023
@spowelljr spowelljr added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jun 5, 2024