
docker run with bind mount: stat /bin/bash: no such file or directory: unknown #3378

Closed
oleksiys opened this issue Nov 29, 2018 · 8 comments
Labels: co/docker-env, help wanted, lifecycle/stale, priority/backlog, r/2019q2

Comments

@oleksiys

Environment:

Minikube version: v0.30.0

  • OS: MacOS 10.14.1 (18B75)
  • VM Driver: virtualbox
  • ISO version: minikube-v0.30.0.iso

What happened:
I had a problem deploying a CSI driver pod on minikube and was able to trace it down to a simple use case. When I try to run this in minikube's Docker:

$ docker run --name alpine -d -v /var/lib/kubelet/pods:/var/lib/kubelet/pods:rshared alpine:latest /bin/bash

The error Docker returns is:

docker: Error response from daemon: OCI runtime create failed: container_linux.go:348: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown.

After that, Docker is no longer responsive via unix:///var/run/docker.sock:

docker ps
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

Even though I can still reach it via HTTP (the TCP endpoint):

docker ps -a                                                                                                                                                                                            
CONTAINER ID        IMAGE                                     COMMAND                  CREATED             STATUS              PORTS               NAMES
fc98acbd6fb4        alpine:latest                             "/bin/bash"              23 minutes ago      Created                                 alpine

As you can see, the container I was trying to deploy is still there, in the "Created" state.
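
For reference, a sketch of how the daemon can still be reached once the unix socket stops responding, assuming the default minikube docker-env TCP endpoint on port 2376 (the exact IP comes from minikube ip):

$ eval $(minikube docker-env)   # sets DOCKER_HOST=tcp://<minikube-ip>:2376 plus the TLS cert settings
$ docker ps -a                  # talks to the daemon over TCP/TLS instead of the unix socket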

Relevant docker logs are here:

Nov 29 18:40:08 minikube dockerd[2440]: time="2018-11-29T18:40:08Z" level=info msg="shim reaped" id=fc98acbd6fb41a90ffb0f3fbff5e41ecefc1afef9c67549bb7f7cfe021d1404e module="containerd/tasks"
Nov 29 18:40:08 minikube dockerd[2440]: time="2018-11-29T18:40:08.106770534Z" level=error msg="stream copy error: reading from a closed fifo"
Nov 29 18:40:08 minikube dockerd[2440]: time="2018-11-29T18:40:08.106994334Z" level=error msg="stream copy error: reading from a closed fifo"
Nov 29 18:40:08 minikube dockerd[2440]: time="2018-11-29T18:40:08.155131140Z" level=error msg="fc98acbd6fb41a90ffb0f3fbff5e41ecefc1afef9c67549bb7f7cfe021d1404e cleanup: failed to delete container from containerd: no such container"

What you expected to happen:

I'd expect the volume to be mounted without error, or, if there is an error, I'd expect it not to corrupt the VM environment.

How to reproduce it (as minimally and precisely as possible):

docker run --name alpine -d -v /var/lib/kubelet/pods:/var/lib/kubelet/pods:rshared alpine:latest /bin/bash
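
Added context, not from the original report: the :rshared flag only takes effect if the mount backing the source path on the host uses shared propagation, so that is worth checking first, though it is not necessarily the cause of the daemon hang. A sketch of the check, assuming shell access via minikube ssh:

$ findmnt -T /var/lib/kubelet/pods -o TARGET,PROPAGATION   # the backing mount must show 'shared' for :rshared to work
$ sudo mount --make-rshared /                              # one way to enable shared propagation if it is missing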
@mattsmithdatera

mattsmithdatera commented Dec 5, 2018

I've hit a similar issue using the minikube kvm2 plugin on Ubuntu Xenial and my own CSI plugin. Both the Controller and the Node pieces hit the same issue.

Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  107s  default-scheduler  Successfully assigned kube-system/csi-node-jrhrq to minikube
  Normal   Pulling    107s  kubelet, minikube  pulling image "quay.io/k8scsi/driver-registrar:v1.0.0"
  Normal   Pulled     96s   kubelet, minikube  Successfully pulled image "quay.io/k8scsi/driver-registrar:v1.0.0"
  Normal   Created    96s   kubelet, minikube  Created container
  Normal   Started    96s   kubelet, minikube  Started container
  Normal   Pulling    96s   kubelet, minikube  pulling image "dateraiodev/iscsi:latest"
  Normal   Pulled     75s   kubelet, minikube  Successfully pulled image "dateraiodev/iscsi:latest"
  Normal   Created    75s   kubelet, minikube  Created container
  Normal   Started    75s   kubelet, minikube  Started container
  Normal   Pulling    75s   kubelet, minikube  pulling image "dateraiodev/dat-csi-plugin:latest"
  Normal   Pulled     67s   kubelet, minikube  Successfully pulled image "dateraiodev/dat-csi-plugin:latest"
  Normal   Created    67s   kubelet, minikube  Created container
  Warning  Failed     67s   kubelet, minikube  Error: failed to start container "dat-csi-plugin-node": Error response from daemon: OCI runtime create failed: open /var/run/docker/runtime-runc/moby/dat-csi-plugin-node/state.json: no such file or directory: unknown
Events:
  Type     Reason     Age   From               Message
  ----     ------     ----  ----               -------
  Normal   Scheduled  67s   default-scheduler  Successfully assigned kube-system/csi-provisioner-0 to minikube
  Normal   Pulling    66s   kubelet, minikube  pulling image "quay.io/k8scsi/csi-provisioner:v1.0.0"
  Normal   Created    61s   kubelet, minikube  Created container
  Normal   Started    61s   kubelet, minikube  Started container
  Normal   Pulling    61s   kubelet, minikube  pulling image "quay.io/k8scsi/csi-attacher:v1.0.0"
  Normal   Pulled     61s   kubelet, minikube  Successfully pulled image "quay.io/k8scsi/csi-provisioner:v1.0.0"
  Normal   Pulled     52s   kubelet, minikube  Successfully pulled image "quay.io/k8scsi/csi-attacher:v1.0.0"
  Normal   Created    52s   kubelet, minikube  Created container
  Normal   Pulling    51s   kubelet, minikube  pulling image "quay.io/k8scsi/csi-snapshotter:v1.0.0"
  Normal   Started    51s   kubelet, minikube  Started container
  Normal   Pulled     31s   kubelet, minikube  Successfully pulled image "quay.io/k8scsi/csi-snapshotter:v1.0.0"
  Normal   Created    31s   kubelet, minikube  Created container
  Normal   Started    31s   kubelet, minikube  Started container
  Normal   Pulling    31s   kubelet, minikube  pulling image "dateraiodev/dat-csi-plugin:latest"
  Normal   Pulled     25s   kubelet, minikube  Successfully pulled image "dateraiodev/dat-csi-plugin:latest"
  Normal   Created    25s   kubelet, minikube  Created container
  Warning  Failed     25s   kubelet, minikube  Error: failed to start container "dat-csi-plugin-controller": Error response from daemon: OCI runtime create failed: unable to retrieve OCI runtime error (open /run/docker/containerd/daemon/io.containerd.runtime.v1.linux/moby/dat-csi-plugin-controller/log.json: no such file or directory): docker-runc did not terminate sucessfully: unknown

@tstromberg added the priority/awaiting-more-evidence label Jan 23, 2019
@tstromberg changed the title from "Volume mount causes minikube vm to become corrupted" to "docker run with bind mount: stat /bin/bash: no such file or directory: unknown" Jan 23, 2019
@tstromberg added the co/docker-env and help wanted labels and removed the co/virtualbox and os/macos labels Jan 23, 2019
@tstromberg
Contributor

FWIW, I don't yet know enough about Docker and bind mounts to say if this should even work. This command behaves the same way for me outside of minikube.

@oleksiys
Author

AFAIR it worked well when I ran minikube outside of a VM, so I guess this issue might be related to the OS running inside the VM.

@doprdele

This also appears to happen to me.

@tstromberg
Contributor

Correct, this only happens if you run docker from inside of minikube.

This issue still exists in minikube v1.1.

@tstromberg added the priority/backlog and r/2019q2 labels and removed the priority/awaiting-more-evidence label May 22, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Aug 20, 2019
@tstromberg
Contributor

Fixed by minikube v1.4, probably with rootfs -> tmpfs migration.

$ docker run --name alpine -d -v /var/lib/kubelet/pods:/var/lib/kubelet/pods:rshared alpine:latest /bin/bash
...
docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/bin/bash\": stat /bin/bash: no such file or directory": unknown.

Starting with /bin/sh works perfectly.
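
A sketch of the working variant, for reference: alpine:latest ships BusyBox /bin/sh but no /bin/bash, so the exec error above is expected regardless of the bind mount. Without a TTY the shell exits immediately, but the container starts cleanly:

$ docker run --name alpine -d -v /var/lib/kubelet/pods:/var/lib/kubelet/pods:rshared alpine:latest /bin/sh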

@AKovtunov

> docker run --name alpine -d -v /var/lib/kubelet/pods:/var/lib/kubelet/pods:rshared alpine:latest /bin/bash
Unable to find image 'alpine:latest' locally
latest: Pulling from library/alpine
ba3557a56b15: Already exists 
Digest: sha256:a75afd8b57e7f34e4dad8d65e2c7ba2e1975c795ce1ee22fa34f8cf46f96a3be
Status: Downloaded newer image for alpine:latest
75a85b9774ad7016883c3b5e770f9cff36741b037a3af0f777d4dde16915e8ac
docker: Error response from daemon: OCI runtime create failed: container_linux.go:370: starting container process caused: exec: "/bin/bash": stat /bin/bash: no such file or directory: unknown.
> minikube version
minikube version: v1.18.1
commit: 09ee84d530de4a92f00f1c5dbc34cead092b95bc
> docker -v
Docker version 20.10.3, build 48d30b5

Any help?
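
Not from the thread, but the error in this last report is the expected one: alpine:latest does not include bash at all, so exec of /bin/bash fails on any Docker host unless bash is installed first. A sketch, using the image's own shell to install bash via apk:

$ docker run --rm -it alpine:latest /bin/sh -c 'apk add --no-cache bash && exec bash'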
