
inotify not working with xhyve and minikube-iso combination #821

Closed
numbsafari opened this issue Nov 15, 2016 · 6 comments
Labels
co/xhyve kind/bug Categorizes issue or PR as related to a bug. lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments


numbsafari commented Nov 15, 2016

Is this a BUG REPORT or FEATURE REQUEST? (choose one):
BUG REPORT

Minikube version (use minikube version):

21:33 $ minikube version
minikube version: v0.12.2

Environment:

  • OS: Darwin /redacted/ 16.1.0 Darwin Kernel Version 16.1.0: Thu Oct 13 21:26:57 PDT 2016; root:xnu-3789.21.3~60/RELEASE_X86_64 x86_64
  • VM Driver: "DriverName": "xhyve"
  • Docker version: Docker version 1.12.3, build 6b644ec
  • Install tools: homebrew
  • Others:

What happened:
I launched a new minikube instance using the following command:

21:36 $ minikube start --vm-driver=xhyve --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.

The first time I did this, the /Users volume was not even mounted into the minikube VM. I ran minikube stop and minikube delete, then repeated the above command to re-create the VM, and got this error instead:

21:34 $ minikube start --vm-driver=xhyve --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso
Starting local Kubernetes cluster...
E1114 21:35:20.188417   34831 start.go:92] Error starting host: Error creating host: Error creating machine: Error running provisioning: Something went wrong running an SSH command!
command : printf '%s' '-----BEGIN CERTIFICATE-----
/redacted/have it in a note, so let me know if you need it/
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err     : exit status 1
output  : -----BEGIN CERTIFICATE-----
/redacted/
-----END CERTIFICATE-----
tee: /etc/docker/ca.pem: No such file or directory

. Retrying.
E1114 21:35:20.190463   34831 start.go:98] Error starting host:  Error creating host: Error creating machine: Error running provisioning: Something went wrong running an SSH command!
command : printf '%s' '-----BEGIN CERTIFICATE-----
/redacted/
-----END CERTIFICATE-----
' | sudo tee /etc/docker/ca.pem
err     : exit status 1
output  : -----BEGIN CERTIFICATE-----
/redacted/
-----END CERTIFICATE-----
tee: /etc/docker/ca.pem: No such file or directory

Third time was the charm:

21:35 $ minikube delete
Deleting local Kubernetes cluster...
Machine deleted.
21:36 $ minikube start --vm-driver=xhyve --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso
Starting local Kubernetes cluster...
Kubectl is now configured to use the cluster.
21:37 $ cat ~/.minikube/machines/minikube/config.json | grep DriverName
    "DriverName": "xhyve",
21:37 $ minikube ssh
$ mount | grep Users
host on /Users type 9p (rw,relatime,sync,dirsync,version=9p2000,trans=virtio,uname=/redacted/,dfltuid=1000,dfltgid=50,access=any)

I then deployed my application using Helm; the goal was the following deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: portal
  labels:
    app: portal
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: portal
    spec:
      containers:
      - name: portal
        image: /redacted/
        command:
        - /usr/local/bin/http-server
        - "-p"
        - "8080"
        - "."
        ports:
        - containerPort: 8080
        volumeMounts:
          - mountPath: /usr/src/app
            name: source-volume
      volumes:
      - name: source-volume
        hostPath:
          path: /Users/redacted/src/app

That is, I want it to use a hostPath volume mounted into my container from the VM (Helm templating ensures I get the user's home directory correct).
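
For reference, the Helm templating I mean is roughly like this; the value name and path below are placeholders, not my actual chart:

# values.yaml (placeholder)
sourcePath: /Users/me/src/app

# templates/deployment.yaml (excerpt)
      volumes:
      - name: source-volume
        hostPath:
          path: {{ .Values.sourcePath }}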

On my image I have inotify-tools installed. In one console I run the following:

root@portal-2725875819-98zs4:/usr/src/app# inotifywait -m .
Setting up watches.
Watches established.

In another console, I run:

22:03 $ kubectl exec -it portal-2725875819-98zs4 /bin/bash
root@portal-2725875819-98zs4:/usr/src/app# touch index.html

In the original console, I see:

./ OPEN index.html
./ ATTRIB index.html
./ CLOSE_WRITE,CLOSE index.html

Back in that second console, I exit the pod and then, from the macOS host, touch the same file in the native directory that is mounted through the VM into the container:

root@portal-2725875819-98zs4:/usr/src/app# exit
22:06 $ touch index.html

Unfortunately, I see no further output in the inotify console.

What you expected to happen:

I expected to see inotify events propagated through to the inner container.

How to reproduce it (as minimally and precisely as possible):

I'll see if I can come up with something more concise.
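
In the meantime, a rough sketch of what I have in mind: skip Helm and the pod entirely and watch the 9p mount from inside the VM (paths are placeholders, and I haven't verified that inotifywait ships in the ISO, so it may need to run from a container with inotify-tools installed):

# On the macOS host
minikube start --vm-driver=xhyve --iso-url=http://storage.googleapis.com/minikube/iso/buildroot/minikube-v0.0.6.iso
minikube ssh

# Inside the VM, watch a directory on the 9p /Users mount
inotifywait -m /Users/me/src/app &

# Back on the macOS host, touch a file in that same directory
touch ~/src/app/index.html

# Expected: an inotify event printed inside the VM; observed: nothing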

Anything else we need to know:

I'm not entirely familiar with the layout of the minikube ISO, but I did find this file:

https://github.com/kubernetes/minikube/blob/master/deploy/iso/minikube-iso/board/coreos/minikube/linux-4.7_defconfig

and I also found this info:

https://cateee.net/lkddb/web-lkddb/INOTIFY.html
and
https://cateee.net/lkddb/web-lkddb/INOTIFY_USER.html

I'm wondering if it could be as easy as setting those values to =y in that file and rebuilding the ISO. Lemme know if you need a guinea pig.
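
Concretely, if it really is just a kernel config issue, I'd guess the change is something like the lines below in that defconfig. CONFIG_INOTIFY itself was removed from kernels after 2.6.28, so for the 4.7 kernel only the fsnotify / inotify-user options should matter; this is completely untested on my end:

# deploy/iso/minikube-iso/board/coreos/minikube/linux-4.7_defconfig
CONFIG_FSNOTIFY=y
CONFIG_INOTIFY_USER=y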

Also, inotify is not working with the xhyve + boot2docker ISO combination.

Thanks @r2d4 for help in the minikube slack.

@r2d4 r2d4 added co/xhyve kind/bug Categorizes issue or PR as related to a bug. iso/minikube-iso labels Nov 15, 2016
@numbsafari
Author

Hrm. Probably an issue with how /Users is being mounted into the VM. If I minikube ssh into the VM, cd to /Users/me/src/..., and run touch index.html, I see inotify events propagate into the pod.
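
For the record, the check that does work looks roughly like this (the path is a placeholder):

# On the macOS host
minikube ssh

# Inside the VM, on the 9p /Users mount
cd /Users/me/src/app
touch index.html
# -> the pod's inotifywait prints OPEN / ATTRIB / CLOSE_WRITE for index.html

# Touching the same file directly from the macOS host still produces nothing in the pod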

@imathews

Was there any update / resolution on this? I'm running into the same problem on minikube v0.15.0.

@AlexGilleran

Me too, on minikube 0.14 with the virtualbox driver.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with an /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 20, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close
