Support DOCKER_HOST user override from within minikube pod container #8219

Closed

O1ahmad opened this issue May 20, 2020 · 12 comments
Labels
co/docker-driver: Issues related to kubernetes in container
kind/feature: Categorizes issue or PR as related to a new feature.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

O1ahmad commented May 20, 2020

Overview:

My team and I are currently looking for a way to launch minikube instances in Kubernetes pods, using the DinD-as-a-sidecar method, for CI purposes. We're having issues getting minikube running in one pod container to communicate with a Docker daemon running in another pod container (and listening at 0.0.0.0:2375).

It seems as though the DOCKER_HOST envvar is not being respected by minikube, nor is the --docker-opt[=-H tcp://localhost:2375] command-line flag.
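
For reference, the failing setup inside the agent container amounts to roughly the following (a sketch; whether DOCKER_HOST needs the tcp:// scheme prefix is an assumption here):

# inside the agent container; the dind sidecar listens on localhost:2375
export DOCKER_HOST=tcp://localhost:2375
minikube start --driver=docker --docker-opt="-H tcp://localhost:2375"
# minikube still probes unix:///var/run/docker.sock and fails (see output below)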

System Information:

# Linux: Fedora 32, CentOS 7

$ minikube version
minikube version: v1.9.2
commit: 93af9c1e43cab9618e301bc9fa720c63d5efa393

Related Issue(s): #7420

re: the unix:///var/run/docker.sock error despite the supposed override, see #7420 (comment)

Steps to reproduce the issue:

  1. minikube start --driver=docker && kubectl config use-context minikube
  2. helm install example buildkite/agent --set agent_token --set dind.enabled=true --set extraEnv='[{"name": "DOCKER_HOST", "value": "localhost:2375"}]'
  3. kubectl exec -it example-buildkite-agent-... --namespace example --container agent -- {command-to-install-minikube}
  4. kubectl exec -it example-buildkite-agent-... --namespace example --container agent -- minikube start --driver=docker

Full output of failed command:

Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
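
A quick way to confirm that the sidecar daemon itself is reachable from the agent container (a sketch, reusing the names from the steps above) is to address it explicitly:

kubectl exec -it example-buildkite-agent-... --namespace example --container agent -- \
  docker -H tcp://localhost:2375 version
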
afbjorklund (Collaborator) commented May 20, 2020

Hopefully this use case will also be covered by #8164 by not resetting the DOCKER_HOST.

But it could have a problem with the KIC driver, trying to run docker-in-docker-in-docker...

O1ahmad (Author) commented May 20, 2020

Nice, agreed on #8164 as a fix. There are no doubt going to be interesting interplays between the virtualization layers (we're mostly experimenting with the capabilities at the moment).

@priyawadhwa priyawadhwa added kind/feature Categorizes issue or PR as related to a new feature. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels May 20, 2020
afbjorklund (Collaborator) commented May 21, 2020

> there's no doubt bound to be interesting interplays between virtualization layers (mostly experimenting with the capabilities at the moment).

The more straightforward approach would be to deploy multiple Kubernetes clusters.

Perhaps even a namespace would suffice, depending on how much isolation is needed...

i.e. give it the keys to the k8s cluster, rather than the keys to the container runtime?

This is similar to where people try to run nested VMs, instead of just deploying two. [#4730]
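
A minimal sketch of that namespace-per-job idea, assuming the CI job only needs an isolated slice of the existing cluster (the namespace and manifest names are hypothetical):

# give each CI job its own namespace instead of its own nested cluster
kubectl create namespace ci-job-1234                  # hypothetical per-job namespace
kubectl -n ci-job-1234 apply -f test-workload.yaml    # hypothetical manifest under test
# ... run the tests ...
kubectl delete namespace ci-job-1234                  # tears down everything the job created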


See also: https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/

This gets even worse when the original cluster is running in virtual machines:

Physical server -> Virtual Server(s) -> System container -> Pod container(s)

Throw in a Java virtual machine at the end, and you have a Matryoshka doll?

O1ahmad (Author) commented May 26, 2020

Haha, for sure @afbjorklund: more straightforward, though more expensive and perhaps less efficient (especially for very short-lived clusters/test environments).

Also, it seems like there's a bit of confusion around dind. Are you familiar with this article: https://applatix.com/case-docker-docker-kubernetes-part-2/? (It's the inspiration for the experimentation we're trying now.)

medyagh (Member) commented Jun 10, 2020

@0x0i there is a PR for this.
Here are links to binaries built from that PR, which I think might fix this issue; do you mind trying them out?

http://storage.googleapis.com/minikube-builds/8164/minikube-linux-amd64
http://storage.googleapis.com/minikube-builds/8164/minikube-darwin-amd64
http://storage.googleapis.com/minikube-builds/8164/minikube-windows-amd64.exe

Mind that this PR is still waiting on fixes to its integration tests, but you could give it a try now to see if it helps you.
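
For example, fetching and trying the Linux build from the links above (a sketch):

curl -LO http://storage.googleapis.com/minikube-builds/8164/minikube-linux-amd64
chmod +x minikube-linux-amd64
./minikube-linux-amd64 start --driver=docker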

O1ahmad (Author) commented Jun 11, 2020

Hey, thanks. I have tried it, though I'm hitting a conntrack dependency issue with it.

bash-4.4# minikube-linux-amd64 start --driver=none
* minikube v1.11.0 on Alpine 3.8.5
* Using the none driver based on user configuration
X Sorry, Kubernetes 1.18.3 requires conntrack to be installed in root's path

Not sure if --driver should instead be set to docker, but that path requires operating as a non-root user, which can lead to a bit of a mess of complexity.

Anyway, I've found stuff like this and am thinking it could be because we're currently using Alpine Linux. I will try with something like Ubuntu or CentOS.
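
For what it's worth, the equivalent install on those distributions would presumably be something like this (package names assumed):

apt-get update && apt-get install -y conntrack   # Debian/Ubuntu (assumed package name)
yum install -y conntrack-tools                   # CentOS (assumed package name)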

afbjorklund (Collaborator) commented

You will find it in conntrack-tools. When running an unsupported OS, there might be some extra steps needed. We have the same issues for the minikube ISO, so it is (usually) not unsolvable. Just work.

/ # apk add conntrack-tools
(1/7) Installing libmnl (1.0.4-r0)
(2/7) Installing libnfnetlink (1.0.1-r1)
(3/7) Installing libnetfilter_conntrack (1.0.6-r0)
(4/7) Installing libnetfilter_cthelper (1.0.0-r0)
(5/7) Installing libnetfilter_cttimeout (1.0.0-r0)
(6/7) Installing libnetfilter_queue (1.0.3-r0)
(7/7) Installing conntrack-tools (1.4.4-r0)
Executing busybox-1.30.1-r2.trigger
OK: 6 MiB in 21 packages
/ # which conntrack
/usr/sbin/conntrack

It also needs to be documented better for the "none" driver. #7905

You will find that it is also looking for some other tools, like iptables.
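
For instance, on the same Alpine image (package name assumed):

/ # apk add iptables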

jared-mackey commented Jun 14, 2020

@medyagh I tried it out and it was able to start the containers but wasn't able to connect.

❯ ./minikube-darwin-amd64 start --driver=docker
😄  minikube v1.11.0 on Darwin 10.15.5
✨  Using the docker driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
💾  Downloading Kubernetes v1.18.3 preload ...
    > preloaded-images-k8s-v3-v1.18.3-docker-overlay2-amd64.tar.lz4: 526.01 MiB
🔥  Creating docker container (CPUs=2, Memory=4000MB) ...

If I connect to my Docker host, I see the containers running, but their ports are bound to 127.0.0.1 instead of 0.0.0.0.

For some context: I am trying to run this from my Mac against a remote Docker host, using DOCKER_HOST=ssh://my-machine.
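
A possible workaround sketch (untested; the port is a placeholder, and the kic container is assumed to keep the default name "minikube"): check the published ports on the remote host and tunnel one of them from the Mac over SSH.

# on the remote docker host: list the 127.0.0.1:<port> bindings of the minikube container
docker port minikube
# from the Mac: forward one of those ports locally over SSH
ssh -N -L 127.0.0.1:<port>:127.0.0.1:<port> my-machine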

@priyawadhwa priyawadhwa added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. labels Sep 8, 2020
fejta-bot commented

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 7, 2020
fejta-bot commented

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 6, 2021
fejta-bot commented

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

k8s-ci-robot (Contributor) commented

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
