
Document minikube to minikube network communication (testing) #2602

Open
jimmiebtlr opened this issue Mar 13, 2018 · 12 comments
Labels
help wanted: Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines.
kind/documentation: Categorizes issue or PR as related to documentation.
lifecycle/frozen: Indicates that an issue or PR should not be auto-closed due to staleness.
priority/backlog: Higher priority than priority/awaiting-more-evidence.

Comments

@jimmiebtlr

jimmiebtlr commented Mar 13, 2018

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

Feature request

I'd like to run minikube in one docker container and connect to it from another.

I'd like to be able to run an integration-test layer using minikube and have the tests connect to minikube from another container, but the kubeconfig minikube generates seems to use a random IP, which complicates this (see the sketch after the compose file below).

version: "2"                                                                                                                        
services:
  minikube:                                                                                                                         
    image: minikube                                                                                                                 
    build:                                                                                                                          
      context: "."                                                                                                                  
      dockerfile: "docker/minikube"                                                                                                 
    volumes:                                                                                                                        
      - "/etc/ssl/certs:/etc/ssl/certs"                                                                                             
      - "/var/run/docker.sock:/var/run/docker.sock"                                                                                 
    privileged: true
  golang-tests:
    ...
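For what it's worth, a minimal sketch of what the test container's kubectl call could look like. It assumes the API server is reachable at the compose hostname minikube on minikube's default secure port 8443 and that the generated kubeconfig and certificates are shared into the test container; neither assumption is part of the setup above.

# Sketch only: run from inside the golang-tests container.
# Assumes /root/.kube and /root/.minikube are shared from the minikube service
# so the client certificates referenced by the kubeconfig resolve; TLS
# verification is skipped because "minikube" is likely not in the server cert.
kubectl --kubeconfig=/root/.kube/config \
        --server=https://minikube:8443 \
        --insecure-skip-tls-verify=true \
        get pods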

Please provide the following details:

Environment:

Trying to run minikube in a docker container and connect to it from another docker container.

minikube version: v0.25.0
VM Driver = none
ISO Version = v1.9.0

docker-compose version 1.18.0, build 8dd22a9
docker version: 1.13.1

What happened:

I'm unable to connect to minikube from another linked container via

kubectl get pods --server=minikube:8080

The connection to the server minikube:8080 was refused - did you specify the right host or port?
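One quick check that may narrow this down (a sketch, not part of the original report): run the same request from inside the minikube service itself. If it succeeds against localhost but is refused from the linked container, the insecure port 8080 is most likely bound to 127.0.0.1 only.

# Assumes the minikube service is already running, e.g. via docker-compose up -d minikube.
docker-compose exec minikube kubectl get pods --server=http://localhost:8080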

What you expected to happen:

A list of pods returned.

How to reproduce it (as minimally and precisely as possible):

With the following Dockerfile

# Stage 0: fetch kubectl, minikube and the static docker client binary
FROM alpine:latest

ADD https://storage.googleapis.com/kubernetes-release/release/v1.7.5/bin/linux/amd64/kubectl kubectl
RUN chmod +x kubectl
ADD https://storage.googleapis.com/minikube/releases/v0.25.0/minikube-linux-amd64 minikube
RUN chmod +x minikube
ADD https://download.docker.com/linux/static/stable/x86_64/docker-17.09.0-ce.tgz docker-17.09.0-ce.tgz
RUN tar xzvf docker-17.09.0-ce.tgz

# Stage 1: copy the binaries into a slim runtime image
FROM debian:stable-slim
COPY --from=0 kubectl minikube docker/docker /usr/local/bin/
COPY start.sh start.sh
CMD ["sh", "./start.sh"]

start.sh

#!/bin/sh
# Start the cluster with the none driver (reusing the host's docker daemon),
# then follow the logs to keep the container in the foreground.
/usr/local/bin/minikube start --vm-driver=none
/usr/local/bin/minikube logs -f

With the following docker-compose.yml

version: "2"                                                                                                                        
services:
  minikube:                                                                                                                         
    image: minikube                                                                                                                 
    build:                                                                                                                          
      context: "."                                                                                                                  
      dockerfile: "docker/minikube"                                                                                                 
    volumes:                                                                                                                        
      - "/etc/ssl/certs:/etc/ssl/certs"                                                                                             
      - "/var/run/docker.sock:/var/run/docker.sock"                                                                                 
    privileged: true
  get-pods:                                                                                                                           
    image: "google/cloud-sdk:190.0.1"                
    links: 
      - "minikube"                                                                                              
    command: ["kubectl","get","pods","--server=minikube:8080"]

Then run docker-compose run get-pods.

@afbjorklund
Collaborator

Sounds like you are trying to do docker-in-docker, which is not what minikube does...

But you should be able to use minikube from other containers running in the same VM
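For reference, a rough sketch of what that could look like with the none driver, where there is no VM and the apiserver runs directly on the Docker host. The gateway lookup and the port are assumptions, and authentication would still need to come from a shared kubeconfig or token.

# Run from a sibling container on the same Docker host.
# Needs iproute2, awk and kubectl in the client image.
HOST_IP=$(ip route | awk '/^default/ {print $3}')
kubectl get pods \
  --server=https://${HOST_IP}:8443 \
  --insecure-skip-tls-verify=true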

@jimmiebtlr
Author

jimmiebtlr commented Mar 25, 2018

I'm trying to connect to minikube to test some stuff that relies on kubernetes functionality.

The simplest version of what I'm trying to do is

minikube start
in one container, and
kubectl get pods
in another.

But I believe the port is randomized on minikube start, and the kubeconfig file would need to be shared with the other container as well?

I'm probably missing something obvious?
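One untested way to handle both points, with mount paths that are assumptions rather than anything minikube documents for this layout: share the generated kubeconfig and certificate directories between the two services and pin the server address explicitly.

# Mount the same host directories into both services, e.g.
#   - "./.kube:/root/.kube"
#   - "./.minikube:/root/.minikube"
# then point the test container at the shared config; TLS verification is
# skipped because the compose hostname may not be in the server certificate.
docker-compose run -e KUBECONFIG=/root/.kube/config get-pods \
  kubectl get pods --server=https://minikube:8443 --insecure-skip-tls-verify=true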

@afbjorklund
Collaborator

I'm still trying to parse what "run minikube in a container" means. Normally you would run it in a VM?

@jimmiebtlr
Author

Meaning just the controller, using an existing docker daemon and vm-driver none. Anything minikube runs in terms of pods runs via the existing docker daemon, but the actual kube service is in its own docker container rather than on the host machine.
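A hedged sketch of start flags that might make the apiserver reachable in that layout. Whether these flags are available and honored in v0.25.0 with the none driver is an assumption, and the address is illustrative only.

# Add the container-reachable address to the apiserver certificate and
# bind the insecure port to all interfaces instead of localhost.
minikube start --vm-driver=none \
  --apiserver-ips=172.18.0.2 \
  --extra-config=apiserver.insecure-bind-address=0.0.0.0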

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Jun 27, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jul 27, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@hadrien-toma

hadrien-toma commented Sep 12, 2018

May I /reopen? @jimmiebtlr's explanation of what he wants sounds good to me, and I would like to have this feature too 😊.

@dlorenc dlorenc reopened this Sep 12, 2018
@tstromberg tstromberg added kind/documentation Categorizes issue or PR as related to documentation. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Sep 19, 2018
@tstromberg tstromberg changed the title Run minikube in one docker container and connect in another Document minikube to minikube network communication (testing) Sep 19, 2018
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 18, 2018
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 17, 2019
@tstromberg tstromberg added help wanted Denotes an issue that needs help from a contributor. Must meet "help wanted" guidelines. priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Jan 23, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 29, 2019
@tstromberg tstromberg added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. r/2019q2 Issue was last reviewed 2019q2 and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 22, 2019
@tstromberg tstromberg added priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. r/2019q2 Issue was last reviewed 2019q2 labels Sep 20, 2019
@tstromberg
Contributor

If someone is willing to write up a tutorial, one of these would make a perfect home for it:

https://minikube.sigs.k8s.io/docs/tutorials/
https://minikube.sigs.k8s.io/docs/reference/networking/

@tstromberg tstromberg added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/awaiting-more-evidence Lowest priority. Possibly useful, but not yet enough support to actually get it done. labels Dec 16, 2019