
minikube start fails in docker-from-docker context #13950

Closed
B1tVect0r opened this issue Apr 12, 2022 · 15 comments
Labels
  • co/docker-driver: Issues related to kubernetes in container
  • kind/improvement: Categorizes issue or PR as related to improving upon a current feature.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • priority/awaiting-more-evidence: Lowest priority. Possibly useful, but not yet enough support to actually get it done.

Comments

B1tVect0r commented Apr 12, 2022

What Happened?

Running minikube start --driver=docker from inside a container that has the host Docker socket mounted (i.e., a Docker-from-Docker setup) fails to complete. The minikube container starts properly and is visible from both the host and the container, but minikube start fails to proceed: it appears to be hard-coded to dial 127.0.0.1:{minikube container port} for some post-container-start step. I've tried fiddling with all of the IP-related options for the minikube start command that I can find, to no avail. Is there a way to do this that I'm missing?

Attach the log file

* 
* ==> Audit <==
* |--------------|--------|----------|--------|---------|-------------------------------|-------------------------------|
|   Command    |  Args  | Profile  |  User  | Version |          Start Time           |           End Time            |
|--------------|--------|----------|--------|---------|-------------------------------|-------------------------------|
| delete       |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 00:00:52 UTC | Tue, 12 Apr 2022 00:00:54 UTC |
| delete       |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 00:05:00 UTC | Tue, 12 Apr 2022 00:05:05 UTC |
| update-check |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:08:26 UTC | Tue, 12 Apr 2022 12:08:26 UTC |
| logs         |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:09:22 UTC | Tue, 12 Apr 2022 12:10:01 UTC |
| config       |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:10:08 UTC | Tue, 12 Apr 2022 12:10:08 UTC |
| config       | view   | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:10:26 UTC | Tue, 12 Apr 2022 12:10:26 UTC |
| logs         |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:13:09 UTC | Tue, 12 Apr 2022 12:13:49 UTC |
| delete       |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:21:18 UTC | Tue, 12 Apr 2022 12:21:22 UTC |
| delete       |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:26:15 UTC | Tue, 12 Apr 2022 12:26:18 UTC |
| delete       |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:28:09 UTC | Tue, 12 Apr 2022 12:28:11 UTC |
| start        | --help | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:31:42 UTC | Tue, 12 Apr 2022 12:31:42 UTC |
| completion   | bash   | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:42:07 UTC | Tue, 12 Apr 2022 12:42:07 UTC |
| delete       |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 12:42:41 UTC | Tue, 12 Apr 2022 12:42:41 UTC |
| delete       |        | minikube | vscode | v1.25.2 | Tue, 12 Apr 2022 13:51:10 UTC | Tue, 12 Apr 2022 13:51:13 UTC |
|--------------|--------|----------|--------|---------|-------------------------------|-------------------------------|

* 
* ==> Last Start <==
* Log file created at: 2022/04/12 16:30:44
Running on machine: 567a1ab6b358
Binary: Built with gc go1.17.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0412 16:30:44.560055   54927 out.go:297] Setting OutFile to fd 1 ...
I0412 16:30:44.560112   54927 out.go:349] isatty.IsTerminal(1) = true
I0412 16:30:44.560114   54927 out.go:310] Setting ErrFile to fd 2...
I0412 16:30:44.560116   54927 out.go:349] isatty.IsTerminal(2) = true
I0412 16:30:44.560182   54927 root.go:315] Updating PATH: /home/vscode/.minikube/bin
I0412 16:30:44.560406   54927 out.go:304] Setting JSON to false
I0412 16:30:44.581530   54927 start.go:112] hostinfo: {"hostname":"567a1ab6b358","uptime":16717,"bootTime":1649764327,"procs":22,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"11.3","kernelVersion":"5.10.102.1-microsoft-standard-WSL2","kernelArch":"x86_64","virtualizationSystem":"docker","virtualizationRole":"guest","hostId":"4433ab11-04e6-46d9-acfe-ae563956b66f"}
I0412 16:30:44.581628   54927 start.go:122] virtualization: docker guest
I0412 16:30:44.584344   54927 out.go:176] 😄  minikube v1.25.2 on Debian 11.3 (docker/amd64)
I0412 16:30:44.584521   54927 notify.go:193] Checking for updates...
I0412 16:30:44.584550   54927 driver.go:344] Setting default libvirt URI to qemu:///system
I0412 16:30:44.616961   54927 docker.go:132] docker version: linux-20.10.13
I0412 16:30:44.617044   54927 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0412 16:30:44.684655   54927 info.go:263] docker info: {ID:PJHM:J5NI:XCQZ:437J:DQQB:NJRB:FAAO:IPSE:3VRQ:H34X:6OGN:F275 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:122 OomKillDisable:true NGoroutines:139 SystemTime:2022-04-12 16:30:44.63872321 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33354448896 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:0.8.2+azure-1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.4.1+azure-1]] Warnings:<nil>}}
I0412 16:30:44.684718   54927 docker.go:237] overlay module found
I0412 16:30:44.687035   54927 out.go:176] ✨  Using the docker driver based on user configuration
I0412 16:30:44.687127   54927 start.go:281] selected driver: docker
I0412 16:30:44.687134   54927 start.go:798] validating driver "docker" against <nil>
I0412 16:30:44.687152   54927 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0412 16:30:44.687586   54927 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0412 16:30:44.756255   54927 info.go:263] docker info: {ID:PJHM:J5NI:XCQZ:437J:DQQB:NJRB:FAAO:IPSE:3VRQ:H34X:6OGN:F275 Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:6 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:122 OomKillDisable:true NGoroutines:139 SystemTime:2022-04-12 16:30:44.710793886 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:4 KernelVersion:5.10.102.1-microsoft-standard-WSL2 OperatingSystem:Docker Desktop OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:16 MemTotal:33354448896 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy:http.docker.internal:3128 HTTPSProxy:http.docker.internal:3128 NoProxy:hubproxy.docker.internal Name:docker-desktop Labels:[] ExperimentalBuild:false ServerVersion:20.10.13 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc Expected:2a1d4dbdb2a1030dc5b01e96fb110a9d9f150ecc} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No blkio throttle.read_bps_device support WARNING: No blkio throttle.write_bps_device support WARNING: No blkio throttle.read_iops_device support WARNING: No blkio throttle.write_iops_device support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:0.8.2+azure-1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:2.4.1+azure-1]] Warnings:<nil>}}
I0412 16:30:44.756325   54927 start_flags.go:288] no existing cluster config was found, will generate one from the flags 
I0412 16:30:44.756903   54927 start_flags.go:369] Using suggested 7900MB memory alloc based on sys=31809MB, container=31809MB
I0412 16:30:44.756968   54927 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
I0412 16:30:44.756974   54927 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
I0412 16:30:44.756983   54927 cni.go:93] Creating CNI manager for ""
I0412 16:30:44.756985   54927 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0412 16:30:44.756989   54927 start_flags.go:302] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/vscode:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0412 16:30:44.759267   54927 out.go:176] 👍  Starting control plane node minikube in cluster minikube
I0412 16:30:44.759327   54927 cache.go:120] Beginning downloading kic base image for docker with docker
I0412 16:30:44.761181   54927 out.go:176] 🚜  Pulling base image ...
I0412 16:30:44.761264   54927 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0412 16:30:44.761295   54927 preload.go:148] Found local preload: /home/vscode/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
I0412 16:30:44.761299   54927 cache.go:57] Caching tarball of preloaded images
I0412 16:30:44.761348   54927 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
I0412 16:30:44.761490   54927 preload.go:174] Found /home/vscode/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0412 16:30:44.761497   54927 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.3 on docker
I0412 16:30:44.761933   54927 profile.go:148] Saving config to /home/vscode/.minikube/profiles/minikube/config.json ...
I0412 16:30:44.761957   54927 lock.go:35] WriteFile acquiring /home/vscode/.minikube/profiles/minikube/config.json: {Name:mk4353aad69d548cc2b11b408c0437ab6546c82a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
I0412 16:30:44.788450   54927 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
I0412 16:30:44.788461   54927 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
I0412 16:30:44.788486   54927 cache.go:208] Successfully downloaded all kic artifacts
I0412 16:30:44.788507   54927 start.go:313] acquiring machines lock for minikube: {Name:mk3430de3d789b5b5950227560fb3bc20cc52342 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0412 16:30:44.788619   54927 start.go:317] acquired machines lock for "minikube" in 101.801µs
I0412 16:30:44.788638   54927 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:7900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/vscode:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0412 16:30:44.788696   54927 start.go:126] createHost starting for "" (driver="docker")
I0412 16:30:44.790792   54927 out.go:203] 🔥  Creating docker container (CPUs=2, Memory=7900MB) ...
I0412 16:30:44.790992   54927 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0412 16:30:44.791023   54927 client.go:168] LocalClient.Create starting
I0412 16:30:44.791074   54927 main.go:130] libmachine: Reading certificate data from /home/vscode/.minikube/certs/ca.pem
I0412 16:30:44.791093   54927 main.go:130] libmachine: Decoding PEM data...
I0412 16:30:44.791102   54927 main.go:130] libmachine: Parsing certificate...
I0412 16:30:44.791135   54927 main.go:130] libmachine: Reading certificate data from /home/vscode/.minikube/certs/cert.pem
I0412 16:30:44.791141   54927 main.go:130] libmachine: Decoding PEM data...
I0412 16:30:44.791146   54927 main.go:130] libmachine: Parsing certificate...
I0412 16:30:44.791483   54927 cli_runner.go:133] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0412 16:30:44.816722   54927 cli_runner.go:180] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0412 16:30:44.816893   54927 network_create.go:254] running [docker network inspect minikube] to gather additional debugging logs...
I0412 16:30:44.816912   54927 cli_runner.go:133] Run: docker network inspect minikube
W0412 16:30:44.840271   54927 cli_runner.go:180] docker network inspect minikube returned with exit code 1
I0412 16:30:44.840286   54927 network_create.go:257] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0412 16:30:44.840294   54927 network_create.go:259] output of [docker network inspect minikube]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error: No such network: minikube

** /stderr **
I0412 16:30:44.840358   54927 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0412 16:30:44.863749   54927 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000388510] misses:0}
I0412 16:30:44.863780   54927 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0412 16:30:44.863789   54927 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0412 16:30:44.863844   54927 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0412 16:30:44.916164   54927 network_create.go:90] docker network minikube 192.168.49.0/24 created
I0412 16:30:44.916178   54927 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0412 16:30:44.916254   54927 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0412 16:30:44.947783   54927 cli_runner.go:133] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0412 16:30:44.973441   54927 oci.go:102] Successfully created a docker volume minikube
I0412 16:30:44.973510   54927 cli_runner.go:133] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib
I0412 16:30:46.000599   54927 cli_runner.go:186] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib: (1.027065248s)
I0412 16:30:46.000614   54927 oci.go:106] Successfully prepared a docker volume minikube
I0412 16:30:46.000664   54927 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0412 16:30:46.000682   54927 kic.go:179] Starting extracting preloaded images to volume ...
I0412 16:30:46.000763   54927 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/vscode/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir
W0412 16:30:47.161631   54927 cli_runner.go:180] docker run --rm --entrypoint /usr/bin/tar -v /home/vscode/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 2
I0412 16:30:47.161644   54927 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/vscode/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (1.160842476s)
I0412 16:30:47.161659   54927 kic.go:186] Unable to extract preloaded tarball to volume: docker run --rm --entrypoint /usr/bin/tar -v /home/vscode/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 2
stdout:

stderr:
tar (child): /preloaded.tar: Cannot read: Is a directory
tar (child): At beginning of tape, quitting now
tar (child): Error is not recoverable: exiting now
/usr/bin/tar: Child returned status 2
/usr/bin/tar: Error is not recoverable: exiting now
W0412 16:30:47.161707   54927 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0412 16:30:47.161712   54927 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0412 16:30:47.161764   54927 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0412 16:30:47.231336   54927 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2
I0412 16:30:48.058066   54927 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Running}}
I0412 16:30:48.088742   54927 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0412 16:30:48.118684   54927 cli_runner.go:133] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0412 16:30:48.264113   54927 oci.go:281] the created container "minikube" has a running status.
I0412 16:30:48.264133   54927 kic.go:210] Creating ssh key for kic: /home/vscode/.minikube/machines/minikube/id_rsa...
I0412 16:30:48.400809   54927 kic_runner.go:191] docker (temp): /home/vscode/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0412 16:30:48.467857   54927 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0412 16:30:48.494851   54927 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0412 16:30:48.494858   54927 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0412 16:30:48.739545   54927 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0412 16:30:48.770199   54927 machine.go:88] provisioning docker machine ...
I0412 16:30:48.770225   54927 ubuntu.go:169] provisioning hostname "minikube"
I0412 16:30:48.770307   54927 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0412 16:30:48.798892   54927 main.go:130] libmachine: Using SSH client type: native
I0412 16:30:48.799007   54927 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a12c0] 0x7a43a0 <nil>  [] 0s} 127.0.0.1 55273 <nil> <nil>}
I0412 16:30:48.799012   54927 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0412 16:30:48.799112   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:30:51.799879   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:30:54.800982   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:30:57.801738   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:00.802890   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:03.803606   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:06.804223   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:09.804988   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:12.805784   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:15.806757   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:18.808366   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:21.809065   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:24.810645   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:27.811613   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:30.812760   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:33.813393   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:36.814692   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:39.815254   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:42.816134   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:45.816944   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:48.818069   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:51.819361   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:54.820253   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:31:57.821282   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:00.821857   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:03.823082   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:06.823526   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:09.824281   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:12.825446   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:15.826475   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:18.827565   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:21.828816   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:24.829700   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:27.830963   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:30.832377   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:33.832881   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:36.834238   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:39.835607   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:42.836922   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:45.838176   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:48.839182   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:51.840289   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:54.841216   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:32:57.842585   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:00.843404   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:03.844357   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:06.845313   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:09.846686   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:12.847983   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:15.849236   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:18.850584   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:21.852338   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:24.852655   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:27.853323   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:30.854595   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:33.855818   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:36.857129   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:39.857782   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:42.859688   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:45.860700   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:48.861869   54927 main.go:130] libmachine: SSH cmd err, output: <nil>: 
I0412 16:33:48.861947   54927 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0412 16:33:48.890928   54927 main.go:130] libmachine: Using SSH client type: native
I0412 16:33:48.891028   54927 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x7a12c0] 0x7a43a0 <nil>  [] 0s} 127.0.0.1 55273 <nil> <nil>}
I0412 16:33:48.891035   54927 main.go:130] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
I0412 16:33:48.891203   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:51.892124   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:54.893072   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:33:57.894118   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:00.895273   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:03.895625   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:06.896762   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:09.898029   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:12.898437   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:15.899079   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:18.900165   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:21.901538   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:24.902581   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:27.903999   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:30.905238   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:33.906244   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:36.907316   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:39.908624   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:42.908827   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:45.910127   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:48.910473   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:51.910960   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:54.911400   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused
I0412 16:34:57.912615   54927 main.go:130] libmachine: Error dialing TCP: dial tcp 127.0.0.1:55273: connect: connection refused

* 

Operating System

Other

Driver

Docker

afbjorklund (Collaborator) commented Apr 12, 2022

The current setup only supports tcp or ssh in DOCKER_HOST; it doesn't support the scenario where the socket is tunneled or mounted from another machine.

Possibly there could be a way (an env var?) to pass the remote host, even when using a Unix socket, before assuming it is published at localhost.
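
A minimal sketch of that idea in Go, assuming a hypothetical MINIKUBE_DOCKER_DAEMON_HOST variable (no such override exists in minikube today):

package main

import (
	"fmt"
	"net"
	"os"
)

// dialHost returns the address used to reach ports published by the
// minikube (kic) container. minikube currently hard-codes 127.0.0.1;
// the hypothetical override would let docker-from-docker users point
// it at the machine that actually runs the Docker daemon.
func dialHost() string {
	if h := os.Getenv("MINIKUBE_DOCKER_DAEMON_HOST"); h != "" {
		return h // e.g. "192.168.65.2" or "host.docker.internal"
	}
	return "127.0.0.1" // current behavior
}

func main() {
	fmt.Println("would dial:", net.JoinHostPort(dialHost(), "55273"))
}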

afbjorklund added the co/docker-driver label Apr 12, 2022
B1tVect0r commented Apr 12, 2022

Possibly there could be a way (an env var?) to pass the remote host, even when using a Unix socket

This is what I'm after; I believe I should be able to resolve the remote host both from the context where I'm running minikube start and from the resulting container, so presumably if it were dialing 192.168.65.2:{minikube port} (or whatever the remote host resolves to) rather than localhost, it would at least get further than it does now (though I'm not sure that would be sufficient to get it completely across the finish line).

afbjorklund added the kind/improvement and priority/awaiting-more-evidence labels Apr 12, 2022
afbjorklund (Collaborator) commented Apr 12, 2022

Something else looks broken on that docker host:

tar (child): /preloaded.tar: Cannot read: Is a directory
tar (child): At beginning of tape, quitting now

Looks like bind mounts aren't working in your setup.

-v /home/vscode/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro


Docker resolves this path on the Docker host, where the file does not exist, so the mount fails.

Docker falls back to just creating an empty directory and mounting that, which is why tar sees a directory instead of the tarball.

minikube should detect the remote host and skip the preload in that case.

I thought the image cache would fail in a similar way, since "docker load" wouldn't find the files.

EDIT: The cache will still work, since the load is done by the client.

It would be possible to do a similar workaround for the preload as well.
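
One way the detection could work, as a rough heuristic rather than minikube's actual code: compare the hostname the daemon reports in docker info with the local hostname, and treat a mismatch as a remote daemon (in the log above the daemon reports "docker-desktop" while the client container is "567a1ab6b358"):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// daemonLooksRemote compares the hostname reported by the Docker
// daemon with our own hostname; a mismatch suggests the socket is
// forwarded from another machine, where bind mounts of local files
// (like the preload tarball) cannot work.
func daemonLooksRemote() (bool, error) {
	out, err := exec.Command("docker", "info", "--format", "{{.Name}}").Output()
	if err != nil {
		return false, err
	}
	self, err := os.Hostname()
	if err != nil {
		return false, err
	}
	return strings.TrimSpace(string(out)) != self, nil
}

func main() {
	remote, err := daemonLooksRemote()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if remote {
		fmt.Println("daemon looks remote: skip the preload, pull images instead")
	} else {
		fmt.Println("daemon looks local: preload via bind mount is safe")
	}
}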

j2udev commented Jul 5, 2022

Thought I'd add a little extra evidence here. We're noticing something strange when bind mounting our docker.sock file in different ways within our vscode devcontainer. (I don't believe this is an issue with vscode's devcontainer setup; I'm just noting that the configuration below comes from one of our devcontainer.json files.)

  "mounts": [
    "source=/var/run/docker.sock,target=/var/run/docker.sock,type=bind"
  ]

vs

  "runArgs": [
    "-v=/var/run/docker.sock/:/var/run/docker.sock"
  ]

The former causes Minikube to not start up successfully (with a host timeout error, I believe); it leverages the docker --mount flag under the hood.

For the latter, Minikube does indeed start successfully when we manually use the --volume flag.

I haven't tested taking the vscode abstraction out of the picture, but I imagine we would see the same thing.

Docker calls out some subtle differences between the two, but I haven't noticed any differences between the mounts from one example to the other.

From the Docker documentation:

Differences between “--mount” and “--volume”
The --mount flag supports most options that are supported by the -v or --volume flag for docker run, with some important exceptions:

The --mount flag allows you to specify a volume driver and volume driver options per volume, without creating the volumes in advance. In contrast, docker run allows you to specify a single volume driver which is shared by all volumes, using the --volume-driver flag.

The --mount flag allows you to specify custom metadata (“labels”) for a volume, before the volume is created.

When you use --mount with type=bind, the host-path must refer to an existing path on the host. The path will not be created for you and the service will fail with an error if the path does not exist.

The --mount flag does not allow you to relabel a volume with Z or z flags, which are used for selinux labeling.

If I can provide further information, let me know. We would really like to use the mounts section of vscode devcontainers for bind mounting our docker socket, as it gets around some annoying permissions issues, and the --mount flag is recommended over --volume by Docker.
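
For reference, stripping away the devcontainer abstraction, the two configurations correspond roughly to the following docker run invocations (a sketch; "my-devcontainer-image" is a placeholder):

  docker run --mount type=bind,source=/var/run/docker.sock,target=/var/run/docker.sock my-devcontainer-image
  docker run -v /var/run/docker.sock:/var/run/docker.sock my-devcontainer-image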

afbjorklund (Collaborator) commented

The current code doesn't know about remote servers with local sockets; it assumes that remote servers use tcp and local servers use unix...

It should probably have a boolean override, similar to how both Docker Engine and Docker Desktop are allowed (on Linux, that is).

j2udev commented Jul 5, 2022

For the use case of development containers, we try to avoid the docker-in-docker situation and instead just install the docker cli and bind mount the docker.sock. We also mount ~/.kube/config and ~/.minikube which allows us to communicate to the same minikube cluster from different dev containers.

Use cases and specifics aside, minikube successfully starts within a devcontainer that uses --volume to bind mount the docker.sock, but not when using --mount. When I try to start minikube with more verbose logging inside the devcontainer that uses --mount to bind mount the docker.sock, I see output similar to the OP's.

j2udev commented Jul 8, 2022

Interestingly, I didn't realize I had a trailing slash on my docker.sock --volume mount (which definitely works)... when I removed that trailing slash it stopped working. So this works:

  "runArgs": [
    "-v=/var/run/docker.sock/:/var/run/docker.sock"
  ]

but this does not:

  "runArgs": [
    "-v=/var/run/docker.sock:/var/run/docker.sock"
  ]

k8s-triage-robot commented

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

k8s-ci-robot added the lifecycle/stale label Oct 6, 2022
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label Nov 5, 2022
k8s-triage-robot commented

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

k8s-ci-robot closed this as not planned Dec 5, 2022
k8s-ci-robot (Contributor) commented

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

lucasfcnunes commented

@B1tVect0r did you find a solution?

lucasfcnunes commented

The current code doesn't know about remote servers with local sockets; it assumes that remote servers use tcp and local servers use unix...

It should probably have a boolean override, similar to how both Docker Engine and Docker Desktop are allowed (on Linux, that is).

@afbjorklund this issue was closed but we don't have a solution :(

naferok commented Oct 18, 2023

My solution is:

  • set the DOCKER_HOST env var to the host/remote Docker daemon's TCP address:
    export DOCKER_HOST="tcp://$(dig +short host.docker.internal):2375" (as with Docker Desktop)
  • when starting minikube, provide --listen-address=0.0.0.0
  • [bug] when we create a tunnel for a LoadBalancer with minikube tunnel --alsologtostderr --v=2, we receive
    ssh: connect to host 127.0.0.1 port ####: Connection refused
    because our SSH target address is dig +short host.docker.internal
    the fix should be: replace this line
    "docker@127.0.0.1",

    with the same logic as this
    ip, err := d.GetSSHHostname()

    my workaround is below (unfortunately, I can't propose a PR due to my low level of golang):
func createSSHConn(name, sshPort, sshKey, bindAddress string, resourcePorts []int32, resourceIP string, resourceName string) *sshConn {
	// Resolve the SSH user/host from the kic driver instead of
	// hard-coding 127.0.0.1 (kept as the fallback).
	def := registry.Driver(driver.Docker) // renamed to avoid shadowing the driver package
	sshConnUserHost := "docker@127.0.0.1"
	if !def.Empty() {
		kic := def.Init()
		// ask the driver where its SSH endpoint is actually reachable
		ip, err := kic.GetSSHHostname()
		if err == nil {
			sshConnUserHost = kic.GetSSHUsername() + "@" + ip
		}
	}
	// extract sshArgs
	sshArgs := []string{
		// TODO: document the options here
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "StrictHostKeyChecking=no",
		"-o", "IdentitiesOnly=yes",
		"-N",
		sshConnUserHost,
		"-p", sshPort,
		"-i", sshKey,
	}
	// ... remainder of createSSHConn unchanged ...

But you can use NodePort as well.
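
With this patch, minikube tunnel dials the SSH endpoint at whatever address the kic driver reports via GetSSHHostname() instead of the hard-coded 127.0.0.1, which is the same logic the rest of the kic driver uses to resolve the host. (This is only my local patch, not a merged fix.)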
