
mac/hyperkit: /usr/bin/kubeadm alpha phase addon kube-dns .: Process exited with status 1 #3264

Closed
parhamdoustdar opened this issue Oct 19, 2018 · 11 comments

Labels: co/hyperkit (Hyperkit related issues), ev/kubeadm-exited-status-1 (kubeadm exited with status 1), os/macos, triage/obsolete (Bugs that no longer occur in the latest stable release)

Comments

@parhamdoustdar

Is this a BUG REPORT or FEATURE REQUEST? (choose one):

This is a bug report.

Please provide the following details:

Environment:

  • Minikube version: 0.30.0
  • OS (e.g. from /etc/os-release): Mac OS 10.13.6
  • VM Driver: hyperkit
  • ISO version: 0.30.0

What happened:

I'm trying to run minikube on my machine, and here is the output:

$ minikube start --vm-driver=hyperkit --disk-size 2g -v10
NO_PROXY=* minikube start --vm-driver=hyperkit --disk-size 2g -v10
Aliases:
map[string]string{}
Override:
map[string]interface {}{"v":"10"}
PFlags:
map[string]viper.FlagValue{"profile":viper.pflagValue{flag:(*pflag.Flag)(0xc42001adc0)}, "disable-driver-mounts":viper.pflagValue{flag:(*pflag.Flag)(0xc42001b9a0)}, "kvm-network":viper.pflagValue{flag:(*pflag.Flag)(0xc42001bea0)}, "network-plugin":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a780)}, "uuid":viper.pflagValue{flag:(*pflag.Flag)(0xc42039aa00)}, "vm-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc42001bae0)}, "docker-env":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a140)}, "gpu":viper.pflagValue{flag:(*pflag.Flag)(0xc42039abe0)}, "insecure-registry":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a500)}, "keep-context":viper.pflagValue{flag:(*pflag.Flag)(0xc42001b7c0)}, "memory":viper.pflagValue{flag:(*pflag.Flag)(0xc42001bb80)}, "cache-images":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a8c0)}, "cpus":viper.pflagValue{flag:(*pflag.Flag)(0xc42001bc20)}, "dns-domain":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a460)}, "hyperv-virtual-switch":viper.pflagValue{flag:(*pflag.Flag)(0xc42001be00)}, "xhyve-disk-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc42001bf40)}, "registry-mirror":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a5a0)}, "apiserver-ips":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a3c0)}, "apiserver-name":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a280)}, "docker-opt":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a1e0)}, "host-only-cidr":viper.pflagValue{flag:(*pflag.Flag)(0xc42001bd60)}, "nfs-shares-root":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a0a0)}, "mount":viper.pflagValue{flag:(*pflag.Flag)(0xc42001b860)}, "mount-string":viper.pflagValue{flag:(*pflag.Flag)(0xc42001b900)}, "disk-size":viper.pflagValue{flag:(*pflag.Flag)(0xc42001bcc0)}, "extra-config":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a960)}, "feature-gates":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a820)}, "hyperkit-vsock-ports":viper.pflagValue{flag:(*pflag.Flag)(0xc42039ab40)}, "kubernetes-version":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a6e0)}, "bootstrapper":viper.pflagValue{flag:(*pflag.Flag)(0xc42001ae60)}, "container-runtime":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a640)}, "iso-url":viper.pflagValue{flag:(*pflag.Flag)(0xc42001ba40)}, "nfs-share":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a000)}, "apiserver-names":viper.pflagValue{flag:(*pflag.Flag)(0xc42039a320)}, "hyperkit-vpnkit-sock":viper.pflagValue{flag:(*pflag.Flag)(0xc42039aaa0)}}
Env:
map[string]string{}
Key/Value Store:
map[string]interface {}{}
Config:
map[string]interface {}{"wantreporterror":true}
Defaults:
map[string]interface {}{"wantupdatenotification":true, "wantreporterror":false, "wantreporterrorprompt":true, "wantkubectldownloadmsg":true, "showbootstrapperdeprecationnotification":true, "log_dir":"", "reminderwaitperiodinhours":24, "wantnonedriverwarning":true, "showdriverdeprecationnotification":true, "v":"0", "alsologtostderr":"false"}
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Found binary path at /usr/local/bin/docker-machine-driver-hyperkit
Launching plugin server for driver hyperkit
Plugin server listening at address 127.0.0.1:62500
() Calling .GetVersion
Using API Version  1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .GetMachineName
(minikube) Calling .DriverName
Reading certificate data from /Users/pdoustdar/.minikube/certs/ca.pem
Decoding PEM data...
Parsing certificate...
Reading certificate data from /Users/pdoustdar/.minikube/certs/cert.pem
Decoding PEM data...
Parsing certificate...
Running pre-create checks...
(minikube) Calling .PreCreateCheck
(minikube) Calling .GetConfigRaw
Creating machine...
(minikube) Calling .Create
(minikube) Downloading /Users/pdoustdar/.minikube/cache/boot2docker.iso from file:///Users/pdoustdar/.minikube/cache/iso/minikube-v0.30.0.iso...
(minikube) DBG | 2018/10/19 11:23:55 [INFO] Creating ssh key...
(minikube) DBG | 2018/10/19 11:23:55 [INFO] Creating raw disk image...
(minikube) DBG | Writing magic tar header
(minikube) DBG | Writing SSH key tar header
(minikube) DBG | Mounting boot2docker.iso
(minikube) DBG | executing: &{/usr/bin/hdiutil [hdiutil attach /Users/pdoustdar/.minikube/machines/minikube/boot2docker.iso -mountpoint /Users/pdoustdar/.minikube/machines/minikube/b2d-image] []  <nil> 0xc4200b0008 0xc4200b0010 [] <nil> <nil> <nil> <nil> <nil> false [] [] [] [] <nil> <nil>} attach /Users/pdoustdar/.minikube/machines/minikube/boot2docker.iso -mountpoint /Users/pdoustdar/.minikube/machines/minikube/b2d-image
(minikube) /dev/disk2          	                               	/Users/pdoustdar/.minikube/machines/minikube/b2d-image
(minikube) DBG | Extracting Kernel Options...
(minikube) DBG | Extracted Options "loglevel=3 user=docker console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes base host=minikube"
(minikube) DBG | Extracting /Users/pdoustdar/.minikube/machines/minikube/b2d-image/boot/bzImage into /Users/pdoustdar/.minikube/machines/minikube/bzImage
(minikube) DBG | Extracting /Users/pdoustdar/.minikube/machines/minikube/b2d-image/boot/initrd into /Users/pdoustdar/.minikube/machines/minikube/initrd
(minikube) DBG | Unmounting boot2docker.iso
(minikube) DBG | executing: &{/usr/bin/hdiutil [hdiutil detach /Users/pdoustdar/.minikube/machines/minikube/b2d-image] []  <nil> 0xc4200b0008 0xc4200b0010 [] <nil> <nil> <nil> <nil> <nil> false [] [] [] [] <nil> <nil>} detach /Users/pdoustdar/.minikube/machines/minikube/b2d-image
(minikube) "disk2" unmounted.
(minikube) "disk2" ejected.
(minikube) Using UUID b317d688-d380-11e8-ae5e-c4b301c057b3
(minikube) Generated MAC 2e:8e:bd:9c:f4:a1
(minikube) Starting with cmdline: loglevel=3 user=docker console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes base host=minikube
(minikube) Calling .GetConfigRaw
(minikube) Calling .DriverName
(minikube) Calling .DriverName
Waiting for machine to be running, this may take a few minutes...
(minikube) Calling .GetState
Detecting operating system of created instance...
Waiting for SSH to be available...
Getting to WaitForSSH function...
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: <nil>: 
Detecting the provisioner...
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
cat /etc/os-release
SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2018.05
ID=buildroot
VERSION_ID=2018.05
PRETTY_NAME="Buildroot 2018.05"

found compatible host: buildroot
Provisioning with buildroot...
(minikube) Calling .GetMachineName
setting hostname "minikube"
(minikube) Calling .GetMachineName
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
SSH cmd err, output: <nil>: minikube

(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else 
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; 
			fi
		fi
SSH cmd err, output: <nil>: 
set auth options {CertDir:/Users/pdoustdar/.minikube CaCertPath:/Users/pdoustdar/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/pdoustdar/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/pdoustdar/.minikube/machines/server.pem ServerKeyPath:/Users/pdoustdar/.minikube/machines/server-key.pem ClientKeyPath:/Users/pdoustdar/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/pdoustdar/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/pdoustdar/.minikube}
setting up certificates
(minikube) Calling .GetMachineName
(minikube) Calling .GetIP
generating server cert: /Users/pdoustdar/.minikube/machines/server.pem ca-key=/Users/pdoustdar/.minikube/certs/ca.pem private-key=/Users/pdoustdar/.minikube/certs/ca-key.pem org=pdoustdar.minikube san=[192.168.64.19 localhost]
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
(minikube) Calling .DriverName
Setting Docker configuration on the remote daemon...
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify

# DOCKER_RAMDISK disables pivot_root in Docker, using MS_MOVE instead.
Environment=DOCKER_RAMDISK=yes


# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service
SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify

# DOCKER_RAMDISK disables pivot_root in Docker, using MS_MOVE instead.
Environment=DOCKER_RAMDISK=yes


# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
ExecReload=/bin/kill -s HUP 

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
sudo systemctl -f enable docker
SSH cmd err, output: <nil>: Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.

(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
sudo systemctl daemon-reload
SSH cmd err, output: <nil>: 
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
sudo systemctl -f restart docker
SSH cmd err, output: <nil>: 
setting minikube options for container-runtime
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
SSH cmd err, output: <nil>: 
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
sudo systemctl daemon-reload
SSH cmd err, output: <nil>: 
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x140f940] 0x140f910  [] 0s} 192.168.64.19 22 <nil> <nil>}
About to run SSH command:
sudo systemctl -f restart crio
SSH cmd err, output: <nil>: 
Checking connection to Docker...
(minikube) Calling .GetURL
Docker is up and running!
Reticulating splines...
(minikube) Calling .GetConfigRaw
Getting VM IP address...
(minikube) Calling .GetIP
Found binary path at /usr/local/bin/docker-machine-driver-hyperkit
Launching plugin server for driver hyperkit
Plugin server listening at address 127.0.0.1:62572
() Calling .GetVersion
Using API Version  1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .DriverName
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Moving files into cluster...
Setting up certs...
Connecting to cluster...
(minikube) Calling .GetURL
Setting up kubeconfig...
Starting cluster components...
E1019 11:25:41.820479    5996 start.go:297] Error starting cluster:  kubeadm init error 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns
 running command: : running command: 
sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests --ignore-preflight-errors=DirAvailable--data-minikube --ignore-preflight-errors=Port-10250 --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml --ignore-preflight-errors=FileAvailable--etc-kubernetes-manifests-etcd.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI  &&
sudo /usr/bin/kubeadm alpha phase addon kube-dns

.: Process exited with status 1

What you expected to happen:

I expected the machine to be created, but I got an error instead.

How to reproduce it (as minimally and precisely as possible):

Run minikube start --vm-driver=hyperkit on Mac OS

Anything else we need to know:

This issue has existed for a while now; previously I was getting around it by using the --bootstrapper=localkube option. However, now that --bootstrapper has been removed, I have no idea how to fix this.
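
For reference, the old workaround looked roughly like this (no longer possible on 0.30.0, since the flag is gone):

$ minikube start --vm-driver=hyperkit --bootstrapper=localkube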

@kvokka commented Oct 19, 2018

https://github.com/kvokka/run-minikube may help

tested with minikube 0.30 && macOS 10.14

@afbjorklund (Collaborator)

Everything in that process is supposed to work (assuming the use of --cache-images, which is no longer the default), so what is needed is the kubeadm log to know why the init command is failing...
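
One way to capture that output (a sketch; this assumes the minikube CLI of that era, where both minikube logs and minikube ssh exist, and that the VM runs systemd, which the systemctl calls in the log suggest):

$ minikube logs
$ minikube ssh "sudo journalctl -u kubelet --no-pager | tail -n 50"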

@allanchau

I currently use VMware, but have tried Hyperkit too.

A fresh install (rm -rf ~/.minikube ~/.kube, then minikube start --vm-driver=hyperkit or minikube start --vm-driver=vmwarefusion; spelled out below) has been broken since v0.26.0.
Running macOS 10.13 and now 10.14.
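
The fresh-install sequence, for clarity:

$ rm -rf ~/.minikube ~/.kube
$ minikube start --vm-driver=hyperkit    # or: minikube start --vm-driver=vmwarefusion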

There seems to be a heap of related issues since v0.26.0. e.g. #2886

@afbjorklund how do we get the kubeadm log? Is there anything else that would help?

@bokjo commented Oct 24, 2018

Had the same issue on Windows with Hyper-V after upgrading to v0.30.0...
Uninstalled and reinstalled everything from scratch, and it works now.
NOTE: there are two DNS providers now (CoreDNS and kube-dns), and surprisingly it works just fine.

kubectl version
Client Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.3", GitCommit:"2bba0127d85d5a46ab4b778548be28623b32d0b0", GitTreeState:"clean", BuildDate:"2018-05-21T09:17:39Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"windows/amd64"}
Server Version: version.Info{Major:"1", Minor:"10", GitVersion:"v1.10.0", GitCommit:"fc32d2f3698e36b93322a3465f63a14e9f0eaead", GitTreeState:"clean", BuildDate:"2018-03-26T16:44:10Z", GoVersion:"go1.9.3", Compiler:"gc", Platform:"linux/amd64"}
kubectl get pods --all-namespaces
NAMESPACE     NAME                                        READY     STATUS    RESTARTS   AGE
kube-system   coredns-c4cffd6dc-jzj6m                     1/1       Running   1          1h
kube-system   default-http-backend-544569b6d7-jjpms       1/1       Running   0          1h
kube-system   etcd-minikube                               1/1       Running   0          1h
kube-system   heapster-gd4kc                              1/1       Running   0          1h
kube-system   influxdb-grafana-5hs4z                      2/2       Running   0          1h
kube-system   kube-addon-manager-minikube                 1/1       Running   1          1h
kube-system   kube-apiserver-minikube                     1/1       Running   0          1h
kube-system   kube-controller-manager-minikube            1/1       Running   0          1h
kube-system   kube-dns-86f4d74b45-jrwt9                   3/3       Running   4          1h
kube-system   kube-proxy-csmd9                            1/1       Running   0          1h
kube-system   kube-scheduler-minikube                     1/1       Running   0          1h
kube-system   kubernetes-dashboard-6f4cfc5d87-mwfkk       1/1       Running   3          1h
kube-system   nginx-ingress-controller-8566746984-d8jf8   1/1       Running   0          1h
kube-system   storage-provisioner                         1/1       Running   3          1h
kube-system   tiller-deploy-6fd8d857bc-49kv9              1/1       Running   1          1h
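
To confirm that both DNS deployments are present, something like this works (a sketch, assuming the default kube-system namespace):

kubectl -n kube-system get deployments | grep -E 'coredns|kube-dns'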

@parhamdoustdar (Author) commented Oct 24, 2018 via email

@afbjorklund (Collaborator)

I'm not sure what the "official" way is... I normally just run the same command over minikube ssh. It might be a bug if it is not saving kubeadm's output anywhere?
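
Roughly like this (the second command runs inside the VM; paste the full kubeadm init line from the error output above, the two flags shown here are just the tail of it):

$ minikube ssh
$ sudo /usr/bin/kubeadm init --config /var/lib/kubeadm.yaml --ignore-preflight-errors=Swap --ignore-preflight-errors=CRI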

ping @tstromberg

@cbrewster

I am also facing this issue; as a workaround I am using the virtualbox VM driver (see below). Also, if I downgrade to minikube v0.25 I do not see this issue.
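
The workaround, spelled out:

$ minikube start --vm-driver=virtualbox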

@kvokka commented Oct 29, 2018

@cbrewster Can you please say why exactly that version? Did you try 0.26, or any other?
Thank you!

@cbrewster

I tried 0.26 and it didn't work as mentioned in #3264 (comment) (it hangs on Starting cluster components... just like v0.30.0)

I tried running with -v10 but it doesn't show what is hanging:

➜ minikube start --vm-driver=hyperkit -v10
Aliases:
map[string]string{}
Override:
map[string]interface {}{"v":"10"}
PFlags:
map[string]viper.FlagValue{"xhyve-disk-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc42044c280)}, "disk-size":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cfe00)}, "mount":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cf9a0)}, "vm-driver":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cfc20)}, "apiserver-name":viper.pflagValue{flag:(*pflag.Flag)(0xc42044c8c0)}, "extra-config":viper.pflagValue{flag:(*pflag.Flag)(0xc42044cfa0)}, "nfs-shares-root":viper.pflagValue{flag:(*pflag.Flag)(0xc42044c500)}, "registry-mirror":viper.pflagValue{flag:(*pflag.Flag)(0xc42044cbe0)}, "insecure-registry":viper.pflagValue{flag:(*pflag.Flag)(0xc42044cb40)}, "disable-driver-mounts":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cfae0)}, "feature-gates":viper.pflagValue{flag:(*pflag.Flag)(0xc42044ce60)}, "host-only-cidr":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cfea0)}, "memory":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cfcc0)}, "network-plugin":viper.pflagValue{flag:(*pflag.Flag)(0xc42044cdc0)}, "nfs-share":viper.pflagValue{flag:(*pflag.Flag)(0xc42044c3c0)}, "bootstrapper":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cef00)}, "apiserver-names":viper.pflagValue{flag:(*pflag.Flag)(0xc42044c960)}, "keep-context":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cf900)}, "apiserver-ips":viper.pflagValue{flag:(*pflag.Flag)(0xc42044ca00)}, "container-runtime":viper.pflagValue{flag:(*pflag.Flag)(0xc42044cd20)}, "hyperv-virtual-switch":viper.pflagValue{flag:(*pflag.Flag)(0xc42044c000)}, "dns-domain":viper.pflagValue{flag:(*pflag.Flag)(0xc42044caa0)}, "mount-string":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cfa40)}, "docker-env":viper.pflagValue{flag:(*pflag.Flag)(0xc42044c640)}, "docker-opt":viper.pflagValue{flag:(*pflag.Flag)(0xc42044c780)}, "iso-url":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cfb80)}, "kubernetes-version":viper.pflagValue{flag:(*pflag.Flag)(0xc42044cc80)}, "kvm-network":viper.pflagValue{flag:(*pflag.Flag)(0xc42044c140)}, "profile":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cee60)}, "cache-images":viper.pflagValue{flag:(*pflag.Flag)(0xc42044cf00)}, "cpus":viper.pflagValue{flag:(*pflag.Flag)(0xc4204cfd60)}}
Env:
map[string]string{}
Key/Value Store:
map[string]interface {}{}
Config:
map[string]interface {}{"ingress":true}
Defaults:
map[string]interface {}{"wantkubectldownloadmsg":true, "wantnonedriverwarning":true, "showbootstrapperdeprecationnotification":true, "v":"0", "log_dir":"", "wantupdatenotification":true, "wantreporterrorprompt":true, "showdriverdeprecationnotification":true, "alsologtostderr":"false", "reminderwaitperiodinhours":24, "wantreporterror":false}
Starting local Kubernetes v1.10.0 cluster...
Starting VM...
Found binary path at /usr/local/bin/docker-machine-driver-hyperkit
Launching plugin server for driver hyperkit
Plugin server listening at address 127.0.0.1:50470
() Calling .GetVersion
Using API Version  1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .GetMachineName
(minikube) Calling .DriverName
Running pre-create checks...
(minikube) Calling .PreCreateCheck
(minikube) Calling .GetConfigRaw
Creating machine...
(minikube) Calling .Create
(minikube) Downloading /Users/connor/.minikube/cache/boot2docker.iso from file:///Users/connor/.minikube/cache/iso/minikube-v0.26.0.iso...
(minikube) DBG | Writing magic tar header
(minikube) DBG | Writing SSH key tar header
(minikube) Using UUID ef50e784-db8d-11e8-a2d6-acbc329df5ff
(minikube) Generated MAC 8a:24:cc:19:7d:b0
(minikube) Starting with cmdline: loglevel=3 user=docker console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes base host=minikube
(minikube) Calling .GetConfigRaw
(minikube) Calling .DriverName
(minikube) Calling .DriverName
Waiting for machine to be running, this may take a few minutes...
(minikube) Calling .GetState
Detecting operating system of created instance...
Waiting for SSH to be available...
Getting to WaitForSSH function...
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
exit 0
SSH cmd err, output: <nil>:
Detecting the provisioner...
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
cat /etc/os-release
SSH cmd err, output: <nil>: NAME=Buildroot
VERSION=2017.11
ID=buildroot
VERSION_ID=2017.11
PRETTY_NAME="Buildroot 2017.11"

found compatible host: buildroot
Provisioning with buildroot...
(minikube) Calling .GetMachineName
setting hostname "minikube"
(minikube) Calling .GetMachineName
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
SSH cmd err, output: <nil>: minikube

(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
SSH cmd err, output: <nil>:
set auth options {CertDir:/Users/connor/.minikube CaCertPath:/Users/connor/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/connor/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/connor/.minikube/machines/server.pem ServerKeyPath:/Users/connor/.minikube/machines/server-key.pem ClientKeyPath:/Users/connor/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/connor/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/connor/.minikube}
setting up certificates
(minikube) Calling .GetMachineName
(minikube) Calling .GetIP
generating server cert: /Users/connor/.minikube/machines/server.pem ca-key=/Users/connor/.minikube/certs/ca.pem private-key=/Users/connor/.minikube/certs/ca-key.pem org=connor.minikube san=[192.168.64.11 localhost]
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
(minikube) Calling .DriverName
Setting Docker configuration on the remote daemon...
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify

# DOCKER_RAMDISK disables pivot_root in Docker, using MS_MOVE instead.
Environment=DOCKER_RAMDISK=yes


# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service
SSH cmd err, output: <nil>: [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network.target docker.socket
Requires=docker.socket

[Service]
Type=notify

# DOCKER_RAMDISK disables pivot_root in Docker, using MS_MOVE instead.
Environment=DOCKER_RAMDISK=yes


# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target

(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
sudo systemctl -f enable docker
SSH cmd err, output: <nil>: Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /lib/systemd/system/docker.service.

(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
sudo systemctl daemon-reload
SSH cmd err, output: <nil>:
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
sudo systemctl -f restart docker
SSH cmd err, output: <nil>:
setting minikube options for container-runtime
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
sudo mkdir -p /etc/sysconfig && printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" | sudo tee /etc/sysconfig/crio.minikube
SSH cmd err, output: <nil>:
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
sudo systemctl daemon-reload
SSH cmd err, output: <nil>:
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Using SSH client type: native
&{{{<nil> 0 [] [] []} docker [0x143a220] 0x143a1d0  [] 0s} 192.168.64.11 22 <nil> <nil>}
About to run SSH command:
sudo systemctl -f restart crio
SSH cmd err, output: <nil>:
Checking connection to Docker...
(minikube) Calling .GetURL
Docker is up and running!
Reticulating splines...
(minikube) Calling .GetConfigRaw
Getting VM IP address...
(minikube) Calling .GetIP
Found binary path at /usr/local/bin/docker-machine-driver-hyperkit
Launching plugin server for driver hyperkit
Plugin server listening at address 127.0.0.1:50567
() Calling .GetVersion
Using API Version  1
() Calling .SetConfigRaw
() Calling .GetMachineName
(minikube) Calling .DriverName
(minikube) Calling .GetSSHHostname
(minikube) Calling .GetSSHPort
(minikube) Calling .GetSSHKeyPath
(minikube) Calling .GetSSHUsername
Moving files into cluster...
Setting up certs...
Connecting to cluster...
(minikube) Calling .GetURL
Setting up kubeconfig...
Starting cluster components...

tstromberg added the os/macos and co/hyperkit (Hyperkit related issues) labels on Oct 29, 2018
tstromberg changed the title from "Minikube 0.30.0 fails to start on Mac OS with the hyper kit driver" to "mac/hyperkit: /usr/bin/kubeadm alpha phase addon kube-dns .: Process exited with status 1" on Oct 29, 2018
@jamilbk commented Nov 1, 2018

Also having this exact same issue.

OS X 10.14.1
minikube 0.30.0
Kubernetes v1.12.2 (also tried v1.10.0 and v1.11.4)

Hangs at Starting cluster components... Could this be a bug where the hyperkit driver does not create a large enough volume to store the downloaded images?

...
Nov 01 19:17:35 minikube kubelet[2579]: E1101 19:17:35.663403    2579 reflector.go:134] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Get https://localhost:8443/api/v1/nodes?fieldSelector=metadata.name%3Dminikube&limit=500&resourceVersion=0: dial tcp 127.0.0.1:8443: connect: connection refused
Nov 01 19:17:35 minikube kubelet[2579]: E1101 19:17:35.690333    2579 remote_image.go:112] PullImage "k8s.gcr.io/kube-apiserver:v1.12.2" from image service failed: rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): write /usr/local/bin/kube-apiserver: no space left on device
Nov 01 19:17:35 minikube kubelet[2579]: E1101 19:17:35.690364    2579 kuberuntime_image.go:51] Pull image "k8s.gcr.io/kube-apiserver:v1.12.2" failed: rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): write /usr/local/bin/kube-apiserver: no space left on device
Nov 01 19:17:35 minikube kubelet[2579]: E1101 19:17:35.690411    2579 kuberuntime_manager.go:744] container start failed: ErrImagePull: rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): write /usr/local/bin/kube-apiserver: no space left on device
Nov 01 19:17:35 minikube kubelet[2579]: E1101 19:17:35.690437    2579 pod_workers.go:186] Error syncing pod 3c400510a3121cb8250be36d825e5115 ("kube-apiserver-minikube_kube-system(3c400510a3121cb8250be36d825e5115)"), skipping: failed to "StartContainer" for "kube-apiserver" with ErrImagePull: "rpc error: code = Unknown desc = failed to register layer: Error processing tar file(exit status 1): write /usr/local/bin/kube-apiserver: no space left on device"
...
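
A quick way to check whether the VM disk really is full (a sketch; df is available inside the minikube VM):

$ minikube ssh "df -h"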

@tstromberg (Contributor)

minikube v0.33 now shows why kubeadm failed, so if you run into this again, please open a new bug.

tstromberg added the triage/obsolete (Bugs that no longer occur in the latest stable release) and ev/kubeadm-exited-status-1 (kubeadm exited with status 1) labels on Jan 23, 2019