TestStartStop/group/newest-cni/serial/Pause: contains no runlevels, aborting. #11813

Closed
medyagh opened this issue Jun 29, 2021 · 0 comments · Fixed by #11815
Labels:
kind/flake: Categorizes issue or PR as related to a flaky test.
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.

medyagh commented Jun 29, 2021

According to our flake rate chart, https://storage.googleapis.com/minikube-flake-rate/flake_chart.html?env=Docker_Linux, this test flakes with the error message:

update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.

Example failed run: https://storage.googleapis.com/minikube-builds/logs/master/32c08ad/Docker_Linux.html#fail_TestStartStop%2fgroup%2fold-k8s-version%2fserial%2fPause
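The log below shows the mechanism: `systemctl disable` hands the unit to /lib/systemd/systemd-sysv-install, which runs update-rc.d against the SysV script for kubelet, and update-rc.d aborts because that script's LSB `Default-Start:` header lists no runlevels. A hypothetical diagnostic sketch (not minikube code; the /etc/init.d/kubelet path is an assumption inferred from the error, not confirmed by the issue) that checks for exactly that condition inside the node:

```go
// lsbcheck: report whether the kubelet SysV init script's LSB header
// names any runlevels in Default-Start. An empty list is the condition
// that makes `update-rc.d ... disable` abort.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/init.d/kubelet") // path assumed, not confirmed by the issue
	if err != nil {
		fmt.Println("no SysV script found:", err)
		return
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// LSB headers look like: "# Default-Start: 2 3 4 5"
		line := strings.TrimPrefix(sc.Text(), "# ")
		if strings.HasPrefix(line, "Default-Start:") {
			levels := strings.Fields(strings.TrimPrefix(line, "Default-Start:"))
			fmt.Printf("Default-Start runlevels: %q\n", levels)
			if len(levels) == 0 {
				fmt.Println("empty -> update-rc.d will abort, matching the error below")
			}
			return
		}
	}
	fmt.Println("no Default-Start header found")
}
```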


-- stdout --
	* Pausing node old-k8s-version-20210625224134-118735 ... 
	
	
-- /stdout --
** stderr ** 
	I0625 22:54:37.259073  512433 out.go:286] Setting OutFile to fd 1 ...
	I0625 22:54:37.259194  512433 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0625 22:54:37.259206  512433 out.go:299] Setting ErrFile to fd 2...
	I0625 22:54:37.259211  512433 out.go:333] TERM=,COLORTERM=, which probably does not support color
	I0625 22:54:37.259322  512433 root.go:311] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-master-115019-32c08ad6b789c18aa094e3ba1fc0b7102e3e603f/.minikube/bin
	I0625 22:54:37.259515  512433 out.go:293] Setting JSON to false
	I0625 22:54:37.259540  512433 mustload.go:65] Loading cluster: old-k8s-version-20210625224134-118735
	I0625 22:54:37.260210  512433 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210625224134-118735 --format={{.State.Status}}
	I0625 22:54:37.300333  512433 host.go:66] Checking if "old-k8s-version-20210625224134-118735" exists ...
	I0625 22:54:37.301515  512433 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:%!s(int=2) cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/11632/minikube-v1.21.0-1623378770-11632.iso https://github.com/kubernetes/minikube/releases/download/v1.21.0-1623378770-11632/minikube-v1.21.0-1623378770-11632.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.21.0-1623378770-11632.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210625224134-118735 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)]="(MISSING)"
	I0625 22:54:37.304028  512433 out.go:165] * Pausing node old-k8s-version-20210625224134-118735 ... 
	I0625 22:54:37.304065  512433 host.go:66] Checking if "old-k8s-version-20210625224134-118735" exists ...
	I0625 22:54:37.304284  512433 ssh_runner.go:149] Run: systemctl --version
	I0625 22:54:37.304319  512433 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210625224134-118735
	I0625 22:54:37.344236  512433 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-master-115019-32c08ad6b789c18aa094e3ba1fc0b7102e3e603f/.minikube/machines/old-k8s-version-20210625224134-118735/id_rsa Username:docker}
	I0625 22:54:37.433251  512433 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0625 22:54:37.442912  512433 pause.go:50] kubelet running: true
	I0625 22:54:37.442961  512433 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0625 22:54:37.557594  512433 retry.go:31] will retry after 276.165072ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0625 22:54:37.834069  512433 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0625 22:54:37.843875  512433 pause.go:50] kubelet running: true
	I0625 22:54:37.843926  512433 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0625 22:54:37.944106  512433 retry.go:31] will retry after 540.190908ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0625 22:54:38.484827  512433 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0625 22:54:38.494839  512433 pause.go:50] kubelet running: true
	I0625 22:54:38.494897  512433 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0625 22:54:38.595224  512433 retry.go:31] will retry after 655.06503ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0625 22:54:39.251040  512433 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0625 22:54:39.260712  512433 pause.go:50] kubelet running: true
	I0625 22:54:39.260773  512433 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0625 22:54:39.362278  512433 retry.go:31] will retry after 791.196345ms: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0625 22:54:40.153685  512433 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0625 22:54:40.163480  512433 pause.go:50] kubelet running: true
	I0625 22:54:40.163545  512433 ssh_runner.go:149] Run: sudo systemctl disable --now kubelet
	I0625 22:54:40.270414  512433 out.go:165] 
	W0625 22:54:40.270596  512433 out.go:230] X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable --now: sudo systemctl disable --now kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
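
The trace above also shows the shape of the pause path's retry loop (retry.go): the same `sudo systemctl disable --now kubelet` is re-run with a growing backoff (276ms, 540ms, 655ms, 791ms) for roughly three seconds before the test gives up with GUEST_PAUSE. A rough standalone Go sketch of that pattern; the backoff growth and the deadline are assumptions read off the timestamps above, not minikube's actual retry implementation:

```go
// Sketch of a bounded retry around the failing disable command.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// disableKubelet runs the same command the log shows failing.
func disableKubelet() error {
	out, err := exec.Command("sudo", "systemctl", "disable", "--now", "kubelet").CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubelet disable --now: %v\n%s", err, out)
	}
	return nil
}

func main() {
	deadline := time.Now().Add(3 * time.Second) // assumed budget, per the ~3s of retries above
	backoff := 250 * time.Millisecond
	for {
		err := disableKubelet()
		if err == nil {
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("X Exiting due to GUEST_PAUSE: %v\n", err)
			os.Exit(1)
		}
		fmt.Printf("will retry after %v: %v\n", backoff, err)
		time.Sleep(backoff)
		backoff += backoff / 2 // grow the wait, roughly matching the log
	}
}
```

Since the init script's header never changes between attempts, every retry fails the same way, which is why the retries only delay the eventual GUEST_PAUSE exit rather than recovering from it.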
medyagh added the kind/flake and priority/important-soon labels on Jun 29, 2021
medyagh changed the title from "TestStartStop/group/newest-cni/serial/Pause Flake" to "TestStartStop/group/newest-cni/serial/Pause: update-rc.d: error: kubelet Default-Start contains no runlevels, aborting." on Jun 29, 2021
medyagh changed the title to "TestStartStop/group/newest-cni/serial/Pause: contains no runlevels, aborting." on Jun 29, 2021