apiserver: stopped - connection to the server x:8443 was refused #3649

Closed
kamilgregorczyk opened this issue Feb 11, 2019 · 13 comments · Fixed by #3671
Labels
priority/important-soon: Must be staffed and worked on either currently, or very soon, ideally in time for the next release.
triage/needs-information: Indicates an issue needs more information in order to work on it.
Milestone

Comments

@kamilgregorczyk

kamilgregorczyk commented Feb 11, 2019

Minikube is constantly resetting/crashing/doing something with the Kubernetes API server. After starting minikube, increasing cores to 8 and RAM to 20 GB, and installing helm and istio, I'm getting random crashes, which are super frustrating. Here are the minikube logs: https://pastebin.com/fiPNnjz9

➜  ~ minikube status
host: Running
kubelet: Running
apiserver: Stopped
kubectl: Correctly Configured: pointing to minikube-vm at 192.168.99.100

The connection to the server 192.168.99.100:8443 was refused - did you specify the right host or port?

Environment:

Minikube version: v0.33.1

  • OS: MacOS
  • VM Driver: VirtualBox
  • ISO version: file:///Users/kgregorczyk/.minikube/cache/iso/minikube-v0.33.1.iso
  • Install tools: helm & istio from helm
@kamilgregorczyk kamilgregorczyk changed the title Unusable minikube: apiServer stopped Unusable minikube apiserver: stopped Feb 11, 2019
@tstromberg tstromberg changed the title Unusable minikube apiserver: stopped apiserver: stopped - connection to the server x:8443 was refused Feb 12, 2019
@tstromberg tstromberg added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Feb 12, 2019
@tstromberg tstromberg added this to the v1.0.0-candidate milestone Feb 12, 2019
@tstromberg
Contributor

Hey @kamilgregorczyk - sorry about minikube not working out for you so far. I suspect that the apiserver is being evicted, due to running out of resources for the footprint you are attempting to deploy, but we don't do a good job of showing it. Do you mind outputting some logs for me, and perhaps steps to replicate? It'd really help us to stabilize this.

minikube logs

kubectl get pods --all-namespaces

and:

minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-api --format={{.ID}})'

Thanks so much for your bug report!
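
Since kubectl won't answer while the apiserver is down, the kubelet journal inside the VM is another place to look for eviction messages. A minimal sketch, assuming the systemd-managed kubelet in the minikube VM:

minikube ssh 'sudo journalctl -u kubelet --no-pager | grep -i evict'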

@tstromberg tstromberg added the triage/needs-information Indicates an issue needs more information in order to work on it. label Feb 12, 2019
@fabianbaier

fabianbaier commented Feb 13, 2019

Same thing happens to me when running skaffold dev on a 6-core, 8 GB minikube cluster with hyperkit.

kubectl get pods --all-namespaces

Won't work as the apiserver is not running.

minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-api --format={{.ID}})'

Returns

$ minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-api --format={{.ID}})'
"docker logs" requires exactly 1 argument.
See 'docker logs --help'.

Usage:  docker logs [OPTIONS] CONTAINER

Fetch the logs of a container
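
The one-liner above presumably fails because the name=k8s_kube-api filter matches zero or several containers, so the command substitution does not expand to exactly one ID. A variant that picks only the most recent match (assuming the docker runtime inside the VM, as in the rest of this thread) would be something like:

$ minikube ssh 'docker ps -a --filter name=k8s_kube-apiserver --format "{{.ID}}" | head -n1 | xargs docker logs'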

When looking further into it by running minikube ssh on a healthy minikube, then running skaffold dev and letting the issue replicate, I am starting to see eviction messages in journalctl:

Feb 13 07:16:43 minikube kubelet[2410]: E0213 07:16:43.769888    2410 reflector.go:251] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to watch *v1.Pod: Get https://localhost:8443/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&resourceVersion=17030&timeoutSeconds=347&watch=true: dial tcp 127.0.0.1:8443: connect: connection refused
Feb 13 07:16:44 minikube kubelet[2410]: I0213 07:16:44.041581    2410 eviction_manager.go:563] eviction manager: pod kube-apiserver-minikube_kube-system(87f41e2e0629c3deb5c2239e08d8045d) is evicted successfully
Feb 13 07:16:44 minikube kubelet[2410]: I0213 07:16:44.041623    2410 eviction_manager.go:187] eviction manager: pods kube-apiserver-minikube_kube-system(87f41e2e0629c3deb5c2239e08d8045d) evicted, waiting for pod to be cleaned up
.
.
.
Feb 13 07:17:14 minikube kubelet[2410]: W0213 07:17:14.043001    2410 eviction_manager.go:392] eviction manager: timed out waiting for pods kube-apiserver-minikube_kube-system(87f41e2e0629c3deb5c2239e08d8045d) to be cleaned up
.
.
.
Feb 13 07:18:30 minikube kubelet[2410]: E0213 07:18:30.460108    2410 event.go:147] Unable to write event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-minikube.1582db2c05e7b870", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Initializers:(*v1.Initializers)(nil), Finalizers:[]string(nil), ClusterName:""}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-minikube", UID:"87f41e2e0629c3deb5c2239e08d8045d", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Killing", Message:"Killing container with id docker://kube-apiserver:Need to kill Pod", Source:v1.EventSource{Component:"kubelet", Host:"minikube"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xbf110ef6f539ea70, ext:141711610374, loc:(*time.Location)(0x71d3440)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xbf110ef6f539ea70, ext:141711610374, loc:(*time.Location)(0x71d3440)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}' (retry limit exceeded!)

Why do we see evictions such as pod kube-apiserver-minikube_kube-system(87f41e2e0629c3deb5c2239e08d8045d) is evicted successfully?

What's also maybe helpful is:

Feb 13 07:14:22 minikube kubelet[2410]: E0213 07:14:22.330199    2410 kubelet.go:1308] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Feb 13 07:14:22 minikube kubelet[2410]: E0213 07:14:22.481301    2410 eviction_manager.go:243] eviction manager: failed to get summary stats: failed to get node info: node "minikube" not found

and

Feb 13 07:15:42 minikube kubelet[2410]: W0213 07:15:42.542247    2410 eviction_manager.go:329] eviction manager: attempting to reclaim ephemeral-storage
Feb 13 07:15:42 minikube kubelet[2410]: W0213 07:15:42.760098    2410 eviction_manager.go:414] eviction manager: unexpected error when attempting to reduce ephemeral-storage pressure: wanted to free 9223372036854775807 bytes, but freed 989661055 bytes space with errors in image deletion: [rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete ee867562a37d (must be forced) - image is being used by stopped container ca5d61bdb610, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 557d1c801957 (must be forced) - image is being used by stopped container 9ed6ed528ad3, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 3d7d78396776 (must be forced) - image is being used by stopped container a5c4e5c32fe6, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 77a332912e0d (must be forced) - image is being used by stopped container ff8ecbd50c79, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 213ee1099b0c (must be forced) - image is being used by stopped container 133abcf6c34a, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 1a8de823737d (must be forced) - image is being used by stopped container a6432e7fc825, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 52d894fca6d4 (cannot be forced) - image has dependent child images, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete bc8fb6e6e49d (cannot be forced) - image has dependent child images, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete a3040c587f52 (must be forced) - image is being used by stopped container 72a3783f5edf, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete c7c0e41e93f1 (must be forced) - image is being used by stopped container 3759185213a0, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 579404a7b407 (must be forced) - image is being used by stopped container c99f4a3c89e3, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 5a02f920193b (cannot be forced) - image has dependent child images, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete d4353dbbc50f (must be forced) - image is being used by stopped container d4c86f39d0b2, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete fe52e4bf5bc2 (must be forced) - image is being used by stopped container 02b7b443fe10, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 6e60582cf5d2 (must be forced) - image is being used by stopped container 1853de35a8e1, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete ed266c958895 (must be forced) - image is being used by stopped container 0fefcdb949fe, rpc error: code = Unknown desc = Error response from daemon: conflict: unable to delete 8cc61adda887 (must be forced) - image is being used by stopped container b096504f5e9b]
Feb 13 07:15:42 minikube kubelet[2410]: I0213 07:15:42.802271    2410 eviction_manager.go:358] eviction manager: pods ranked for eviction: kube-addon-manager-minikube_kube-system(5c72fb06dcdda608211b70d63c0ca488), kube-scheduler-minikube_kube-system(9729a196c4723b60ab401eaff722982d), kube-controller-manager-minikube_kube-system(a27cf7498262006bec6c9e79ae3ebd16), kube-apiserver-minikube_kube-system(87f41e2e0629c3deb5c2239e08d8045d), etcd-minikube_kube-system(75f0900b110d44d8e0930ffb50187ab7), trader-postgres-7687cd6f9c-td8nx_default(881670d9-2f4f-11e9-a92c-52406b8570cc), storage-provisioner_kube-system(35904133-2f2a-11e9-831d-52406b8570cc), trader-backend-f5dd485-mlg98_default(88c87f50-2f4f-11e9-a92c-52406b8570cc), coredns-86c58d9df4-pjz2f_kube-system(34bcf7d5-2f2a-11e9-831d-52406b8570cc), coredns-86c58d9df4-9brvs_kube-system(34bbf9fd-2f2a-11e9-831d-52406b8570cc), fabianbaier-prometheus-5dd7b6f9b7-sqs6f_default(91d739d4-2f4f-11e9-a92c-52406b8570cc), trader-scheduler-55d6df78ff-gnr7r_default(fa7e2756-2f53-11e9-a92c-52406b8570cc)

Is that maybe the clue? We run out of storage, the eviction manager starts to evict pods, and the apiserver happens to be one of many? I think that would make sense... somehow.

I just don't get why storage is an issue:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          3.9G  638M  3.2G  17% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G   17M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G  140K  3.9G   1% /tmp
/dev/vda1        16G   12G  2.5G  83% /mnt/vda1
$ sudo du -h -d 1 /var/lib/docker
20K	/var/lib/docker/plugins
4.0K	/var/lib/docker/runtimes
72K	/var/lib/docker/buildkit
12G	/var/lib/docker/overlay2
648K	/var/lib/docker/containerd
4.0K	/var/lib/docker/swarm
5.1M	/var/lib/docker/containers
2.3M	/var/lib/docker/volumes
18M	/var/lib/docker/image
20K	/var/lib/docker/builder
4.0K	/var/lib/docker/trust
4.0K	/var/lib/docker/tmp
108K	/var/lib/docker/network
12G	/var/lib/docker

I should have at least 4G available for my docker images.
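
If the trigger is the kubelet's default ephemeral-storage eviction thresholds (hard eviction typically kicks in at nodefs.available<10% / imagefs.available<15%, which on a 16G disk leaves only a couple of GB of headroom), two workarounds seem plausible. This is only a sketch under that assumption, not something minikube documents for this issue:

# Reclaim space manually instead of letting the eviction manager do it
# (removes stopped containers and unused images inside the VM):
$ minikube ssh 'docker system prune -a -f'

# Or recreate the VM with a larger disk so the threshold sits much further away:
$ minikube delete
$ minikube start --disk-size=40g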

@kamilgregorczyk
Author

kamilgregorczyk commented Feb 13, 2019

@tstromberg You might be right, I had istio running. I can't give you logs as I deleted my VM and moved to GCP with my experiments. I did:

  1. Run minikube
  2. Install helm with a tiller account
  3. Stop minikube and assign 8 cores and 21 GB of RAM
  4. Start minikube
  5. Install istio with all the features: helm install install/kubernetes/helm/istio --name istio --namespace istio-system --set prometheus.service.nodePort.enabled=true --set tracing.enabled=true --set tracing.service.type=NodePort --set kiali.enabled=true --set grafana.enabled=true --set grafana.service.type=NodePort

and after some time I started getting these dropped connections.
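
For anyone trying to reproduce, those steps roughly map to the commands below (the CPU/memory values mirror the description, 21 GB is roughly 21504 MB; helm init assumes the tiller service account already exists, and --cpus/--memory only take effect when the VM is created):

$ minikube start --cpus 8 --memory 21504
$ helm init --service-account tiller
$ helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
    --set prometheus.service.nodePort.enabled=true --set tracing.enabled=true \
    --set tracing.service.type=NodePort --set kiali.enabled=true \
    --set grafana.enabled=true --set grafana.service.type=NodePort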

@fabianbaier

fabianbaier commented Feb 13, 2019

I was able to reproduce the issue. By default /dev/vda1 has 16G, and once usage hits a threshold (around 12G) it sets off the eviction of those vital pods.

To replicate, try this:

$ minikube ssh
                         _             _            
            _         _ ( )           ( )           
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __  
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          3.9G  638M  3.2G  17% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  8.8M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G   16K  3.9G   1% /tmp
/dev/vda1        16G  859M   14G   6% /mnt/vda1
$ sudo fallocate -l 12G /mnt/vda1/12gb
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          3.9G  638M  3.2G  17% /
devtmpfs        3.9G     0  3.9G   0% /dev
tmpfs           3.9G     0  3.9G   0% /dev/shm
tmpfs           3.9G  8.8M  3.9G   1% /run
tmpfs           3.9G     0  3.9G   0% /sys/fs/cgroup
tmpfs           3.9G   20K  3.9G   1% /tmp
/dev/vda1        16G   13G  1.6G  90% /mnt/vda1

The output of kubectl get events -w looks like this:

$ kubectl get events -w
LAST SEEN   TYPE     REASON                    KIND   MESSAGE
46s         Normal   NodeHasSufficientMemory   Node   Node minikube status is now: NodeHasSufficientMemory
46s         Normal   NodeHasNoDiskPressure     Node   Node minikube status is now: NodeHasNoDiskPressure
46s         Normal   NodeHasSufficientPID      Node   Node minikube status is now: NodeHasSufficientPID
18s         Normal   RegisteredNode            Node   Node minikube event: Registered Node minikube in Controller
16s         Normal   Starting                  Node   Starting kube-proxy.
0s    Warning   ImageGCFailed   Node   failed to garbage collect required amount of images. Wanted to free 1671698841 bytes, but freed 0 bytes
E0213 14:06:27.209682    5918 streamwatcher.go:109] Unable to decode an event from the watch stream: http2: server sent GOAWAY and closed the connection; LastStreamID=63, ErrCode=NO_ERROR, debug=""

And sometimes like this:

.
.
0s    Warning   EvictionThresholdMet   Node   Attempting to reclaim ephemeral-storage
0s    Normal   NodeHasDiskPressure   Node   Node minikube status is now: NodeHasDiskPressure
0s    Warning   EvictionThresholdMet   Node   Attempting to reclaim ephemeral-storage
0s    Warning   EvictionThresholdMet   Node   Attempting to reclaim ephemeral-storage
0s    Warning   EvictionThresholdMet   Node   Attempting to reclaim ephemeral-storage

This is when the apiserver got evicted. When I then run kubectl get pods I get:

The connection to the server 192.168.64.9:8443 was refused - did you specify the right host or port?
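
Assuming the fallocate file above is the only thing creating pressure, deleting it and giving the kubelet a minute or two to clear the DiskPressure condition should let the static pods (including the apiserver) come back on their own:

$ minikube ssh 'rm /mnt/vda1/12gb'
$ minikube status
$ kubectl get events -w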

@mayankpundir27

Screenshot from 2019-06-13 22-12-32

@mayankpundir27

Minikube is constantly crashing. Please give some solutions.

@linkavich14

linkavich14 commented Jul 8, 2019

What was the solution?
My /dev/vda1 is only at 18% and it keeps crashing.

@irrgit

irrgit commented Jul 9, 2019

Had the same happen to me recently; minikube was running with vm-driver=none. It was up for over 28 days without issue, so I am not exactly sure what caused it to come down. Stopping and starting it again fixed the problem, but it would be nice to know how to prevent it from happening again in the future.

@NanXuejiao

NanXuejiao commented Jul 25, 2019

I faced the same thing. After restarting one of my own pods, the minikube apiserver turned to Stopped. A few times, after waiting a long while, the apiserver came back up with "Running" status, but most of the time it stays Stopped. The following is its log.

$ minikube ssh 'docker logs $(docker ps -a -f name=k8s_kube-api --format={{.ID}})'
Flag --insecure-port has been deprecated, This flag will be removed in a future version.
I0725 08:34:13.148972       1 server.go:557] external host was not specified, using 192.168.99.101
I0725 08:34:13.149096       1 server.go:146] Version: v1.13.4
I0725 08:34:13.640528       1 initialization.go:91] enabled Initializers feature as part of admission plugin setup
I0725 08:34:13.641385       1 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers.
I0725 08:34:13.641459       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0725 08:34:13.642174       1 plugins.go:158] Loaded 10 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook,Initializers.
I0725 08:34:13.642301       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0725 08:34:13.772310       1 master.go:228] Using reconciler: lease
W0725 08:34:14.812254       1 genericapiserver.go:338] Skipping API batch/v2alpha1 because it has no resources.
W0725 08:34:14.960597       1 genericapiserver.go:338] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0725 08:34:14.968874       1 genericapiserver.go:338] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0725 08:34:14.980357       1 genericapiserver.go:338] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
W0725 08:34:15.074086       1 genericapiserver.go:338] Skipping API admissionregistration.k8s.io/v1alpha1 because it has no resources.
[restful] 2019/07/25 08:34:15 log.go:33: [restful/swagger] listing is available at https://192.168.99.101:8443/swaggerapi
[restful] 2019/07/25 08:34:15 log.go:33: [restful/swagger] https://192.168.99.101:8443/swaggerui/ is mapped to folder /swagger-ui/
$ df -h
Filesystem      Size  Used Avail Use% Mounted on
devtmpfs        934M     0  934M   0% /dev
tmpfs           996M     0  996M   0% /dev/shm
tmpfs           996M   22M  974M   3% /run
tmpfs           996M     0  996M   0% /sys/fs/cgroup
tmpfs           996M  300K  996M   1% /tmp
/dev/sda1        17G  2.7G   14G  17% /mnt/sda1
/Users          466G  122G  345G  27% /Users
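
The log above ends right after startup, so it is not obvious whether the container is being OOM-killed, evicted, or crashing for another reason. One way to check, assuming the docker runtime as in the earlier comments, is to inspect the exit state of the most recent kube-apiserver container:

$ minikube ssh 'docker ps -a --filter name=k8s_kube-apiserver --format "{{.ID}}" | head -n1 | xargs docker inspect -f "{{.State.ExitCode}} OOMKilled={{.State.OOMKilled}} {{.State.FinishedAt}}"'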

@jpninanjohn

You might be right, I had istio running. I can't give you logs as I deleted my VM and moved to GCP with my experiments. I did:

I also faced this issue after I installed istio. I had to delete my minikube and create it again. Fingers crossed it won't happen again.

@prabsdubey

After installing, the apiserver is not coming up (apiserver: Stopped):
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured

prabs@LAPTOP-HQ5LK73I ~
$ pwd
/home/prabs

prabs@LAPTOP-HQ5LK73I ~
$ kubectl
kubectl controls the Kubernetes cluster manager.

Find more information at:
https://kubernetes.io/docs/reference/kubectl/overview/

Basic Commands (Beginner):
create Create a resource from a file or from stdin.
expose Take a replication controller, service, deployment or pod and
expose it as a new Kubernetes Service
run Run a particular image on the cluster
set Set specific features on objects

Basic Commands (Intermediate):
explain Documentation of resources
get Display one or many resources
edit Edit a resource on the server
delete Delete resources by filenames, stdin, resources and names, or by

resources and label selector

Deploy Commands:
rollout Manage the rollout of a resource
scale Set a new size for a Deployment, ReplicaSet or Replication
Controller
autoscale Auto-scale a Deployment, ReplicaSet, or ReplicationController

Cluster Management Commands:
certificate Modify certificate resources.
cluster-info Display cluster info
top Display Resource (CPU/Memory/Storage) usage.
cordon Mark node as unschedulable
uncordon Mark node as schedulable
drain Drain node in preparation for maintenance
taint Update the taints on one or more nodes

Troubleshooting and Debugging Commands:
describe Show details of a specific resource or group of resources
logs Print the logs for a container in a pod
attach Attach to a running container
exec Execute a command in a container
port-forward Forward one or more local ports to a pod
proxy Run a proxy to the Kubernetes API server
cp Copy files and directories to and from containers.
auth Inspect authorization

Advanced Commands:
diff Diff live version against would-be applied version
apply Apply a configuration to a resource by filename or stdin
patch Update field(s) of a resource using strategic merge patch
replace Replace a resource by filename or stdin
wait Experimental: Wait for a specific condition on one or many
resources.
convert Convert config files between different API versions
kustomize Build a kustomization target from a directory or a remote url.

Settings Commands:
label Update the labels on a resource
annotate Update the annotations on a resource
completion Output shell completion code for the specified shell (bash or
zsh)

Other Commands:
alpha Commands for features in alpha
api-resources Print the supported API resources on the server
api-versions Print the supported API versions on the server, in the form of
"group/version"
config Modify kubeconfig files
plugin Provides utilities for interacting with plugins.
version Print the client and server version information

Usage:
kubectl [flags] [options]

Use "kubectl --help" for more information about a given command.
Use "kubectl options" for a list of global command-line options (applies to all
commands).

prabs@LAPTOP-HQ5LK73I ~
$ minikube
minikube provisions and manages local Kubernetes clusters optimized for
development workflows.

Basic Commands:
start Starts a local Kubernetes cluster
status Gets the status of a local Kubernetes cluster
stop Stops a running local Kubernetes cluster
delete Deletes a local Kubernetes cluster
dashboard Access the Kubernetes dashboard running within the minikube
cluster
pause pause Kubernetes
unpause unpause Kubernetes

Images Commands:
docker-env Configure environment to use minikube's Docker daemon
podman-env Configure environment to use minikube's Podman service
cache Add, delete, or push a local image into minikube

Configuration and Management Commands:
addons Enable or disable a minikube addon
config Modify persistent configuration values
profile Get or list the current profiles (clusters)
update-context Update kubeconfig in case of an IP or port change

Networking and Connectivity Commands:
service Returns a URL to connect to a service
tunnel Connect to LoadBalancer services

Advanced Commands:
mount Mounts the specified directory into minikube
ssh Log into the minikube environment (for debugging)
kubectl Run a kubectl binary matching the cluster version
node Add, remove, or list additional nodes

Troubleshooting Commands:
ssh-key Retrieve the ssh identity key path of the specified cluster
ip Retrieves the IP address of the running cluster
logs Returns logs to debug a local Kubernetes cluster
update-check Print current and latest version number
version Print the version of minikube

Other Commands:
completion Generate command completion for a shell

Use "minikube --help" for more information about a given command.

prabs@LAPTOP-HQ5LK73I ~
$ minikube start

  • minikube v1.14.1 on Microsoft Windows 10 Home Single Language 10.0.19041 Build
    19041
  • Using the virtualbox driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • virtualbox "minikube" VM is missing, will recreate.
  • Creating virtualbox VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
  • Preparing Kubernetes v1.19.2 on Docker 19.03.12 ...
  • Verifying Kubernetes components...
    ! Enabling 'default-storageclass' returned an error: running callbacks: [Error m
    aking standard the default storage class: Error listing StorageClasses: Get "htt
    ps://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses": dial tcp 192.16
    8.99.103:8443: connectex: No connection could be made because the target machine
    actively refused it.]
    X Problems detected in kubelet:
    • Oct 25 06:13:16 minikube kubelet[4555]: E1025 06:13:16.082132 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:13:21 minikube kubelet[4555]: E1025 06:13:21.705715 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 10s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:13:24 minikube kubelet[4555]: E1025 06:13:24.364639 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 10s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:13:48 minikube kubelet[4555]: E1025 06:13:48.709706 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:13:51 minikube kubelet[4555]: E1025 06:13:51.703572 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
      X Problems detected in kubelet:
    • Oct 25 06:13:48 minikube kubelet[4555]: E1025 06:13:48.709706 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:13:51 minikube kubelet[4555]: E1025 06:13:51.703572 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:13:57 minikube kubelet[4555]: E1025 06:13:57.049762 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:14:03 minikube kubelet[4555]: E1025 06:14:03.637975 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:14:05 minikube kubelet[4555]: E1025 06:14:05.730360 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
      X Problems detected in kubelet:
    • Oct 25 06:13:51 minikube kubelet[4555]: E1025 06:13:51.703572 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:13:57 minikube kubelet[4555]: E1025 06:13:57.049762 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:14:03 minikube kubelet[4555]: E1025 06:14:03.637975 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:14:05 minikube kubelet[4555]: E1025 06:14:05.730360 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 20s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:14:14 minikube kubelet[4555]: E1025 06:14:14.725374 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
      X Problems detected in kubelet:
    • Oct 25 06:14:14 minikube kubelet[4555]: E1025 06:14:14.725374 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 20s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:14:37 minikube kubelet[4555]: E1025 06:14:37.775172 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:14:38 minikube kubelet[4555]: E1025 06:14:38.829145 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:14:41 minikube kubelet[4555]: E1025 06:14:41.701903 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:14:43 minikube kubelet[4555]: E1025 06:14:43.637554 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
      X Problems detected in kubelet:
    • Oct 25 06:14:38 minikube kubelet[4555]: E1025 06:14:38.829145 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:14:41 minikube kubelet[4555]: E1025 06:14:41.701903 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:14:43 minikube kubelet[4555]: E1025 06:14:43.637554 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:14:54 minikube kubelet[4555]: E1025 06:14:54.725615 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:14:57 minikube kubelet[4555]: E1025 06:14:57.735687 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
      ! Enabling 'storage-provisioner' returned an error: running callbacks: [sudo KUB
      ECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.19.2/kubectl
      apply -f /etc/kubernetes/addons/storage-provisioner.yaml: Process exited with st
      atus 1
      stdout:

stderr:
Unable to connect to the server: net/http: TLS handshake timeout
]

  • Enabled addons:
    X Problems detected in kubelet:
    • Oct 25 06:14:43 minikube kubelet[4555]: E1025 06:14:43.637554 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:14:54 minikube kubelet[4555]: E1025 06:14:54.725615 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:14:57 minikube kubelet[4555]: E1025 06:14:57.735687 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:15:06 minikube kubelet[4555]: E1025 06:15:06.725534 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:15:12 minikube kubelet[4555]: E1025 06:15:12.725982 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
      X Problems detected in kubelet:
    • Oct 25 06:15:06 minikube kubelet[4555]: E1025 06:15:06.725534 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 40s resta
      rting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1
      ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:15:12 minikube kubelet[4555]: E1025 06:15:12.725982 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 40s restarting failed container=kube-controller-manager pod=kube-contro
      ller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:15:38 minikube kubelet[4555]: E1025 06:15:38.181491 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:15:41 minikube kubelet[4555]: E1025 06:15:41.702214 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:15:46 minikube kubelet[4555]: E1025 06:15:46.368810 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
      X Problems detected in kubelet:
    • Oct 25 06:15:41 minikube kubelet[4555]: E1025 06:15:41.702214 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:15:46 minikube kubelet[4555]: E1025 06:15:46.368810 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:15:53 minikube kubelet[4555]: E1025 06:15:53.636966 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:15:56 minikube kubelet[4555]: E1025 06:15:56.753041 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:16:05 minikube kubelet[4555]: E1025 06:16:05.724682 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
      X Problems detected in kubelet:
    • Oct 25 06:15:53 minikube kubelet[4555]: E1025 06:15:53.636966 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:15:56 minikube kubelet[4555]: E1025 06:15:56.753041 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:16:05 minikube kubelet[4555]: E1025 06:16:05.724682 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:16:11 minikube kubelet[4555]: E1025 06:16:11.727503 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:16:17 minikube kubelet[4555]: E1025 06:16:17.724472 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
      X Problems detected in kubelet:
    • Oct 25 06:16:05 minikube kubelet[4555]: E1025 06:16:05.724682 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:16:11 minikube kubelet[4555]: E1025 06:16:11.727503 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:16:17 minikube kubelet[4555]: E1025 06:16:17.724472 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:16:25 minikube kubelet[4555]: E1025 06:16:25.725273 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:16:32 minikube kubelet[4555]: E1025 06:16:32.731285 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
      X Problems detected in kubelet:
    • Oct 25 06:16:17 minikube kubelet[4555]: E1025 06:16:17.724472 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:16:25 minikube kubelet[4555]: E1025 06:16:25.725273 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:16:32 minikube kubelet[4555]: E1025 06:16:32.731285 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:16:39 minikube kubelet[4555]: E1025 06:16:39.725255 4555 pod_wo
      rkers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserve
      r-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to
      "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s res
      tarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(
      b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:16:45 minikube kubelet[4555]: E1025 06:16:45.723977 4555 pod_wo
      rkers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controll
      er-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: f
      ailed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "
      back-off 1m20s restarting failed container=kube-controller-manager pod=kube-cont
      roller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
      X Problems detected in kubelet:
    • Oct 25 06:16:32 minikube kubelet[4555]: E1025 06:16:32.731285 4555 pod_workers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:16:39 minikube kubelet[4555]: E1025 06:16:39.725255 4555 pod_workers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserver-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:16:45 minikube kubelet[4555]: E1025 06:16:45.723977 4555 pod_workers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"
    • Oct 25 06:16:54 minikube kubelet[4555]: E1025 06:16:54.728442 4555 pod_workers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserver-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to "StartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"
    • Oct 25 06:16:59 minikube kubelet[4555]: E1025 06:16:59.731056 4555 pod_workers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: failed to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "back-off 1m20s restarting failed container=kube-controller-manager pod=kube-controller-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"

X Exiting due to GUEST_START: wait 6m0s for node: wait for healthy API server: apiserver healthz never reported healthy: timed out waiting for the condition
*
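
Since the start only times out waiting for the apiserver health check while the kubelet keeps restarting the control-plane containers, one common culprit is memory pressure inside the VM. A quick way to rule that out before retrying (a minimal sketch, assuming the standard tools shipped in the minikube guest image; exact output will differ per machine):

# how much memory is actually free inside the VM
minikube ssh 'free -m'

# look for OOM-killer activity in the guest kernel log
minikube ssh 'dmesg | grep -iE "oom|out of memory"'

If memory is tight, giving the VM more (for example: minikube start --memory=4096) or trimming the deployed workload usually stops the restart loop.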

prabs@LAPTOP-HQ5LK73I ~
$ minikube status
minikube
type: Control Plane
host: Running
kubelet: Running
apiserver: Stopped
kubeconfig: Configured
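
minikube status only reports the end state; the minikube logs dump below shows the restart loop itself. If the full dump is too long to scan, newer minikube releases also take a problems-only filter (a sketch; check minikube logs --help in case the flag is not available on your version):

# print only the log lines minikube classifies as known problems
minikube logs --problems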

prabs@LAPTOP-HQ5LK73I ~
$ minikube logs

  • ==> Docker <==
  • -- Logs begin at Sun 2020-10-25 06:09:38 UTC, end at Sun 2020-10-25 06:26:06 UTC. --
  • Oct 25 06:12:00 minikube dockerd[2726]: time="2020-10-25T06:12:00.020221768Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/66a
    583888c5044962da96b721db3188b7c2c9e6873c23aaa160126b0d369f1ee/shim.sock" debug=f
    alse pid=3565
  • Oct 25 06:12:00 minikube dockerd[2726]: time="2020-10-25T06:12:00.297757565Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec8
    b64f4ed5f1d1ce7e81ae2d9f80a7fca90181abd0294e46a55ff04158861ea/shim.sock" debug=f
    alse pid=3596
  • Oct 25 06:12:00 minikube dockerd[2726]: time="2020-10-25T06:12:00.467490792Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/24d
    45b930473dab074164da7fd9da8e49c084833e2ba18179100d67a2d059851/shim.sock" debug=f
    alse pid=3616
  • Oct 25 06:12:00 minikube dockerd[2726]: time="2020-10-25T06:12:00.684569386Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/09e
    773dd2f7dc0044b93193fb7c5f8021b1e7e4f8cf4e1c81154747fd17d72ae/shim.sock" debug=f
    alse pid=3642
  • Oct 25 06:12:02 minikube dockerd[2726]: time="2020-10-25T06:12:02.736296964Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/f66
    f4eb65b91137193a0348c8f3ec7e060e946c337e04e4e9c3a0f36698fbefe/shim.sock" debug=f
    alse pid=3759
  • Oct 25 06:12:02 minikube dockerd[2726]: time="2020-10-25T06:12:02.905413703Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ee9
    9e49f93a5bb07ca1eb8955a5e93ec07589e5f2b31fff936c7319b90aff216/shim.sock" debug=f
    alse pid=3770
  • Oct 25 06:12:04 minikube dockerd[2726]: time="2020-10-25T06:12:04.570158222Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/bd1
    4a9b4bd1211b83959d91131fc32478580710c68effdb7ff76c56c232d81cd/shim.sock" debug=f
    alse pid=3895
  • Oct 25 06:12:10 minikube dockerd[2726]: time="2020-10-25T06:12:10.616190097Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/d90
    449cdc5368ad3211d3cf09b7bd08e7f165f850f09fa552fbaa35e85772d33/shim.sock" debug=f
    alse pid=4079
  • Oct 25 06:12:23 minikube dockerd[2726]: time="2020-10-25T06:12:23.360448819Z"
    level=info msg="shim reaped" id=f66f4eb65b91137193a0348c8f3ec7e060e946c337e04e4e
    9c3a0f36698fbefe
  • Oct 25 06:12:23 minikube dockerd[2719]: time="2020-10-25T06:12:23.401378110Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:12:25 minikube dockerd[2726]: time="2020-10-25T06:12:25.729134806Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/fe2
    19357f3760a4d774e04a5d4dd9266f43de704c524a0a7a1cc013d221950d6/shim.sock" debug=f
    alse pid=4273
  • Oct 25 06:12:25 minikube dockerd[2726]: time="2020-10-25T06:12:25.797715419Z"
    level=info msg="shim reaped" id=bd14a9b4bd1211b83959d91131fc32478580710c68effdb7
    ff76c56c232d81cd
  • Oct 25 06:12:25 minikube dockerd[2719]: time="2020-10-25T06:12:25.810695459Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:12:28 minikube dockerd[2726]: time="2020-10-25T06:12:28.131521661Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/20c
    d686b405e69a07fae76def6adb84563f0214e2a38692fb7d2b3f4d7b05c43/shim.sock" debug=f
    alse pid=4394
  • Oct 25 06:12:41 minikube dockerd[2726]: time="2020-10-25T06:12:41.837066776Z"
    level=info msg="shim reaped" id=fe219357f3760a4d774e04a5d4dd9266f43de704c524a0a7
    a1cc013d221950d6
  • Oct 25 06:12:41 minikube dockerd[2719]: time="2020-10-25T06:12:41.845668167Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:12:50 minikube dockerd[2726]: time="2020-10-25T06:12:50.984298125Z"
    level=info msg="shim reaped" id=20cd686b405e69a07fae76def6adb84563f0214e2a38692f
    b7d2b3f4d7b05c43
  • Oct 25 06:12:50 minikube dockerd[2719]: time="2020-10-25T06:12:50.996220647Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:12:58 minikube dockerd[2726]: time="2020-10-25T06:12:58.253293092Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/43f
    c764666335600796026d2096707c80663b7804851a8d942bf108ef68dcc6f/shim.sock" debug=f
    alse pid=4880
  • Oct 25 06:12:58 minikube dockerd[2726]: time="2020-10-25T06:12:58.285637778Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/40b
    73082472f7a0363ee9a5b810b14d3656febab1580e2568d12ea9d1664c718/shim.sock" debug=f
    alse pid=4885
  • Oct 25 06:13:15 minikube dockerd[2726]: time="2020-10-25T06:13:15.034936581Z"
    level=info msg="shim reaped" id=43fc764666335600796026d2096707c80663b7804851a8d9
    42bf108ef68dcc6f
  • Oct 25 06:13:15 minikube dockerd[2719]: time="2020-10-25T06:13:15.068143484Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:13:23 minikube dockerd[2726]: time="2020-10-25T06:13:23.653492942Z"
    level=info msg="shim reaped" id=40b73082472f7a0363ee9a5b810b14d3656febab1580e256
    8d12ea9d1664c718
  • Oct 25 06:13:23 minikube dockerd[2719]: time="2020-10-25T06:13:23.665446470Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:13:32 minikube dockerd[2726]: time="2020-10-25T06:13:32.965214332Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/663
    3a5694df78bcd58ecd5d36ad1dc09ac4eb853a66c3bb35603b855003424ac/shim.sock" debug=f
    alse pid=5166
  • Oct 25 06:13:35 minikube dockerd[2726]: time="2020-10-25T06:13:35.065584112Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/9b9
    5c363880438c4c22645edaaf8cb5465f1195bf30e07642f592b50985e611c/shim.sock" debug=f
    alse pid=5206
  • Oct 25 06:13:47 minikube dockerd[2726]: time="2020-10-25T06:13:47.918236780Z"
    level=info msg="shim reaped" id=6633a5694df78bcd58ecd5d36ad1dc09ac4eb853a66c3bb3
    5603b855003424ac
  • Oct 25 06:13:47 minikube dockerd[2719]: time="2020-10-25T06:13:47.931432268Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:13:56 minikube dockerd[2726]: time="2020-10-25T06:13:56.072905863Z"
    level=info msg="shim reaped" id=9b95c363880438c4c22645edaaf8cb5465f1195bf30e0764
    2f592b50985e611c
  • Oct 25 06:13:56 minikube dockerd[2719]: time="2020-10-25T06:13:56.084666494Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:14:17 minikube dockerd[2726]: time="2020-10-25T06:14:17.033380643Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/953
    6e6a81db94399860e073a9f3d37b165fa6e616be24b642b7aa1312f8d4253/shim.sock" debug=f
    alse pid=5604
  • Oct 25 06:14:31 minikube dockerd[2726]: time="2020-10-25T06:14:31.228445502Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/76b
    0e48ce6a09d05922aa184e1bbabe941e76fdded55037b03506acdd80a2294/shim.sock" debug=f
    alse pid=5715
  • Oct 25 06:14:37 minikube dockerd[2726]: time="2020-10-25T06:14:37.292840239Z"
    level=info msg="shim reaped" id=9536e6a81db94399860e073a9f3d37b165fa6e616be24b64
    2b7aa1312f8d4253
  • Oct 25 06:14:37 minikube dockerd[2719]: time="2020-10-25T06:14:37.303658218Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:14:38 minikube dockerd[2726]: time="2020-10-25T06:14:38.368520285Z"
    level=info msg="shim reaped" id=76b0e48ce6a09d05922aa184e1bbabe941e76fdded55037b
    03506acdd80a2294
  • Oct 25 06:14:38 minikube dockerd[2719]: time="2020-10-25T06:14:38.379326509Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:15:20 minikube dockerd[2726]: time="2020-10-25T06:15:20.174385961Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/be4
    ceff68755474f27153c4131c085b6b7bd2e773d3a469517b12903571c64b1/shim.sock" debug=f
    alse pid=6188
  • Oct 25 06:15:28 minikube dockerd[2726]: time="2020-10-25T06:15:28.197777084Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/92a
    5d6ba34e7d7e867e0454bfff6a9552e18c5c55612af390f7b68bf70ba41a9/shim.sock" debug=f
    alse pid=6306
  • Oct 25 06:15:37 minikube dockerd[2726]: time="2020-10-25T06:15:37.815812699Z"
    level=info msg="shim reaped" id=be4ceff68755474f27153c4131c085b6b7bd2e773d3a4695
    17b12903571c64b1
  • Oct 25 06:15:37 minikube dockerd[2719]: time="2020-10-25T06:15:37.825983466Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:15:46 minikube dockerd[2726]: time="2020-10-25T06:15:46.253220919Z"
    level=info msg="shim reaped" id=92a5d6ba34e7d7e867e0454bfff6a9552e18c5c55612af39
    0f7b68bf70ba41a9
  • Oct 25 06:15:46 minikube dockerd[2719]: time="2020-10-25T06:15:46.268137344Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:17:08 minikube dockerd[2726]: time="2020-10-25T06:17:08.059569467Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/a76
    e86f6940d325eba13721cf27732b21d38b027018f685f49daa17cf37db176/shim.sock" debug=f
    alse pid=7145
  • Oct 25 06:17:14 minikube dockerd[2726]: time="2020-10-25T06:17:14.380077682Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/49a
    c1c2769906fbc3b020e1fc93d4b3b3f123f6b0efdad7c2b8862a5af48a731/shim.sock" debug=f
    alse pid=7198
  • Oct 25 06:17:25 minikube dockerd[2726]: time="2020-10-25T06:17:25.350410289Z"
    level=info msg="shim reaped" id=a76e86f6940d325eba13721cf27732b21d38b027018f685f
    49daa17cf37db176
  • Oct 25 06:17:25 minikube dockerd[2719]: time="2020-10-25T06:17:25.362212905Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:17:31 minikube dockerd[2726]: time="2020-10-25T06:17:31.584644094Z"
    level=info msg="shim reaped" id=49ac1c2769906fbc3b020e1fc93d4b3b3f123f6b0efdad7c
    2b8862a5af48a731
  • Oct 25 06:17:31 minikube dockerd[2719]: time="2020-10-25T06:17:31.596091684Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:20:12 minikube dockerd[2726]: time="2020-10-25T06:20:12.041920456Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/85d
    07903c6b7f484cba5be26a994c56f38ad07f08ab99758dd12db3dc55530d2/shim.sock" debug=f
    alse pid=7660
  • Oct 25 06:20:12 minikube dockerd[2726]: time="2020-10-25T06:20:12.260767117Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/590
    17094cef039db98b55fe1ae6b25764ec9d54633357cf4d8e47aa771e477cf/shim.sock" debug=f
    alse pid=7684
  • Oct 25 06:20:28 minikube dockerd[2726]: time="2020-10-25T06:20:28.495390845Z"
    level=info msg="shim reaped" id=85d07903c6b7f484cba5be26a994c56f38ad07f08ab99758
    dd12db3dc55530d2
  • Oct 25 06:20:28 minikube dockerd[2719]: time="2020-10-25T06:20:28.507377594Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:20:36 minikube dockerd[2726]: time="2020-10-25T06:20:36.346696670Z"
    level=info msg="shim reaped" id=59017094cef039db98b55fe1ae6b25764ec9d54633357cf4
    d8e47aa771e477cf
  • Oct 25 06:20:36 minikube dockerd[2719]: time="2020-10-25T06:20:36.357051439Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:25:33 minikube dockerd[2726]: time="2020-10-25T06:25:33.030269288Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/ec0
    29b977f03285e6d4b2256243c079e03797a57f639a34c152a38247aa8c6b5/shim.sock" debug=f
    alse pid=8257
  • Oct 25 06:25:39 minikube dockerd[2726]: time="2020-10-25T06:25:39.442038530Z"
    level=info msg="shim containerd-shim started" address="/containerd-shim/moby/c81
    25a665e2bd3705eb5a7736ff5804d41ee8b683d8d14c45d3941dcb9c2f5ba/shim.sock" debug=f
    alse pid=8306
  • Oct 25 06:25:46 minikube dockerd[2726]: time="2020-10-25T06:25:46.196415250Z"
    level=info msg="shim reaped" id=ec029b977f03285e6d4b2256243c079e03797a57f639a34c
    152a38247aa8c6b5
  • Oct 25 06:25:46 minikube dockerd[2719]: time="2020-10-25T06:25:46.219187213Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • Oct 25 06:25:55 minikube dockerd[2726]: time="2020-10-25T06:25:55.063724220Z"
    level=info msg="shim reaped" id=c8125a665e2bd3705eb5a7736ff5804d41ee8b683d8d14c4
    5d3941dcb9c2f5ba
  • Oct 25 06:25:55 minikube dockerd[2719]: time="2020-10-25T06:25:55.075413861Z"
    level=info msg="ignoring event" module=libcontainerd namespace=moby topic=/tasks
    /delete type="*events.TaskDelete"
  • ==> container status <==
  • CONTAINER       IMAGE           CREATED          STATE     NAME                      ATTEMPT   POD ID
  • c8125a665e2bd   8603821e1a7a5   28 seconds ago   Exited    kube-controller-manager   8         66a583888c504
  • ec029b977f032   607331163122e   34 seconds ago   Exited    kube-apiserver            8         ec8b64f4ed5f1
  • d90449cdc5368   0369cf4303ffd   13 minutes ago   Running   etcd                      0         09e773dd2f7dc
  • ee99e49f93a5b   2f32d66b884f8   14 minutes ago   Running   kube-scheduler            0         24d45b930473d
  • ==> describe nodes <==
    E1025 11:56:07.366051 4672 logs.go:181] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.19.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
    stdout:
    stderr:
    The connection to the server localhost:8443 was refused - did you specify the right host or port?
    output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
*

  • ==> dmesg <==
  • [ +5.006848] hpet1: lost 318 rtc interrupts
  • [ +5.005396] hpet1: lost 319 rtc interrupts
  • [ +5.009981] hpet1: lost 318 rtc interrupts
  • [ +5.009094] hpet1: lost 319 rtc interrupts
  • [ +5.005143] hpet1: lost 318 rtc interrupts
  • [ +5.013739] hpet1: lost 319 rtc interrupts
  • [ +5.003834] hpet1: lost 318 rtc interrupts
  • [ +5.008545] hpet1: lost 319 rtc interrupts
  • [ +5.012241] hpet1: lost 319 rtc interrupts
  • [ +5.005051] hpet1: lost 318 rtc interrupts
  • [ +5.007206] hpet1: lost 318 rtc interrupts
  • [Oct25 06:22] hpet1: lost 319 rtc interrupts
  • [ +5.010479] hpet1: lost 319 rtc interrupts
  • [ +5.009737] hpet1: lost 318 rtc interrupts
  • [ +5.014631] hpet1: lost 319 rtc interrupts
  • [ +5.004600] hpet1: lost 318 rtc interrupts
  • [ +5.021970] hpet1: lost 320 rtc interrupts
  • [ +5.015169] hpet1: lost 319 rtc interrupts
  • [ +5.001754] hpet1: lost 318 rtc interrupts
  • [ +5.002750] hpet1: lost 318 rtc interrupts
  • [ +5.006306] hpet1: lost 318 rtc interrupts
  • [ +4.999647] hpet1: lost 318 rtc interrupts
  • [ +5.003289] hpet1: lost 319 rtc interrupts
  • [Oct25 06:23] hpet1: lost 318 rtc interrupts
  • [ +5.000948] hpet1: lost 318 rtc interrupts
  • [ +5.002684] hpet1: lost 318 rtc interrupts
  • [ +5.001893] hpet1: lost 318 rtc interrupts
  • [ +5.003523] hpet1: lost 318 rtc interrupts
  • [ +5.003352] hpet1: lost 319 rtc interrupts
  • [ +5.005414] hpet1: lost 319 rtc interrupts
  • [ +5.004002] hpet1: lost 318 rtc interrupts
  • [ +5.003522] hpet1: lost 318 rtc interrupts
  • [ +5.006099] hpet1: lost 319 rtc interrupts
  • [ +4.998740] hpet1: lost 318 rtc interrupts
  • [ +5.007079] hpet1: lost 318 rtc interrupts
  • [Oct25 06:24] hpet1: lost 318 rtc interrupts
  • [ +5.000726] hpet1: lost 318 rtc interrupts
  • [ +5.001146] hpet1: lost 318 rtc interrupts
  • [ +5.004422] hpet1: lost 319 rtc interrupts
  • [ +5.000742] hpet1: lost 318 rtc interrupts
  • [ +5.009486] hpet1: lost 318 rtc interrupts
  • [ +4.997366] hpet1: lost 319 rtc interrupts
  • [ +5.003636] hpet1: lost 318 rtc interrupts
  • [ +5.002427] hpet1: lost 318 rtc interrupts
  • [ +5.003132] hpet1: lost 319 rtc interrupts
  • [ +4.999895] hpet1: lost 318 rtc interrupts
  • [ +5.001474] hpet1: lost 318 rtc interrupts
  • [Oct25 06:25] hpet1: lost 318 rtc interrupts
  • [ +4.998311] hpet1: lost 318 rtc interrupts
  • [ +5.003937] hpet1: lost 318 rtc interrupts
  • [ +5.002298] hpet1: lost 318 rtc interrupts
  • [ +5.001572] hpet1: lost 318 rtc interrupts
  • [ +5.001586] hpet1: lost 319 rtc interrupts
  • [ +5.001776] hpet1: lost 318 rtc interrupts
  • [ +5.039605] hpet1: lost 320 rtc interrupts
  • [ +4.994218] hpet1: lost 165 rtc interrupts
  • [ +5.016257] hpet1: lost 472 rtc interrupts
  • [ +5.009939] hpet1: lost 318 rtc interrupts
  • [ +5.009795] hpet1: lost 319 rtc interrupts
  • [Oct25 06:26] hpet1: lost 319 rtc interrupts
  • ==> etcd [d90449cdc536] <==
  • 2020-10-25 06:16:38.850409 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:16:48.851664 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:16:58.849105 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:17:09.612398 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:17:19.285852 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:17:22.109334 W | etcdserver: read-only range request "key:"/reg
    istry/services/specs/" range_end:"/registry/services/specs0" " with result "r
    ange_response_count:2 size:1762" took too long (186.130059ms) to execute
  • 2020-10-25 06:17:22.151733 W | etcdserver: read-only range request "key:"/reg
    istry/priorityclasses/system-node-critical" " with result "range_response_count
    :1 size:441" took too long (150.516019ms) to execute
  • 2020-10-25 06:17:28.851800 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:17:38.849380 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:17:48.853454 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:17:58.852160 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:18:08.857215 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:18:18.848655 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:18:28.850208 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:18:38.849393 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:18:48.849654 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:18:58.852085 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:19:08.847330 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:19:18.849802 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:19:28.850522 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:19:38.849482 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:19:48.855718 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:19:58.853422 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:20:08.851258 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:20:18.884683 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:20:28.849606 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:20:38.854284 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:20:48.849588 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:20:58.850419 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:21:08.853284 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:21:18.848270 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:21:28.850423 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:21:38.855835 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:21:48.851098 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:21:58.852336 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:22:08.852726 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:22:18.850396 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:22:28.850769 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:22:38.854377 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:22:48.855099 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:22:58.848218 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:23:08.852174 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:23:18.851025 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:23:28.850720 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:23:38.855102 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:23:48.851750 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:23:58.847616 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:24:08.848815 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:24:18.848489 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:24:28.851778 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:24:38.852047 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:24:48.850050 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:24:58.851330 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:25:08.856704 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:25:18.851464 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:25:28.849598 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:25:39.029884 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:25:44.738274 W | etcdserver: read-only range request "key:"/reg
    istry/ranges/serviceips" " with result "range_response_count:1 size:118" took t
    oo long (162.156632ms) to execute
  • 2020-10-25 06:25:48.850480 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • 2020-10-25 06:25:58.848274 I | etcdserver/api/etcdhttp: /health OK (status cod
    e 200)
  • ==> kernel <==
  • 06:26:07 up 17 min, 0 users, load average: 0.66, 1.05, 1.07
  • Linux minikube 4.19.114 #1 SMP Mon Oct 12 16:32:58 PDT 2020 x86_64 GNU/Linux
  • PRETTY_NAME="Buildroot 2020.02.6"
  • ==> kube-apiserver [ec029b977f03] <==
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:279 +0xbd

  • created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.contextF
    orChannel
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:278 +0x8c
*

  • goroutine 1891 [select]:
  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.func1.1(0xc0
    11003800, 0xdf8475800, 0x0, 0xc011003740)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:588 +0x17b

  • created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.poller.f
    unc1
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:571 +0x8c
*

  • goroutine 2101 [chan receive]:
  • k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run
    .func1()
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/sh
ared_informer.go:772 +0x5d

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(
    0xc00be32760)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:155 +0x5f

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00c
    05df60, 0x503b6e0, 0xc0056cec00, 0x3ee5901, 0xc001c2fc20)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:156 +0xad

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00be
    32760, 0x3b9aca00, 0x0, 0x1, 0xc001c2fc20)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:133 +0x98

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(...)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:90

  • k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*processorListener).run
    (0xc008a1c680)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/sh
ared_informer.go:771 +0x95

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func
    1(0xc0087956b0, 0xc00bcf6b00)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:73 +0x51

  • created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group)
    .Start
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:71 +0x65
*

  • goroutine 1893 [chan receive]:
  • k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*sharedProcessor).run(0
    xc008795650, 0xc011003860)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/sh
ared_informer.go:628 +0x53

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithC
    hannel.func1()
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:56 +0x2e

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func
    1(0xc010f93c90, 0xc00bc7a8c0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:73 +0x51

  • created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group)
    .Start
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:71 +0x65
*

  • goroutine 1894 [chan receive]:
  • k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller).Run.func1(
    0xc00aa2d9e0, 0xc008e046c0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/co
ntroller.go:127 +0x34

  • created by k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*controller)
    .Run
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/co
ntroller.go:126 +0xa5
*

  • goroutine 1895 [select]:
  • k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).watchHandle
    r(0xc001396f70, 0xbfdd647a30fccce1, 0x2ac764ca7, 0x71fb2a0, 0x504d020, 0xc00c87d
    780, 0xc00c1efb88, 0xc00737df20, 0xc00aa2d9e0, 0x0, ...)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/re
flector.go:451 +0x1a5

  • k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).ListAndWatc
    h(0xc001396f70, 0xc00aa2d9e0, 0x0, 0x0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/re
flector.go:415 +0x657

  • k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()

  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/re
flector.go:209 +0x38

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(
    0xc0025856e0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:155 +0x5f

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00c
    1efee0, 0x503b6c0, 0xc0021a3b80, 0x1, 0xc00aa2d9e0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:156 +0xad

  • k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0013
    96f70, 0xc00aa2d9e0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/re
flector.go:208 +0x196

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithC
    hannel.func1()
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:56 +0x2e

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/u
  • ==> kube-controller-manager [c8125a665e2b] <==
  • internal/poll.(*pollDesc).waitRead(...)
  •   /usr/local/go/src/internal/poll/fd_poll_runtime.go:92
    
  • internal/poll.(*FD).Accept(0xc00117e980, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
  •   /usr/local/go/src/internal/poll/fd_unix.go:394 +0x1fc
    
  • net.(*netFD).accept(0xc00117e980, 0x203000, 0x203000, 0x45addb8)
  •   /usr/local/go/src/net/fd_unix.go:172 +0x45
    
  • net.(*TCPListener).accept(0xc000561320, 0xc000312280, 0x50, 0x50)
  •   /usr/local/go/src/net/tcpsock_posix.go:139 +0x32
    
  • net.(*TCPListener).Accept(0xc000561320, 0x30, 0x4067d20, 0x7f03fd5757d0, 0xc00
    006a400)
  •   /usr/local/go/src/net/tcpsock.go:261 +0x65
    
  • k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.tcpKeepAliveListener.Acce
    pt(0x4a5c2c0, 0xc000561320, 0x7f03fd5757d0, 0x0, 0x50, 0x3f484a0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/sec
ure_serving.go:261 +0x35

  • crypto/tls.(*listener).Accept(0xc000322260, 0x4067d20, 0xc0003c0360, 0x3b5f660
    , 0x6a20c50)
  •   /usr/local/go/src/crypto/tls/tls.go:67 +0x37
    
  • net/http.(*Server).Serve(0xc00015efc0, 0x4a45a40, 0xc000322260, 0x0, 0x0)
  •   /usr/local/go/src/net/http/server.go:2937 +0x266
    
  • k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer.func2(0x4a5c2c0
    , 0xc000561320, 0xc00015efc0, 0xc0000920c0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/sec
ure_serving.go:236 +0xe9

  • created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server.RunServer
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/sec
ure_serving.go:227 +0xc8
*

  • goroutine 131 [sync.Cond.Wait]:
  • runtime.goparkunlock(...)
  •   /usr/local/go/src/runtime/proc.go:312
    
  • sync.runtime_notifyListWait(0xc0005cd910, 0xc000000000)
  •   /usr/local/go/src/runtime/sema.go:513 +0xf8
    
  • sync.(*Cond).Wait(0xc0005cd900)
  •   /usr/local/go/src/sync/cond.go:56 +0x9d
    
  • k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue.(*Type).Get(0xc000bae
    2a0, 0x0, 0x0, 0x390db00)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/util/workqueue
/queue.go:145 +0x89

  • k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*Dyn
    amicServingCertificateController).processNextWorkItem(0xc00117f100, 0x203000)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dyn
amiccertificates/tlsconfig.go:263 +0x66

  • k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*Dyn
    amicServingCertificateController).runWorker(0xc00117f100)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dyn
amiccertificates/tlsconfig.go:258 +0x2b

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(
    0xc0002d4260)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:155 +0x5f

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000
    2d4260, 0x49f9cc0, 0xc0003c00c0, 0x45ac601, 0xc0000920c0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:156 +0xad

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002
    d4260, 0x3b9aca00, 0x0, 0x1, 0xc0000920c0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:133 +0x98

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0002d4260,
    0x3b9aca00, 0xc0000920c0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:90 +0x4d

  • created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertifi
    cates.(*DynamicServingCertificateController).Run
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dyn
amiccertificates/tlsconfig.go:247 +0x1b3
*

  • goroutine 132 [select]:
  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000
    2d42b0, 0x49f9cc0, 0xc0003c0090, 0x45ac601, 0xc0000920c0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:167 +0x149

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0002
    d42b0, 0xdf8475800, 0x0, 0x1, 0xc0000920c0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:133 +0x98

  • k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0002d42b0,
    0xdf8475800, 0xc0000920c0)
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wa
it/wait.go:90 +0x4d

  • created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertifi
    cates.(*DynamicServingCertificateController).Run
  •   /workspace/anago-v1.19.2-rc.0.12+19706d90d87784/src/k8s.io/kubernetes/_o
    

utput/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dyn
amiccertificates/tlsconfig.go:250 +0x22b
*

  • goroutine 144 [runnable]:

  • net/http.setRequestCancel.func4(0x0, 0xc0009b2120, 0xc000ed6640, 0xc000854558,
    0xc000f909c0)

  •   /usr/local/go/src/net/http/client.go:398 +0xe5
    
  • created by net/http.setRequestCancel

  •   /usr/local/go/src/net/http/client.go:397 +0x337
    
  • ==> kube-scheduler [ee99e49f93a5] <==

  • E1025 06:22:40.299605 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
    "https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
    0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:22:49.362394 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
    eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
    https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
    Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
    connect: connection refused

  • E1025 06:22:50.085956 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
    ://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
    tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:22:59.211131 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.1
    68.99.103:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.103
    :8443: connect: connection refused

  • E1025 06:22:59.334235 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
    Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
    &resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:01.170560 1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
    namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
    : failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
    es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
    entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
    connection refused

  • E1025 06:23:08.613971 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1bet
    a1.PodDisruptionBudget: Get "https://192.168.99.103:8443/apis/policy/v1beta1/pod
    disruptionbudgets?resourceVersion=55": dial tcp 192.168.99.103:8443: connect: co
    nnection refused

  • E1025 06:23:10.704068 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.Persistent
    Volume: Get "https://192.168.99.103:8443/api/v1/persistentvolumes?resourceVersio
    n=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:13.094051 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
    t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
    al tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:14.560488 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192
    .168.99.103:8443/api/v1/nodes?resourceVersion=312": dial tcp 192.168.99.103:8443
    : connect: connection refused

  • E1025 06:23:16.297333 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
    ://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
    .103:8443: connect: connection refused

  • E1025 06:23:18.655411 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.Repli
    cationController: Get "https://192.168.99.103:8443/api/v1/replicationcontrollers
    ?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:22.144820 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
    stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
    ?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:28.907599 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
    "https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
    0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:38.691911 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
    eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
    https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
    Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
    connect: connection refused

  • E1025 06:23:43.833012 1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
    namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
    : failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
    es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
    entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
    connection refused

  • E1025 06:23:47.638908 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.1
    68.99.103:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.103
    :8443: connect: connection refused

  • E1025 06:23:48.441555 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1bet
    a1.PodDisruptionBudget: Get "https://192.168.99.103:8443/apis/policy/v1beta1/pod
    disruptionbudgets?resourceVersion=55": dial tcp 192.168.99.103:8443: connect: co
    nnection refused

  • E1025 06:23:49.303192 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
    ://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
    tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:49.938954 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.Persistent
    Volume: Get "https://192.168.99.103:8443/api/v1/persistentvolumes?resourceVersio
    n=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:52.586260 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192
    .168.99.103:8443/api/v1/nodes?resourceVersion=312": dial tcp 192.168.99.103:8443
    : connect: connection refused

  • E1025 06:23:52.642910 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
    Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
    &resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:53.956393 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
    t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
    al tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:23:55.662306 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
    ://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
    .103:8443: connect: connection refused

  • E1025 06:24:02.117231 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
    stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
    ?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:24:16.592248 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.Repli
    cationController: Get "https://192.168.99.103:8443/api/v1/replicationcontrollers
    ?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:24:20.118425 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
    eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
    https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
    Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
    connect: connection refused

  • E1025 06:24:25.697316 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
    "https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
    0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:24:26.866876 1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
    namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
    : failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
    es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
    entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
    connection refused

  • E1025 06:24:27.989228 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
    ://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
    tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:24:28.257978 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192
    .168.99.103:8443/api/v1/nodes?resourceVersion=312": dial tcp 192.168.99.103:8443
    : connect: connection refused

  • E1025 06:24:34.135488 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
    stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
    ?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:24:34.934225 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
    ://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
    .103:8443: connect: connection refused

  • E1025 06:24:38.423071 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
    t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
    al tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:24:42.824505 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1bet
    a1.PodDisruptionBudget: Get "https://192.168.99.103:8443/apis/policy/v1beta1/pod
    disruptionbudgets?resourceVersion=55": dial tcp 192.168.99.103:8443: connect: co
    nnection refused

  • E1025 06:24:44.982234 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
    Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
    &resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:24:45.226080 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.Persistent
    Volume: Get "https://192.168.99.103:8443/api/v1/persistentvolumes?resourceVersio
    n=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:24:45.755696 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.1
    68.99.103:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.103
    :8443: connect: connection refused

  • E1025 06:24:51.975378 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.Repli
    cationController: Get "https://192.168.99.103:8443/api/v1/replicationcontrollers
    ?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:24:55.474312 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
    eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
    https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
    Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
    connect: connection refused

  • E1025 06:24:59.943431 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
    "https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
    0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:25:08.776448 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
    ://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
    tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:25:11.829217 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
    stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
    ?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:25:12.758816 1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
    namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
    : failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
    es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
    entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
    connection refused

  • E1025 06:25:15.261733 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
    t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
    al tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:25:17.404529 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
    ://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
    .103:8443: connect: connection refused

  • E1025 06:25:19.274860 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192
    .168.99.103:8443/api/v1/nodes?resourceVersion=312": dial tcp 192.168.99.103:8443
    : connect: connection refused

  • E1025 06:25:23.345698 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.1
    68.99.103:8443/api/v1/pods?limit=500&resourceVersion=0": dial tcp 192.168.99.103
    :8443: connect: connection refused

  • E1025 06:25:25.760842 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.Persistent
    Volume: Get "https://192.168.99.103:8443/api/v1/persistentvolumes?resourceVersio
    n=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:25:26.979709 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
    Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
    &resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:25:30.030411 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get
    "https://192.168.99.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=
    0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:25:44.713728 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1beta1.PodDisruptionBudget: failed to list *v1bet
    a1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:k
    ube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy"
    at the cluster scope

  • E1025 06:25:46.216233 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https
    ://192.168.99.103:8443/apis/storage.k8s.io/v1/csinodes?resourceVersion=57": dial
    tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:25:46.462074 1 reflector.go:127] k8s.io/kubernetes/cmd/kube-sch
    eduler/app/server.go:188: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "
    https://192.168.99.103:8443/api/v1/pods?fieldSelector=status.phase%21%3DFailed%2
    Cstatus.phase%21%3DSucceeded&resourceVersion=127": dial tcp 192.168.99.103:8443:
    connect: connection refused

  • E1025 06:25:47.254601 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.Repli
    cationController: Get "https://192.168.99.103:8443/api/v1/replicationcontrollers
    ?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:26:02.757835 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Ge
    t "https://192.168.99.103:8443/apis/apps/v1/statefulsets?resourceVersion=55": di
    al tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:26:03.561440 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.Persi
    stentVolumeClaim: Get "https://192.168.99.103:8443/api/v1/persistentvolumeclaims
    ?resourceVersion=236": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:26:03.902390 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass:
    Get "https://192.168.99.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500
    &resourceVersion=0": dial tcp 192.168.99.103:8443: connect: connection refused

  • E1025 06:26:04.818910 1 reflector.go:127] k8s.io/apiserver/pkg/server/dy
    namiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap
    : failed to list *v1.ConfigMap: Get "https://192.168.99.103:8443/api/v1/namespac
    es/kube-system/configmaps?fieldSelector=metadata.name%3Dextension-apiserver-auth
    entication&limit=500&resourceVersion=0": dial tcp 192.168.99.103:8443: connect:
    connection refused

  • E1025 06:26:08.338986 1 reflector.go:127] k8s.io/client-go/informers/fac
    tory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: Get "https
    ://192.168.99.103:8443/api/v1/services?resourceVersion=236": dial tcp 192.168.99
    .103:8443: connect: connection refused

  • ==> kubelet <==

  • -- Logs begin at Sun 2020-10-25 06:09:38 UTC, end at Sun 2020-10-25 06:26:08 U
    TC. --

  • Oct 25 06:25:30 minikube kubelet[4555]: E1025 06:25:30.433962 4555 reflecto
    r.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.Ru
    ntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://control-plane.min
    ikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?resourceVersion=309"
    : dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:32 minikube kubelet[4555]: I1025 06:25:32.724568 4555 topology
    _manager.go:219] [topologymanager] RemoveContainer - Container ID: 85d07903c6b7f
    484cba5be26a994c56f38ad07f08ab99758dd12db3dc55530d2

  • Oct 25 06:25:33 minikube kubelet[4555]: E1025 06:25:33.202349 4555 controll
    er.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "htt
    ps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces
    /kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: con
    nect: connection refused

  • Oct 25 06:25:33 minikube kubelet[4555]: E1025 06:25:33.457264 4555 event.go
    :273] Unable to write event: 'Patch "https://control-plane.minikube.internal:844
    3/api/v1/namespaces/kube-system/events/kube-controller-manager-minikube.16412787
    4bb183aa": dial tcp 192.168.99.103:8443: connect: connection refused' (may retry
    after sleeping)

  • Oct 25 06:25:38 minikube kubelet[4555]: I1025 06:25:38.727143 4555 topology
    _manager.go:219] [topologymanager] RemoveContainer - Container ID: 59017094cef03
    9db98b55fe1ae6b25764ec9d54633357cf4d8e47aa771e477cf

  • Oct 25 06:25:44 minikube kubelet[4555]: W1025 06:25:44.178623 4555 status_m
    anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
    (b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
    l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": net/http: TL
    S handshake timeout

  • Oct 25 06:25:46 minikube kubelet[4555]: W1025 06:25:46.037570 4555 reflecto
    r.go:424] k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: watch of *v1.Node ended
    with: very short watch: k8s.io/kubernetes/pkg/kubelet/kubelet.go:438: Unexpected
    watch close - watch lasted less than a second and no items received

  • Oct 25 06:25:46 minikube kubelet[4555]: W1025 06:25:46.726623 4555 status_m
    anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
    (b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
    l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
    .168.99.103:8443: connect: connection refused

  • Oct 25 06:25:46 minikube kubelet[4555]: W1025 06:25:46.729178 4555 status_m
    anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
    be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
    e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
    ube": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:46 minikube kubelet[4555]: W1025 06:25:46.729452 4555 status_m
    anager.go:550] Failed to get status for pod "kube-scheduler-minikube_kube-system
    (ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.interna
    l:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube": dial tcp 192
    .168.99.103:8443: connect: connection refused

  • Oct 25 06:25:47 minikube kubelet[4555]: I1025 06:25:47.391298 4555 topology
    _manager.go:219] [topologymanager] RemoveContainer - Container ID: 85d07903c6b7f
    484cba5be26a994c56f38ad07f08ab99758dd12db3dc55530d2

  • Oct 25 06:25:47 minikube kubelet[4555]: I1025 06:25:47.392919 4555 topology
    _manager.go:219] [topologymanager] RemoveContainer - Container ID: ec029b977f032
    85e6d4b2256243c079e03797a57f639a34c152a38247aa8c6b5

  • Oct 25 06:25:47 minikube kubelet[4555]: E1025 06:25:47.394527 4555 pod_work
    ers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserver-
    minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to "S
    tartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restar
    ting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1e
    f52506bd93c04ce27fa412a22c055)"

  • Oct 25 06:25:47 minikube kubelet[4555]: W1025 06:25:47.399875 4555 status_m
    anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
    (b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
    l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
    .168.99.103:8443: connect: connection refused

  • Oct 25 06:25:50 minikube kubelet[4555]: E1025 06:25:50.719404 4555 reflecto
    r.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service
    : failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/
    api/v1/services?resourceVersion=215": dial tcp 192.168.99.103:8443: connect: con
    nection refused

  • Oct 25 06:25:51 minikube kubelet[4555]: I1025 06:25:51.698651 4555 topology
    _manager.go:219] [topologymanager] RemoveContainer - Container ID: ec029b977f032
    85e6d4b2256243c079e03797a57f639a34c152a38247aa8c6b5

  • Oct 25 06:25:51 minikube kubelet[4555]: W1025 06:25:51.701468 4555 status_m
    anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
    (b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
    l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
    .168.99.103:8443: connect: connection refused

  • Oct 25 06:25:51 minikube kubelet[4555]: E1025 06:25:51.703557 4555 pod_work
    ers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserver-
    minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to "S
    tartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restar
    ting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1e
    f52506bd93c04ce27fa412a22c055)"

  • Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.521398 4555 reflecto
    r.go:127] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:46: Failed to watch
    *v1.Pod: failed to list *v1.Pod: Get "https://control-plane.minikube.internal:84
    43/api/v1/pods?fieldSelector=spec.nodeName%3Dminikube&resourceVersion=287": dial
    tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.939160 4555 controll
    er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
    be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
    inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.945234 4555 controll
    er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
    be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
    inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.947527 4555 controll
    er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
    be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
    inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.950514 4555 controll
    er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
    be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
    inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.951087 4555 controll
    er.go:178] failed to update node lease, error: Put "https://control-plane.miniku
    be.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/m
    inikube?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:54 minikube kubelet[4555]: I1025 06:25:54.951240 4555 controll
    er.go:106] failed to update lease using latest lease, fallback to ensure lease,
    err: failed 5 attempts to update node lease

  • Oct 25 06:25:54 minikube kubelet[4555]: E1025 06:25:54.952183 4555 controll
    er.go:136] failed to ensure node lease exists, will retry in 200ms, error: Get "
    https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespa
    ces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443:
    connect: connection refused

  • Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.164520 4555 controll
    er.go:136] failed to ensure node lease exists, will retry in 400ms, error: Get "
    https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespa
    ces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443:
    connect: connection refused

  • Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.494321 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?resourceVersion=0&timeout=10s": dial tcp 192.168.99.103:8443: connect: connec
    tion refused

  • Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.496514 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.497428 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.498167 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.498668 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.498993 4555 kubelet_
    node_status.go:429] Unable to update node status: update node status exceeds ret
    ry count

  • Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.567008 4555 controll
    er.go:136] failed to ensure node lease exists, will retry in 800ms, error: Get "
    https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespa
    ces/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443:
    connect: connection refused

  • Oct 25 06:25:55 minikube kubelet[4555]: I1025 06:25:55.609417 4555 topology
    _manager.go:219] [topologymanager] RemoveContainer - Container ID: 59017094cef03
    9db98b55fe1ae6b25764ec9d54633357cf4d8e47aa771e477cf

  • Oct 25 06:25:55 minikube kubelet[4555]: W1025 06:25:55.614550 4555 status_m
    anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
    be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
    e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
    ube": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:55 minikube kubelet[4555]: I1025 06:25:55.621599 4555 topology
    _manager.go:219] [topologymanager] RemoveContainer - Container ID: c8125a665e2bd
    3705eb5a7736ff5804d41ee8b683d8d14c45d3941dcb9c2f5ba

  • Oct 25 06:25:55 minikube kubelet[4555]: E1025 06:25:55.634199 4555 pod_work
    ers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controller
    -manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: fai
    led to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "ba
    ck-off 5m0s restarting failed container=kube-controller-manager pod=kube-control
    ler-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"

  • Oct 25 06:25:56 minikube kubelet[4555]: E1025 06:25:56.370632 4555 controll
    er.go:136] failed to ensure node lease exists, will retry in 1.6s, error: Get "h
    ttps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespac
    es/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: c
    onnect: connection refused

  • Oct 25 06:25:56 minikube kubelet[4555]: W1025 06:25:56.721888 4555 status_m
    anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
    (b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
    l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
    .168.99.103:8443: connect: connection refused

  • Oct 25 06:25:56 minikube kubelet[4555]: W1025 06:25:56.723041 4555 status_m
    anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
    be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
    e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
    ube": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:25:56 minikube kubelet[4555]: W1025 06:25:56.723386 4555 status_m
    anager.go:550] Failed to get status for pod "kube-scheduler-minikube_kube-system
    (ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.interna
    l:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube": dial tcp 192
    .168.99.103:8443: connect: connection refused

  • Oct 25 06:25:57 minikube kubelet[4555]: E1025 06:25:57.977207 4555 controll
    er.go:136] failed to ensure node lease exists, will retry in 3.2s, error: Get "h
    ttps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespac
    es/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: c
    onnect: connection refused

  • Oct 25 06:26:01 minikube kubelet[4555]: E1025 06:26:01.185035 4555 controll
    er.go:136] failed to ensure node lease exists, will retry in 6.4s, error: Get "h
    ttps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespac
    es/kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: c
    onnect: connection refused

  • Oct 25 06:26:03 minikube kubelet[4555]: I1025 06:26:03.631421 4555 topology
    _manager.go:219] [topologymanager] RemoveContainer - Container ID: c8125a665e2bd
    3705eb5a7736ff5804d41ee8b683d8d14c45d3941dcb9c2f5ba

  • Oct 25 06:26:03 minikube kubelet[4555]: W1025 06:26:03.631405 4555 status_m
    anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
    be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
    e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
    ube": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:26:03 minikube kubelet[4555]: E1025 06:26:03.636478 4555 pod_work
    ers.go:191] Error syncing pod d421d4b6a0d0e042995d6d88d0637437 ("kube-controller
    -manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"), skipping: fai
    led to "StartContainer" for "kube-controller-manager" with CrashLoopBackOff: "ba
    ck-off 5m0s restarting failed container=kube-controller-manager pod=kube-control
    ler-manager-minikube_kube-system(d421d4b6a0d0e042995d6d88d0637437)"

  • Oct 25 06:26:03 minikube kubelet[4555]: I1025 06:26:03.722869 4555 topology
    _manager.go:219] [topologymanager] RemoveContainer - Container ID: ec029b977f032
    85e6d4b2256243c079e03797a57f639a34c152a38247aa8c6b5

  • Oct 25 06:26:03 minikube kubelet[4555]: E1025 06:26:03.725899 4555 pod_work
    ers.go:191] Error syncing pod b1ef52506bd93c04ce27fa412a22c055 ("kube-apiserver-
    minikube_kube-system(b1ef52506bd93c04ce27fa412a22c055)"), skipping: failed to "S
    tartContainer" for "kube-apiserver" with CrashLoopBackOff: "back-off 5m0s restar
    ting failed container=kube-apiserver pod=kube-apiserver-minikube_kube-system(b1e
    f52506bd93c04ce27fa412a22c055)"

  • Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.502419 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?resourceVersion=0&timeout=10s": dial tcp 192.168.99.103:8443: connect: connec
    tion refused

  • Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.519703 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.530415 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.531654 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.535727 4555 kubelet_
    node_status.go:442] Error updating node status, will retry: error getting node "
    minikube": Get "https://control-plane.minikube.internal:8443/api/v1/nodes/miniku
    be?timeout=10s": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.546034 4555 kubelet_
    node_status.go:429] Unable to update node status: update node status exceeds ret
    ry count

  • Oct 25 06:26:05 minikube kubelet[4555]: E1025 06:26:05.735740 4555 reflecto
    r.go:127] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.Ru
    ntimeClass: failed to list *v1beta1.RuntimeClass: Get "https://control-plane.min
    ikube.internal:8443/apis/node.k8s.io/v1beta1/runtimeclasses?resourceVersion=309"
    : dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:26:06 minikube kubelet[4555]: W1025 06:26:06.722659 4555 status_m
    anager.go:550] Failed to get status for pod "kube-scheduler-minikube_kube-system
    (ff7d12f9e4f14e202a85a7c5534a3129)": Get "https://control-plane.minikube.interna
    l:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-minikube": dial tcp 192
    .168.99.103:8443: connect: connection refused

  • Oct 25 06:26:06 minikube kubelet[4555]: W1025 06:26:06.730870 4555 status_m
    anager.go:550] Failed to get status for pod "kube-apiserver-minikube_kube-system
    (b1ef52506bd93c04ce27fa412a22c055)": Get "https://control-plane.minikube.interna
    l:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-minikube": dial tcp 192
    .168.99.103:8443: connect: connection refused

  • Oct 25 06:26:06 minikube kubelet[4555]: W1025 06:26:06.753352 4555 status_m
    anager.go:550] Failed to get status for pod "kube-controller-manager-minikube_ku
    be-system(d421d4b6a0d0e042995d6d88d0637437)": Get "https://control-plane.minikub
    e.internal:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-minik
    ube": dial tcp 192.168.99.103:8443: connect: connection refused

  • Oct 25 06:26:07 minikube kubelet[4555]: E1025 06:26:07.593068 4555 controll
    er.go:136] failed to ensure node lease exists, will retry in 7s, error: Get "htt
    ps://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces
    /kube-node-lease/leases/minikube?timeout=10s": dial tcp 192.168.99.103:8443: con
    nect: connection refused

! unable to fetch logs for: describe nodes
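The kubelet entries above show kube-apiserver and kube-controller-manager stuck in CrashLoopBackOff, so the containers themselves are dying rather than merely being unreachable. A minimal way to check whether the VM is simply running out of memory (my own sketch, not a command taken from this thread) is to look at free memory and OOM-killer activity inside the minikube VM:

minikube ssh 'free -m'
minikube ssh 'sudo dmesg | grep -i -e oom -e "killed process"'

If those show the kernel killing containers, giving the VM more memory/CPU is usually more effective than restarting individual components.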

@chrishna1

@prabsdubey It's better to reference a pastebin link (or similar) instead of pasting such large logs directly in a comment.
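For reference, one way to do that (a sketch; the --file flag assumes a reasonably recent minikube release) is to capture the output to a file and share that instead:

minikube logs > minikube-logs.txt 2>&1
minikube logs --file=minikube-logs.txt

Either form produces a single file that is much easier to attach or upload than an inline paste.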

@RvKmR-WaGh

I faced the same issue. I just restarted minikube and the issue was solved.
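For anyone else landing here, that workaround is roughly the following (a sketch; the resource values are only examples, not taken from this issue):

minikube stop
minikube start

# if the apiserver keeps crash-looping after a restart, recreating the cluster with more resources sometimes helps
minikube delete
minikube start --cpus=4 --memory=8192

Note that minikube delete wipes the cluster state, so only reach for it when a plain restart does not help.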
