hyperkit: tcp: lookup k8s.gcr.io read: connection refused #4594

Closed
chaimleib opened this issue Jun 25, 2019 · 13 comments
Labels
area/dns (DNS issues)
cause/port-conflict (Start failures due to port or other network conflict)
co/hyperkit (Hyperkit related issues)
triage/duplicate (Indicates an issue is a duplicate of other open issue)

Comments

@chaimleib

chaimleib commented Jun 25, 2019

I'm having trouble fetching Docker images when using the hyperkit driver. This affects the initial minikube start command logged below, where it says "Unable to pull images..."

The exact command to reproduce the issue:

minikube start --vm-driver hyperkit

The full output of the command that failed:

😄  minikube v1.0.1 on darwin (amd64)
🤹  Downloading Kubernetes v1.14.1 images in the background ...
💡  Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
🔄  Restarting existing hyperkit VM for "minikube" ...
⌛  Waiting for SSH access ...
📶  "minikube" IP address is 192.168.64.2
🐳  Configuring Docker as the container runtime ...
🐳  Version of container runtime is 18.06.3-ce
⌛  Waiting for image downloads to complete ...
✨  Preparing Kubernetes environment ...
🚜  Pulling images required by Kubernetes v1.14.1 ...
❌  Unable to pull images, which may be OK: running cmd: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml: command failed: sudo kubeadm config images pull --config /var/lib/kubeadm.yaml
stdout:
stderr: failed to pull image "k8s.gcr.io/kube-apiserver:v1.14.1": output: Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp 192.168.64.2:33526->192.168.64.1:53: read: connection refused
, error: exit status 1
: Process exited with status 1
🔄  Relaunching Kubernetes v1.14.1 using kubeadm ...
⌛  Waiting for pods: apiserver proxy etcd scheduler controller dns
📯  Updating kube-proxy configuration ...
🤔  Verifying component health ......
💗  kubectl is now configured to use "minikube"
🏄  Done! Thank you for using minikube!

The output of the minikube logs command:

==> coredns <==
.:53
2019-06-25T01:44:07.950Z [INFO] CoreDNS-1.3.1
2019-06-25T01:44:07.950Z [INFO] linux/amd64, go1.11.4, 6b56a9c
CoreDNS-1.3.1
linux/amd64, go1.11.4, 6b56a9c
2019-06-25T01:44:07.950Z [INFO] plugin/reload: Running configuration MD5 = 599b9eb76b8c147408aed6a0bbe0f669
2019-06-25T01:44:09.959Z [ERROR] plugin/errors: 2 5004792868601405094.2867008054508251368. HINFO: read udp 172.17.0.4:38299->192.168.64.1:53: read: connection refused
2019-06-25T01:44:10.951Z [ERROR] plugin/errors: 2 5004792868601405094.2867008054508251368. HINFO: read udp 172.17.0.4:44579->192.168.64.1:53: read: connection refused

==> dmesg <==
[Jun25 01:35] ERROR: earlyprintk= earlyser already used
[  +0.000000] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xC0, should be 0x1D (20170831/tbprint-211)
[Jun25 01:36] ACPI Error: Could not enable RealTimeClock event (20170831/evxfevnt-218)
[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20170831/evxface-654)
[  +0.008393] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
[  +0.330199] systemd-fstab-generator[1041]: Ignoring "noauto" for root device
[  +0.004678] systemd[1]: File /usr/lib/systemd/system/systemd-journald.service:35 configures an IP firewall (IPAddressDeny=any), but the local system does not support BPF/cgroup based firewalling.
[  +0.000001] systemd[1]: Proceeding WITHOUT firewalling in effect! (This warning is only shown for the first loaded unit using IP firewalling.)
[  +0.549886] NFSD: the nfsdcld client tracking upcall will be removed in 3.10. Please transition to using nfsdcltrack.
[  +0.091971] vboxguest: loading out-of-tree module taints kernel.
[  +0.003203] vboxguest: PCI device not found, probably running on physical hardware.
[ +15.599608] systemd-fstab-generator[1801]: Ignoring "noauto" for root device
[Jun25 01:38] NFSD: Unable to end grace period: -110
[Jun25 01:42] systemd-fstab-generator[2318]: Ignoring "noauto" for root device
[Jun25 01:43] kauditd_printk_skb: 110 callbacks suppressed
[  +9.452931] kauditd_printk_skb: 20 callbacks suppressed
[ +22.995935] kauditd_printk_skb: 38 callbacks suppressed
[  +8.901640] kauditd_printk_skb: 2 callbacks suppressed
[Jun25 01:44] kauditd_printk_skb: 2 callbacks suppressed
[Jun25 01:45] systemd-fstab-generator[5218]: Ignoring "noauto" for root device
[Jun25 01:52] systemd-fstab-generator[8359]: Ignoring "noauto" for root device

==> kernel <==
 01:53:04 up 17 min,  0 users,  load average: 0.21, 0.19, 0.13
Linux minikube 4.15.0 #1 SMP Thu Apr 25 20:51:48 UTC 2019 x86_64 GNU/Linux

==> kube-addon-manager <==

==> kube-apiserver <==
I0625 01:43:14.258818       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0625 01:43:14.269365       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
W0625 01:43:14.500310       1 genericapiserver.go:344] Skipping API batch/v2alpha1 because it has no resources.
W0625 01:43:14.504950       1 genericapiserver.go:344] Skipping API node.k8s.io/v1alpha1 because it has no resources.
W0625 01:43:14.507708       1 genericapiserver.go:344] Skipping API rbac.authorization.k8s.io/v1alpha1 because it has no resources.
W0625 01:43:14.508356       1 genericapiserver.go:344] Skipping API scheduling.k8s.io/v1alpha1 because it has no resources.
W0625 01:43:14.509778       1 genericapiserver.go:344] Skipping API storage.k8s.io/v1alpha1 because it has no resources.
E0625 01:43:15.285201       1 prometheus.go:138] failed to register depth metric admission_quota_controller: duplicate metrics collector registration attempted
E0625 01:43:15.285406       1 prometheus.go:150] failed to register adds metric admission_quota_controller: duplicate metrics collector registration attempted
E0625 01:43:15.285690       1 prometheus.go:162] failed to register latency metric admission_quota_controller: duplicate metrics collector registration attempted
E0625 01:43:15.285972       1 prometheus.go:174] failed to register work_duration metric admission_quota_controller: duplicate metrics collector registration attempted
E0625 01:43:15.286144       1 prometheus.go:189] failed to register unfinished_work_seconds metric admission_quota_controller: duplicate metrics collector registration attempted
E0625 01:43:15.286349       1 prometheus.go:202] failed to register longest_running_processor_microseconds metric admission_quota_controller: duplicate metrics collector registration attempted
I0625 01:43:15.286621       1 plugins.go:158] Loaded 9 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,MutatingAdmissionWebhook.
I0625 01:43:15.286726       1 plugins.go:161] Loaded 6 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,Priority,PersistentVolumeClaimResize,ValidatingAdmissionWebhook,ResourceQuota.
I0625 01:43:15.288534       1 client.go:352] parsed scheme: ""
I0625 01:43:15.288692       1 client.go:352] scheme "" not registered, fallback to default scheme
I0625 01:43:15.288842       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0625 01:43:15.289108       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0625 01:43:15.296899       1 client.go:352] parsed scheme: ""
I0625 01:43:15.297350       1 client.go:352] scheme "" not registered, fallback to default scheme
I0625 01:43:15.297446       1 asm_amd64.s:1337] ccResolverWrapper: sending new addresses to cc: [{127.0.0.1:2379 0  <nil>}]
I0625 01:43:15.297232       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0625 01:43:15.297764       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0625 01:43:15.308489       1 asm_amd64.s:1337] balancerWrapper: got update addr from Notify: [{127.0.0.1:2379 <nil>}]
I0625 01:43:16.877486       1 secure_serving.go:116] Serving securely on [::]:8443
I0625 01:43:16.879150       1 autoregister_controller.go:139] Starting autoregister controller
I0625 01:43:16.879200       1 cache.go:32] Waiting for caches to sync for autoregister controller
I0625 01:43:16.881375       1 crd_finalizer.go:242] Starting CRDFinalizer
I0625 01:43:16.881429       1 apiservice_controller.go:94] Starting APIServiceRegistrationController
I0625 01:43:16.881440       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
I0625 01:43:16.881463       1 available_controller.go:320] Starting AvailableConditionController
I0625 01:43:16.881494       1 cache.go:32] Waiting for caches to sync for AvailableConditionController controller
I0625 01:43:16.881513       1 crdregistration_controller.go:112] Starting crd-autoregister controller
I0625 01:43:16.881547       1 controller_utils.go:1027] Waiting for caches to sync for crd-autoregister controller
I0625 01:43:16.886491       1 controller.go:81] Starting OpenAPI AggregationController
I0625 01:43:16.887733       1 customresource_discovery_controller.go:208] Starting DiscoveryController
I0625 01:43:16.887790       1 naming_controller.go:284] Starting NamingConditionController
I0625 01:43:16.887802       1 establishing_controller.go:73] Starting EstablishingController
E0625 01:43:16.901055       1 controller.go:148] Unable to remove old endpoints from kubernetes service: StorageError: key not found, Code: 1, Key: /registry/masterleases/192.168.64.2, ResourceVersion: 0, AdditionalErrorMsg: 
I0625 01:43:17.080468       1 cache.go:39] Caches are synced for autoregister controller
I0625 01:43:17.081808       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
I0625 01:43:17.083839       1 controller_utils.go:1034] Caches are synced for crd-autoregister controller
I0625 01:43:17.083910       1 cache.go:39] Caches are synced for AvailableConditionController controller
I0625 01:43:17.124329       1 controller.go:606] quota admission added evaluator for: leases.coordination.k8s.io
I0625 01:43:17.876165       1 controller.go:107] OpenAPI AggregationController: Processing item 
I0625 01:43:17.876237       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
I0625 01:43:17.876603       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
I0625 01:43:17.900624       1 storage_scheduling.go:122] all system priority classes are created successfully or already exist.
I0625 01:43:35.788580       1 controller.go:606] quota admission added evaluator for: endpoints

==> kube-proxy <==
W0625 01:43:21.032716       1 server_others.go:267] Flag proxy-mode="" unknown, assuming iptables proxy
I0625 01:43:21.056805       1 server_others.go:147] Using iptables Proxier.
W0625 01:43:21.059889       1 proxier.go:319] clusterCIDR not specified, unable to distinguish between internal and external traffic
I0625 01:43:21.060903       1 server.go:555] Version: v1.14.1
I0625 01:43:21.079304       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I0625 01:43:21.079633       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0625 01:43:21.082135       1 conntrack.go:83] Setting conntrack hashsize to 32768
I0625 01:43:21.088026       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I0625 01:43:21.088186       1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I0625 01:43:21.088789       1 config.go:102] Starting endpoints config controller
I0625 01:43:21.088819       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0625 01:43:21.088848       1 config.go:202] Starting service config controller
I0625 01:43:21.088857       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0625 01:43:21.190729       1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0625 01:43:21.191299       1 controller_utils.go:1034] Caches are synced for service config controller

==> kube-scheduler <==
I0625 01:43:12.590137       1 serving.go:319] Generated self-signed cert in-memory
W0625 01:43:13.163543       1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0625 01:43:13.163591       1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0625 01:43:13.163619       1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
I0625 01:43:13.170682       1 server.go:142] Version: v1.14.1
I0625 01:43:13.171542       1 defaults.go:87] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
W0625 01:43:13.175358       1 authorization.go:47] Authorization is disabled
W0625 01:43:13.175404       1 authentication.go:55] Authentication is disabled
I0625 01:43:13.175428       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
I0625 01:43:13.180726       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
E0625 01:43:17.026406       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0625 01:43:17.026578       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0625 01:43:17.026816       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0625 01:43:17.026964       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0625 01:43:17.027008       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0625 01:43:17.027193       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0625 01:43:17.027292       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0625 01:43:17.027380       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0625 01:43:17.027430       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0625 01:43:17.027529       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
I0625 01:43:18.889219       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
I0625 01:43:18.989444       1 controller_utils.go:1034] Caches are synced for scheduler controller
I0625 01:43:18.989637       1 leaderelection.go:217] attempting to acquire leader lease  kube-system/kube-scheduler...
I0625 01:43:35.792092       1 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler

==> kubelet <==
-- Logs begin at Tue 2019-06-25 01:36:22 UTC, end at Tue 2019-06-25 01:53:04 UTC. --
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278239    2400 configmap.go:203] Couldn't get configMap kube-system/kube-proxy: couldn't propagate object cache: timed out waiting for the condition
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278273    2400 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/configmap/e86e30a9-93c5-11e9-91cb-ae1ef4d6adfd-kube-proxy\" (\"e86e30a9-93c5-11e9-91cb-ae1ef4d6adfd\")" failed. No retries permitted until 2019-06-25 01:43:18.778261143 +0000 UTC m=+9.462464932 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e86e30a9-93c5-11e9-91cb-ae1ef4d6adfd-kube-proxy\") pod \"kube-proxy-zrb65\" (UID: \"e86e30a9-93c5-11e9-91cb-ae1ef4d6adfd\") : couldn't propagate object cache: timed out waiting for the condition"
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278287    2400 secret.go:198] Couldn't get secret kube-system/storage-provisioner-token-88nkk: couldn't propagate object cache: timed out waiting for the condition
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278312    2400 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/86f5f440-7865-11e9-a975-ae1ef4d6adfd-storage-provisioner-token-88nkk\" (\"86f5f440-7865-11e9-a975-ae1ef4d6adfd\")" failed. No retries permitted until 2019-06-25 01:43:18.77830064 +0000 UTC m=+9.462504428 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"storage-provisioner-token-88nkk\" (UniqueName: \"kubernetes.io/secret/86f5f440-7865-11e9-a975-ae1ef4d6adfd-storage-provisioner-token-88nkk\") pod \"storage-provisioner\" (UID: \"86f5f440-7865-11e9-a975-ae1ef4d6adfd\") : couldn't propagate object cache: timed out waiting for the condition"
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278325    2400 secret.go:198] Couldn't get secret kube-system/kube-proxy-token-4sgj2: couldn't propagate object cache: timed out waiting for the condition
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278348    2400 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/e86e30a9-93c5-11e9-91cb-ae1ef4d6adfd-kube-proxy-token-4sgj2\" (\"e86e30a9-93c5-11e9-91cb-ae1ef4d6adfd\")" failed. No retries permitted until 2019-06-25 01:43:18.778337262 +0000 UTC m=+9.462541051 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"kube-proxy-token-4sgj2\" (UniqueName: \"kubernetes.io/secret/e86e30a9-93c5-11e9-91cb-ae1ef4d6adfd-kube-proxy-token-4sgj2\") pod \"kube-proxy-zrb65\" (UID: \"e86e30a9-93c5-11e9-91cb-ae1ef4d6adfd\") : couldn't propagate object cache: timed out waiting for the condition"
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278517    2400 configmap.go:203] Couldn't get configMap kube-system/coredns: couldn't propagate object cache: timed out waiting for the condition
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278564    2400 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/configmap/8643eeb0-7865-11e9-a975-ae1ef4d6adfd-config-volume\" (\"8643eeb0-7865-11e9-a975-ae1ef4d6adfd\")" failed. No retries permitted until 2019-06-25 01:43:18.778547346 +0000 UTC m=+9.462751134 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8643eeb0-7865-11e9-a975-ae1ef4d6adfd-config-volume\") pod \"coredns-fb8b8dccf-bwrkk\" (UID: \"8643eeb0-7865-11e9-a975-ae1ef4d6adfd\") : couldn't propagate object cache: timed out waiting for the condition"
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278581    2400 secret.go:198] Couldn't get secret kube-system/coredns-token-grwvx: couldn't propagate object cache: timed out waiting for the condition
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278610    2400 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/secret/86422cc3-7865-11e9-a975-ae1ef4d6adfd-coredns-token-grwvx\" (\"86422cc3-7865-11e9-a975-ae1ef4d6adfd\")" failed. No retries permitted until 2019-06-25 01:43:18.778597042 +0000 UTC m=+9.462800834 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"coredns-token-grwvx\" (UniqueName: \"kubernetes.io/secret/86422cc3-7865-11e9-a975-ae1ef4d6adfd-coredns-token-grwvx\") pod \"coredns-fb8b8dccf-x66ww\" (UID: \"86422cc3-7865-11e9-a975-ae1ef4d6adfd\") : couldn't propagate object cache: timed out waiting for the condition"
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278651    2400 configmap.go:203] Couldn't get configMap kube-system/coredns: couldn't propagate object cache: timed out waiting for the condition
Jun 25 01:43:18 minikube kubelet[2400]: E0625 01:43:18.278679    2400 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/configmap/86422cc3-7865-11e9-a975-ae1ef4d6adfd-config-volume\" (\"86422cc3-7865-11e9-a975-ae1ef4d6adfd\")" failed. No retries permitted until 2019-06-25 01:43:18.778666607 +0000 UTC m=+9.462870395 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86422cc3-7865-11e9-a975-ae1ef4d6adfd-config-volume\") pod \"coredns-fb8b8dccf-x66ww\" (UID: \"86422cc3-7865-11e9-a975-ae1ef4d6adfd\") : couldn't propagate object cache: timed out waiting for the condition"
Jun 25 01:43:18 minikube kubelet[2400]: W0625 01:43:18.807963    2400 container.go:422] Failed to get RecentStats("/system.slice/run-r05e775c95bc64fd3b5d26fcd34a2e6e9.scope") while determining the next housekeeping: unable to find data in memory cache
Jun 25 01:43:19 minikube kubelet[2400]: W0625 01:43:19.402754    2400 container.go:409] Failed to create summary reader for "/system.slice/run-ra5bf797d38bd43ff81936a7eb6666ef0.scope": none of the resources are being tracked.
Jun 25 01:43:19 minikube kubelet[2400]: E0625 01:43:19.963865    2400 cadvisor_stats_provider.go:403] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/besteffort/podd1da9633-93c5-11e9-91cb-ae1ef4d6adfd/848f405b2c40311100a78947b0934e7dbb2b7d0e63b616a490e74479eeda467b": RecentStats: unable to find data in memory cache]
Jun 25 01:43:19 minikube kubelet[2400]: W0625 01:43:19.968301    2400 pod_container_deletor.go:75] Container "99818bcd246d975f9fae9d642857e6f598d2d4fbf1a05a9272f368727d2b2c14" not found in pod's containers
Jun 25 01:43:20 minikube kubelet[2400]: W0625 01:43:20.073295    2400 pod_container_deletor.go:75] Container "d1767fd58929d2787c9a166e5036a311968d2ab696e86d19d10b0fbf8230885a" not found in pod's containers
Jun 25 01:43:20 minikube kubelet[2400]: W0625 01:43:20.284362    2400 pod_container_deletor.go:75] Container "03959375fdc94cae3764c1675bd53e9598a57e65f6b252734a9dc0b2d82e9d14" not found in pod's containers
Jun 25 01:43:50 minikube kubelet[2400]: E0625 01:43:50.488827    2400 cadvisor_stats_provider.go:403] Partial failure issuing cadvisor.ContainerInfoV2: partial failures: ["/kubepods/burstable/pod8643eeb0-7865-11e9-a975-ae1ef4d6adfd/a488b8fc4c5626821d112ba6f91158272fb0e49266e078ff35fbf649d2de7a90": RecentStats: unable to find data in memory cache]
Jun 25 01:43:50 minikube kubelet[2400]: E0625 01:43:50.742678    2400 pod_workers.go:190] Error syncing pod d1da9633-93c5-11e9-91cb-ae1ef4d6adfd ("kubernetes-dashboard-79dd6bfc48-6v4w5_kube-system(d1da9633-93c5-11e9-91cb-ae1ef4d6adfd)"), skipping: failed to "StartContainer" for "kubernetes-dashboard" with CrashLoopBackOff: "Back-off 10s restarting failed container=kubernetes-dashboard pod=kubernetes-dashboard-79dd6bfc48-6v4w5_kube-system(d1da9633-93c5-11e9-91cb-ae1ef4d6adfd)"
Jun 25 01:43:50 minikube kubelet[2400]: E0625 01:43:50.782622    2400 pod_workers.go:190] Error syncing pod 86f5f440-7865-11e9-a975-ae1ef4d6adfd ("storage-provisioner_kube-system(86f5f440-7865-11e9-a975-ae1ef4d6adfd)"), skipping: failed to "StartContainer" for "storage-provisioner" with CrashLoopBackOff: "Back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(86f5f440-7865-11e9-a975-ae1ef4d6adfd)"
Jun 25 01:43:50 minikube kubelet[2400]: E0625 01:43:50.808310    2400 pod_workers.go:190] Error syncing pod 86422cc3-7865-11e9-a975-ae1ef4d6adfd ("coredns-fb8b8dccf-x66ww_kube-system(86422cc3-7865-11e9-a975-ae1ef4d6adfd)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-x66ww_kube-system(86422cc3-7865-11e9-a975-ae1ef4d6adfd)"
Jun 25 01:43:50 minikube kubelet[2400]: E0625 01:43:50.832304    2400 pod_workers.go:190] Error syncing pod 8643eeb0-7865-11e9-a975-ae1ef4d6adfd ("coredns-fb8b8dccf-bwrkk_kube-system(8643eeb0-7865-11e9-a975-ae1ef4d6adfd)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-bwrkk_kube-system(8643eeb0-7865-11e9-a975-ae1ef4d6adfd)"
Jun 25 01:43:51 minikube kubelet[2400]: E0625 01:43:51.882769    2400 pod_workers.go:190] Error syncing pod 86422cc3-7865-11e9-a975-ae1ef4d6adfd ("coredns-fb8b8dccf-x66ww_kube-system(86422cc3-7865-11e9-a975-ae1ef4d6adfd)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-x66ww_kube-system(86422cc3-7865-11e9-a975-ae1ef4d6adfd)"
Jun 25 01:43:52 minikube kubelet[2400]: E0625 01:43:52.865857    2400 pod_workers.go:190] Error syncing pod 8643eeb0-7865-11e9-a975-ae1ef4d6adfd ("coredns-fb8b8dccf-bwrkk_kube-system(8643eeb0-7865-11e9-a975-ae1ef4d6adfd)"), skipping: failed to "StartContainer" for "coredns" with CrashLoopBackOff: "Back-off 10s restarting failed container=coredns pod=coredns-fb8b8dccf-bwrkk_kube-system(8643eeb0-7865-11e9-a975-ae1ef4d6adfd)"
Jun 25 01:52:25 minikube kubelet[2400]: W0625 01:52:25.518623    2400 kubelet.go:1624] Deleting mirror pod "kube-controller-manager-minikube_kube-system(a64b51c8-7865-11e9-a975-ae1ef4d6adfd)" because it is outdated
Jun 25 01:52:25 minikube kubelet[2400]: E0625 01:52:25.574614    2400 file.go:108] Unable to process watch event: can't process config file "/etc/kubernetes/manifests/etcd.yaml": /etc/kubernetes/manifests/etcd.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file.
Jun 25 01:52:25 minikube kubelet[2400]: I0625 01:52:25.659675    2400 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "flexvolume-dir" (UniqueName: "kubernetes.io/host-path/1818598dcad3b01111d87ff09f02142a-flexvolume-dir") pod "kube-controller-manager-minikube" (UID: "1818598dcad3b01111d87ff09f02142a")
Jun 25 01:52:25 minikube kubelet[2400]: I0625 01:52:25.659767    2400 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/1818598dcad3b01111d87ff09f02142a-kubeconfig") pod "kube-controller-manager-minikube" (UID: "1818598dcad3b01111d87ff09f02142a")
Jun 25 01:52:25 minikube kubelet[2400]: I0625 01:52:25.659804    2400 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/1818598dcad3b01111d87ff09f02142a-ca-certs") pod "kube-controller-manager-minikube" (UID: "1818598dcad3b01111d87ff09f02142a")
Jun 25 01:52:25 minikube kubelet[2400]: I0625 01:52:25.659828    2400 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/1818598dcad3b01111d87ff09f02142a-k8s-certs") pod "kube-controller-manager-minikube" (UID: "1818598dcad3b01111d87ff09f02142a")
Jun 25 01:52:25 minikube kubelet[2400]: I0625 01:52:25.659936    2400 reconciler.go:207] operationExecutor.VerifyControllerAttachedVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/1818598dcad3b01111d87ff09f02142a-usr-share-ca-certificates") pod "kube-controller-manager-minikube" (UID: "1818598dcad3b01111d87ff09f02142a")
Jun 25 01:52:26 minikube kubelet[2400]: W0625 01:52:26.374924    2400 pod_container_deletor.go:75] Container "b9e93f44f7ce819b7dd0bfd976503c244e666b498a6e772f39f50557f605ee06" not found in pod's containers
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.672386    2400 reconciler.go:181] operationExecutor.UnmountVolume started for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-usr-share-ca-certificates") pod "14f21e8d22bf58a1aafff0a82de4f720" (UID: "14f21e8d22bf58a1aafff0a82de4f720")
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.672585    2400 reconciler.go:181] operationExecutor.UnmountVolume started for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-kubeconfig") pod "14f21e8d22bf58a1aafff0a82de4f720" (UID: "14f21e8d22bf58a1aafff0a82de4f720")
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.672625    2400 reconciler.go:181] operationExecutor.UnmountVolume started for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-ca-certs") pod "14f21e8d22bf58a1aafff0a82de4f720" (UID: "14f21e8d22bf58a1aafff0a82de4f720")
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.672747    2400 reconciler.go:181] operationExecutor.UnmountVolume started for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-k8s-certs") pod "14f21e8d22bf58a1aafff0a82de4f720" (UID: "14f21e8d22bf58a1aafff0a82de4f720")
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.672911    2400 operation_generator.go:815] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-k8s-certs" (OuterVolumeSpecName: "k8s-certs") pod "14f21e8d22bf58a1aafff0a82de4f720" (UID: "14f21e8d22bf58a1aafff0a82de4f720"). InnerVolumeSpecName "k8s-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.673013    2400 operation_generator.go:815] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-usr-share-ca-certificates" (OuterVolumeSpecName: "usr-share-ca-certificates") pod "14f21e8d22bf58a1aafff0a82de4f720" (UID: "14f21e8d22bf58a1aafff0a82de4f720"). InnerVolumeSpecName "usr-share-ca-certificates". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.673153    2400 operation_generator.go:815] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-kubeconfig" (OuterVolumeSpecName: "kubeconfig") pod "14f21e8d22bf58a1aafff0a82de4f720" (UID: "14f21e8d22bf58a1aafff0a82de4f720"). InnerVolumeSpecName "kubeconfig". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.673208    2400 operation_generator.go:815] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "14f21e8d22bf58a1aafff0a82de4f720" (UID: "14f21e8d22bf58a1aafff0a82de4f720"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.773286    2400 reconciler.go:301] Volume detached for volume "ca-certs" (UniqueName: "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-ca-certs") on node "minikube" DevicePath ""
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.773405    2400 reconciler.go:301] Volume detached for volume "k8s-certs" (UniqueName: "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-k8s-certs") on node "minikube" DevicePath ""
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.773454    2400 reconciler.go:301] Volume detached for volume "usr-share-ca-certificates" (UniqueName: "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-usr-share-ca-certificates") on node "minikube" DevicePath ""
Jun 25 01:52:27 minikube kubelet[2400]: I0625 01:52:27.773498    2400 reconciler.go:301] Volume detached for volume "kubeconfig" (UniqueName: "kubernetes.io/host-path/14f21e8d22bf58a1aafff0a82de4f720-kubeconfig") on node "minikube" DevicePath ""
Jun 25 01:52:29 minikube kubelet[2400]: W0625 01:52:29.475233    2400 kubelet_getters.go:284] Path "/var/lib/kubelet/pods/14f21e8d22bf58a1aafff0a82de4f720/volumes" does not exist
Jun 25 01:52:53 minikube kubelet[2400]: W0625 01:52:53.772004    2400 pod_container_deletor.go:75] Container "b12fd1bd7651c83bee5f941b90a773fb66e344721ff2e7701b09140242b8f5e4" not found in pod's containers
Jun 25 01:52:54 minikube kubelet[2400]: E0625 01:52:54.022864    2400 pod_workers.go:190] Error syncing pod 0abcb7a1f0c9c0ebc9ec348ffdfb220c ("kube-addon-manager-minikube_kube-system(0abcb7a1f0c9c0ebc9ec348ffdfb220c)"), skipping: failed to "StartContainer" for "kube-addon-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-addon-manager pod=kube-addon-manager-minikube_kube-system(0abcb7a1f0c9c0ebc9ec348ffdfb220c)"
Jun 25 01:52:54 minikube kubelet[2400]: E0625 01:52:54.791543    2400 pod_workers.go:190] Error syncing pod 0abcb7a1f0c9c0ebc9ec348ffdfb220c ("kube-addon-manager-minikube_kube-system(0abcb7a1f0c9c0ebc9ec348ffdfb220c)"), skipping: failed to "StartContainer" for "kube-addon-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-addon-manager pod=kube-addon-manager-minikube_kube-system(0abcb7a1f0c9c0ebc9ec348ffdfb220c)"
Jun 25 01:52:55 minikube kubelet[2400]: E0625 01:52:55.811664    2400 pod_workers.go:190] Error syncing pod 0abcb7a1f0c9c0ebc9ec348ffdfb220c ("kube-addon-manager-minikube_kube-system(0abcb7a1f0c9c0ebc9ec348ffdfb220c)"), skipping: failed to "StartContainer" for "kube-addon-manager" with CrashLoopBackOff: "Back-off 10s restarting failed container=kube-addon-manager pod=kube-addon-manager-minikube_kube-system(0abcb7a1f0c9c0ebc9ec348ffdfb220c)"

==> kubernetes-dashboard <==
2019/06/25 01:44:00 Starting overwatch
2019/06/25 01:44:00 Using in-cluster config to connect to apiserver
2019/06/25 01:44:00 Using service account token for csrf signing
2019/06/25 01:44:00 Successful initial request to the apiserver, version: v1.14.1
2019/06/25 01:44:00 Generating JWE encryption key
2019/06/25 01:44:00 New synchronizer has been registered: kubernetes-dashboard-key-holder-kube-system. Starting
2019/06/25 01:44:00 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kube-system
2019/06/25 01:44:00 Initializing JWE encryption key from synchronized object
2019/06/25 01:44:00 Creating in-cluster Heapster client
2019/06/25 01:44:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:44:00 Serving insecurely on HTTP port: 9090
2019/06/25 01:44:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:45:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:45:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:46:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:46:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:47:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:47:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:48:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:48:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:49:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:49:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:50:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:50:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:51:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:51:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:52:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:52:30 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.
2019/06/25 01:53:00 Metric client health check failed: the server could not find the requested resource (get services heapster). Retrying in 30 seconds.

==> storage-provisioner <==

The operating system version:
macOS 10.14.5

@chaimleib
Author

chaimleib commented Jun 25, 2019

Possibly related: images also could not be fetched when I tried using minikube dashboard to create a new app using nginx:alpine, nginx, docker.io/library/nginx:alpine, or k8s.gcr.io/nginx:alpine.

Failed to pull image "nginx:alpine": rpc error: code = Unknown desc = Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.2:57706->192.168.64.1:53: read: connection refused

All the failure messages from the minikube dashboard GUI were the same, just with different image names.

Also possibly related:

@chaimleib
Author

chaimleib commented Jun 25, 2019

Here are some diagnostic commands:

DNS

% sudo lsof -ni:53
Password:
COMMAND     PID   USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
dnscrypt- 67702 nobody    7u  IPv4 0xd11075e49a67bfd7      0t0  UDP 127.0.0.1:domain
dnscrypt- 67702 nobody    8u  IPv4 0xd11075e4950047c7      0t0  TCP 127.0.0.1:domain (LISTEN)

% ps -afe | grep dns
    0 67041     1   0 10:00AM ??         0:13.82 /Library/Application Support/OpenDNS Roaming Client/dns-updater
   -2 67702     1   0 10:43AM ??         0:05.57 /Library/Application Support/OpenDNS Roaming Client/dnscrypt-proxy --user nobody --local-address=127.0.0.1:53 --plugin=/Library/Application Support/OpenDNS Roaming Client/libdcplugin_erc.so -d
  502 82384 25295   0 11:32AM ttys007    0:00.01 grep dns

Hyperkit install

% hyperkit -v
hyperkit: 0.20190201

Homepage: https://github.com/docker/hyperkit
License: BSD

% ls -la /usr/local/bin/docker-machine-driver-hyperkit
-rwsr-xr-x  1 root  wheel  33224740 Jun 24 18:35 /usr/local/bin/docker-machine-driver-hyperkit

% grep -c minikube /usr/local/bin/docker-machine-driver-hyperkit
93

Docker registries

% docker pull nginx:alpine
alpine: Pulling from library/nginx
e7c96db7181b: Pull complete
f0e40e45c95e: Pull complete
Digest: sha256:b126fee6820be927b1e04ae36b3f51aa47d9b73bf6b1826ff19a59d22b2b4c63
Status: Downloaded newer image for nginx:alpine
% curl -vvv https://k8s.gcr.io/v2/
*   Trying 74.125.142.82...
* TCP_NODELAY set
* Connected to k8s.gcr.io (74.125.142.82) port 443 (#0)
* ALPN, offering h2
* ALPN, offering http/1.1
* Cipher selection: ALL:!EXPORT:!EXPORT40:!EXPORT56:!aNULL:!LOW:!RC4:@STRENGTH
* successfully set certificate verify locations:
*   CAfile: /etc/ssl/cert.pem
  CApath: none
* TLSv1.2 (OUT), TLS handshake, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Server hello (2):
* TLSv1.2 (IN), TLS handshake, Certificate (11):
* TLSv1.2 (IN), TLS handshake, Server key exchange (12):
* TLSv1.2 (IN), TLS handshake, Server finished (14):
* TLSv1.2 (OUT), TLS handshake, Client key exchange (16):
* TLSv1.2 (OUT), TLS change cipher, Client hello (1):
* TLSv1.2 (OUT), TLS handshake, Finished (20):
* TLSv1.2 (IN), TLS change cipher, Client hello (1):
* TLSv1.2 (IN), TLS handshake, Finished (20):
* SSL connection using TLSv1.2 / ECDHE-ECDSA-CHACHA20-POLY1305
* ALPN, server accepted to use h2
* Server certificate:
*  subject: C=US; ST=California; L=Mountain View; O=Google LLC; CN=*.gcr.io
*  start date: Jun 11 12:40:53 2019 GMT
*  expire date: Sep  3 12:21:00 2019 GMT
*  subjectAltName: host "k8s.gcr.io" matched cert's "*.gcr.io"
*  issuer: C=US; O=Google Trust Services; CN=Google Internet Authority G3
*  SSL certificate verify ok.
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fa670006600)
> GET /v2/ HTTP/2
> Host: k8s.gcr.io
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 401
< docker-distribution-api-version: registry/2.0
< www-authenticate: Bearer realm="https://k8s.gcr.io/v2/token",service="k8s.gcr.io"
< content-type: application/json
< date: Tue, 25 Jun 2019 18:28:52 GMT
< server: Docker Registry
< cache-control: private
< x-xss-protection: 0
< x-frame-options: SAMEORIGIN
< alt-svc: quic=":443"; ma=2592000; v="46,44,43,39"
< accept-ranges: none
< vary: Accept-Encoding
<
* Connection #0 to host k8s.gcr.io left intact
{"errors":[{"code":"UNAUTHORIZED","message":"Unauthorized access."}]}

@medyagh
Member

medyagh commented Jun 25, 2019

I am curious, are you trying to pull the image into minikube? Have you done minikube docker-env?
Are you behind a proxy that limits your access to GCR?
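
For reference, pointing the local docker client at the VM's Docker daemon is normally done along these lines (just a sketch; note that the failing pulls above are run by kubeadm inside the VM, so this mainly matters for manual pulls):

% eval $(minikube docker-env)
% docker pull k8s.gcr.io/kube-apiserver:v1.14.1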

@medyagh added the triage/needs-information (Indicates an issue needs more information in order to work on it) and co/hyperkit (Hyperkit related issues) labels on Jun 25, 2019
@medyagh
Member

medyagh commented Jun 25, 2019

I wonder if this is related to #4547

@chaimleib
Author

I was able to get things working using --vm-driver parallels, so the proxy seems to allow connections to the registry.
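
For anyone wanting to try the same workaround, the switch was roughly this (assuming the Parallels driver is installed):

% minikube delete
% minikube start --vm-driver parallels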

@medyagh changed the title from "Get https://k8s.gcr.io/v2/: dial tcp: lookup k8s.gcr.io on 192.168.64.1:53: read udp x->192.168.64.1:53: read: connection refused" to "hyperkit: tcp: lookup k8s.gcr.io read: connection refused" on Jun 26, 2019
@dfang
Contributor

dfang commented Jun 26, 2019

I wonder if this is related to #4547

@medyagh for me, I start minikube without --insecure-registry, with or without HTTP_PROXY and HTTPS_PROXY set, and get the same result.

docker pull from inside minikube ssh:

minikube ssh
                         _             _
            _         _ ( )           ( )
  ___ ___  (_)  ___  (_)| |/')  _   _ | |_      __
/' _ ` _ `\| |/' _ `\| || , <  ( ) ( )| '_`\  /'__`\
| ( ) ( ) || || ( ) || || |\`\ | (_) || |_) )(  ___/
(_) (_) (_)(_)(_) (_)(_)(_) (_)`\___/'(_,__/'`\____)

$ docker pull alpine
Using default tag: latest
Error response from daemon: Get https://registry-1.docker.io/v2/: dial tcp: lookup registry-1.docker.io on 192.168.64.1:53: read udp 192.168.64.96:50393->192.168.64.1:53: read: connection refused

$ curl -v -I google.com
* Could not resolve host: google.com
curl: (6) Could not resolve host: google.com

I can't pull the alpine image from Docker Hub, let alone images from gcr.io.

How does DNS work in minikube? I can't find any documentation on it.

By the way, this is on macOS 10.14.5, minikube 1.2.0, with the hyperkit driver.

I haven't tried other drivers yet.
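
As far as I understand it (a sketch of the mechanism, not something I found documented): with the hyperkit driver the VM's resolver points at the gateway address (192.168.64.1 here) and relies on the host answering DNS there, so if something on the host grabs port 53 and only listens on 127.0.0.1, the VM gets "connection refused", which matches the errors above. You can check what the VM is actually using with:

% minikube ssh
$ cat /etc/resolv.conf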

@chaimleib
Author

I don't have HTTP_PROXY or http_proxy set in my env vars.

@lambda-9
Contributor

See #3036 (comment) on issue #3036. Cisco Umbrella and Cisco AnyConnect may run a local DNS proxy, which does not seem to work with hyperkit and minikube. I am not using any sort of HTTP proxy.

A temporary workaround for me is to use the parallels vm-driver. I do not experience this issue with Parallels. I did not test VirtualBox or VMware.
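
For anyone debugging a similar setup, it may help to check what is bound to port 53 on the host and what macOS believes its resolvers are (standard macOS tools, shown as a sketch):

% sudo lsof -nP -i :53
% scutil --dns | grep 'nameserver\['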

@malagant

I found out that I had dnsmasq running on my Mac. After disabling it, I downloaded the newest hyperkit driver, deleted the existing minikube and set up a new one. Now everything works as expected.

  1. Stop dnsmasq
    brew services stop dnsmasq
  2. Download latest hyperkit
    curl -LO https://storage.googleapis.com/minikube/releases/latest/docker-machine-driver-hyperkit && sudo install -o root -g wheel -m 4755 docker-machine-driver-hyperkit /usr/local/bin/
  3. Remove minikube instance
    minikube delete -p minikube
  4. Start new minikube instance
    minikube start --memory 16384 --cpus 4 --vm-driver=hyperkit --disk-size 100g

Hopefully this helps someone.
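
A quick way to verify the fix, along the lines of the earlier tests, is to try a pull from inside the VM again:

% minikube ssh
$ docker pull alpine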

@tstromberg
Contributor

@chaimleib - any chance there is a local DNS server running on your system? You can confirm using:

sudo lsof -i :53

If so, this is due to #3036

@tstromberg added the area/dns (DNS issues) and cause/port-conflict (Start failures due to port or other network conflict) labels on Jul 17, 2019
@medyagh
Member

medyagh commented Jul 25, 2019

@chaimleib could you confirm the output of sudo lsof -i :53?

@chaimleib
Author

chaimleib commented Jul 25, 2019

Copying from above:

% sudo lsof -ni:53
Password:
COMMAND     PID   USER   FD   TYPE             DEVICE SIZE/OFF NODE NAME
dnscrypt- 67702 nobody    7u  IPv4 0xd11075e49a67bfd7      0t0  UDP 127.0.0.1:domain
dnscrypt- 67702 nobody    8u  IPv4 0xd11075e4950047c7      0t0  TCP 127.0.0.1:domain (LISTEN)

-n just means not to resolve hostnames.

@tstromberg removed the triage/needs-information (Indicates an issue needs more information in order to work on it) label on Aug 1, 2019
@tstromberg
Contributor

Thanks for the info! It does appear that you have a DNS server (dnscrypt) that conflicts with the hyperkit DNS server. You can either use VirtualBox or kill the dnscrypt process beforehand.

Closing as dupe of #3036
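
For anyone else landing here with the OpenDNS Roaming Client from the logs above, the two options sketched out would look roughly like this (the Roaming Client may respawn dnscrypt-proxy on its own, in which case the client itself needs to be disabled):

# option 1: stop the local DNS proxy, then start minikube with hyperkit
% sudo killall dnscrypt-proxy
% minikube start --vm-driver hyperkit

# option 2: use a different VM driver
% minikube start --vm-driver virtualbox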
