
Improve error handling on ARM architectures #7818

Closed
LyleLee opened this issue Apr 21, 2020 · 13 comments
Labels
co/docker-driver: Issues related to kubernetes in container
kind/bug: Categorizes issue or PR as related to a bug.
lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
needs-solution-message: Issues where offering a solution for an error would be helpful
priority/important-longterm: Important over the long term, but may not be staffed and/or may need multiple releases to complete.

Comments

@LyleLee commented Apr 21, 2020

Can anyone give me some guidance on what is going on? I am working with:

  • ARM64
  • CentOS 4.18.0-80.7.2.el7.aarch64

Steps to reproduce the issue:

  1. wget https://storage.googleapis.com/minikube/releases/v1.9.2/minikube-linux-arm64
  2. ln -s minikube-linux-arm64 minikube
  3. minikube start
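Note: the downloaded binary also needs the executable bit, and its architecture can be sanity-checked before running it:

$ file minikube-linux-arm64     # should report an ARM aarch64 executable
$ chmod +x minikube-linux-arm64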

Full output of failed command:

[user1@arm64-server ~]$ minikube start
* minikube v1.9.2 on Centos 7.7.1908 (arm64)
* Using the docker driver based on existing profile
* Starting control plane node m01 in cluster minikube
* Pulling base image ...
* Restarting existing docker container for "minikube" ...
! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "minikube", output
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
: exit status 1
* Restarting existing docker container for "minikube" ...
*
X Failed to start docker container. "minikube start" may fix it.: provision: get ssh host-port: get host-bind port 22 for "minikube", output
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
: exit status 1
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose

Full output of minikube start command used, if not already included:

As above.

Optional: Full output of minikube logs command:

[user1@kunpeng920 ~]$ minikube logs
* The control plane node must be running for this command
  - To fix this, run: "minikube start"
@tstromberg tstromberg added co/docker-driver Issues related to kubernetes in container kind/bug Categorizes issue or PR as related to a bug. labels Apr 21, 2020
@tstromberg (Contributor)

Sorry about that. Do you mind sharing the output of:

  • minikube start --alsologtostderr -v=1
  • docker inspect minikube

It would also be useful to know if using the latest beta fixes this issue for you: https://github.com/kubernetes/minikube/releases/tag/v1.10.0-beta.0

Thank you for the report!

@tstromberg tstromberg added kind/support Categorizes issue or PR as a support question. priority/backlog Higher priority than priority/awaiting-more-evidence. and removed kind/support Categorizes issue or PR as a support question. labels Apr 21, 2020
@tstromberg (Contributor)

I'm marking this as a bug, because whatever the underlying root cause is here, our error handling is clearly not very good.

@LyleLee (Author) commented Apr 21, 2020

Thanks a lot, here it is:

  • minikube start --alsologtostderr -v=1
[user1@arm64-server ~]$ minikube start --alsologtostderr -v=1
W0421 11:47:50.210136  101380 root.go:248] Error reading config file at /home/user1/.minikube/config/config.json: open /home/user1/.minikube/config/config.json: no such file or directory
I0421 11:47:50.210548  101380 notify.go:125] Checking for updates...
I0421 11:47:51.151697  101380 start.go:262] hostinfo: {"hostname":"arm64-server","uptime":1558143,"bootTime":1585882728,"procs":1315,"os":"linux","platform":"centos","platformFamily":"rhel","platformVersion":"7.7.1908","kernelVersion":"4.18.0-80.7.2.el7.aarch64","virtualizationSystem":"","virtualizationRole":"","hostid":"5bb68dff-8357-4d63-a431-3ec4a206719a"}
I0421 11:47:51.152765  101380 start.go:272] virtualization:
* minikube v1.9.2 on Centos 7.7.1908 (arm64)
I0421 11:47:51.155150  101380 driver.go:245] Setting default libvirt URI to qemu:///system
* Using the docker driver based on existing profile
I0421 11:47:51.251505  101380 start.go:310] selected driver: docker
I0421 11:47:51.251528  101380 start.go:656] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: Memory:130600 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[HTTP_PROXY=socks5://192.168.1.201:52044 HTTPS_PROXY=socks5://192.168.1.201:52044 HTTP_PROXY=socks5://192.168.1.201:52044 HTTPS_PROXY=socks5://192.168.1.201:52044] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio KubernetesConfig:{KubernetesVersion:v1.18.0 ClusterName:minikube APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.244.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false NodeIP: NodePort:0 NodeName:} Nodes:[{Name:m01 IP: Port:8443 KubernetesVersion:v1.18.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true]}
I0421 11:47:51.251629  101380 start.go:662] status for docker: {Installed:true Healthy:true Error:<nil> Fix: Doc:}
I0421 11:47:51.251651  101380 start.go:1100] auto setting extra-config to "kubeadm.pod-network-cidr=10.244.0.0/16".
I0421 11:47:51.349656  101380 start.go:1004] Using suggested 130600MB memory alloc based on sys=522799MB, container=522799MB
I0421 11:47:51.349806  101380 start.go:1210] Wait components to verify : map[apiserver:true system_pods:true]
* Starting control plane node m01 in cluster minikube
* Pulling base image ...
I0421 11:47:51.350319  101380 cache.go:104] Beginning downloading kic artifacts
I0421 11:47:51.350346  101380 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0421 11:47:51.350485  101380 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0421 11:47:51.350547  101380 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0421 11:47:51.403633  101380 image.go:90] Found gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 in local docker daemon, skipping pull
W0421 11:47:51.745438  101380 preload.go:110] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v2-v1.18.0-docker-overlay2-arm64.tar.lz4 status code: 404
I0421 11:47:51.745676  101380 cache.go:92] acquiring lock: {Name:mk06b9c55e741a352ff708eef4ac3fb13485606e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.745751  101380 cache.go:92] acquiring lock: {Name:mk0283216493c2ade8def79a557aea749cfa702b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.745734  101380 cache.go:92] acquiring lock: {Name:mk9ee4a2772b241f54aa6c029f05cf30493ab057 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.745786  101380 cache.go:92] acquiring lock: {Name:mkb2c0fb5213c9172bb4ce55d882c42898c558b2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.745819  101380 cache.go:92] acquiring lock: {Name:mk6fc96117bd426ee9959489c892ecffe6283cb2 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.745742  101380 cache.go:92] acquiring lock: {Name:mk6d2e43fbc4e7d670c77503cb096f96cd80e5eb Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.745694  101380 cache.go:92] acquiring lock: {Name:mk1f4e500dad3a27914230f792dc6eb0ebcf9d21 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.745702  101380 profile.go:138] Saving config to /home/user1/.minikube/profiles/minikube/config.json ...
I0421 11:47:51.745744  101380 cache.go:92] acquiring lock: {Name:mka83b909d879abd0902c8bb08731a6dec42e8e5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.745773  101380 cache.go:92] acquiring lock: {Name:mk56b35ea9341883e5ff637eb316473109c0c596 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.745826  101380 cache.go:92] acquiring lock: {Name:mkebf0707a94dca06729850218b0b78283017cfd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0421 11:47:51.746659  101380 cache.go:117] Successfully downloaded all kic artifacts
I0421 11:47:51.746710  101380 start.go:260] acquiring machines lock for minikube: {Name:mkc8ffa5e423e7f77e9e1efa466afe7dc7c1d2ab Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0421 11:47:51.746735  101380 image.go:112] retrieving image: k8s.gcr.io/etcd-arm64:3.4.3-0
I0421 11:47:51.746779  101380 image.go:112] retrieving image: k8s.gcr.io/coredns:1.6.7
I0421 11:47:51.746847  101380 image.go:112] retrieving image: kubernetesui/metrics-scraper:v1.0.2
I0421 11:47:51.746867  101380 start.go:264] acquired machines lock for "minikube" in 120.807µs
I0421 11:47:51.746899  101380 start.go:90] Skipping create...Using existing machine configuration
I0421 11:47:51.746910  101380 fix.go:53] fixHost starting: m01
I0421 11:47:51.746867  101380 image.go:112] retrieving image: gcr.io/k8s-minikube/storage-provisioner-arm64:v1.8.1
I0421 11:47:51.746949  101380 image.go:112] retrieving image: k8s.gcr.io/kube-controller-manager-arm64:v1.18.0
I0421 11:47:51.746963  101380 image.go:112] retrieving image: k8s.gcr.io/kube-proxy-arm64:v1.18.0
I0421 11:47:51.746854  101380 image.go:112] retrieving image: k8s.gcr.io/kube-scheduler-arm64:v1.18.0
I0421 11:47:51.747053  101380 image.go:112] retrieving image: kubernetesui/dashboard:v2.0.0-rc6
I0421 11:47:51.746909  101380 image.go:112] retrieving image: k8s.gcr.io/kube-apiserver-arm64:v1.18.0
I0421 11:47:51.747183  101380 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0421 11:47:51.747068  101380 cache.go:100] /home/user1/.minikube/cache/images/k8s.gcr.io/pause-arm64_3.2 exists
I0421 11:47:51.747335  101380 cache.go:81] cache image "k8s.gcr.io/pause-arm64:3.2" -> "/home/user1/.minikube/cache/images/k8s.gcr.io/pause-arm64_3.2" took 1.571647ms
I0421 11:47:51.747373  101380 cache.go:66] save to tar file k8s.gcr.io/pause-arm64:3.2 -> /home/user1/.minikube/cache/images/k8s.gcr.io/pause-arm64_3.2 succeeded
I0421 11:47:51.750950  101380 image.go:120] daemon lookup for kubernetesui/metrics-scraper:v1.0.2: Error response from daemon: reference does not exist
I0421 11:47:51.751435  101380 image.go:120] daemon lookup for k8s.gcr.io/etcd-arm64:3.4.3-0: Error response from daemon: reference does not exist
I0421 11:47:51.751486  101380 image.go:120] daemon lookup for gcr.io/k8s-minikube/storage-provisioner-arm64:v1.8.1: Error response from daemon: reference does not exist
I0421 11:47:51.751608  101380 image.go:120] daemon lookup for k8s.gcr.io/kube-scheduler-arm64:v1.18.0: Error response from daemon: reference does not exist
I0421 11:47:51.751495  101380 image.go:120] daemon lookup for k8s.gcr.io/kube-controller-manager-arm64:v1.18.0: Error response from daemon: reference does not exist
I0421 11:47:51.751776  101380 image.go:120] daemon lookup for k8s.gcr.io/kube-proxy-arm64:v1.18.0: Error response from daemon: reference does not exist
I0421 11:47:51.752491  101380 image.go:120] daemon lookup for k8s.gcr.io/coredns:1.6.7: Error response from daemon: reference does not exist
I0421 11:47:51.754999  101380 image.go:120] daemon lookup for k8s.gcr.io/kube-apiserver-arm64:v1.18.0: Error response from daemon: reference does not exist
I0421 11:47:51.755110  101380 image.go:120] daemon lookup for kubernetesui/dashboard:v2.0.0-rc6: Error response from daemon: reference does not exist
I0421 11:47:51.788860  101380 fix.go:105] recreateIfNeeded on minikube: state=Stopped err=<nil>
I0421 11:47:51.788911  101380 fix.go:109] exists: true err=<nil>
I0421 11:47:51.788923  101380 fix.go:110] %!q(<nil>) vs "machine does not exist"
W0421 11:47:51.788940  101380 fix.go:130] unexpected machine state, will restart: <nil>
* Restarting existing docker container for "minikube" ...
I0421 11:47:51.789629  101380 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0421 11:47:52.644363  101380 machine.go:86] provisioning docker machine ...
I0421 11:47:52.644434  101380 ubuntu.go:166] provisioning hostname "minikube"
I0421 11:47:53.055676  101380 cache.go:138] opening:  /home/user1/.minikube/cache/images/k8s.gcr.io/kube-controller-manager-arm64_v1.18.0
I0421 11:47:53.239210  101380 cache.go:138] opening:  /home/user1/.minikube/cache/images/k8s.gcr.io/etcd-arm64_3.4.3-0
I0421 11:47:53.240198  101380 cache.go:138] opening:  /home/user1/.minikube/cache/images/k8s.gcr.io/kube-apiserver-arm64_v1.18.0
I0421 11:47:53.305234  101380 machine.go:89] provisioned docker machine in 660.814165ms
I0421 11:47:53.305273  101380 fix.go:55] fixHost completed within 1.558363739s
I0421 11:47:53.305291  101380 start.go:77] releasing machines lock for "minikube", held for 1.558399521s
! StartHost failed, but will try again: provision: get ssh host-port: get host-bind port 22 for "minikube", output
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
: exit status 1
I0421 11:47:53.388851  101380 cache.go:138] opening:  /home/user1/.minikube/cache/images/k8s.gcr.io/kube-scheduler-arm64_v1.18.0
I0421 11:47:53.454587  101380 cache.go:138] opening:  /home/user1/.minikube/cache/images/k8s.gcr.io/kube-proxy-arm64_v1.18.0
I0421 11:47:53.494531  101380 cache.go:138] opening:  /home/user1/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner-arm64_v1.8.1
I0421 11:47:53.710910  101380 cache.go:138] opening:  /home/user1/.minikube/cache/images/k8s.gcr.io/coredns_1.6.7
I0421 11:47:58.305532  101380 start.go:260] acquiring machines lock for minikube: {Name:mkc8ffa5e423e7f77e9e1efa466afe7dc7c1d2ab Clock:{} Delay:500ms Timeout:15m0s Cancel:<nil>}
I0421 11:47:58.305789  101380 start.go:264] acquired machines lock for "minikube" in 205.233µs
I0421 11:47:58.305830  101380 start.go:90] Skipping create...Using existing machine configuration
I0421 11:47:58.305846  101380 fix.go:53] fixHost starting: m01
I0421 11:47:58.306145  101380 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0421 11:47:58.348300  101380 fix.go:105] recreateIfNeeded on minikube: state=Stopped err=<nil>
I0421 11:47:58.348346  101380 fix.go:109] exists: true err=<nil>
I0421 11:47:58.348360  101380 fix.go:110] %!q(<nil>) vs "machine does not exist"
W0421 11:47:58.348380  101380 fix.go:130] unexpected machine state, will restart: <nil>
* Restarting existing docker container for "minikube" ...
I0421 11:47:58.349045  101380 oci.go:250] executing with [docker inspect -f {{.State.Status}} minikube] timeout: 19s
I0421 11:47:58.699002  101380 cache.go:138] opening:  /home/user1/.minikube/cache/images/kubernetesui/dashboard_v2.0.0-rc6
I0421 11:47:58.796748  101380 cache.go:138] opening:  /home/user1/.minikube/cache/images/kubernetesui/metrics-scraper_v1.0.2
I0421 11:47:59.163594  101380 machine.go:86] provisioning docker machine ...
I0421 11:47:59.163635  101380 ubuntu.go:166] provisioning hostname "minikube"
I0421 11:47:59.772852  101380 machine.go:89] provisioned docker machine in 609.22451ms
I0421 11:47:59.772898  101380 fix.go:55] fixHost completed within 1.4670536s
I0421 11:47:59.772909  101380 start.go:77] releasing machines lock for "minikube", held for 1.467098933s
W0421 11:47:59.773155  101380 exit.go:101] Failed to start docker container. "minikube start" may fix it.: provision: get ssh host-port: get host-bind port 22 for "minikube", output
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
: exit status 1
*
X Failed to start docker container. "minikube start" may fix it.: provision: get ssh host-port: get host-bind port 22 for "minikube", output
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil
: exit status 1
*
* minikube is exiting due to an error. If the above message is not useful, open an issue:
  - https://github.com/kubernetes/minikube/issues/new/choose
[user1@arm64-server ~]$
  • docker inspect minikube
[
    {
        "Id": "0caf89481a57b50817f4078d58a6588b1b36cfba43b2011dec4e7b99bb045b31",
        "Created": "2020-04-21T02:50:44.969803615Z",
        "Path": "/usr/local/bin/entrypoint",
        "Args": [
            "/sbin/init"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 1,
            "Error": "",
            "StartedAt": "2020-04-21T03:47:59.160026632Z",
            "FinishedAt": "2020-04-21T03:47:59.161230766Z"
        },
        "Image": "sha256:11589cdc9ef4b67a64cc243dd3cf013e81ad02bbed105fc37dc07aa272044680",
        "ResolvConfPath": "/home/user1/dockerdata/containers/0caf89481a57b50817f4078d58a6588b1b36cfba43b2011dec4e7b99bb045b31/resolv.conf",
        "HostnamePath": "/home/user1/dockerdata/containers/0caf89481a57b50817f4078d58a6588b1b36cfba43b2011dec4e7b99bb045b31/hostname",
        "HostsPath": "/home/user1/dockerdata/containers/0caf89481a57b50817f4078d58a6588b1b36cfba43b2011dec4e7b99bb045b31/hosts",
        "LogPath": "/home/user1/dockerdata/containers/0caf89481a57b50817f4078d58a6588b1b36cfba43b2011dec4e7b99bb045b31/0caf89481a57b50817f4078d58a6588b1b36cfba43b2011dec4e7b99bb045b31-json.log",
        "Name": "/minikube",
        "RestartCount": 0,
        "Driver": "overlay2",
        "Platform": "linux",
        "MountLabel": "",
        "ProcessLabel": "",
        "AppArmorProfile": "",
        "ExecIDs": null,
        "HostConfig": {
            "Binds": [
                "/lib/modules:/lib/modules:ro",
                "minikube:/var"
            ],
            "ContainerIDFile": "",
            "LogConfig": {
                "Type": "json-file",
                "Config": {}
            },
            "NetworkMode": "default",
            "PortBindings": {
                "22/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "2376/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ],
                "8443/tcp": [
                    {
                        "HostIp": "127.0.0.1",
                        "HostPort": ""
                    }
                ]
            },
            "RestartPolicy": {
                "Name": "no",
                "MaximumRetryCount": 0
            },
            "AutoRemove": false,
            "VolumeDriver": "",
            "VolumesFrom": null,
            "CapAdd": null,
            "CapDrop": null,
            "Capabilities": null,
            "Dns": [],
            "DnsOptions": [],
            "DnsSearch": [],
            "ExtraHosts": null,
            "GroupAdd": null,
            "IpcMode": "private",
            "Cgroup": "",
            "Links": null,
            "OomScoreAdj": 0,
            "PidMode": "",
            "Privileged": true,
            "PublishAllPorts": false,
            "ReadonlyRootfs": false,
            "SecurityOpt": [
                "seccomp=unconfined",
                "label=disable"
            ],
            "Tmpfs": {
                "/run": "",
                "/tmp": ""
            },
            "UTSMode": "",
            "UsernsMode": "",
            "ShmSize": 67108864,
            "Runtime": "runc",
            "ConsoleSize": [
                0,
                0
            ],
            "Isolation": "",
            "CpuShares": 0,
            "Memory": 136944025600,
            "NanoCpus": 2000000000,
            "CgroupParent": "",
            "BlkioWeight": 0,
            "BlkioWeightDevice": [],
            "BlkioDeviceReadBps": null,
            "BlkioDeviceWriteBps": null,
            "BlkioDeviceReadIOps": null,
            "BlkioDeviceWriteIOps": null,
            "CpuPeriod": 0,
            "CpuQuota": 0,
            "CpuRealtimePeriod": 0,
            "CpuRealtimeRuntime": 0,
            "CpusetCpus": "",
            "CpusetMems": "",
            "Devices": [],
            "DeviceCgroupRules": null,
            "DeviceRequests": null,
            "KernelMemory": 0,
            "KernelMemoryTCP": 0,
            "MemoryReservation": 0,
            "MemorySwap": 273888051200,
            "MemorySwappiness": null,
            "OomKillDisable": false,
            "PidsLimit": null,
            "Ulimits": null,
            "CpuCount": 0,
            "CpuPercent": 0,
            "IOMaximumIOps": 0,
            "IOMaximumBandwidth": 0,
            "MaskedPaths": null,
            "ReadonlyPaths": null
        },
        "GraphDriver": {
            "Data": {
                "LowerDir": "/home/user1/dockerdata/overlay2/fbf27f1e3daf9ebfb81996a7d825fa1ec1ee719e01a65b6f9e8001cc37d60f05-init/diff:/home/user1/dockerdata/overlay2/b2b2b81f5480f0b826de443c4a7ec0069998693df0b63557158480ca091b14dc/diff:/home/user1/dockerdata/overlay2/fa409a68209c956eb418ba10c20d92d155609cbbe400a7b386e1f26e8368f04f/diff:/home/user1/dockerdata/overlay2/363b06962a3dcbcc2dad43ef04435fa5e3cc31f69e1724347202acd4e4cd419c/diff:/home/user1/dockerdata/overlay2/4ca9d0a68ac589e4963fdfbb52d188652737325531d6424663875b95030def7d/diff:/home/user1/dockerdata/overlay2/04f7583a634cda7eb67041e547489a0ebf337d69ac618ebc37879df339fd3171/diff:/home/user1/dockerdata/overlay2/322e998d900b297fbc2b94266e7fc6ca9e4c62f3c7136d22c77a2b1361c69b24/diff:/home/user1/dockerdata/overlay2/69ff3ec80e038a14eae0f1b9fd983607ec8c8fcb5e456296740c1802669f001c/diff:/home/user1/dockerdata/overlay2/afe6928ec0dd28ed053e3c60ce9c884615e2bdbe152b4f9004838544cf323c44/diff:/home/user1/dockerdata/overlay2/4661aa404ad90b60f90b282d1cdb87dedca9256ef90bd69ce6ce9591a9f07a50/diff:/home/user1/dockerdata/overlay2/de730d72bb202433ce3b68f37543418a70b9d2bd63eb43da24bb207bb667009b/diff:/home/user1/dockerdata/overlay2/b76357cc70d528ba74d89c5f1e964bf4e97fdb8d633b89622e61040b6f224f48/diff:/home/user1/dockerdata/overlay2/6cc0f9af91ec3e784c0901ba6429c4ecbce2037eb83e5014978978a1ef939113/diff:/home/user1/dockerdata/overlay2/08c3cbc416dda998d94e30400572c81ca8ee2d7861fbfba46fbf0a22729d979d/diff:/home/user1/dockerdata/overlay2/80bc2fe640d04ff9aff0922522cc8a86f1570a5af71941b5dfcbae793bac3593/diff:/home/user1/dockerdata/overlay2/5aec233e108888dd5bb952af92e407ec22d4c89c8438f65558016e12c012e4ea/diff:/home/user1/dockerdata/overlay2/e5a223f113ae5ae3bcc424994ee55d4825517742737f6ff053370ef97cc9af47/diff:/home/user1/dockerdata/overlay2/a43916990757ecd3358a2ccc8fbe39c67b501a6bd493692d7836577eb6331e5a/diff:/home/user1/dockerdata/overlay2/d926de6c5b52add794c3eef791cac279a19597738ad2239746e20a9a1bcfb82f/diff:/home/user1/dockerdata/overlay2/afbea6315b7930d7e2049d335315bbfc4fb68d949525f741bf4768f6d63d9afc/diff:/home/user1/dockerdata/overlay2/03cbf55bcebda9a702283613f1a79751b2d288491914eb8eed64c3b55bb7398f/diff:/home/user1/dockerdata/overlay2/a7f2837d145a864cac1acadeab1a22dce89f1e82f87c131615b35d4811c45457/diff",
                "MergedDir": "/home/user1/dockerdata/overlay2/fbf27f1e3daf9ebfb81996a7d825fa1ec1ee719e01a65b6f9e8001cc37d60f05/merged",
                "UpperDir": "/home/user1/dockerdata/overlay2/fbf27f1e3daf9ebfb81996a7d825fa1ec1ee719e01a65b6f9e8001cc37d60f05/diff",
                "WorkDir": "/home/user1/dockerdata/overlay2/fbf27f1e3daf9ebfb81996a7d825fa1ec1ee719e01a65b6f9e8001cc37d60f05/work"
            },
            "Name": "overlay2"
        },
        "Mounts": [
            {
                "Type": "bind",
                "Source": "/lib/modules",
                "Destination": "/lib/modules",
                "Mode": "ro",
                "RW": false,
                "Propagation": "rprivate"
            },
            {
                "Type": "volume",
                "Name": "minikube",
                "Source": "/home/user1/dockerdata/volumes/minikube/_data",
                "Destination": "/var",
                "Driver": "local",
                "Mode": "z",
                "RW": true,
                "Propagation": ""
            }
        ],
        "Config": {
            "Hostname": "minikube",
            "Domainname": "",
            "User": "root",
            "AttachStdin": false,
            "AttachStdout": false,
            "AttachStderr": false,
            "ExposedPorts": {
                "22/tcp": {},
                "2376/tcp": {},
                "8443/tcp": {}
            },
            "Tty": true,
            "OpenStdin": false,
            "StdinOnce": false,
            "Env": [
                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
                "container=docker"
            ],
            "Cmd": null,
            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81",
            "Volumes": null,
            "WorkingDir": "",
            "Entrypoint": [
                "/usr/local/bin/entrypoint",
                "/sbin/init"
            ],
            "OnBuild": null,
            "Labels": {
                "created_by.minikube.sigs.k8s.io": "true",
                "mode.minikube.sigs.k8s.io": "minikube",
                "name.minikube.sigs.k8s.io": "minikube",
                "role.minikube.sigs.k8s.io": ""
            },
            "StopSignal": "SIGRTMIN+3"
        },
        "NetworkSettings": {
            "Bridge": "",
            "SandboxID": "81c73a4c2ff71aa5c0cc9fc6af97177012937c0916a05572b9620a5381b8039c",
            "HairpinMode": false,
            "LinkLocalIPv6Address": "",
            "LinkLocalIPv6PrefixLen": 0,
            "Ports": {},
            "SandboxKey": "/var/run/docker/netns/81c73a4c2ff7",
            "SecondaryIPAddresses": null,
            "SecondaryIPv6Addresses": null,
            "EndpointID": "",
            "Gateway": "",
            "GlobalIPv6Address": "",
            "GlobalIPv6PrefixLen": 0,
            "IPAddress": "",
            "IPPrefixLen": 0,
            "IPv6Gateway": "",
            "MacAddress": "",
            "Networks": {
                "bridge": {
                    "IPAMConfig": null,
                    "Links": null,
                    "Aliases": null,
                    "NetworkID": "0b7a130dfb89697b3d52af93ee7591516245a78499fb921a89bad39aafd0f684",
                    "EndpointID": "",
                    "Gateway": "",
                    "IPAddress": "",
                    "IPPrefixLen": 0,
                    "IPv6Gateway": "",
                    "GlobalIPv6Address": "",
                    "GlobalIPv6PrefixLen": 0,
                    "MacAddress": "",
                    "DriverOpts": null
                }
            }
        }
    }
]
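Note the empty "Ports": {} under NetworkSettings above: the container exited before any host ports were bound, so there is no entry for 22/tcp. Assuming minikube's template matches the one quoted in the error message, the failure (and a nil-safe variant) can be reproduced by hand:

$ docker inspect -f '{{index (index .NetworkSettings.Ports "22/tcp") 0}}' minikube
Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0>: error calling index: index of untyped nil

$ # sketch of a guarded version: only index element 0 if the port entry exists
$ docker inspect -f '{{if index .NetworkSettings.Ports "22/tcp"}}{{index (index .NetworkSettings.Ports "22/tcp") 0}}{{end}}' minikube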

@afbjorklund (Collaborator)

I think it is related to go-containerregistry not supporting architectures properly...

* Pulling base image ...
I0421 11:47:51.350319  101380 cache.go:104] Beginning downloading kic artifacts
I0421 11:47:51.350346  101380 preload.go:81] Checking if preload exists for k8s version v1.18.0 and runtime docker
I0421 11:47:51.350485  101380 cache.go:106] Downloading gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0421 11:47:51.350547  101380 image.go:84] Writing gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 to local daemon
I0421 11:47:51.403633  101380 image.go:90] Found gcr.io/k8s-minikube/kicbase:v0.0.8@sha256:2f3380ebf1bb0c75b0b47160fd4e61b7b8fef0f1f32f9def108d3eada50a7a81 in local docker daemon, skipping pull

The kicbase image is only available for amd64 at the moment (not arm64):

$ docker inspect gcr.io/k8s-minikube/kicbase:v0.0.8 | grep Arch
        "Architecture": "amd64",

@afbjorklund (Collaborator) commented Apr 21, 2020

@LyleLee : only the none driver is supported on arm64/aarch64, for now.

I'm marking this as a bug, because whatever the underlying root cause is here, our error handling is clearly not very good.

@tstromberg : that seems right

@afbjorklund afbjorklund added the needs-solution-message (Issues where offering a solution for an error would be helpful) label Apr 21, 2020
@afbjorklund (Collaborator)

We should add some arch checks, both to the documentation and to the code.
Right now, everything just assumes amd64 (and doesn't verify much).

See #6159
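A rough sketch of the kind of pre-flight check meant here (hypothetical, not minikube's actual code), in shell for brevity:

$ arch="$(uname -m)"
$ [ "$arch" = "x86_64" ] || echo "docker driver not supported on $arch; use --vm-driver=none"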

@LyleLee (Author) commented Apr 22, 2020

@afbjorklund Thanks for pointing out the clue. I'll try building gcr.io/k8s-minikube/kicbase from its Dockerfile rather than pulling it from the registry.
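A sketch of what that local build might look like (the Dockerfile path is an assumption about the minikube tree at the time):

$ git clone https://github.com/kubernetes/minikube && cd minikube
$ docker build -t gcr.io/k8s-minikube/kicbase:v0.0.8 -f deploy/kicbase/Dockerfile .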

@LyleLee : only the none driver is supported on arm64/aarch64, for now.

I'm marking this as a bug, because whatever the underlying root cause is here, our error handling is clearly not very good.

@tstromberg : that seems right

@LyleLee (Author) commented May 3, 2020

Update:
I successfully ran minikube on ARM64 following #5667, thanks @afbjorklund!

minikube start --vm-driver=none

But some of the images referred to in the documentation are not built for ARM64.

Can we start pushing ARM64 images? I would like to help out once I know the routine for publishing these images.
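For reference, the usual multi-arch publish routine with buildx looks roughly like this (a sketch with a placeholder tag; the project's actual release pipeline may differ):

$ docker buildx create --use
$ docker buildx build --platform linux/amd64,linux/arm64 -t gcr.io/k8s-minikube/kicbase:TAG --push .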

@medyagh (Member) commented May 9, 2020

Currently the docker driver doesn't support ARM. We need a better solution message to tell users this nicely.

@medyagh medyagh changed the title Template parsing error: template: :1:4: executing "" at <index (index .NetworkSettings.Ports "22/tcp") 0> Docker driver on ARM64 May 9, 2020
@priyawadhwa priyawadhwa added priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. and removed priority/backlog Higher priority than priority/awaiting-more-evidence. labels May 28, 2020
@priyawadhwa priyawadhwa changed the title Docker driver on ARM64 Improve error handling on ARM architectures May 28, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 26, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 25, 2020
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

@k8s-ci-robot (Contributor)

@fejta-bot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
