Problems with minikube on Apple Silicon M1 with podman/hyperkit #13006
Comments
/kind support
Attempted exactly this procedure on an M1 Mac:
Proximal output (for searchers):
Is there anyone who has a running minikube installation on Apple Silicon M1 without the Docker environment?
I finally chose rancher-desktop.
Also curious about the answer to this. I wanted to use this as part of the "drop-in replacement for Docker Desktop" that this guide suggests, but it seems this isn't compatible with the M1 at all in its current state?
Yep, also looking at drop-in replacements, as Docker Desktop is now a no-go for ad-hoc dev with docker-compose helper scripts for UI devs. The guide https://dhwaneetbhatt.com/blog/run-docker-without-docker-desktop-on-macos is great for Intel CPUs but falls on its face for M1 arm64. There seems to be nothing truly available to run containers on M1 easily for non-techy users.
Have a look at rancher-desktop, which now supports M1 (rancher-sandbox/rancher-desktop#956); that is my current alternative.
Thank you very much, that looks really good and I didn't have Rancher Desktop in mind. The thing is I actually want to avoid a desktop application like Docker; I want to use CLI tooling like minikube.
Hopefully we get an M1/AARCH64 solution for minikube 🙏 @RA489
colima (HEAD) with lima/docker works perfectly on arm64. Haven't tried it with kube. I also have mostly good experiences with Rancher Desktop, but the bind-mounting doesn't really work.
Oh @rfay that looks really good 🤩 I did a short …
@greenchapter Regarding colima and QEMU: does it start an Intel-based VM or one based on arm64?
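A hedged sketch (not from this thread) of how one could answer that: colima is backed by lima, and the default profile creates a lima instance named "colima", so the guest architecture can be queried directly. The instance name and commands assume a recent colima/lima release.
colima start                     # boot the VM with colima's defaults
limactl shell colima uname -m    # prints "aarch64" for an arm64 guest, "x86_64" for an Intel one
colima status                    # colima's own summary of the running VM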
I'm having the exact same issue as described by @carl-reverb. Any help would be greatly appreciated. I talked to @baude and he says the driver isn't managed by the Podman folks. It looks to be an SSH configuration issue, I believe. My specific log output:
Just as an update on the HyperKit side, we are making progress towards getting an ARM64 ISO, here's the PR for reference |
Hopefully it will be merged soon 😍
@spowelljr hyperkit does not support arm64, so the new ISO will have to use some other VM driver (vmware/parallels/qemu2) |
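For context, a hedged sketch of what a non-hyperkit start could look like once an arm64 ISO ships; the qemu2 driver name assumes a later minikube release that actually bundles that driver, which the version in this thread does not.
minikube start --driver=qemu2 --container-runtime=containerd   # VM-based start, no Docker Desktop involved
minikube status                                                 # verify the control plane came up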
By default, upstream lima starts an Intel VM on an Intel host and an arm64 VM on an arm64 host:
# Arch: "default", "x86_64", "aarch64".
# 🟢 Builtin default: "default" (corresponds to the host architecture)
arch: null
The colima distribution might be different, but there is lima Kubernetes support with containerd:
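As a rough illustration of that lima route (a sketch, not verified on this setup): lima ships a k3s example, so a containerd-based Kubernetes VM can be brought up along these lines. The template:// syntax assumes a lima version new enough to support it; on older versions, point limactl start at examples/k3s.yaml from the lima repository instead.
limactl start template://k3s                  # boots a VM running k3s (Kubernetes on containerd)
limactl shell k3s sudo k3s kubectl get nodes  # run kubectl inside the guest to confirm the node is Ready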
Thanks everyone for the patience; please track the update in this issue.
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
/remove-lifecycle rotten
The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues and PRs according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle rotten
The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs. This bot triages issues according to the following rules:
You can:
Please send feedback to sig-contributor-experience at kubernetes/community.
/close not-planned
@k8s-triage-robot: Closing this issue, marking it as "Not Planned". In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Actually I want to run minikube on my Apple Silicon M1 MacBook Pro without using Docker (the Docker integration actually works). I am currently focusing on podman and hyperkit, but neither of these two drivers works at the moment.
For me it looks like podman is running and I can build images with podman, but I have no working hypervisor for minikube.
hyperkit seems to be currently unavailable for macOS Monterey and macOS Big Sur.
Steps to reproduce the issue with podman:
podman machine init --cpus 2 --memory 2048 --disk-size 20
podman machine start
podman system connection default podman-machine-default-root
minikube start --driver=podman --container-runtime=cri-o --alsologtostderr -v=7
As described here: https://minikube.sigs.k8s.io/docs/drivers/podman/
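Before starting minikube, a hedged sanity check (not part of the linked docs) is to confirm that the rootful connection really is the default and that the podman machine is reachable; the --format fields below are assumptions about the podman info output.
podman system connection list                               # "podman-machine-default-root" should be marked as the default
podman info --format '{{.Host.Arch}} {{.Version.Version}}'  # should print arm64 plus the server version if the VM is reachable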
I1122 20:37:30.153204 86302 out.go:297] Setting OutFile to fd 1 ...
I1122 20:37:30.153325 86302 out.go:349] isatty.IsTerminal(1) = true
I1122 20:37:30.153329 86302 out.go:310] Setting ErrFile to fd 2...
I1122 20:37:30.153333 86302 out.go:349] isatty.IsTerminal(2) = true
I1122 20:37:30.153438 86302 root.go:313] Updating PATH: /Users/xxx/.minikube/bin
I1122 20:37:30.153662 86302 out.go:304] Setting JSON to false
I1122 20:37:30.184995 86302 start.go:112] hostinfo: {"hostname":"MacBook-Pro-(14-inch,-2021)","uptime":289813,"bootTime":1637320037,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.0.1","kernelVersion":"21.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"4aece9fd-434e-5ffd-98bb-571ad2700651"}
W1122 20:37:30.185114 86302 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1122 20:37:30.224896 86302 out.go:176] 😄 minikube v1.24.0 on Darwin 12.0.1 (arm64)
😄 minikube v1.24.0 on Darwin 12.0.1 (arm64)
W1122 20:37:30.224959 86302 preload.go:294] Failed to list preload files: open /Users/thomasott/.minikube/cache/preloaded-tarball: no such file or directory
I1122 20:37:30.224978 86302 notify.go:174] Checking for updates...
I1122 20:37:30.225263 86302 config.go:176] Loaded profile config "minikube": Driver=podman, ContainerRuntime=crio, KubernetesVersion=v1.22.3
I1122 20:37:30.226175 86302 driver.go:343] Setting default libvirt URI to qemu:///system
I1122 20:37:30.649297 86302 podman.go:121] podman version: 3.4.1
I1122 20:37:30.686936 86302 out.go:176] ✨ Using the podman (experimental) driver based on existing profile
✨ Using the podman (experimental) driver based on existing profile
I1122 20:37:30.686980 86302 start.go:280] selected driver: podman
I1122 20:37:30.686986 86302 start.go:762] validating driver "podman" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:1953 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1122 20:37:30.687086 86302 start.go:773] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I1122 20:37:30.687207 86302 cli_runner.go:115] Run: podman system info --format json
I1122 20:37:30.777296 86302 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:1687957504 MemTotal:2048811008 OCIRuntime:{Name:crun Package:crun-1.2-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.2
commit: 4f6c8e0583c679bfee6a899c05ac6b916022561b
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.14.14-300.fc35.aarch64 Os:linux Rootless:false Uptime:1m 17.03s} Registries:{Search:[docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:1} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:false SupportsDType:true UsingMetacopy:true} ImageStore:{Number:21} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
W1122 20:37:30.777521 86302 info.go:50] Unable to get CPU info: no such file or directory
W1122 20:37:30.777563 86302 start.go:925] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory
I1122 20:37:30.777580 86302 cni.go:93] Creating CNI manager for ""
I1122 20:37:30.777590 86302 cni.go:160] "podman" driver + crio runtime found, recommending kindnet
I1122 20:37:30.777597 86302 start_flags.go:282] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:1953 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1122 20:37:30.815605 86302 out.go:176] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I1122 20:37:30.815645 86302 cache.go:118] Beginning downloading kic base image for podman with crio
I1122 20:37:30.834575 86302 out.go:176] 🚜 Pulling base image ...
🚜 Pulling base image ...
I1122 20:37:30.834599 86302 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
I1122 20:37:30.834615 86302 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime crio
I1122 20:37:30.834730 86302 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
I1122 20:37:30.834748 86302 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory, skipping pull
I1122 20:37:30.834755 86302 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in cache, skipping pull
I1122 20:37:30.834768 86302 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c as a tarball
W1122 20:37:30.977056 86302 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.3-cri-o-overlay-arm64.tar.lz4 status code: 404
I1122 20:37:30.977212 86302 profile.go:147] Saving config to /Users/xxx/.minikube/profiles/minikube/config.json ...
I1122 20:37:30.977228 86302 cache.go:107] acquiring lock: {Name:mk40bef815be226ea29d5bc86b3383e6154705f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977228 86302 cache.go:107] acquiring lock: {Name:mk2418a6fa6e24465b71d3519abd3f8dc9f8a0b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977276 86302 cache.go:107] acquiring lock: {Name:mkf87800ed7fc7eed0d738e0f33fb5e2ece18afd Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977365 86302 cache.go:107] acquiring lock: {Name:mk16eccaed07225dafed77b4115f34d0dd7284c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977398 86302 cache.go:107] acquiring lock: {Name:mk5b6da63a86efdee9cfef1e2298b08c1eae8014 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977240 86302 cache.go:107] acquiring lock: {Name:mkb5fe77e21a27f876c9bee8c33a48d3965dda05 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977416 86302 cache.go:107] acquiring lock: {Name:mkbdfa454f4a36ba936a6c14c690a1a486555bab Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977449 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 exists
I1122 20:37:30.977485 86302 cache.go:107] acquiring lock: {Name:mk3a6fb5c13219e5e2e19e5b4f022b1cb40df351 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977522 86302 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.22.3" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3" took 280.583µs
I1122 20:37:30.977537 86302 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.22.3 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 succeeded
I1122 20:37:30.977557 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 exists
I1122 20:37:30.977563 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/pause_3.5 exists
I1122 20:37:30.977570 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 exists
I1122 20:37:30.977425 86302 cache.go:107] acquiring lock: {Name:mk77ffb7414d61c081d4b7e214a133487b58d1b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977579 86302 cache.go:96] cache image "k8s.gcr.io/pause:3.5" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/pause_3.5" took 347.458µs
I1122 20:37:30.977591 86302 cache.go:115] /Users/xxx/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
I1122 20:37:30.977606 86302 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/xxx/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 381.75µs
I1122 20:37:30.977616 86302 cache.go:80] save to tar file k8s.gcr.io/pause:3.5 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/pause_3.5 succeeded
I1122 20:37:30.977642 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 exists
I1122 20:37:30.977389 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 exists
I1122 20:37:30.977647 86302 cache.go:107] acquiring lock: {Name:mkd5bef9333f685dfecb42053b5446f22e1b7c6b Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977661 86302 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.4" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4" took 324µs
I1122 20:37:30.977669 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 exists
I1122 20:37:30.977618 86302 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/xxx/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
I1122 20:37:30.977671 86302 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.4 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 succeeded
I1122 20:37:30.977683 86302 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.22.3" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3" took 336.084µs
I1122 20:37:30.977696 86302 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.22.3 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 succeeded
I1122 20:37:30.977584 86302 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.22.3" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3" took 187.25µs
I1122 20:37:30.977779 86302 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.22.3 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 succeeded
I1122 20:37:30.977572 86302 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.0-0" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0" took 228.375µs
I1122 20:37:30.977790 86302 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.0-0 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 succeeded
I1122 20:37:30.977668 86302 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.22.3" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3" took 389.459µs
I1122 20:37:30.977804 86302 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.3 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 succeeded
I1122 20:37:30.977721 86302 cache.go:115] /Users/xxx/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
I1122 20:37:30.977835 86302 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/xxx/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 445.584µs
I1122 20:37:30.977847 86302 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/xxx/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
E1122 20:37:30.977846 86302 cache.go:201] Error downloading kic artifacts: not yet implemented, see issue #8426
I1122 20:37:30.977856 86302 cache.go:206] Successfully downloaded all kic artifacts
I1122 20:37:30.977755 86302 cache.go:115] /Users/xxx/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1122 20:37:30.977869 86302 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/xxx/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 424.375µs
I1122 20:37:30.977877 86302 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/xxx/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1122 20:37:30.977894 86302 start.go:313] acquiring machines lock for minikube: {Name:mk84d1ed5ca79c0b7db1faffa1c9974e981befba Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.978079 86302 cache.go:87] Successfully saved all images to host disk.
I1122 20:37:30.978139 86302 start.go:317] acquired machines lock for "minikube" in 54.209µs
I1122 20:37:30.978158 86302 start.go:93] Skipping create...Using existing machine configuration
I1122 20:37:30.978163 86302 fix.go:55] fixHost starting:
I1122 20:37:30.978498 86302 cli_runner.go:115] Run: podman container inspect minikube --format={{.State.Status}}
I1122 20:37:31.071896 86302 fix.go:108] recreateIfNeeded on minikube: state=Stopped err=
W1122 20:37:31.071933 86302 fix.go:134] unexpected machine state, will restart:
I1122 20:37:31.109413 86302 out.go:176] 🔄 Restarting existing podman container for "minikube" ...
🔄 Restarting existing podman container for "minikube" ...
I1122 20:37:31.109545 86302 cli_runner.go:115] Run: podman start minikube
I1122 20:37:31.496345 86302 cli_runner.go:115] Run: podman container inspect minikube --format={{.State.Status}}
I1122 20:37:31.613023 86302 kic.go:420] container "minikube" state is running.
I1122 20:37:31.613634 86302 cli_runner.go:115] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I1122 20:37:31.748228 86302 cli_runner.go:115] Run: podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1122 20:37:31.846707 86302 profile.go:147] Saving config to /Users/thomasott/.minikube/profiles/minikube/config.json ...
I1122 20:37:31.857454 86302 machine.go:88] provisioning docker machine ...
I1122 20:37:31.857486 86302 ubuntu.go:169] provisioning hostname "minikube"
I1122 20:37:31.857551 86302 cli_runner.go:115] Run: podman version --format {{.Version}}
I1122 20:37:31.973151 86302 cli_runner.go:115] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1122 20:37:32.083443 86302 main.go:130] libmachine: Using SSH client type: native
I1122 20:37:32.083665 86302 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x102ba3940] 0x102ba6760 [] 0s} 127.0.0.1 40165 }
I1122 20:37:32.083673 86302 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
2021/11/22 20:37:32 tcpproxy: for incoming conn 127.0.0.1:60672, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:32.084382 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60672->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:35 tcpproxy: for incoming conn 127.0.0.1:60673, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:35.089395 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60673->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:38 tcpproxy: for incoming conn 127.0.0.1:60674, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:38.093398 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60674->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:41 tcpproxy: for incoming conn 127.0.0.1:60675, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:41.098962 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60675->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:44 tcpproxy: for incoming conn 127.0.0.1:60676, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:44.104067 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60676->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:47 tcpproxy: for incoming conn 127.0.0.1:60677, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:47.106034 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60677->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:50 tcpproxy: for incoming conn 127.0.0.1:60678, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:50.108916 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60678->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:53 tcpproxy: for incoming conn 127.0.0.1:60679, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:53.116612 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60679->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:56 tcpproxy: for incoming conn 127.0.0.1:60680, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:56.123638 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60680->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:59 tcpproxy: for incoming conn 127.0.0.1:60681, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:59.130485 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60681->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:38:02 tcpproxy: for incoming conn 127.0.0.1:60682, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:38:02.132407 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60682->127.0.0.1:40165: read: connection reset by peer
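The loop above shows the container restarting but every SSH dial to the forwarded port being refused, so the problem sits between the podman machine's port forward and sshd inside the kicbase container. A hedged way to narrow it down (the port number is taken from this particular run and will differ on other machines):
podman container inspect minikube --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'   # which host port maps to the container's sshd
podman logs minikube | tail -n 20   # did the kicbase container's init/sshd actually come up?
nc -vz 127.0.0.1 40165              # is anything listening on the forwarded port at all?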
Steps to reproduce the issue with hyperkit:
brew install hyperkit
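hyperkit is an x86_64-only hypervisor, so on an M1 the brew formula either fails to install or provides an Intel binary that cannot back an arm64 guest. A hedged sketch for confirming what actually got installed:
uname -m                                               # "arm64" on Apple Silicon
command -v hyperkit && file "$(command -v hyperkit)"   # if present at all, this will be an x86_64 Mach-O executable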
Hopefully that is detailed enough so that someone can help me get a working setup without Docker CE / Docker Desktop.