
Problems with minikube on Apple Silicon M1 with podman/hyperkit #13006

Closed
greenchapter opened this issue Nov 22, 2021 · 24 comments
Labels

  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • long-term-support: Long-term support issues that can't be fixed in code.
  • os/macos
  • triage/duplicate: Indicates an issue is a duplicate of other open issue.

Comments

@greenchapter

greenchapter commented Nov 22, 2021

I want to run minikube on my Apple Silicon M1 MacBook Pro without using Docker (the Docker integration itself works). I'm currently focusing on podman and hyperkit, but neither of these two drivers works.

For me it looks like podman is running and I can build images with it, but I have no working hypervisor for minikube.
hyperkit does not appear to be available for macOS Monterey and macOS Big Sur.
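
A minimal sanity check that the podman side itself is healthy (plain podman CLI commands; output will vary by version) is:

  podman machine list
  podman system connection list
  podman version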

Steps to reproduce the issue with podman:

  1. podman machine init --cpus 2 --memory 2048 --disk-size 20
  2. podman machine start
  3. podman system connection default podman-machine-default-root
  4. minikube start --driver=podman --container-runtime=cri-o --alsologtostderr -v=7

Like described here https://minikube.sigs.k8s.io/docs/drivers/podman/

❌  Exiting due to GUEST_STATUS: state: unknown state "minikube": podman container inspect minikube --format=: exit status 125
stdout:

stderr:
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman. failed to create sshClient: Connection to bastion host (ssh://root@localhost:50063/run/podman/podman.sock) failed.: dial tcp [::1]:50063: connect: connection refused

I1122 20:37:30.153204 86302 out.go:297] Setting OutFile to fd 1 ...
I1122 20:37:30.153325 86302 out.go:349] isatty.IsTerminal(1) = true
I1122 20:37:30.153329 86302 out.go:310] Setting ErrFile to fd 2...
I1122 20:37:30.153333 86302 out.go:349] isatty.IsTerminal(2) = true
I1122 20:37:30.153438 86302 root.go:313] Updating PATH: /Users/xxx/.minikube/bin
I1122 20:37:30.153662 86302 out.go:304] Setting JSON to false
I1122 20:37:30.184995 86302 start.go:112] hostinfo: {"hostname":"MacBook-Pro-(14-inch,-2021)","uptime":289813,"bootTime":1637320037,"procs":452,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.0.1","kernelVersion":"21.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"4aece9fd-434e-5ffd-98bb-571ad2700651"}
W1122 20:37:30.185114 86302 start.go:120] gopshost.Virtualization returned error: not implemented yet
I1122 20:37:30.224896 86302 out.go:176] 😄 minikube v1.24.0 on Darwin 12.0.1 (arm64)
😄 minikube v1.24.0 on Darwin 12.0.1 (arm64)
W1122 20:37:30.224959 86302 preload.go:294] Failed to list preload files: open /Users/thomasott/.minikube/cache/preloaded-tarball: no such file or directory
I1122 20:37:30.224978 86302 notify.go:174] Checking for updates...
I1122 20:37:30.225263 86302 config.go:176] Loaded profile config "minikube": Driver=podman, ContainerRuntime=crio, KubernetesVersion=v1.22.3
I1122 20:37:30.226175 86302 driver.go:343] Setting default libvirt URI to qemu:///system
I1122 20:37:30.649297 86302 podman.go:121] podman version: 3.4.1
I1122 20:37:30.686936 86302 out.go:176] ✨ Using the podman (experimental) driver based on existing profile
✨ Using the podman (experimental) driver based on existing profile
I1122 20:37:30.686980 86302 start.go:280] selected driver: podman
I1122 20:37:30.686986 86302 start.go:762] validating driver "podman" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:1953 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1122 20:37:30.687086 86302 start.go:773] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc:}
I1122 20:37:30.687207 86302 cli_runner.go:115] Run: podman system info --format json
I1122 20:37:30.777296 86302 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:1687957504 MemTotal:2048811008 OCIRuntime:{Name:crun Package:crun-1.2-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.2
commit: 4f6c8e0583c679bfee6a899c05ac6b916022561b
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:2 Eventlogger:journald Hostname:localhost.localdomain Kernel:5.14.14-300.fc35.aarch64 Os:linux Rootless:false Uptime:1m 17.03s} Registries:{Search:[docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:1} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:false SupportsDType:true UsingMetacopy:true} ImageStore:{Number:21} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
W1122 20:37:30.777521 86302 info.go:50] Unable to get CPU info: no such file or directory
W1122 20:37:30.777563 86302 start.go:925] could not get system cpu info while verifying memory limits, which might be okay: no such file or directory
I1122 20:37:30.777580 86302 cni.go:93] Creating CNI manager for ""
I1122 20:37:30.777590 86302 cni.go:160] "podman" driver + crio runtime found, recommending kindnet
I1122 20:37:30.777597 86302 start_flags.go:282] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c Memory:1953 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.22.3 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host}
I1122 20:37:30.815605 86302 out.go:176] 👍 Starting control plane node minikube in cluster minikube
👍 Starting control plane node minikube in cluster minikube
I1122 20:37:30.815645 86302 cache.go:118] Beginning downloading kic base image for podman with crio
I1122 20:37:30.834575 86302 out.go:176] 🚜 Pulling base image ...
🚜 Pulling base image ...
I1122 20:37:30.834599 86302 cache.go:146] Downloading gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c to local cache
I1122 20:37:30.834615 86302 preload.go:132] Checking if preload exists for k8s version v1.22.3 and runtime crio
I1122 20:37:30.834730 86302 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory
I1122 20:37:30.834748 86302 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c in local cache directory, skipping pull
I1122 20:37:30.834755 86302 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c exists in cache, skipping pull
I1122 20:37:30.834768 86302 cache.go:149] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.28@sha256:4780f1897569d2bf77aafb3d133a08d42b4fe61127f06fcfc90c2c5d902d893c as a tarball
W1122 20:37:30.977056 86302 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v13-v1.22.3-cri-o-overlay-arm64.tar.lz4 status code: 404
I1122 20:37:30.977212 86302 profile.go:147] Saving config to /Users/xxx/.minikube/profiles/minikube/config.json ...
I1122 20:37:30.977228 86302 cache.go:107] acquiring lock: {Name:mk40bef815be226ea29d5bc86b3383e6154705f9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977228 86302 cache.go:107] acquiring lock: {Name:mk2418a6fa6e24465b71d3519abd3f8dc9f8a0b0 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977276 86302 cache.go:107] acquiring lock: {Name:mkf87800ed7fc7eed0d738e0f33fb5e2ece18afd Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977365 86302 cache.go:107] acquiring lock: {Name:mk16eccaed07225dafed77b4115f34d0dd7284c9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977398 86302 cache.go:107] acquiring lock: {Name:mk5b6da63a86efdee9cfef1e2298b08c1eae8014 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977240 86302 cache.go:107] acquiring lock: {Name:mkb5fe77e21a27f876c9bee8c33a48d3965dda05 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977416 86302 cache.go:107] acquiring lock: {Name:mkbdfa454f4a36ba936a6c14c690a1a486555bab Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977449 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 exists
I1122 20:37:30.977485 86302 cache.go:107] acquiring lock: {Name:mk3a6fb5c13219e5e2e19e5b4f022b1cb40df351 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977522 86302 cache.go:96] cache image "k8s.gcr.io/kube-apiserver:v1.22.3" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3" took 280.583µs
I1122 20:37:30.977537 86302 cache.go:80] save to tar file k8s.gcr.io/kube-apiserver:v1.22.3 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-apiserver_v1.22.3 succeeded
I1122 20:37:30.977557 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 exists
I1122 20:37:30.977563 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/pause_3.5 exists
I1122 20:37:30.977570 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 exists
I1122 20:37:30.977425 86302 cache.go:107] acquiring lock: {Name:mk77ffb7414d61c081d4b7e214a133487b58d1b9 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977579 86302 cache.go:96] cache image "k8s.gcr.io/pause:3.5" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/pause_3.5" took 347.458µs
I1122 20:37:30.977591 86302 cache.go:115] /Users/xxx/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 exists
I1122 20:37:30.977606 86302 cache.go:96] cache image "docker.io/kubernetesui/metrics-scraper:v1.0.7" -> "/Users/xxx/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7" took 381.75µs
I1122 20:37:30.977616 86302 cache.go:80] save to tar file k8s.gcr.io/pause:3.5 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/pause_3.5 succeeded
I1122 20:37:30.977642 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 exists
I1122 20:37:30.977389 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 exists
I1122 20:37:30.977647 86302 cache.go:107] acquiring lock: {Name:mkd5bef9333f685dfecb42053b5446f22e1b7c6b Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.977661 86302 cache.go:96] cache image "k8s.gcr.io/coredns/coredns:v1.8.4" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4" took 324µs
I1122 20:37:30.977669 86302 cache.go:115] /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 exists
I1122 20:37:30.977618 86302 cache.go:80] save to tar file docker.io/kubernetesui/metrics-scraper:v1.0.7 -> /Users/xxx/.minikube/cache/images/docker.io/kubernetesui/metrics-scraper_v1.0.7 succeeded
I1122 20:37:30.977671 86302 cache.go:80] save to tar file k8s.gcr.io/coredns/coredns:v1.8.4 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/coredns/coredns_v1.8.4 succeeded
I1122 20:37:30.977683 86302 cache.go:96] cache image "k8s.gcr.io/kube-scheduler:v1.22.3" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3" took 336.084µs
I1122 20:37:30.977696 86302 cache.go:80] save to tar file k8s.gcr.io/kube-scheduler:v1.22.3 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-scheduler_v1.22.3 succeeded
I1122 20:37:30.977584 86302 cache.go:96] cache image "k8s.gcr.io/kube-proxy:v1.22.3" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3" took 187.25µs
I1122 20:37:30.977779 86302 cache.go:80] save to tar file k8s.gcr.io/kube-proxy:v1.22.3 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.3 succeeded
I1122 20:37:30.977572 86302 cache.go:96] cache image "k8s.gcr.io/etcd:3.5.0-0" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0" took 228.375µs
I1122 20:37:30.977790 86302 cache.go:80] save to tar file k8s.gcr.io/etcd:3.5.0-0 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/etcd_3.5.0-0 succeeded
I1122 20:37:30.977668 86302 cache.go:96] cache image "k8s.gcr.io/kube-controller-manager:v1.22.3" -> "/Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3" took 389.459µs
I1122 20:37:30.977804 86302 cache.go:80] save to tar file k8s.gcr.io/kube-controller-manager:v1.22.3 -> /Users/xxx/.minikube/cache/images/k8s.gcr.io/kube-controller-manager_v1.22.3 succeeded
I1122 20:37:30.977721 86302 cache.go:115] /Users/xxx/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 exists
I1122 20:37:30.977835 86302 cache.go:96] cache image "docker.io/kubernetesui/dashboard:v2.3.1" -> "/Users/xxx/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1" took 445.584µs
I1122 20:37:30.977847 86302 cache.go:80] save to tar file docker.io/kubernetesui/dashboard:v2.3.1 -> /Users/xxx/.minikube/cache/images/docker.io/kubernetesui/dashboard_v2.3.1 succeeded
E1122 20:37:30.977846 86302 cache.go:201] Error downloading kic artifacts: not yet implemented, see issue #8426
I1122 20:37:30.977856 86302 cache.go:206] Successfully downloaded all kic artifacts
I1122 20:37:30.977755 86302 cache.go:115] /Users/xxx/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 exists
I1122 20:37:30.977869 86302 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/Users/xxx/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5" took 424.375µs
I1122 20:37:30.977877 86302 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /Users/xxx/.minikube/cache/images/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
I1122 20:37:30.977894 86302 start.go:313] acquiring machines lock for minikube: {Name:mk84d1ed5ca79c0b7db1faffa1c9974e981befba Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I1122 20:37:30.978079 86302 cache.go:87] Successfully saved all images to host disk.
I1122 20:37:30.978139 86302 start.go:317] acquired machines lock for "minikube" in 54.209µs
I1122 20:37:30.978158 86302 start.go:93] Skipping create...Using existing machine configuration
I1122 20:37:30.978163 86302 fix.go:55] fixHost starting:
I1122 20:37:30.978498 86302 cli_runner.go:115] Run: podman container inspect minikube --format={{.State.Status}}
I1122 20:37:31.071896 86302 fix.go:108] recreateIfNeeded on minikube: state=Stopped err=
W1122 20:37:31.071933 86302 fix.go:134] unexpected machine state, will restart:
I1122 20:37:31.109413 86302 out.go:176] 🔄 Restarting existing podman container for "minikube" ...
🔄 Restarting existing podman container for "minikube" ...
I1122 20:37:31.109545 86302 cli_runner.go:115] Run: podman start minikube
I1122 20:37:31.496345 86302 cli_runner.go:115] Run: podman container inspect minikube --format={{.State.Status}}
I1122 20:37:31.613023 86302 kic.go:420] container "minikube" state is running.
I1122 20:37:31.613634 86302 cli_runner.go:115] Run: podman container inspect -f {{.NetworkSettings.IPAddress}} minikube
I1122 20:37:31.748228 86302 cli_runner.go:115] Run: podman container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I1122 20:37:31.846707 86302 profile.go:147] Saving config to /Users/thomasott/.minikube/profiles/minikube/config.json ...
I1122 20:37:31.857454 86302 machine.go:88] provisioning docker machine ...
I1122 20:37:31.857486 86302 ubuntu.go:169] provisioning hostname "minikube"
I1122 20:37:31.857551 86302 cli_runner.go:115] Run: podman version --format {{.Version}}
I1122 20:37:31.973151 86302 cli_runner.go:115] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I1122 20:37:32.083443 86302 main.go:130] libmachine: Using SSH client type: native
I1122 20:37:32.083665 86302 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x102ba3940] 0x102ba6760 [] 0s} 127.0.0.1 40165 }
I1122 20:37:32.083673 86302 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
2021/11/22 20:37:32 tcpproxy: for incoming conn 127.0.0.1:60672, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:32.084382 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60672->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:35 tcpproxy: for incoming conn 127.0.0.1:60673, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:35.089395 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60673->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:38 tcpproxy: for incoming conn 127.0.0.1:60674, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:38.093398 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60674->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:41 tcpproxy: for incoming conn 127.0.0.1:60675, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:41.098962 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60675->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:44 tcpproxy: for incoming conn 127.0.0.1:60676, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:44.104067 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60676->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:47 tcpproxy: for incoming conn 127.0.0.1:60677, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:47.106034 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60677->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:50 tcpproxy: for incoming conn 127.0.0.1:60678, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:50.108916 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60678->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:53 tcpproxy: for incoming conn 127.0.0.1:60679, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:53.116612 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60679->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:56 tcpproxy: for incoming conn 127.0.0.1:60680, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:56.123638 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60680->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:37:59 tcpproxy: for incoming conn 127.0.0.1:60681, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:37:59.130485 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60681->127.0.0.1:40165: read: connection reset by peer
2021/11/22 20:38:02 tcpproxy: for incoming conn 127.0.0.1:60682, error dialing "192.168.127.2:40165": connect tcp 192.168.127.2:40165: connection was refused
I1122 20:38:02.132407 86302 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:60682->127.0.0.1:40165: read: connection reset by peer
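
Before retrying, the stderr above suggests verifying the podman connection itself. A hedged recovery sequence built only from the commands named in the error message and in the reproduction steps above (machine name podman-machine-default-root as in step 3) would be:

  podman system connection list
  podman machine stop && podman machine start
  podman system connection default podman-machine-default-root
  minikube delete
  minikube start --driver=podman --container-runtime=cri-o --alsologtostderr -v=7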

Steps to reproduce the issue with hyperkit:

  1. brew install hyperkit
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> Updated Formulae
Updated 11 formulae.

Updating Homebrew...
Error: hyperkit: no bottle available!
You can try to install from source with:
  brew install --build-from-source hyperkit
Please note building from source is unsupported. You will encounter build
failures with some formulae. If you experience any issues please create pull
requests instead of asking for help on Homebrew's GitHub, Twitter or any other
official channels.
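
To confirm that the missing bottle is an architecture problem rather than a transient Homebrew issue, a quick check (standard Homebrew/macOS commands; nothing hyperkit-specific is assumed) is:

  uname -m             # prints arm64 on Apple Silicon
  brew info hyperkit   # shows which platforms have a prebuilt bottle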

Hopefully that is detailed enough and someone can help me get a running setup without Docker CE / Docker Desktop.

@RA489

RA489 commented Nov 23, 2021

/kind support

@k8s-ci-robot k8s-ci-robot added the kind/support Categorizes issue or PR as a support question. label Nov 23, 2021
@carl-reverb

Attempted exactly this procedure on M1 Mac: MacBook Pro (16-inch, 2021), with slight modification:

  1. podman machine init --cpus 2 --memory 2048 --disk-size 20 --image-path next
  2. podman machine start
  3. podman system connection default podman-machine-default-root
  4. minikube start --driver=podman --container-runtime=cri-o

Proximal output (for searchers)

😄  minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E1124 09:23:52.515427    2826 cache.go:201] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=1953MB) .../ 2021/11/24 09:24:18 tcpproxy: for incoming conn 127.0.0.1:49540, error dialing "192.168.127.2:39067": connect tcp 192.168.127.2:39067: connection was refused
\ 2021/11/24 09:24:21 tcpproxy: for incoming conn 127.0.0.1:49541, error dialing "192.168.127.2:39067": connect tcp 192.168.127.2:39067: connection was refused
[... much of the same output elided]
- 2021/11/24 09:29:54 tcpproxy: for incoming conn 127.0.0.1:49749, error dialing "192.168.127.2:39067": connect tcp 192.168.127.2:39067: connection was refused

✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
2021/11/24 09:29:55 tcpproxy: for incoming conn 127.0.0.1:49759, error dialing "192.168.127.2:39067": connect tcp 192.168.127.2:39067: connection was refused
ERRO[0415] accept tcp [::]:42671: use of closed network connection 
ERRO[0415] accept tcp [::]:39067: use of closed network connection 
ERRO[0415] accept tcp [::]:44001: use of closed network connection 
ERRO[0415] accept tcp [::]:41557: use of closed network connection 
ERRO[0415] accept tcp [::]:41943: use of closed network connection 
🔥  Deleting "minikube" in podman ...
🤦  StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
🔥  Creating podman container (CPUs=2, Memory=1953MB) ...
😿  Failed to start podman container. Running "minikube delete" may fix it: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists


❌  Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists

logs.txt
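
For the "volume with name minikube already exists" failure, a hedged cleanup (the volume name minikube is taken from the error message above) before retrying would be:

  minikube delete
  podman volume rm minikube   # remove the leftover volume if `minikube delete` did not
  podman volume ls            # confirm nothing named minikube remains
  minikube start --driver=podman --container-runtime=cri-o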

@greenchapter
Author

Is there someone who has a running minikube installation on Apple Silicon M1 without the Docker environment?

@Cluas

Cluas commented Dec 5, 2021

Is there someone who has a running minikube installation on Apple Silicon M1 without the Docker environment?

I finally chose rancher-desktop.

@yllekz

yllekz commented Dec 13, 2021

Also curious about the answer to this. I wanted to use this as part of the "drop-in replacement for Docker Desktop" that this guide suggests, but it seems it isn't compatible with the M1 at all in its current state?

@duttonw

duttonw commented Dec 17, 2021

Yep, also looking at drop-in replacements, as Docker Desktop is now a no-go for ad-hoc pdev with docker-compose helper scripts for UI devs. The guide https://dhwaneetbhatt.com/blog/run-docker-without-docker-desktop-on-macos is great for Intel CPUs but falls on its face for M1 arm64.

There seems to be nothing truly available to run containers on M1 easily for non-techy users.

@Cluas

Cluas commented Dec 17, 2021

Yep, also looking at drop-in replacements, as Docker Desktop is now a no-go for ad-hoc pdev with docker-compose helper scripts for UI devs. The guide https://dhwaneetbhatt.com/blog/run-docker-without-docker-desktop-on-macos is great for Intel CPUs but falls on its face for M1 arm64.

There seems to be nothing truly available to run containers on M1 easily for non-techy users.

Have a look at rancher-desktop, which now supports M1 (rancher-sandbox/rancher-desktop#956); it is my current alternative.

@greenchapter
Author

greenchapter commented Dec 30, 2021

Yep, also looking at drop-in replacements, as Docker Desktop is now a no-go for ad-hoc pdev with docker-compose helper scripts for UI devs. The guide https://dhwaneetbhatt.com/blog/run-docker-without-docker-desktop-on-macos is great for Intel CPUs but falls on its face for M1 arm64.
There seems to be nothing truly available to run containers on M1 easily for non-techy users.

Have a look at rancher-desktop, which now supports M1 (rancher-sandbox/rancher-desktop#956); it is my current alternative.

Thank you very much, that looks really good; I didn't have Rancher Desktop in mind.

The thing is, I actually want to avoid a desktop application like Docker Desktop; I want to use CLI tooling like minikube.
Regarding the release notes, it looks like version 0.7.0 currently doesn't run completely natively on aarch64.

Screenshot 2021-12-30 at 21 43 08

Highlighted Features

  • Apple Silicon (M1) Support: Rancher Desktop can now be installed on Apple Silicon (M1). In the downloads, choose the aarch64 version to get Apple Silicon support. Note: this version does require Rosetta 2 as some of the components needed by Rancher Desktop don’t yet have native builds.
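
If Rosetta 2 is not installed yet, it can be added up front from the command line (standard macOS command; nothing Rancher-Desktop-specific is assumed):

  softwareupdate --install-rosetta --agree-to-license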

Hopefully we get an M1/aarch64 solution for minikube 🙏 @RA489

@rfay

rfay commented Dec 30, 2021

colima (HEAD) with lima/docker works perfectly on arm64. Haven't tried it with kube. I also have mostly good experiences with Rancher Desktop but the bind-mounting doesn't really work.

@greenchapter
Author

colima (HEAD) with lima/docker works perfectly on arm64. Haven't tried it with kube. I also have mostly good experiences with Rancher Desktop but the bind-mounting doesn't really work.

Oh @rfay that looks really good 🤩

I did a quick brew install colima and afterwards started a Kubernetes cluster with colima start --with-kubernetes. In my first test it runs a k3s cluster on top of QEMU, which at first sight looks very good on my M1 machine. The full sequence I used is sketched below.
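
For reference, the complete sequence (hedged; installing kubectl separately is my own addition, and the --with-kubernetes flag is as used above) looks like:

  brew install colima kubectl
  colima start --with-kubernetes
  kubectl get nodes   # should report a single k3s node once the cluster is up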

@hcguersoy

@greenchapter Regarding colima and QEMU: does it start an Intel-based VM, or one based on arm64?

@cdbattags

cdbattags commented Feb 4, 2022

Attempted exactly this procedure on M1 Mac: MacBook Pro (16-inch, 2021), with slight modification:

  1. podman machine init --cpus 2 --memory 2048 --disk-size 20 --image-path next
  2. podman machine start
  3. podman system connection default podman-machine-default-root
  4. minikube start --driver=podman --container-runtime=cri-o

Proximal output (for searchers)

😄  minikube v1.24.0 on Darwin 12.0.1 (arm64)
✨  Using the podman (experimental) driver based on user configuration
👍  Starting control plane node minikube in cluster minikube
🚜  Pulling base image ...
E1124 09:23:52.515427    2826 cache.go:201] Error downloading kic artifacts:  not yet implemented, see issue #8426
🔥  Creating podman container (CPUs=2, Memory=1953MB) .../ 2021/11/24 09:24:18 tcpproxy: for incoming conn 127.0.0.1:49540, error dialing "192.168.127.2:39067": connect tcp 192.168.127.2:39067: connection was refused
\ 2021/11/24 09:24:21 tcpproxy: for incoming conn 127.0.0.1:49541, error dialing "192.168.127.2:39067": connect tcp 192.168.127.2:39067: connection was refused
[... much of the same output elided]
- 2021/11/24 09:29:54 tcpproxy: for incoming conn 127.0.0.1:49749, error dialing "192.168.127.2:39067": connect tcp 192.168.127.2:39067: connection was refused

✋  Stopping node "minikube"  ...
🛑  Powering off "minikube" via SSH ...
2021/11/24 09:29:55 tcpproxy: for incoming conn 127.0.0.1:49759, error dialing "192.168.127.2:39067": connect tcp 192.168.127.2:39067: connection was refused
ERRO[0415] accept tcp [::]:42671: use of closed network connection 
ERRO[0415] accept tcp [::]:39067: use of closed network connection 
ERRO[0415] accept tcp [::]:44001: use of closed network connection 
ERRO[0415] accept tcp [::]:41557: use of closed network connection 
ERRO[0415] accept tcp [::]:41943: use of closed network connection 
🔥  Deleting "minikube" in podman ...
🤦  StartHost failed, but will try again: creating host: create host timed out in 360.000000 seconds
🔥  Creating podman container (CPUs=2, Memory=1953MB) ...
😿  Failed to start podman container. Running "minikube delete" may fix it: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists


❌  Exiting due to GUEST_PROVISION: Failed to start host: creating host: create: creating: setting up container node: creating volume for minikube container: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true: exit status 125
stdout:

stderr:
Error: volume with name minikube already exists: volume already exists

logs.txt

I'm having the exact same issue as described by @carl-reverb. Any help would be greatly appreciated. I talked to @baude and he says the driver isn't managed by the Podman folks?

It looks to be an SSH configuration issue, I believe? (See the sketch after my log output below.)

My specific log output:

I0204 16:28:40.772222   12741 out.go:297] Setting OutFile to fd 1 ...
I0204 16:28:40.772377   12741 out.go:349] isatty.IsTerminal(1) = true
I0204 16:28:40.772382   12741 out.go:310] Setting ErrFile to fd 2...
I0204 16:28:40.772386   12741 out.go:349] isatty.IsTerminal(2) = true
I0204 16:28:40.772451   12741 root.go:315] Updating PATH: /Users/cdbattags/.minikube/bin
I0204 16:28:40.773185   12741 out.go:304] Setting JSON to false
I0204 16:28:40.816599   12741 start.go:112] hostinfo: {"hostname":"christians-mbp.lan","uptime":1836103,"bootTime":1642174017,"procs":817,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"12.0.1","kernelVersion":"21.1.0","kernelArch":"arm64","virtualizationSystem":"","virtualizationRole":"","hostId":"9f598b00-79ef-5d59-8583-527b2cc6cf00"}
W0204 16:28:40.816759   12741 start.go:120] gopshost.Virtualization returned error: not implemented yet
I0204 16:28:40.837315   12741 out.go:176] 😄  minikube v1.25.1 on Darwin 12.0.1 (arm64)
😄  minikube v1.25.1 on Darwin 12.0.1 (arm64)
I0204 16:28:40.837438   12741 notify.go:174] Checking for updates...
I0204 16:28:40.891481   12741 out.go:176]     ▪ KUBECONFIG=/Users/cdbattags/.kube/config:
    ▪ KUBECONFIG=/Users/cdbattags/.kube/config:
I0204 16:28:40.891662   12741 driver.go:344] Setting default libvirt URI to qemu:///system
I0204 16:28:41.012691   12741 podman.go:121] podman version: 3.4.4
I0204 16:28:41.033257   12741 out.go:176] ✨  Using the podman (experimental) driver based on user configuration
✨  Using the podman (experimental) driver based on user configuration
I0204 16:28:41.033283   12741 start.go:280] selected driver: podman
I0204 16:28:41.033291   12741 start.go:795] validating driver "podman" against <nil>
I0204 16:28:41.033304   12741 start.go:806] status for podman: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
I0204 16:28:41.033316   12741 start.go:1498] auto setting extra-config to "kubelet.housekeeping-interval=5m".
I0204 16:28:41.033465   12741 cli_runner.go:133] Run: podman system info --format json
I0204 16:28:41.135713   12741 info.go:285] podman info: {Host:{BuildahVersion:1.23.1 CgroupVersion:v2 Conmon:{Package:conmon-2.0.30-2.fc35.aarch64 Path:/usr/bin/conmon Version:conmon version 2.0.30, commit: } Distribution:{Distribution:fedora Version:35} MemFree:8511893504 MemTotal:10396856320 OCIRuntime:{Name:crun Package:crun-1.4.1-1.fc35.aarch64 Path:/usr/bin/crun Version:crun version 1.4.1
commit: 802613580a3f25a88105ce4b78126202fef51dfb
spec: 1.0.0
+SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +YAJL} SwapFree:0 SwapTotal:0 Arch:arm64 Cpus:4 Eventlogger:journald Hostname:minikube Kernel:5.15.17-200.fc35.aarch64 Os:linux Rootless:false Uptime:58m 40.35s} Registries:{Search:[docker.io]} Store:{ConfigFile:/etc/containers/storage.conf ContainerStore:{Number:0} GraphDriverName:overlay GraphOptions:{} GraphRoot:/var/lib/containers/storage GraphStatus:{BackingFilesystem:xfs NativeOverlayDiff:false SupportsDType:true UsingMetacopy:true} ImageStore:{Number:1} RunRoot:/run/containers/storage VolumePath:/var/lib/containers/storage/volumes}}
I0204 16:28:41.135950   12741 start_flags.go:286] no existing cluster config was found, will generate one from the flags
I0204 16:28:41.136068   12741 start_flags.go:367] Using suggested 9867MB memory alloc based on sys=65536MB, container=9915MB
I0204 16:28:41.136138   12741 start_flags.go:796] Wait components to verify : map[apiserver:true system_pods:true]
I0204 16:28:41.136158   12741 cni.go:93] Creating CNI manager for ""
I0204 16:28:41.136162   12741 cni.go:160] "podman" driver + crio runtime found, recommending kindnet
I0204 16:28:41.136167   12741 start_flags.go:295] Found "CNI" CNI - setting NetworkPlugin=cni
I0204 16:28:41.136172   12741 start_flags.go:300] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:9867 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:}
I0204 16:28:41.177463   12741 out.go:176] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0204 16:28:41.177510   12741 cache.go:120] Beginning downloading kic base image for podman with crio
I0204 16:28:41.196510   12741 out.go:176] 🚜  Pulling base image ...
🚜  Pulling base image ...
I0204 16:28:41.196555   12741 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime crio
I0204 16:28:41.196559   12741 cache.go:148] Downloading gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b to local cache
I0204 16:28:41.196618   12741 preload.go:148] Found local preload: /Users/cdbattags/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-cri-o-overlay-arm64.tar.lz4
I0204 16:28:41.196629   12741 cache.go:57] Caching tarball of preloaded images
I0204 16:28:41.196725   12741 preload.go:174] Found /Users/cdbattags/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
I0204 16:28:41.196739   12741 cache.go:60] Finished verifying existence of preloaded tar for  v1.23.1 on crio
I0204 16:28:41.196722   12741 image.go:59] Checking for gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory
I0204 16:28:41.196816   12741 image.go:62] Found gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b in local cache directory, skipping pull
I0204 16:28:41.196827   12741 image.go:103] gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b exists in cache, skipping pull
I0204 16:28:41.196836   12741 cache.go:151] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b as a tarball
I0204 16:28:41.196957   12741 profile.go:147] Saving config to /Users/cdbattags/.minikube/profiles/minikube/config.json ...
I0204 16:28:41.196988   12741 lock.go:35] WriteFile acquiring /Users/cdbattags/.minikube/profiles/minikube/config.json: {Name:mkd321d07bcf4787db3d0ee0fe251b2d1337beb3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
E0204 16:28:41.215817   12741 cache.go:203] Error downloading kic artifacts:  not yet implemented, see issue #8426
I0204 16:28:41.215853   12741 cache.go:208] Successfully downloaded all kic artifacts
I0204 16:28:41.215878   12741 start.go:313] acquiring machines lock for minikube: {Name:mkb6ff83acad3d62d883977c0cfa602b99b93ab3 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0204 16:28:41.215940   12741 start.go:317] acquired machines lock for "minikube" in 47.167µs
I0204 16:28:41.215965   12741 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.29@sha256:be897edc9ed473a9678010f390a0092f488f6a1c30865f571c3b6388f9f56f9b Memory:9867 CPUs:2 DiskSize:20000 VMDriver: Driver:podman HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.1 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror:} &{Name: IP: Port:8443 KubernetesVersion:v1.23.1 ControlPlane:true Worker:true}
I0204 16:28:41.216061   12741 start.go:126] createHost starting for "" (driver="podman")
I0204 16:28:41.254988   12741 out.go:203] 🔥  Creating podman container (CPUs=2, Memory=9867MB) ...
🔥  Creating podman container (CPUs=2, Memory=9867MB) ...| I0204 16:28:41.255182   12741 start.go:160] libmachine.API.Create for "minikube" (driver="podman")
I0204 16:28:41.255207   12741 client.go:168] LocalClient.Create starting
I0204 16:28:41.255314   12741 main.go:130] libmachine: Reading certificate data from /Users/cdbattags/.minikube/certs/ca.pem
I0204 16:28:41.255355   12741 main.go:130] libmachine: Decoding PEM data...
I0204 16:28:41.255367   12741 main.go:130] libmachine: Parsing certificate...
I0204 16:28:41.255409   12741 main.go:130] libmachine: Reading certificate data from /Users/cdbattags/.minikube/certs/cert.pem
I0204 16:28:41.255436   12741 main.go:130] libmachine: Decoding PEM data...
I0204 16:28:41.255442   12741 main.go:130] libmachine: Parsing certificate...
I0204 16:28:41.255979   12741 cli_runner.go:133] Run: podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
/ W0204 16:28:41.356346   12741 cli_runner.go:180] podman network inspect minikube --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}" returned with exit code 125
I0204 16:28:41.356564   12741 network_create.go:254] running [podman network inspect minikube] to gather additional debugging logs...
I0204 16:28:41.356605   12741 cli_runner.go:133] Run: podman network inspect minikube
- W0204 16:28:41.456851   12741 cli_runner.go:180] podman network inspect minikube returned with exit code 125
I0204 16:28:41.456893   12741 network_create.go:257] error running [podman network inspect minikube]: podman network inspect minikube: exit status 125
stdout:

stderr:
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman. failed to create sshClient: Connection to bastion host (ssh://root@localhost:49525/run/podman/podman.sock) failed.: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures
I0204 16:28:41.456922   12741 network_create.go:259] output of [podman network inspect minikube]:
** stderr **
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman. failed to create sshClient: Connection to bastion host (ssh://root@localhost:49525/run/podman/podman.sock) failed.: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures

** /stderr **
I0204 16:28:41.457095   12741 cli_runner.go:133] Run: podman network inspect podman --format "{{range .plugins}}{{if eq .type "bridge"}}{{(index (index .ipam.ranges 0) 0).subnet}},{{(index (index .ipam.ranges 0) 0).gateway}}{{end}}{{end}}"
I0204 16:28:41.547110   12741 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0x140007926b0] misses:0}
I0204 16:28:41.547159   12741 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0204 16:28:41.547171   12741 network_create.go:106] attempt to create podman network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0 ...
I0204 16:28:41.547292   12741 cli_runner.go:133] Run: podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true minikube
\ W0204 16:28:41.654355   12741 cli_runner.go:180] podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true minikube returned with exit code 125
E0204 16:28:41.654414   12741 network_create.go:95] error while trying to create podman network minikube 192.168.49.0/24: create podman network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true minikube: exit status 125
stdout:

stderr:
Error: network 192.168.49.0/24 is already being used by a cni configuration
W0204 16:28:41.654564   12741 out.go:241] ❗  Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create podman network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true minikube: exit status 125
stdout:

stderr:
Error: network 192.168.49.0/24 is already being used by a cni configuration


❗  Unable to create dedicated network, this might result in cluster IP change after restart: un-retryable: create podman network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 0: podman network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 --label=created_by.minikube.sigs.k8s.io=true minikube: exit status 125
stdout:

stderr:
Error: network 192.168.49.0/24 is already being used by a cni configuration

I0204 16:28:41.654715   12741 cli_runner.go:133] Run: podman ps -a --format {{.Names}}
I0204 16:28:41.749002   12741 cli_runner.go:133] Run: podman volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0204 16:28:41.865998   12741 oci.go:102] Successfully created a podman volume minikube
I0204 16:28:41.866174   12741 cli_runner.go:133] Run: podman run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.29 -d /var/lib
I0204 16:28:42.457001   12741 oci.go:106] Successfully prepared a podman volume minikube
I0204 16:28:42.457088   12741 preload.go:132] Checking if preload exists for k8s version v1.23.1 and runtime crio
I0204 16:28:42.457102   12741 kic.go:179] Starting extracting preloaded images to volume ...
I0204 16:28:42.457244   12741 cli_runner.go:133] Run: podman run --rm --entrypoint /usr/bin/tar -v /Users/cdbattags/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29 -I lz4 -xf /preloaded.tar -C /extractDir
W0204 16:28:42.548933   12741 cli_runner.go:180] podman run --rm --entrypoint /usr/bin/tar -v /Users/cdbattags/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29 -I lz4 -xf /preloaded.tar -C /extractDir returned with exit code 125
I0204 16:28:42.548994   12741 kic.go:186] Unable to extract preloaded tarball to volume: podman run --rm --entrypoint /usr/bin/tar -v /Users/cdbattags/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v16-v1.23.1-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.29 -I lz4 -xf /preloaded.tar -C /extractDir: exit status 125
stdout:

stderr:
Cannot connect to Podman. Please verify your connection to the Linux system using `podman system connection list`, or try `podman machine init` and `podman machine start` to manage a new Linux VM
Error: unable to connect to Podman. failed to create sshClient: Connection to bastion host (ssh://root@localhost:49525/run/podman/podman.sock) failed.: ssh: handshake failed: ssh: disconnect, reason 2: Too many authentication failures
I0204 16:28:42.549129   12741 cli_runner.go:133] Run: podman info --format "'{{json .SecurityOptions}}'"
W0204 16:28:42.654683   12741 cli_runner.go:180] podman info --format "'{{json .SecurityOptions}}'" returned with exit code 125
I0204 16:28:42.654884   12741 cli_runner.go:133] Run: podman run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --volume minikube:/var:exec --memory-swap=9867mb --memory=9867mb --cpus=2 -e container=podman --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.29
I0204 16:28:43.020539   12741 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Running}}
I0204 16:28:43.181851   12741 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Status}}
I0204 16:28:43.271128   12741 cli_runner.go:133] Run: podman exec minikube stat /var/lib/dpkg/alternatives/iptables
I0204 16:28:43.510225   12741 oci.go:281] the created container "minikube" has a running status.
I0204 16:28:43.510265   12741 kic.go:210] Creating ssh key for kic: /Users/cdbattags/.minikube/machines/minikube/id_rsa...
I0204 16:28:43.568990   12741 vm_assets.go:163] NewFileAsset: /Users/cdbattags/.minikube/machines/minikube/id_rsa.pub -> /home/docker/.ssh/authorized_keys
I0204 16:28:43.569047   12741 kic_runner.go:191] podman (temp): /Users/cdbattags/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0204 16:28:43.573079   12741 kic_runner.go:276] Run: /opt/homebrew/bin/podman exec -i minikube tee /home/docker/.ssh/authorized_keys
I0204 16:28:43.752698   12741 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Status}}
I0204 16:28:43.890276   12741 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0204 16:28:43.890301   12741 kic_runner.go:114] Args: [podman exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0204 16:28:44.223738   12741 cli_runner.go:133] Run: podman container inspect minikube --format={{.State.Status}}
I0204 16:28:44.381002   12741 machine.go:88] provisioning docker machine ...
I0204 16:28:44.381078   12741 ubuntu.go:169] provisioning hostname "minikube"
I0204 16:28:44.381266   12741 cli_runner.go:133] Run: podman version --format {{.Version}}
I0204 16:28:44.495782   12741 cli_runner.go:133] Run: podman container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0204 16:28:44.680537   12741 main.go:130] libmachine: Using SSH client type: native
I0204 16:28:44.680789   12741 main.go:130] libmachine: &{{{<nil> 0 [] [] []} docker [0x1052cdc10] 0x1052d0a30 <nil>  [] 0s} 127.0.0.1 36029 <nil> <nil>}
I0204 16:28:44.680799   12741 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
2022/02/04 16:28:44 tcpproxy: for incoming conn 127.0.0.1:51001, error dialing "192.168.127.2:36029": connect tcp 192.168.127.2:36029: connection was refused
I0204 16:28:44.681550   12741 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51001->127.0.0.1:36029: read: connection reset by peer
2022/02/04 16:28:47 tcpproxy: for incoming conn 127.0.0.1:51003, error dialing "192.168.127.2:36029": connect tcp 192.168.127.2:36029: connection was refused
I0204 16:28:47.687678   12741 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51003->127.0.0.1:36029: read: connection reset by peer
2022/02/04 16:28:50 tcpproxy: for incoming conn 127.0.0.1:51005, error dialing "192.168.127.2:36029": connect tcp 192.168.127.2:36029: connection was refused
I0204 16:28:50.689023   12741 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51005->127.0.0.1:36029: read: connection reset by peer
2022/02/04 16:28:53 tcpproxy: for incoming conn 127.0.0.1:51007, error dialing "192.168.127.2:36029": connect tcp 192.168.127.2:36029: connection was refused
I0204 16:28:53.692089   12741 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51007->127.0.0.1:36029: read: connection reset by peer
2022/02/04 16:28:56 tcpproxy: for incoming conn 127.0.0.1:51008, error dialing "192.168.127.2:36029": connect tcp 192.168.127.2:36029: connection was refused
I0204 16:28:56.695504   12741 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51008->127.0.0.1:36029: read: connection reset by peer
2022/02/04 16:28:59 tcpproxy: for incoming conn 127.0.0.1:51009, error dialing "192.168.127.2:36029": connect tcp 192.168.127.2:36029: connection was refused
I0204 16:28:59.697838   12741 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51009->127.0.0.1:36029: read: connection reset by peer
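
For anyone hitting the same connection-refused loop: before retrying minikube it is worth confirming that the podman machine's SSH forwarding is reachable at all. A minimal check, assuming the default machine name podman-machine-default, could look like this:

# confirm the default connection points at the running machine
podman system connection list

# verify plain SSH into the podman VM works
podman machine ssh podman-machine-default true

# see which host port podman mapped to the minikube container's SSH port (22/tcp)
podman container inspect minikube --format '{{json .NetworkSettings.Ports}}'

If plain SSH into the VM works but the minikube port is still refused, the failure is in the port forward from the host into the VM, which is exactly what the tcpproxy lines above are reporting.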

@spowelljr
Member

Just as an update on the HyperKit side: we are making progress towards getting an ARM64 ISO; here's the PR for reference:

#13762

@greenchapter
Author

Just as an update on the HyperKit side: we are making progress towards getting an ARM64 ISO; here's the PR for reference:

#13762

Hopefully it will be merged soon 😍

@afbjorklund
Collaborator

@spowelljr hyperkit does not support arm64, so the new ISO will have to use some other VM driver (vmware/parallels/qemu2)
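
For what it's worth, once an ARM64 ISO exists the qemu2 driver is the most likely route on Apple Silicon; a rough sketch, assuming QEMU installed via Homebrew and a minikube release that ships the qemu2 driver:

# install the hypervisor, then start minikube against it
brew install qemu
minikube start --driver=qemu2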

@afbjorklund
Collaborator

afbjorklund commented Apr 10, 2022

Regarding colima and QEMU: does it start an Intel-based VM, or one based on arm64?

By default, upstream lima starts an Intel VM on an Intel host and an arm64 VM on an arm64 host:

# Arch: "default", "x86_64", "aarch64".
# 🟢 Builtin default: "default" (corresponds to the host architecture)
arch: null

The colima distribution might be different, but there is lima Kubernetes support with containerd:

limactl start https://raw.githubusercontent.com/lima-vm/lima/master/examples/k8s.yaml
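
For the colima route specifically, a sketch of bringing up single-node Kubernetes in an arm64 VM (assuming a recent colima; exact flags vary between releases):

# start an aarch64 VM with colima's bundled Kubernetes (k3s)
colima start --arch aarch64 --kubernetes --cpu 2 --memory 4

colima merges a kubeconfig context (typically named "colima"), so kubectl can be pointed at the cluster right after the VM is up.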

@afbjorklund added the triage/duplicate and os/macos labels on Apr 10, 2022
@medyagh
Copy link
Member

medyagh commented Apr 13, 2022

Thanks, everyone, for your patience; please track updates in this issue:
#9228

@spowelljr added the long-term-support label on May 4, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Aug 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Sep 1, 2022
@RA489

RA489 commented Sep 2, 2022

/remove-lifecycle rotten

@k8s-ci-robot removed the lifecycle/rotten label on Sep 2, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Dec 1, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on Dec 31, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned (won't fix, can't repro, duplicate, stale) on Jan 30, 2023