
Hi getting issue #14774

Closed · thiya13 opened this issue Aug 10, 2022 · 6 comments
Labels
  • co/none-driver
  • co/runtime/docker: Issues specific to a docker runtime
  • kind/support: Categorizes issue or PR as a support question.
  • lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
  • triage/duplicate: Indicates an issue is a duplicate of other open issue.

Comments

thiya13 commented Aug 10, 2022

What Happened?

Getting an error while running minikube start

Attach the log file

  • ==> Audit <==

  • |---------|------------------|----------|------|---------|---------------------|----------|
    | Command | Args | Profile | User | Version | Start Time | End Time |
    |---------|------------------|----------|------|---------|---------------------|----------|
    | start | --vm-driver=none | minikube | root | v1.26.1 | 10 Aug 22 16:15 UTC | |
    | start | --vm-driver=none | minikube | root | v1.26.1 | 10 Aug 22 16:25 UTC | |
    | start | --force | minikube | root | v1.26.1 | 10 Aug 22 16:26 UTC | |
    | start | --force | minikube | root | v1.26.1 | 10 Aug 22 16:26 UTC | |
    | start | | minikube | root | v1.26.1 | 10 Aug 22 16:30 UTC | |
    | start | --force | minikube | root | v1.26.1 | 10 Aug 22 16:32 UTC | |
    |---------|------------------|----------|------|---------|---------------------|----------|

  • ==> Last Start <==

  • Log file created at: 2022/08/10 16:32:21
    Running on machine: ip-172-31-35-153
    Binary: Built with gc go1.18.3 for linux/amd64
    Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0810 16:32:21.430712 5065 out.go:296] Setting OutFile to fd 1 ...
    I0810 16:32:21.430852 5065 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0810 16:32:21.430857 5065 out.go:309] Setting ErrFile to fd 2...
    I0810 16:32:21.430863 5065 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0810 16:32:21.430998 5065 root.go:333] Updating PATH: /root/.minikube/bin
    W0810 16:32:21.431109 5065 root.go:310] Error reading config file at /root/.minikube/config/config.json: open /root/.minikube/config/config.json: no such file or directory
    I0810 16:32:21.431318 5065 out.go:303] Setting JSON to false
    I0810 16:32:21.432097 5065 start.go:115] hostinfo: {"hostname":"ip-172-31-35-153","uptime":1286,"bootTime":1660147855,"procs":119,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.13.0-1029-aws","kernelArch":"x86_64","virtualizationSystem":"xen","virtualizationRole":"guest","hostId":"ec2a3a45-fe6f-8f66-da7d-405bc96b88a1"}
    I0810 16:32:21.432162 5065 start.go:125] virtualization: xen guest
    I0810 16:32:21.437612 5065 out.go:177] * minikube v1.26.1 on Ubuntu 20.04 (xen/amd64)
    W0810 16:32:21.439780 5065 out.go:239] ! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
    W0810 16:32:21.439783 5065 preload.go:295] Failed to list preload files: open /root/.minikube/cache/preloaded-tarball: no such file or directory
    I0810 16:32:21.439867 5065 notify.go:193] Checking for updates...
    I0810 16:32:21.440290 5065 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.24.3
    I0810 16:32:21.440329 5065 driver.go:365] Setting default libvirt URI to qemu:///system
    I0810 16:32:21.440925 5065 exec_runner.go:51] Run: systemctl --version
    I0810 16:32:21.445368 5065 out.go:177] * Using the none driver based on existing profile
    I0810 16:32:21.447019 5065 start.go:284] selected driver: none
    I0810 16:32:21.447032 5065 start.go:808] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.31.35.153 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false 
DisableMetrics:false CustomQemuFirmwarePath:}
    I0810 16:32:21.447111 5065 start.go:819] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
    I0810 16:32:21.447148 5065 start.go:1544] auto setting extra-config to "kubelet.resolv-conf=/run/systemd/resolve/resolv.conf".
    I0810 16:32:21.447596 5065 cni.go:95] Creating CNI manager for ""
    I0810 16:32:21.447603 5065 cni.go:149] Driver none used, CNI unnecessary in this configuration, recommending no CNI
    I0810 16:32:21.447614 5065 start_flags.go:310] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.33@sha256:73b259e144d926189cf169ae5b46bbec4e08e4e2f2bd87296054c3244f70feb8 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubelet Key:resolv-conf Value:/run/systemd/resolve/resolv.conf}] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.31.35.153 Port:8443 KubernetesVersion:v1.24.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath:}
    I0810 16:32:21.450766 5065 out.go:177] * Starting control plane node minikube in cluster minikube
    I0810 16:32:21.452883 5065 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
    I0810 16:32:21.453100 5065 cache.go:208] Successfully downloaded all kic artifacts
    I0810 16:32:21.453124 5065 start.go:371] acquiring machines lock for minikube: {Name:mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
    I0810 16:32:21.453311 5065 start.go:375] acquired machines lock for "minikube" in 169.915µs
    I0810 16:32:21.453326 5065 start.go:95] Skipping create...Using existing machine configuration
    I0810 16:32:21.453333 5065 fix.go:55] fixHost starting: m01
    W0810 16:32:21.453497 5065 none.go:130] unable to get port: "minikube" does not appear in /root/.kube/config
    I0810 16:32:21.453506 5065 api_server.go:165] Checking apiserver status ...
    I0810 16:32:21.453528 5065 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.minikube.
    W0810 16:32:21.467456 5065 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: exit status 1
    stdout:

stderr:
I0810 16:32:21.467494 5065 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0810 16:32:21.478835 5065 fix.go:103] recreateIfNeeded on minikube: state=Stopped err=
W0810 16:32:21.478859 5065 fix.go:129] unexpected machine state, will restart:
I0810 16:32:21.482682 5065 out.go:177] * Restarting existing none bare metal machine for "minikube" ...
I0810 16:32:21.485992 5065 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0810 16:32:21.486166 5065 start.go:307] post-start starting for "minikube" (driver="none")
I0810 16:32:21.486204 5065 start.go:335] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0810 16:32:21.486236 5065 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0810 16:32:21.494851 5065 main.go:134] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0810 16:32:21.494872 5065 main.go:134] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0810 16:32:21.494883 5065 main.go:134] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0810 16:32:21.497158 5065 out.go:177] * OS release is Ubuntu 20.04.4 LTS
I0810 16:32:21.499231 5065 filesync.go:126] Scanning /root/.minikube/addons for local assets ...
I0810 16:32:21.499285 5065 filesync.go:126] Scanning /root/.minikube/files for local assets ...
I0810 16:32:21.499304 5065 start.go:310] post-start completed in 13.12805ms
I0810 16:32:21.499311 5065 fix.go:57] fixHost completed within 45.979381ms
I0810 16:32:21.499318 5065 start.go:82] releasing machines lock for "minikube", held for 45.996225ms
I0810 16:32:21.499705 5065 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0810 16:32:21.499796 5065 exec_runner.go:51] Run: curl -sS -m 2 https://k8s.gcr.io/
I0810 16:32:21.536184 5065 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0810 16:32:21.777221 5065 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0810 16:32:22.039708 5065 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0810 16:32:22.278304 5065 exec_runner.go:51] Run: sudo systemctl restart docker
I0810 16:32:22.541131 5065 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0810 16:32:22.814809 5065 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0810 16:32:23.050809 5065 exec_runner.go:51] Run: sudo systemctl start cri-docker.socket
I0810 16:32:23.063530 5065 start.go:450] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0810 16:32:23.063577 5065 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0810 16:32:23.064978 5065 start.go:471] Will wait 60s for crictl version
I0810 16:32:23.065020 5065 exec_runner.go:51] Run: sudo crictl version
I0810 16:32:23.070227 5065 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found
I0810 16:32:34.117098 5065 exec_runner.go:51] Run: sudo crictl version
I0810 16:32:34.123264 5065 retry.go:31] will retry after 21.607636321s: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found
I0810 16:32:55.731174 5065 exec_runner.go:51] Run: sudo crictl version
I0810 16:32:55.737469 5065 retry.go:31] will retry after 26.202601198s: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found
I0810 16:33:21.940291 5065 exec_runner.go:51] Run: sudo crictl version
I0810 16:33:21.959841 5065 out.go:177]
W0810 16:33:21.962008 5065 out.go:239] X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found

W0810 16:33:21.962042 5065 out.go:239] *
W0810 16:33:21.963026 5065 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0810 16:33:21.965731 5065 out.go:177]

Operating System

Ubuntu

Driver

No response

afbjorklund (Collaborator) commented Aug 10, 2022

You need to install crictl; it is a new requirement (for docker) since Kubernetes 1.24.

https://minikube.sigs.k8s.io/docs/drivers/none/
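For reference, a minimal sketch of installing crictl from a cri-tools release tarball. The version string `v1.24.2` and the `/usr/local/bin` install path are assumptions here; pick the release that matches your Kubernetes minor version (v1.24.x for the k8s 1.24 used in this log).

```shell
#!/bin/sh
# Sketch: fetch a crictl release and put it on PATH so minikube's none driver can find it.
# VERSION is an assumption; match it to your cluster's Kubernetes minor version.
VERSION="v1.24.2"
curl -fsSL -o crictl.tar.gz \
  "https://github.com/kubernetes-sigs/cri-tools/releases/download/${VERSION}/crictl-${VERSION}-linux-amd64.tar.gz"
sudo tar -zxvf crictl.tar.gz -C /usr/local/bin   # the tarball contains the single crictl binary
rm -f crictl.tar.gz
crictl --version                                 # should now resolve instead of "command not found"
```

This directly addresses the `sudo: crictl: command not found` retries in the log above.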

@afbjorklund afbjorklund added co/none-driver co/runtime/docker Issues specific to a docker runtime triage/duplicate Indicates an issue is a duplicate of other open issue. kind/support Categorizes issue or PR as a support question. labels Aug 10, 2022
klaases (Contributor) commented Sep 21, 2022

Hi @thiya13, were you able to install cri-dockerd as suggested above?

Here is some additional installation information:
#14410 (comment)

You need to install cri-dockerd; we will build it from source.

Clone the repo:

git clone https://github.com/Mirantis/cri-dockerd.git

Install Golang (skip if already present):

wget https://storage.googleapis.com/golang/getgo/installer_linux
chmod +x ./installer_linux
./installer_linux
source ~/.bash_profile

Build and install cri-dockerd:

cd cri-dockerd
mkdir bin
go get && go build -o bin/cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 bin/cri-dockerd /usr/local/bin/cri-dockerd
cp -a packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable cri-docker.service
systemctl enable --now cri-docker.socket
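After those steps, a quick sanity check can confirm the runtime is reachable before re-running `minikube start`. This is a sketch; the `unix:///var/run/cri-dockerd.sock` endpoint assumes the default cri-dockerd socket, the same path minikube writes into /etc/crictl.yaml in the log above.

```shell
#!/bin/sh
# Sketch: verify the cri-docker socket is live and that crictl can talk to it.
systemctl is-active cri-docker.socket
sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
```

If both commands succeed, the `Exiting due to RUNTIME_ENABLE` failure should no longer occur.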

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 20, 2022
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 19, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot k8s-ci-robot closed this as not planned (Won't fix, can't repro, duplicate, stale) Feb 18, 2023
@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
