
kubectl not starting #19077

Closed as not planned

Description

@sathelagopalreddy

What Happened?

[root@localhost ~]# minikube start --force

  • minikube v1.33.1 on Redhat 9.4
    ! minikube skips various validations when --force is supplied; this may lead to unexpected behavior
  • Using the podman driver based on existing profile
  • The "podman" driver should not be used with root privileges. If you wish to continue as root, use --force.
  • If you are running minikube within a VM, consider using --driver=none:
  • https://minikube.sigs.k8s.io/docs/reference/drivers/none/
  • Tip: To remove this root owned cluster, run: sudo minikube delete

X podman only has 1775MiB available, less than the required 1800MiB for Kubernetes

X System only has 1775MiB available, less than the required 1800MiB for Kubernetes

X Requested memory allocation 1775MiB is less than the usable minimum of 1800MB

X Requested memory allocation (1775MB) is less than the recommended minimum 1900MB. Deployments may fail.

X The requested memory allocation of 1775MiB does not leave room for system overhead (total system memory: 1775MiB). You may face stability issues.

  • Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1775mb'

  • Starting "minikube" primary control-plane node in "minikube" cluster

  • Pulling base image v0.0.44 ...
    E0616 21:29:31.126953 5379 cache.go:189] Error downloading kic artifacts: not yet implemented, see #8426 (podman: load kic base image from cache if available for offline mode)

  • Restarting existing podman container for "minikube" ...
    ! StartHost failed, but will try again: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
    stdout:
    stderr:
    Error: no container with name or ID "minikube" found: no such container

  • Restarting existing podman container for "minikube" ...

  • Failed to start podman container. Running "minikube delete" may fix it: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
    stdout:

stderr:
Error: no container with name or ID "minikube" found: no such container

X Exiting due to GUEST_PROVISION: error provisioning guest: Failed to start host: driver start: start: sudo -n podman start --cgroup-manager cgroupfs minikube: exit status 125
stdout:

stderr:
Error: no container with name or ID "minikube" found: no such container

╭──────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                          │
│    * If the above advice does not help, please let us know:                              │
│      https://github.com/kubernetes/minikube/issues/new/choose                            │
│                                                                                          │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue. │
│                                                                                          │
╰──────────────────────────────────────────────────────────────────────────────────────────╯

[root@localhost ~]#
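For anyone hitting the same failure, the log above actually reports two independent problems: the saved profile points at a podman container named "minikube" that no longer exists ("no container with name or ID \"minikube\" found"), and the host only exposes 1775MiB of memory, below minikube's 1800MiB minimum. A minimal recovery sketch along the lines the output itself suggests; the 2200mb value and the non-root invocation are illustrative assumptions, not taken from the log:

    # Remove the stale root-owned profile, as the log's tip suggests
    sudo minikube delete

    # Confirm nothing is left behind; this should list no container named "minikube"
    sudo podman ps -a --filter name=minikube

    # After giving the machine (or VM) more than the 1800MiB minimum, start fresh;
    # --memory=2200mb is an assumed example value, and running as a regular user
    # avoids the root-privileges warning the podman driver prints above
    minikube start --driver=podman --memory=2200mb

Note that the log's own suggestion, 'minikube start --memory=1775mb', cannot work here: 1775MiB is the total the system has, so no flag value will clear the 1800MiB floor without first adding memory to the host or VM.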

Attach the log file


Operating System

Windows

Driver

Docker

Metadata


    Labels

    lifecycle/rotten: Denotes an issue or PR that has aged beyond stale and will be auto-closed.
