
none: sudo systemctl restart docker: exit status 127 (systemd is required) #3748

Closed
tianyirenjian opened this issue Feb 25, 2019 · 9 comments
Labels: co/none-driver, triage/duplicate

@tianyirenjian

How to replicate the error, including the exact command-lines used:

I downloaded it with:

curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64 \
&& chmod +x minikube
cp minikube /usr/local/bin && rm minikube

then started it:

minikube start --vm-driver=none

The full output of the command that failed

root@vultr:~# minikube start --vm-driver=none
o   minikube v0.34.1 on linux (amd64)
>   Configuring local host environment ...

!   The 'none' driver provides limited isolation and may reduce system security and reliability.
!   For more information, see:
-   https://github.com/kubernetes/minikube/blob/master/docs/vmdriver-none.md

!   kubectl and minikube configuration will be stored in /root
!   To use kubectl or minikube commands as your own user, you may
!   need to relocate them. For example, to overwrite your own settings:

    - sudo mv /root/.kube /root/.minikube $HOME
    - sudo chown -R $USER /root/.kube /root/.minikube

i   This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true
i   Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
:   Restarting existing none VM for "minikube" ...
:   Waiting for SSH access ...
-   "minikube" IP address is 139.180.143.41
-   Configuring Docker as the container runtime ...
!   Failed to enable container runtime: running command: sudo systemctl restart docker: exit status 127

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
-   https://github.com/kubernetes/minikube/issues/new

It seems it can't restart Docker. I restart Docker with /etc/init.d/docker restart, not systemctl restart docker.

The operating system name and version used

root@vultr:~# uname -a
Linux vultr.guest 4.9.0-8-amd64 #1 SMP Debian 4.9.110-3+deb9u6 (2018-10-08) x86_64 GNU/Linux
@afbjorklund
Collaborator

Currently systemd is required. See #2704
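
For reference, exit status 127 from the shell usually means "command not found", which is what you would see if systemctl is not available. A quick way to check whether systemd is actually the init system (a sketch, assuming a standard Linux host with /proc mounted):

# PID 1 should be systemd on a systemd-managed host
cat /proc/1/comm

# systemctl should resolve to a real binary on the PATH
which systemctl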

@tianyirenjian
Author

Systemd is working.

This is the output of minikube logs. It says it can't find a matching container.

root@vultr:~# minikube logs
==> k8s_coredns_coredns <==
E0226 07:33:02.163771   27036 logs.go:120] failed: running command: No container was found matching "k8s_coredns_coredns"
.: exit status 127
==> k8s_kube-apiserver <==
E0226 07:33:02.166227   27036 logs.go:120] failed: running command: No container was found matching "k8s_kube-apiserver"
.: exit status 127
==> k8s_kube-scheduler <==
E0226 07:33:02.167629   27036 logs.go:120] failed: running command: No container was found matching "k8s_kube-scheduler"
.: exit status 127
==> kubelet <==
-- No entries --
!   Error getting machine logs: unable to fetch logs for: k8s_coredns_coredns, k8s_kube-apiserver, k8s_kube-scheduler

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
-   https://github.com/kubernetes/minikube/issues/new

@TheTimKiely

I'm having the same problem on this system:
Linux <host_name> 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13 UTC 2019 x86_64 x86_64 x86_64 GNU/Linux

@afbjorklund
Collaborator

afbjorklund commented Feb 27, 2019

Currently it needs to be able to run systemctl, which comes with your systemd installation...

Issues with minikube logs are separate; it is a rather new feature that still needs some work.

@tianyirenjian
Author

systemctl is working now, but it still shows the error above.

@afbjorklund
Collaborator

You can maybe use minikube ssh to try to see why sudo systemctl restart docker is failing.

Or use a supported VM driver, like kvm2? That will use the regular minikube OS environment...
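
With the none driver, minikube runs these commands on the local host, so the failing step can also be reproduced there (a sketch, assuming Docker is meant to be managed by systemd):

sudo systemctl restart docker; echo $?   # 127 again points at systemctl itself not being found
sudo systemctl status docker
sudo journalctl -u docker --no-pager -n 20   # last 20 lines from the Docker unit's log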

@tianyirenjian
Author

I tried using kvm2, but it still doesn't work. Maybe something is wrong with my VPS.

I installed docker-machine-driver-kvm2 following this: https://github.com/kubernetes/minikube/blob/master/docs/drivers.md

o   minikube v0.34.1 on linux (amd64)
>   Creating kvm2 VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
@   Downloading Minikube ISO ...
 184.30 MB / 184.30 MB [============================================] 100.00% 0s
!   Unable to start VM: create: Error creating machine: Error in driver during machine creation: creating domain: Error defining domain xml: 
<domain type='kvm'>
  <name>minikube</name>
  <memory unit='MB'>2048</memory>
  <vcpu>2</vcpu>
  <features>
    <acpi/>
    <apic/>
    <pae/>
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>
  <cpu mode='host-passthrough'/>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <bootmenu enable='no'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='/root/.minikube/machines/minikube/boot2docker.iso'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads' />
      <source file='/root/.minikube/machines/minikube/minikube.rawdisk'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <mac address='a0:1a:18:27:f3:13'/>
      <model type='virtio'/>
    </interface>
    <interface type='network'>
      <source network='minikube-net'/>
      <mac address='58:69:b1:4d:eb:dc'/>
      <model type='virtio'/>
    </interface>
    <serial type='pty'>
      <target port='0'/>
    </serial>
    <console type='pty'>
      <target type='serial' port='0'/>
    </console>
    <rng model='virtio'>
      <backend model='random'>/dev/random</backend>
    </rng>
    
  </devices>
</domain>
: virError(Code=8, Domain=44, Message='invalid argument: could not find capabilities for domaintype=kvm ')

*   Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
-   https://github.com/kubernetes/minikube/issues/new

@TheTimKiely

Please let me know if I should not be piggybacking on this thread.
I am getting the same errors.
When I installed the kvm2 driver, I got the exact same error as tianyirenjian posted above.

I've tried starting minikube with kvm2 and with --vm-driver=none, but I get the same error in both cases:

minikube logs

==> k8s_coredns_coredns <==
E0227 11:03:35.531826 76319 logs.go:120] failed: running command: No container was found matching "k8s_coredns_coredns"
.: exit status 127
==> k8s_kube-apiserver <==
E0227 11:03:35.533985 76319 logs.go:120] failed: running command: No container was found matching "k8s_kube-apiserver"
.: exit status 127
==> k8s_kube-scheduler <==
E0227 11:03:35.535225 76319 logs.go:120] failed: running command: No container was found matching "k8s_kube-scheduler"
.: exit status 127
==> kubelet <==

@tstromberg changed the title from "Can't start minikube" to "none: sudo systemctl restart docker: exit status 127 (systemd is required)" on Mar 8, 2019
@tstromberg added the triage/duplicate label on Mar 8, 2019
@tstromberg
Contributor

@tianyirenjian - Thanks for the feedback!

It sounds like virtualization may not be properly configured on the host you are trying to run KVM on. Try running virt-host-validate and see #2991 for more information. If you aren't able to find a solution, please open a new issue.
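
A few checks that may help narrow this down on a VPS (a sketch, assuming an x86 Linux host; nested virtualization is often disabled on cloud/VPS instances, which would explain the missing KVM capabilities):

# Validate the host's virtualization setup (virt-host-validate ships with libvirt)
virt-host-validate

# Check whether the CPU exposes the VT-x/AMD-V flags at all
egrep -c '(vmx|svm)' /proc/cpuinfo

# Check that the KVM device exists and the kvm kernel modules are loaded
ls -l /dev/kvm
lsmod | grep kvm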
