Job for k3s.service failed because the control process exited with error code #556
Are you able to provide some more information, such as how you are installing, and the k3s logs? When using systemd you should be able to find the logs in /var/log/syslog or by using journalctl.
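For anyone else collecting those logs, a couple of commands that are typically useful on a systemd-based install (a sketch; the `k3s` unit name and the syslog path assume the default install script):

```shell
# Hypothetical log-collection commands for a default systemd install
# of k3s; adjust the unit name if yours differs.
grep -a k3s /var/log/syslog | tail -n 100              # classic syslog (Debian/Raspbian)
journalctl -u k3s --no-pager --since "10 minutes ago"  # systemd journal
```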
Thanks! The screenshots are hard to work with; if you can, copy & paste the complete error line as text.
@erikwilson I followed the install steps and got:

```
level=fatal msg="starting tls server: Get https://localhost:6444/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions: dial tcp [::1]:6444: connect: connection refused"
```
I just had two fresh Raspbian Lite Pi 3 B+ nodes become non-responsive after installing k3s. The installer runs, the service starts, and the nodes die almost immediately. I let them sit overnight in case they were just locked up temporarily, but they're dead. If I reboot, I have a ~30 second window to get in and stop k3s before it becomes non-responsive again. I've tried with and without extra install options.
Thanks for the info @Aliabbask08. Is it possible to share that output? Are you able to find any more info from the logs, @tdewitt, on how it dies? Kubectl commands work initially but then hang or produce an error? If it is possible to try out v0.7.0-rc1, I am curious whether it helps with the issue at all.
The entire node dies about 20s after service startup. I measured from when ansible completes (using setup in contrib now so I can do things concurrently) until I'm no longer receiving ping replies. This is with 0.6.1. I can try with 0.7.0-rc1 in a little while. Service startup logs here: https://gist.github.com/tdewitt/bb2031446aa9b309e92ec0b7628bf98f
Just tried with 0.7.0-rc1. Same results. Node service seems to be fine. Master dies.
Swap disabled. Looked OK but turns out it's not. This is everything before it dies: https://gist.github.com/tdewitt/75e5342f85b3f6f9d0f5ba3af2d1d685
My problem was networking. My local network collides with the default networks in k3s. Moved them to a couple new blocks and all is well. Thanks @erikwilson for helping me work this out. |
@tdewitt Can you kindly explain what you did to resolve this issue? I am having the same failure while starting up the master. Any pointers will be appreciated. I have edited /etc/dhcpcd.conf and set a static IP as follows:

```
interface eth0
static ip_address=192.168.1.52/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
```

The error is as follows:

```
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
   Active: activating (auto-restart) (Result: exit-code) since Sun 2019-09-22 21:21:09 EDT; 4s ago
     Docs: https://k3s.io
  Process: 19579 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
  Process: 19580 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 19581 ExecStart=/usr/local/bin/k3s server --write-kubeconfig-mode 644 (code=exited, status=1/FAILURE)
 Main PID: 19581 (code=exited, status=1/FAILURE)
```
My problem was that I was colliding with the default networks. My home
network uses 10.42.0.0/23. I used the following args to change the network
used by k3s.
```
--cluster-cidr value Network CIDR to use for pod IPs
(default: "10.42.0.0/16")
--service-cidr value Network CIDR to use for services IPs
(default: "10.43.0.0/16")
```
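For anyone hitting the same collision, a sketch of passing those flags through the install script (`INSTALL_K3S_EXEC` is the install script's mechanism for extra server arguments; the replacement CIDRs below are placeholders, so pick blocks that don't overlap your LAN):

```shell
# Hypothetical re-install with non-default pod/service networks.
# 10.52.0.0/16 and 10.53.0.0/16 are example values only.
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC="server \
  --cluster-cidr=10.52.0.0/16 \
  --service-cidr=10.53.0.0/16" sh -
```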
On Sun, Sep 22, 2019 at 8:24 PM Srini Karlekar wrote:
@tdewitt <https://github.com/tdewitt> Can you kindly explain what you did
to resolve this issue? I am having the same failure while starting up the
master. Any pointers will be appreciated.
I have edited /etc/dhcpcd.conf and set the static ip as follows:
sudo cat >> /etc/dhcpcd.conf
interface eth0
static ip_address=192.168.1.52/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
Error as follows:
● k3s.service - Lightweight Kubernetes
Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
Active: activating (auto-restart) (Result: exit-code) since Sun 2019-09-22 21:21:09 EDT; 4s ago
Docs: https://k3s.io
Process: 19579 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
Process: 19580 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
Process: 19581 ExecStart=/usr/local/bin/k3s server --write-kubeconfig-mode 644 (code=exited, status=1/FAILURE)
Main PID: 19581 (code=exited, status=1/FAILURE)
I'm running into the same issue as @Aliabbask08. I'm trying to set up the server on an RPi 2B+ and keep getting the same error. I'm using version v0.10.0.
@ssedrick can you please share the steps you followed?
I had a clean install of Raspbian Buster. From there I ran the instructions on https://www.k3s.io. Didn't work. I've done a little bit of troubleshooting and followed the k3sup project https://k3sup.dev, and that worked. It is using 0.9.1. I also shared my /etc/hosts and /boot/cmdline.txt.
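Since /boot/cmdline.txt keeps coming up: on Raspbian, k3s requires memory cgroups to be enabled via the kernel command line (`cgroup_memory=1 cgroup_enable=memory`, per the k3s docs). A small idempotent sketch (the helper name is mine; cmdline.txt must remain a single line):

```shell
# add_cgroup_flags FILE: append the cgroup options k3s needs on
# Raspbian to FILE's single kernel command line, only if missing.
# (Helper name is hypothetical; the flags are from the k3s docs.)
add_cgroup_flags() {
  grep -q 'cgroup_enable=memory' "$1" || \
    sed -i '1 s/$/ cgroup_memory=1 cgroup_enable=memory/' "$1"
}
# On a Pi: run against /boot/cmdline.txt as root, then reboot.
```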
I'm running into what I believe to be the same issue, except that my error message is slightly different. This happens on both a Raspberry Pi 3 and a Zero W.
Just another me-too without any further (real) clues.
Can confirm that the issue does not occur with v0.9.1. It's a regression. Maybe we should open a new issue for this? Tested on RPI B+ and Zero W |
Having a similar case with Armbian on OrangePi One/PC.
Got a similar error on Arch Linux, Raspberry Pi 3B+, latest version of k3s. Uninstalled and reinstalled, and the k3s service started with no issues.
I've just tested v0.9.1 with a Raspberry Pi 3B and it works too! 0.10.2 fails.
Downgrading to k3s version 0.9.1 worked for me too, running on an RPi 3B+. The error I got on versions 0.10.2 and 0.10.0 was the same.
I set up an air-gapped environment where networking is completely disabled, so I had to add a default route. Then I ran sudo k3s server, and it still raised an error. I don't know the reason.
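On the default-route point: k3s needs a default route on the node to pick its advertised node IP. On an isolated box with no gateway, a dummy route on the NIC is usually enough. A sketch (the interface name `eth0` is an assumption):

```shell
# Hypothetical workaround for an air-gapped node with no gateway:
# give the kernel a default route so k3s can select a node IP.
sudo ip route add default dev eth0
```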
Same here.
I had a similar error I was working through for hours, and it turned out I needed to update a config file on my agent node.
Just ran into the same issue.
I had problems with Raspbian "Buster" because it updated the kernel to major 5. Going back to Linux kernel 4 fixed this for me. |
@mfriedenhagen could it be linked to this?
Hello @alepee , indeed, thanks for the link. I think I will give this a try together with kernel 5. |
Hm, I already tried this back in the day, but it did not work with the new kernel back then.
Ran the following and it resulted in a successful deployment.
Closing due to age. Anyone experiencing similar problems should open a new issue and fill out the template. |
Don't know if it may help, but I fixed this issue by adding these at the end of [...]
Thanks, it saved me a day!
That's a really old version of k3s, I wouldn't recommend using it. |
Luckily, after uninstalling v0.9.1 and trying the latest version again, it now works! Thanks @brandond
@quangthe Do you have any idea why we need to install v0.9.1 before the latest version to get it to work? Surely this is a bug. |
They didn't have to install the old version first... they're saying that the new version worked for them where the old one did not, and that they uninstalled the old version before trying the new version. |
@brandond Thank you for correcting my misunderstanding. I was able to get it working by upgrading. I also updated the SELinux policy on RHEL 8.
Had a similar issue on RHEL 9 running on AWS; fixed it with this.
This is covered here: https://docs.k3s.io/installation/requirements?os=rhel#operating-systems But in general please do not bump years old issues with unrelated comments. |
Hello Team,
Trying to run a k3s cluster on a Raspberry Pi using the official docs, but it fails with the error below:
```
● k3s.service - Lightweight Kubernetes
   Loaded: loaded (/etc/systemd/system/k3s.service; enabled; vendor preset: enabled)
   Active: failed (Result: exit-code) since Thu 2019-06-20 12:18:07 UTC; 4min 13s ago
     Docs: https://k3s.io
  Process: 1722 ExecStart=/usr/local/bin/k3s server (code=exited, status=1/FAILURE)
  Process: 1719 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
  Process: 1716 ExecStartPre=/sbin/modprobe br_netfilter (code=exited, status=0/SUCCESS)
 Main PID: 1722 (code=exited, status=1/FAILURE)
      CPU: 2.150s

Jun 20 12:18:06 master systemd[1]: k3s.service: Unit entered failed state.
Jun 20 12:18:06 master systemd[1]: k3s.service: Failed with result 'exit-code'.
Jun 20 12:18:07 master systemd[1]: k3s.service: Service hold-off time over, scheduling restart.
Jun 20 12:18:07 master systemd[1]: Stopped Lightweight Kubernetes.
Jun 20 12:18:07 master systemd[1]: k3s.service: Start request repeated too quickly.
Jun 20 12:18:07 master systemd[1]: Failed to start Lightweight Kubernetes.
Jun 20 12:18:07 master systemd[1]: k3s.service: Unit entered failed state.
Jun 20 12:18:07 master systemd[1]: k3s.service: Failed with result 'exit-code'.
```