Detect when \.\pipe\docker_engine already exists during startup #7169

Open
rklec opened this issue Jul 9, 2024 · 9 comments
Labels: area/diagnostics · kind/enhancement · platform/windows · runtime/moby · triage/next-candidate (discuss if it should be moved to the "Next" milestone)


rklec commented Jul 9, 2024

Actual Behavior

I tried activating WSL support and it failed like this:
[screenshot: "Kubernetes Error" dialog; full text below under "Result"]

AFAIK I had Kubernetes checked and thought I'd just try it, but as it did not work I unchecked it and tried using pure Docker (which would be fine for my use case for now).

Steps to Reproduce

This could be related to any of several factors:

  • WSL was not installed from the Windows Store, because the Store is blocked here
  • WSL was installed some time ago, likely via the Windows feature activation and/or through Chocolatey
  • Ubuntu 20.04 LTS for WSL was installed via Chocolatey https://community.chocolatey.org/packages/wsl-ubuntu-2004 (at least it is currently listed as installed)
  • I already had Docker installed manually inside WSL and was able to start it with dockerd. Unfortunately, my attempts to turn it into a real service have failed so far (and it was not important).
  • Edit: I also have docker and docker-compose installed on Windows, separately, via Chocolatey (see the probe sketch after this list)
  • etc.
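
Since several Docker installs are listed above, a quick way to check whether one of them is already serving the engine pipe is to try dialing it. This is a minimal, hypothetical Go sketch (not Rancher Desktop code), assuming the github.com/Microsoft/go-winio package that Docker's Windows tooling uses:

```go
// pipeprobe.go — hypothetical helper, not part of Rancher Desktop.
// Checks whether something is already serving \\.\pipe\docker_engine.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"time"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	const pipe = `\\.\pipe\docker_engine`
	timeout := 2 * time.Second

	conn, err := winio.DialPipe(pipe, &timeout)
	switch {
	case err == nil:
		// Someone accepted the connection, so the name is taken.
		conn.Close()
		fmt.Printf("%s is already served by another process (one of the other Docker installs?)\n", pipe)
	case errors.Is(err, fs.ErrNotExist):
		fmt.Printf("%s does not exist; the name is free\n", pipe)
	default:
		// Pipe exists but is busy, or we lack permission to open it.
		fmt.Printf("could not dial %s: %v\n", pipe, err)
	}
}
```

Running this while Rancher Desktop is stopped would show whether, for example, the Windows-side Docker install from Chocolatey has already grabbed the pipe.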

I just started Rancher Desktop and let it do its thing: activated WSL, etc.

Result

See the screenshot above, or as text:

Kubernetes Error
Rancher Desktop 1.14.1 - win32 (x64)
Error Starting Kubernetes
Error: C:\Program Files\Rancher Desktop\resources\resources\win32\bin\docker.exe exited with code 1
Last command run:
C:\Program Files\Rancher Desktop\resources\resources\win32\bin\docker.exe image load --input C:\Program Files\Rancher Desktop\resources\resources\rdx-proxy.tar

Context:
Unknown

Some recent logfile lines:
  code: 1,
  [Symbol(child-process.command)]: 'wsl.exe --distribution rancher-desktop --exec /usr/local/bin/wsl-proxy -debug false'
}
2024-07-09T15:10:32.015Z: Registered distributions: Ubuntu,rancher-desktop-data,rancher-desktop
2024-07-09T15:10:32.299Z: Registered distributions: Ubuntu,rancher-desktop-data,rancher-desktop
2024-07-09T15:10:32.638Z: Registered distributions: Ubuntu,rancher-desktop-data,rancher-desktop
2024-07-09T15:10:32.638Z: data distro already registered
2024-07-09T15:10:38.222Z: Installing C:\Program Files\Rancher Desktop\resources\resources\linux\internal\trivy as /mnt/c/Program Files/Rancher Desktop/resources/resources/linux/internal/trivy into /usr/local/bin/trivy ...
2024-07-09T15:10:38.718Z: Installing C:\Program Files\Rancher Desktop\resources\resources\linux\internal\rancher-desktop-guestagent as /mnt/c/Program Files/Rancher Desktop/resources/resources/linux/internal/rancher-desktop-guestagent into /usr/local/bin//rancher-desktop-guestagent ...

This is reproducible: no matter whether I (try to) disable or (try to) enable WSL again (note the toggle in the settings is always enabled, and I can always click it again), it is always some error about rdx-proxy.tar.

Thus, I searched for rdx-proxy.tar in the provided logs folder and found matches only in background.log:

2024-07-09T15:11:09.635Z: Background process Win32 socket proxy (pid 70136) exited with status 1 signal null
2024-07-09T15:11:10.230Z: Launching background process Win32 socket proxy.
2024-07-09T15:11:10.475Z: Kubernetes was unable to start: c [Error]: C:\Program Files\Rancher Desktop\resources\resources\win32\bin\docker.exe exited with code 1
    at ChildProcess.<anonymous> (C:\Program Files\Rancher Desktop\resources\app.asar\dist\app\background.js:2:156858)
    at ChildProcess.emit (node:events:513:28)
    at Process.onexit (node:internal/child_process:291:12) {
  command: [
    'C:\\Program Files\\Rancher Desktop\\resources\\resources\\win32\\bin\\docker.exe',
    'image',
    'load',
    '--input',
    'C:\\Program Files\\Rancher Desktop\\resources\\resources\\rdx-proxy.tar'
  ],
  code: 1,
  [Symbol(child-process.command)]: 'C:\\Program Files\\Rancher Desktop\\resources\\resources\\win32\\bin\\docker.exe image load --input C:\\Program Files\\Rancher Desktop\\resources\\resources\\rdx-proxy.tar'
}

(This is basically everything in that file... repeated several times in a similar way.)

It all looks very successful, but I cannot find useful error information, so I just searched for "error" in all files (> 25,000 results 😆).

Warning: many logs ahead

docker.log:

time="2024-07-09T13:27:22.776338100+02:00" level=info msg="containerd successfully booted in 0.062553s"
time="2024-07-09T13:27:23.655698900+02:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
time="2024-07-09T13:27:23.660546900+02:00" level=info msg="Loading containers: start."
time="2024-07-09T13:27:24.010384700+02:00" level=info msg="Removing stale sandbox bbb070c05b637400e69c47c7493ac679171dd51a432046ee7b0fc8a35eba8cc8 (af4d41e0b147205deccec869c18530f52b832eae204ce56c26f623e96f170839)"
time="2024-07-09T13:27:24.017123100+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 8ddc0faa6f44fdb65a7ae180d0c4f858d8a2c13ee4bcc924c411c19d705dda7a], retrying...."
time="2024-07-09T13:27:24.035288500+02:00" level=info msg="Removing stale sandbox f7093513f19efc77b0b8dfc4404e4a3744df11aeae0e1a5270bb56324bb46981 (7f713d2db18c3f90e3c081ee8e22c62e3f8e1ab45932276d75b4d4ae487c3d19)"
time="2024-07-09T13:27:24.041503600+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 2b486cc6b2357221177c9f27402ed1f490101490f02db1179918d0e5126ade9d], retrying...."
time="2024-07-09T13:27:24.065178900+02:00" level=info msg="Removing stale sandbox 59f9d6d4996e9cc687db09aa8e5f7294220408a973b3066127926c263436f79f (3aa1a29f397e9d9c1bbc5fbb917e169674925791fce6304725058ae26d99f5c7)"
time="2024-07-09T13:27:24.071938600+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 3bbee5ca84600c68c90c09837728e86fa2f0dfb3a817b187a892feb90dd4760f], retrying...."
time="2024-07-09T13:27:24.090617600+02:00" level=info msg="Removing stale sandbox 805b38efbafcd2a22c36990813ea2ab7289eab07727dd9662d5970205da03feb (60be1572bfae16a716715592bb875fcca6e932233faca44dca4a9b0c3bc67899)"
time="2024-07-09T13:27:24.097089900+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 1659398e1e19b0fa880fe00a57d55d3e3055cb7a5a77a0c41b88523bda980e83], retrying...."
time="2024-07-09T13:27:24.116382800+02:00" level=info msg="Removing stale sandbox a6b7063afdb75768c9356be711d3743f3ff7d74e0d7befe195648711270eea62 (07b46af379ba11a7b8c511daf53115504c1de7fc057a23aaeaa6a7ffb422ef3e)"
time="2024-07-09T13:27:24.123336400+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 4f1d4e1884e1e47140952019e0579bff90b02e646873278715921c0bcf1a90a6], retrying...."
time="2024-07-09T13:27:24.175637900+02:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
time="2024-07-09T13:27:24.212149300+02:00" level=info msg="Loading containers: done."
time="2024-07-09T13:27:24.232408700+02:00" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
time="2024-07-09T13:27:24.233105600+02:00" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
time="2024-07-09T13:27:24.233627800+02:00" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
time="2024-07-09T13:27:24.234107500+02:00" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
time="2024-07-09T13:27:24.235068200+02:00" level=info msg="Docker daemon" commit=e63daec8672d77ac0b2b5c262ef525c7cf17fd20 containerd-snapshotter=false storage-driver=overlay2 version=25.0.5
time="2024-07-09T13:27:24.235990900+02:00" level=info msg="Daemon has completed initialization"
time="2024-07-09T13:27:24.255774500+02:00" level=info msg="API listen on /var/run/docker.sock"
time="2024-07-09T13:27:24.255777500+02:00" level=info msg="API listen on /mnt/wsl/rancher-desktop/run/docker.sock"
error accepting client connection: bad file descriptor
error accepting client connection: invalid argument
error accepting client connection: invalid argument
error accepting client connection: invalid argument
error accepting client connection: invalid argument
error accepting client connection: invalid argument
error accepting client connection: invalid argument
[this line repeats **very** often]
could not connect to docker: dial unix /mnt/wsl/rancher-desktop/run/docker.sock: connect: connection refused
time="2024-07-09T16:43:37.027830100+02:00" level=info msg="Starting up"
time="2024-07-09T16:43:37.030514400+02:00" level=info msg="containerd not running, starting managed containerd"

I can also find a lot of these errors ("Zugriff verweigert" = "access denied"; again repeated maybe 1,000 times):

Error: could not listen on npipe:////./pipe/docker_engine: open //./pipe/docker_engine: Zugriff verweigert
time="2024-07-09T15:16:53+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512
Error: could not listen on npipe:////./pipe/docker_engine: open //./pipe/docker_engine: Zugriff verweigert
time="2024-07-09T15:16:54+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512
Error: could not listen on npipe:////./pipe/docker_engine: open //./pipe/docker_engine: Zugriff verweigert
time="2024-07-09T15:16:55+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512
Error: could not listen on npipe:////./pipe/docker_engine: open //./pipe/docker_engine: Zugriff verweigert
time="2024-07-09T15:16:57+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512
Error: could not listen on npipe:////./pipe/docker_engine: open //./pipe/docker_engine: Zugriff verweigert
time="2024-07-09T15:16:58+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512
Error: could not listen on npipe:////./pipe/docker_engine: open //./pipe/docker_engine: Zugriff verweigert
time="2024-07-09T15:16:59+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512

host-switch.log:

time="2024-07-09T13:23:20+02:00" level=info msg="port forwarding API server is running on: 192.168.127.1:80"
time="2024-07-09T13:23:20+02:00" level=info msg="new connection from 48b7ebae-da49-4b92-94ee-31f889238512:181b1e2f-facb-11e6-bd58-64006a7986d3 to 48b7ebae-da49-4b92-94ee-31f889238512:00001a00-facb-11e6-bd58-64006a7986d3"
2024/07/09 13:23:25 tcpproxy: for incoming conn 127.0.0.1:62322, error dialing "192.168.127.2:6443": connect tcp 192.168.127.2:6443: connection was refused
2024/07/09 13:26:48 tcpproxy: for incoming conn 127.0.0.1:63057, error dialing "192.168.127.2:6443": connect tcp 192.168.127.2:6443: connection was refused
2024/07/09 13:26:51 tcpproxy: for incoming conn 127.0.0.1:63059, error dialing "192.168.127.2:6443": connect tcp 192.168.127.2:6443: connection was refused
time="2024-07-09T13:26:53+02:00" level=error msg="accept tcp [::]:443: use of closed network connection"
time="2024-07-09T13:26:53+02:00" level=error msg="accept tcp [::]:80: use of closed network connection"

k3s.log:

I0709 13:23:58.702934     559 apf_controller.go:379] Running API Priority and Fairness config worker
I0709 13:23:58.708131     559 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39114: dial tcp 10.42.0.4:10250: connect: connection refused"
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39110: dial tcp 10.42.0.4:10250: connect: connection refused"
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39118: dial tcp 10.42.0.4:10250: connect: connection refused"
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39120: dial tcp 10.42.0.4:10250: connect: connection refused"
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39112: dial tcp 10.42.0.4:10250: connect: connection refused"
I0709 13:23:58.721416     559 shared_informer.go:318] Caches are synced for crd-autoregister
I0709 13:23:58.727398     559 aggregator.go:165] initial CRD sync complete...
I0709 13:23:58.728755     559 autoregister_controller.go:141] Starting autoregister controller
I0709 13:23:58.729662     559 cache.go:32] Waiting for caches to sync for autoregister controller
I0709 13:23:58.730781     559 cache.go:39] Caches are synced for autoregister controller
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39132: dial tcp 10.42.0.4:10250: connect: connection refused"
E0709 13:23:58.731783     559 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.42.0.4:10250/apis/metrics.k8s.io/v1beta1: Get "https://10.42.0.4:10250/apis/metrics.k8s.io/v1beta1": proxy error from 127.0.0.1:6443 while dialing 10.42.0.4:10250, code 502: 502 Bad Gateway
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39134: dial tcp 10.42.0.4:10250: connect: connection refused"
I0709 13:23:58.735026     559 handler.go:275] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39140: dial tcp 10.42.0.4:10250: connect: connection refused"
E0709 13:23:58.735905     559 controller.go:102] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39142: dial tcp 10.42.0.4:10250: connect: connection refused"
E0709 13:23:58.737121     559 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: error trying to reach service: proxy error from 127.0.0.1:6443 while dialing 10.42.0.4:10250, code 502: 502 Bad Gateway
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
W0709 13:23:58.745976     559 handler_proxy.go:93] no RequestInfo found in the context
E0709 13:23:58.746652     559 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
time="2024-07-09T13:23:58+02:00" level=error msg="Sending HTTP 502 response to 127.0.0.1:39148: dial tcp 10.42.0.4:10250: connect: connection refused"

network-setup.log (this is the whole file content):

time="2024-07-09T13:23:19+02:00" level=info msg="starting handshake process with host-switch"
time="2024-07-09T13:23:19+02:00" level=error msg="listenForHandshake reading signature phrase failed: EOF"
time="2024-07-09T13:23:20+02:00" level=info msg="listenForHandshake successful handshake with host-switch"
time="2024-07-09T13:23:20+02:00" level=info msg="created a new namespace NS(10: 4, 4026532193) NS(10: 4, 4026532193)"
time="2024-07-09T13:23:20+02:00" level=info msg="successfully started the vm-switch running with a PID: 463"
time="2024-07-09T13:23:20+02:00" level=info msg="created veth pair veth-rd0 and veth-rd1"
time="2024-07-09T13:26:54+02:00" level=error msg="vm-switch exited with error: signal: killed"
time="2024-07-09T13:26:54+02:00" level=info msg="tearing down link veth-rd0: <nil>"
time="2024-07-09T13:27:18+02:00" level=info msg="starting handshake process with host-switch"
time="2024-07-09T13:27:18+02:00" level=error msg="listenForHandshake reading signature phrase failed: EOF"
time="2024-07-09T13:27:18+02:00" level=info msg="listenForHandshake successful handshake with host-switch"
time="2024-07-09T13:27:18+02:00" level=info msg="created a new namespace NS(10: 4, 4026532212) NS(10: 4, 4026532212)"
time="2024-07-09T13:27:18+02:00" level=info msg="successfully started the vm-switch running with a PID: 461"
time="2024-07-09T13:27:18+02:00" level=info msg="created veth pair veth-rd0 and veth-rd1"
time="2024-07-09T16:43:10+02:00" level=error msg="vm-switch exited with error: signal: killed"
time="2024-07-09T16:43:10+02:00" level=info msg="tearing down link veth-rd0: <nil>"
time="2024-07-09T16:43:31+02:00" level=info msg="starting handshake process with host-switch"
time="2024-07-09T16:43:32+02:00" level=error msg="listenForHandshake reading signature phrase failed: EOF"
time="2024-07-09T16:43:32+02:00" level=info msg="listenForHandshake successful handshake with host-switch"
time="2024-07-09T16:43:32+02:00" level=info msg="created a new namespace NS(10: 4, 4026532212) NS(10: 4, 4026532212)"
time="2024-07-09T16:43:32+02:00" level=info msg="successfully started the vm-switch running with a PID: 463"
time="2024-07-09T16:43:32+02:00" level=info msg="created veth pair veth-rd0 and veth-rd1"
time="2024-07-09T17:04:22+02:00" level=error msg="vm-switch exited with error: signal: killed"
time="2024-07-09T17:04:22+02:00" level=info msg="tearing down link veth-rd0: <nil>"
time="2024-07-09T17:04:48+02:00" level=info msg="starting handshake process with host-switch"
time="2024-07-09T17:04:49+02:00" level=error msg="listenForHandshake reading signature phrase failed: EOF"
time="2024-07-09T17:04:49+02:00" level=info msg="listenForHandshake successful handshake with host-switch"
time="2024-07-09T17:04:49+02:00" level=info msg="created a new namespace NS(10: 4, 4026532212) NS(10: 4, 4026532212)"
time="2024-07-09T17:04:49+02:00" level=info msg="successfully started the vm-switch running with a PID: 462"
time="2024-07-09T17:04:49+02:00" level=info msg="created veth pair veth-rd0 and veth-rd1"
time="2024-07-09T17:10:31+02:00" level=error msg="vm-switch exited with error: signal: killed"
time="2024-07-09T17:10:31+02:00" level=info msg="tearing down link veth-rd0: <nil>"
time="2024-07-09T17:10:53+02:00" level=info msg="starting handshake process with host-switch"
time="2024-07-09T17:10:54+02:00" level=error msg="listenForHandshake reading signature phrase failed: EOF"
time="2024-07-09T17:10:54+02:00" level=info msg="listenForHandshake successful handshake with host-switch"
time="2024-07-09T17:10:54+02:00" level=info msg="created a new namespace NS(10: 4, 4026532212) NS(10: 4, 4026532212)"
time="2024-07-09T17:10:54+02:00" level=info msg="successfully started the vm-switch running with a PID: 464"
time="2024-07-09T17:10:54+02:00" level=info msg="created veth pair veth-rd0 and veth-rd1"

rancher-desktop-guestagent.log (again, that's the whole file):

2024/07/09 13:23:20 [INFO]    Starting Rancher Desktop Agent in [AdminInstall=true] mode
2024/07/09 13:23:20 [FATAL]   failed to send a static portMapping envent to wsl-proxy: dial unix /run/wsl-proxy.sock: connect: no such file or directory
2024/07/09 13:23:26 [INFO]    Starting Rancher Desktop Agent in [AdminInstall=true] mode
2024/07/09 13:23:26 [DEBUG]   successfully forwarded k8s API port [6443] to wsl-proxy
2024/07/09 13:23:26 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:27 [DEBUG]   kubernetes: failed to read kubeconfig [error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory][config-path=/etc/rancher/k3s/k3s.yaml]
2024/07/09 13:23:28 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:29 [DEBUG]   kubernetes: failed to read kubeconfig [error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory][config-path=/etc/rancher/k3s/k3s.yaml]
2024/07/09 13:23:30 [DEBUG]   kubernetes: failed to read kubeconfig [error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory][config-path=/etc/rancher/k3s/k3s.yaml]
2024/07/09 13:23:31 [DEBUG]   checking if container engine API is running at /var/run/docker.sock
2024/07/09 13:23:31 [DEBUG]   kubernetes: failed to read kubeconfig [error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory][config-path=/etc/rancher/k3s/k3s.yaml]
2024/07/09 13:23:32 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:33 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:34 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:35 [DEBUG]   kubernetes: failed to read kubeconfig [error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory][config-path=/etc/rancher/k3s/k3s.yaml]
2024/07/09 13:23:36 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:37 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:38 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:39 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:40 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:41 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:42 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:43 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:44 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:45 [DEBUG]   kubernetes: failed to read kubeconfig [error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory][config-path=/etc/rancher/k3s/k3s.yaml]
2024/07/09 13:23:46 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:47 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:48 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:49 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:50 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:51 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:52 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:53 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:54 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:55 [DEBUG]   kubernetes: failed to read kubeconfig [error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory][config-path=/etc/rancher/k3s/k3s.yaml]
2024/07/09 13:23:56 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:57 [DEBUG]   kubernetes: failed to read kubeconfig [config-path=/etc/rancher/k3s/k3s.yaml][error=could not load Kubernetes client config from /etc/rancher/k3s/k3s.yaml: stat /etc/rancher/k3s/k3s.yaml: no such file or directory]
2024/07/09 13:23:58 [DEBUG]   kubernetes: loaded kubeconfig /etc/rancher/k3s/k3s.yaml
2024/07/09 13:23:58 [DEBUG]   Service Informer: Add func called with: &Service{ObjectMeta:{kubernetes  default  db8d3a3f-e66e-4bcc-bd11-0f95679fa2f7 48 0 2024-06-17 15:50:05 +0200 CEST <nil> <nil> map[component:apiserver provider:kubernetes] map[] [] [] [{k3s Update v1 2024-06-17 15:50:05 +0200 CEST FieldsV1 {"f:metadata":{"f:labels":{".":{},"f:component":{},"f:provider":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{0 6443 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{},ClusterIP:10.43.0.1,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.1],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2024/07/09 13:23:58 [DEBUG]   kubernetes service update: default/kubernetes has -0 +0 service port
2024/07/09 13:23:58 [DEBUG]   Service Informer: Add func called with: &Service{ObjectMeta:{kube-dns  kube-system  feb11d8f-bf19-424a-a8ea-af9ca2d6b015 255 0 2024-06-17 15:50:10 +0200 CEST <nil> <nil> map[k8s-app:kube-dns kubernetes.io/cluster-service:true kubernetes.io/name:CoreDNS objectset.rio.cattle.io/hash:bce283298811743a0386ab510f2f67ef74240c57] map[objectset.rio.cattle.io/applied:H4sIAAAAAAAA/4ySQYvbMBCF/0p5Z9m142TjFfRQdimUQgmk7aXsQZYnG9W2JKRJSgj+70WJl00b0vZm8958vHmjI5Q33yhE4ywk9iUEOmNbSKwp7I0mCAzEqlWsII9Q1jpWbJyN6dc1P0hzJM6DcblWzD3lxr01iQBxU3c/LYXsed9BoqvihbIvxZtPxrbv3rets/9EWDUQJLQL1Nr4X/bolU4z3a6hLB4i0wABH9xAvKVdTG7vAkPivlxUV1rUQfkE4LAjjAK9aqg/1dHVMVPev8DPidJnsMR0mtb9LjKFLE71Tpg/bdNeDy7Q4+f1X/baqriFRKNpVlez+7ouy+W8UkVV36lmURab2eZuSZvlfDYv9GKZ8k7si4i3ahkFoiedVptyf1xBoizyeZUXeVlAvAoR8vul9CRg/Ac1mP6wcr3Rh/SojH3uac1Kd6lXFzhNHV8indOcy19Up+LZaddD4uvjCqO4dGas/S33l4ff3ANxMPqVne567X8SiNSTZhduHHMcx18BAAD//5X9LCMyAwAA objectset.rio.cattle.io/id: objectset.rio.cattle.io/owner-gvk:k3s.cattle.io/v1, Kind=Addon objectset.rio.cattle.io/owner-name:coredns objectset.rio.cattle.io/owner-namespace:kube-system prometheus.io/port:9153 prometheus.io/scrape:true] [] [] [{deploy@x000m1301 Update v1 2024-06-17 15:50:10 +0200 CEST FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:objectset.rio.cattle.io/applied":{},"f:objectset.rio.cattle.io/id":{},"f:objectset.rio.cattle.io/owner-gvk":{},"f:objectset.rio.cattle.io/owner-name":{},"f:objectset.rio.cattle.io/owner-namespace":{},"f:prometheus.io/port":{},"f:prometheus.io/scrape":{}},"f:labels":{".":{},"f:k8s-app":{},"f:kubernetes.io/cluster-service":{},"f:kubernetes.io/name":{},"f:objectset.rio.cattle.io/hash":{}}},"f:spec":{"f:clusterIP":{},"f:clusterIPs":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":53,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":53,\"protocol\":\"UDP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":9153,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:dns,Protocol:UDP,Port:53,TargetPort:{0 53 },NodePort:0,AppProtocol:nil,},ServicePort{Name:dns-tcp,Protocol:TCP,Port:53,TargetPort:{0 53 },NodePort:0,AppProtocol:nil,},ServicePort{Name:metrics,Protocol:TCP,Port:9153,TargetPort:{0 9153 },NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: kube-dns,},ClusterIP:10.43.0.10,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*SingleStack,ClusterIPs:[10.43.0.10],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2024/07/09 13:23:58 [DEBUG]   kubernetes service update: kube-system/kube-dns has -0 +0 service port
2024/07/09 13:23:58 [DEBUG]   Service Informer: Add func called with: &Service{ObjectMeta:{metrics-server  kube-system  a6f856eb-bded-4044-9277-4d2a8a95e12d 300 0 2024-06-17 15:50:11 +0200 CEST <nil> <nil> map[kubernetes.io/cluster-service:true kubernetes.io/name:Metrics-server objectset.rio.cattle.io/hash:a5d3bc601c871e123fa32b27f549b6ea770bcf4a] map[objectset.rio.cattle.io/applied:H4sIAAAAAAAA/4SQQWsbMRCF/0p5Z9nNep04FfRQWnopBUNKL6WHWe04VleWhGa8xZj970UbFxLaJCchvZn3vqczKPvvXMSnCIuxgcHgYw+LOy6jdwyDAyv1pAR7BsWYlNSnKPWaul/sVFiXxaelI9XAS5/e+uoA86yefkcui/txgMXQyiNlbMybLz727z/0fYqvWkQ6MGxFLN7JQriMXObjgf31bcnkqsVw7HghJ1E+YDII1HGYO1ahRFaWuujCUfRRhIWWY016Onbh+vqE6wWePckeFnTdt527uWrc7abhZtXuqF11q83uev2uu2HabK46t1tTJfxvdTy8P1NKMrtayefPdPDhtE3BuxMstoV3XD4dKdwpuQEGORUV2B/nvzl71SwXAXa9bg1ySZpcCrD49nELA6Vyz7qdJy4L008D4cBOU5l/81YWlPO/4NM0/QkAAP//sKxN444CAAA objectset.rio.cattle.io/id: objectset.rio.cattle.io/owner-gvk:k3s.cattle.io/v1, Kind=Addon objectset.rio.cattle.io/owner-name:metrics-server-service objectset.rio.cattle.io/owner-namespace:kube-system] [] [] [{deploy@x000m1301 Update v1 2024-06-17 15:50:11 +0200 CEST FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:objectset.rio.cattle.io/applied":{},"f:objectset.rio.cattle.io/id":{},"f:objectset.rio.cattle.io/owner-gvk":{},"f:objectset.rio.cattle.io/owner-name":{},"f:objectset.rio.cattle.io/owner-namespace":{}},"f:labels":{".":{},"f:kubernetes.io/cluster-service":{},"f:kubernetes.io/name":{},"f:objectset.rio.cattle.io/hash":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} }]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:https,Protocol:TCP,Port:443,TargetPort:{1 0 https},NodePort:0,AppProtocol:nil,},},Selector:map[string]string{k8s-app: metrics-server,},ClusterIP:10.43.247.77,Type:ClusterIP,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*PreferDualStack,ClusterIPs:[10.43.247.77],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:nil,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{},},Conditions:[]Condition{},},}
2024/07/09 13:23:58 [DEBUG]   kubernetes service update: kube-system/metrics-server has -0 +0 service port
2024/07/09 13:23:58 [DEBUG]   Service Informer: Add func called with: &Service{ObjectMeta:{traefik  kube-system  fbd8161e-709b-4281-b2b5-7f886b257407 615 0 2024-06-17 15:50:24 +0200 CEST <nil> <nil> map[app.kubernetes.io/instance:traefik-kube-system app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:traefik helm.sh/chart:traefik-25.0.3_up25.0.0] map[meta.helm.sh/release-name:traefik meta.helm.sh/release-namespace:kube-system] [] [service.kubernetes.io/load-balancer-cleanup] [{helm Update v1 2024-06-17 15:50:24 +0200 CEST FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{}}},"f:spec":{"f:allocateLoadBalancerNodePorts":{},"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} } {k3s Update v1 2024-06-17 15:50:25 +0200 CEST FieldsV1 {"f:metadata":{"f:finalizers":{".":{},"v:\"service.kubernetes.io/load-balancer-cleanup\"":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} status}]},Spec:ServiceSpec{Ports:[]ServicePort{ServicePort{Name:web,Protocol:TCP,Port:80,TargetPort:{1 0 web},NodePort:32160,AppProtocol:nil,},ServicePort{Name:websecure,Protocol:TCP,Port:443,TargetPort:{1 0 websecure},NodePort:30985,AppProtocol:nil,},},Selector:map[string]string{app.kubernetes.io/instance: traefik-kube-system,app.kubernetes.io/name: traefik,},ClusterIP:10.43.5.92,Type:LoadBalancer,ExternalIPs:[],SessionAffinity:None,LoadBalancerIP:,LoadBalancerSourceRanges:[],ExternalName:,ExternalTrafficPolicy:Cluster,HealthCheckNodePort:0,PublishNotReadyAddresses:false,SessionAffinityConfig:nil,IPFamilyPolicy:*PreferDualStack,ClusterIPs:[10.43.5.92],IPFamilies:[IPv4],AllocateLoadBalancerNodePorts:*true,LoadBalancerClass:nil,InternalTrafficPolicy:*Cluster,},Status:ServiceStatus{LoadBalancer:LoadBalancerStatus{Ingress:[]LoadBalancerIngress{LoadBalancerIngress{IP:192.168.127.2,Hostname:,Ports:[]PortStatus{},},},},Conditions:[]Condition{},},}
2024/07/09 13:23:58 [DEBUG]   coreV1 services list :[{TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:kubernetes GenerateName: Namespace:default SelfLink: UID:db8d3a3f-e66e-4bcc-bd11-0f95679fa2f7 ResourceVersion:48 Generation:0 CreationTimestamp:2024-06-17 15:50:05 +0200 CEST DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[component:apiserver provider:kubernetes] Annotations:map[] OwnerReferences:[] Finalizers:[] ManagedFields:[{Manager:k3s Operation:Update APIVersion:v1 Time:2024-06-17 15:50:05 +0200 CEST FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:labels":{".":{},"f:component":{},"f:provider":{}}},"f:spec":{"f:clusterIP":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:sessionAffinity":{},"f:type":{}}} Subresource:}]} Spec:{Ports:[{Name:https Protocol:TCP AppProtocol:<nil> Port:443 TargetPort:{Type:0 IntVal:6443 StrVal:} NodePort:0}] Selector:map[] ClusterIP:10.43.0.1 ClusterIPs:[10.43.0.1] Type:ClusterIP ExternalIPs:[] SessionAffinity:None LoadBalancerIP: LoadBalancerSourceRanges:[] ExternalName: ExternalTrafficPolicy: HealthCheckNodePort:0 PublishNotReadyAddresses:false SessionAffinityConfig:nil IPFamilies:[IPv4] IPFamilyPolicy:0xc000530620 AllocateLoadBalancerNodePorts:<nil> LoadBalancerClass:<nil> InternalTrafficPolicy:0xc000530640} Status:{LoadBalancer:{Ingress:[]} Conditions:[]}} {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:kube-dns GenerateName: Namespace:kube-system SelfLink: UID:feb11d8f-bf19-424a-a8ea-af9ca2d6b015 ResourceVersion:255 Generation:0 CreationTimestamp:2024-06-17 15:50:10 +0200 CEST DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[k8s-app:kube-dns kubernetes.io/cluster-service:true kubernetes.io/name:CoreDNS objectset.rio.cattle.io/hash:bce283298811743a0386ab510f2f67ef74240c57] Annotations:map[objectset.rio.cattle.io/applied:H4sIAAAAAAAA/4ySQYvbMBCF/0p5Z9m142TjFfRQdimUQgmk7aXsQZYnG9W2JKRJSgj+70WJl00b0vZm8958vHmjI5Q33yhE4ywk9iUEOmNbSKwp7I0mCAzEqlWsII9Q1jpWbJyN6dc1P0hzJM6DcblWzD3lxr01iQBxU3c/LYXsed9BoqvihbIvxZtPxrbv3rets/9EWDUQJLQL1Nr4X/bolU4z3a6hLB4i0wABH9xAvKVdTG7vAkPivlxUV1rUQfkE4LAjjAK9aqg/1dHVMVPev8DPidJnsMR0mtb9LjKFLE71Tpg/bdNeDy7Q4+f1X/baqriFRKNpVlez+7ouy+W8UkVV36lmURab2eZuSZvlfDYv9GKZ8k7si4i3ahkFoiedVptyf1xBoizyeZUXeVlAvAoR8vul9CRg/Ac1mP6wcr3Rh/SojH3uac1Kd6lXFzhNHV8indOcy19Up+LZaddD4uvjCqO4dGas/S33l4ff3ANxMPqVne567X8SiNSTZhduHHMcx18BAAD//5X9LCMyAwAA objectset.rio.cattle.io/id: objectset.rio.cattle.io/owner-gvk:k3s.cattle.io/v1, Kind=Addon objectset.rio.cattle.io/owner-name:coredns objectset.rio.cattle.io/owner-namespace:kube-system prometheus.io/port:9153 prometheus.io/scrape:true] OwnerReferences:[] Finalizers:[] ManagedFields:[{Manager:deploy@x000m1301 Operation:Update APIVersion:v1 Time:2024-06-17 15:50:10 +0200 CEST FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:objectset.rio.cattle.io/applied":{},"f:objectset.rio.cattle.io/id":{},"f:objectset.rio.cattle.io/owner-gvk":{},"f:objectset.rio.cattle.io/owner-name":{},"f:objectset.rio.cattle.io/owner-namespace":{},"f:prometheus.io/port":{},"f:prometheus.io/scrape":{}},"f:labels":{".":{},"f:k8s-app":{},"f:kubernetes.io/cluster-service":{},"f:kubernetes.io/name":{},"f:objectset.rio.cattle.io/hash":{}}},"f:spec":{"f:clusterIP":{},"f:clusterIPs":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":53,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":53,\"protocol\":\"UDP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":9153,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} Subresource:}]} Spec:{Ports:[{Name:dns Protocol:UDP AppProtocol:<nil> Port:53 TargetPort:{Type:0 IntVal:53 StrVal:} NodePort:0} {Name:dns-tcp Protocol:TCP AppProtocol:<nil> Port:53 TargetPort:{Type:0 IntVal:53 StrVal:} NodePort:0} {Name:metrics Protocol:TCP AppProtocol:<nil> Port:9153 TargetPort:{Type:0 IntVal:9153 StrVal:} NodePort:0}] Selector:map[k8s-app:kube-dns] ClusterIP:10.43.0.10 ClusterIPs:[10.43.0.10] Type:ClusterIP ExternalIPs:[] SessionAffinity:None LoadBalancerIP: LoadBalancerSourceRanges:[] ExternalName: ExternalTrafficPolicy: HealthCheckNodePort:0 PublishNotReadyAddresses:false SessionAffinityConfig:nil IPFamilies:[IPv4] IPFamilyPolicy:0xc000530840 AllocateLoadBalancerNodePorts:<nil> LoadBalancerClass:<nil> InternalTrafficPolicy:0xc000530860} Status:{LoadBalancer:{Ingress:[]} Conditions:[]}} {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:metrics-server GenerateName: Namespace:kube-system SelfLink: UID:a6f856eb-bded-4044-9277-4d2a8a95e12d ResourceVersion:300 Generation:0 CreationTimestamp:2024-06-17 15:50:11 +0200 CEST DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[kubernetes.io/cluster-service:true kubernetes.io/name:Metrics-server objectset.rio.cattle.io/hash:a5d3bc601c871e123fa32b27f549b6ea770bcf4a] Annotations:map[objectset.rio.cattle.io/applied:H4sIAAAAAAAA/4SQQWsbMRCF/0p5Z9nNep04FfRQWnopBUNKL6WHWe04VleWhGa8xZj970UbFxLaJCchvZn3vqczKPvvXMSnCIuxgcHgYw+LOy6jdwyDAyv1pAR7BsWYlNSnKPWaul/sVFiXxaelI9XAS5/e+uoA86yefkcui/txgMXQyiNlbMybLz727z/0fYqvWkQ6MGxFLN7JQriMXObjgf31bcnkqsVw7HghJ1E+YDII1HGYO1ahRFaWuujCUfRRhIWWY016Onbh+vqE6wWePckeFnTdt527uWrc7abhZtXuqF11q83uev2uu2HabK46t1tTJfxvdTy8P1NKMrtayefPdPDhtE3BuxMstoV3XD4dKdwpuQEGORUV2B/nvzl71SwXAXa9bg1ySZpcCrD49nELA6Vyz7qdJy4L008D4cBOU5l/81YWlPO/4NM0/QkAAP//sKxN444CAAA objectset.rio.cattle.io/id: objectset.rio.cattle.io/owner-gvk:k3s.cattle.io/v1, Kind=Addon objectset.rio.cattle.io/owner-name:metrics-server-service objectset.rio.cattle.io/owner-namespace:kube-system] OwnerReferences:[] Finalizers:[] ManagedFields:[{Manager:deploy@x000m1301 Operation:Update APIVersion:v1 Time:2024-06-17 15:50:11 +0200 CEST FieldsType:FieldsV1 
FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:objectset.rio.cattle.io/applied":{},"f:objectset.rio.cattle.io/id":{},"f:objectset.rio.cattle.io/owner-gvk":{},"f:objectset.rio.cattle.io/owner-name":{},"f:objectset.rio.cattle.io/owner-namespace":{}},"f:labels":{".":{},"f:kubernetes.io/cluster-service":{},"f:kubernetes.io/name":{},"f:objectset.rio.cattle.io/hash":{}}},"f:spec":{"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} Subresource:}]} Spec:{Ports:[{Name:https Protocol:TCP AppProtocol:<nil> Port:443 TargetPort:{Type:1 IntVal:0 StrVal:https} NodePort:0}] Selector:map[k8s-app:metrics-server] ClusterIP:10.43.247.77 ClusterIPs:[10.43.247.77] Type:ClusterIP ExternalIPs:[] SessionAffinity:None LoadBalancerIP: LoadBalancerSourceRanges:[] ExternalName: ExternalTrafficPolicy: HealthCheckNodePort:0 PublishNotReadyAddresses:false SessionAffinityConfig:nil IPFamilies:[IPv4] IPFamilyPolicy:0xc000530a20 AllocateLoadBalancerNodePorts:<nil> LoadBalancerClass:<nil> InternalTrafficPolicy:0xc000530a30} Status:{LoadBalancer:{Ingress:[]} Conditions:[]}} {TypeMeta:{Kind: APIVersion:} ObjectMeta:{Name:traefik GenerateName: Namespace:kube-system SelfLink: UID:fbd8161e-709b-4281-b2b5-7f886b257407 ResourceVersion:615 Generation:0 CreationTimestamp:2024-06-17 15:50:24 +0200 CEST DeletionTimestamp:<nil> DeletionGracePeriodSeconds:<nil> Labels:map[app.kubernetes.io/instance:traefik-kube-system app.kubernetes.io/managed-by:Helm app.kubernetes.io/name:traefik helm.sh/chart:traefik-25.0.3_up25.0.0] Annotations:map[meta.helm.sh/release-name:traefik meta.helm.sh/release-namespace:kube-system] OwnerReferences:[] Finalizers:[service.kubernetes.io/load-balancer-cleanup] ManagedFields:[{Manager:helm Operation:Update APIVersion:v1 Time:2024-06-17 15:50:24 +0200 CEST FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:annotations":{".":{},"f:meta.helm.sh/release-name":{},"f:meta.helm.sh/release-namespace":{}},"f:labels":{".":{},"f:app.kubernetes.io/instance":{},"f:app.kubernetes.io/managed-by":{},"f:app.kubernetes.io/name":{},"f:helm.sh/chart":{}}},"f:spec":{"f:allocateLoadBalancerNodePorts":{},"f:externalTrafficPolicy":{},"f:internalTrafficPolicy":{},"f:ipFamilyPolicy":{},"f:ports":{".":{},"k:{\"port\":80,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}},"k:{\"port\":443,\"protocol\":\"TCP\"}":{".":{},"f:name":{},"f:port":{},"f:protocol":{},"f:targetPort":{}}},"f:selector":{},"f:sessionAffinity":{},"f:type":{}}} Subresource:} {Manager:k3s Operation:Update APIVersion:v1 Time:2024-06-17 15:50:25 +0200 CEST FieldsType:FieldsV1 FieldsV1:{"f:metadata":{"f:finalizers":{".":{},"v:\"service.kubernetes.io/load-balancer-cleanup\"":{}}},"f:status":{"f:loadBalancer":{"f:ingress":{}}}} Subresource:status}]} Spec:{Ports:[{Name:web Protocol:TCP AppProtocol:<nil> Port:80 TargetPort:{Type:1 IntVal:0 StrVal:web} NodePort:32160} {Name:websecure Protocol:TCP AppProtocol:<nil> Port:443 TargetPort:{Type:1 IntVal:0 StrVal:websecure} NodePort:30985}] Selector:map[app.kubernetes.io/instance:traefik-kube-system app.kubernetes.io/name:traefik] ClusterIP:10.43.5.92 ClusterIPs:[10.43.5.92] Type:LoadBalancer ExternalIPs:[] SessionAffinity:None LoadBalancerIP: LoadBalancerSourceRanges:[] ExternalName: ExternalTrafficPolicy:Cluster HealthCheckNodePort:0 PublishNotReadyAddresses:false SessionAffinityConfig:nil IPFamilies:[IPv4] 
IPFamilyPolicy:0xc000530ba0 AllocateLoadBalancerNodePorts:0xc0003d0b0c LoadBalancerClass:<nil> InternalTrafficPolicy:0xc000530bc0} Status:{LoadBalancer:{Ingress:[{IP:192.168.127.2 Hostname: Ports:[]}]} Conditions:[]}}]
2024/07/09 13:23:58 [DEBUG]   watching kubernetes services
2024/07/09 13:23:58 [DEBUG]   create port mapping for port 80, protocol TCP
2024/07/09 13:23:58 [DEBUG]   create port mapping for port 443, protocol TCP
2024/07/09 13:23:58 [DEBUG]   kubernetes service update: <unknown>/<unknown> has -0 +0 service port
2024/07/09 13:23:58 [DEBUG]   kubernetes service update: kube-system/traefik has -0 +2 service port
2024/07/09 13:23:58 [DEBUG]   calling /services/forwarder/expose API for the following port binding: {HostIP:0.0.0.0 HostPort:443}
2024/07/09 13:23:58 [DEBUG]   kubernetes service update: <unknown>/<unknown> has -0 +0 service port
2024/07/09 13:23:58 [DEBUG]   kubernetes service update: <unknown>/<unknown> has -0 +0 service port
2024/07/09 13:23:58 [DEBUG]   kubernetes service update: <unknown>/<unknown> has -0 +0 service port
2024/07/09 13:23:58 [DEBUG]   sending a HTTP POST to /services/forwarder/expose API with expose request: &{0.0.0.0:443 192.168.127.2:443 }
2024/07/09 13:23:58 [DEBUG]   calling /services/forwarder/expose API for the following port binding: {HostIP:0.0.0.0 HostPort:80}
2024/07/09 13:23:58 [DEBUG]   sending a HTTP POST to /services/forwarder/expose API with expose request: &{0.0.0.0:80 192.168.127.2:80 }
2024/07/09 13:23:58 [DEBUG]   portStorage add status: map[fbd8161e-709b-4281-b2b5-7f886b257407:map[443/TCP:[{HostIP:0.0.0.0 HostPort:443}] 80/TCP:[{HostIP:0.0.0.0 HostPort:80}]]]
2024/07/09 13:23:58 [DEBUG]   forwarding to wsl-proxy to add port mapping: {Remove:false Ports:map[443/TCP:[{HostIP:0.0.0.0 HostPort:443}] 80/TCP:[{HostIP:0.0.0.0 HostPort:80}]] ConnectAddrs:[]}
2024/07/09 13:23:58 [DEBUG]   kubernetes service: port mapping added kube-system/traefik:map[80:TCP 443:TCP]
2024/07/09 13:24:02 [DEBUG]   received an event: {Status: start ContainerID: af4d41e0b147205deccec869c18530f52b832eae204ce56c26f623e96f170839 Ports: map[]}
2024/07/09 13:24:03 [DEBUG]   received an event: {Status: start ContainerID: 07b46af379ba11a7b8c511daf53115504c1de7fc057a23aaeaa6a7ffb422ef3e Ports: map[]}
2024/07/09 13:24:03 [DEBUG]   received an event: {Status: start ContainerID: bedda313f9ec858ae62a34fb8c44b0065aa5591ac57f7ebe897971e852568b12 Ports: map[]}
2024/07/09 13:24:03 [DEBUG]   received an event: {Status: start ContainerID: 60be1572bfae16a716715592bb875fcca6e932233faca44dca4a9b0c3bc67899 Ports: map[]}
2024/07/09 13:24:03 [DEBUG]   received an event: {Status: start ContainerID: 7f713d2db18c3f90e3c081ee8e22c62e3f8e1ab45932276d75b4d4ae487c3d19 Ports: map[]}
2024/07/09 13:24:03 [DEBUG]   received an event: {Status: start ContainerID: 3aa1a29f397e9d9c1bbc5fbb917e169674925791fce6304725058ae26d99f5c7 Ports: map[]}
2024/07/09 13:24:04 [DEBUG]   received an event: {Status: start ContainerID: 2b9f06d121da96bf2a879eb5adfc800702970f20cfe037b6db878902d665a6b4 Ports: map[]}
2024/07/09 13:24:04 [DEBUG]   received an event: {Status: start ContainerID: f02fc6a36e1bc2b5e4c773ff83fb109c1fa5d9ba712d8a8afcbd488dae65de7f Ports: map[]}
2024/07/09 13:24:04 [DEBUG]   received an event: {Status: start ContainerID: e8840e826d88d4b5017c80fdd0063af48c92419a2f944281ecb7991ddfe6a09d Ports: map[]}
2024/07/09 13:24:04 [DEBUG]   received an event: {Status: start ContainerID: bc7ab8848e66ee71efb42a1ec986c22b02e836c6a02b94a1005a29cd50e01803 Ports: map[]}
2024/07/09 13:24:04 [DEBUG]   received an event: {Status: start ContainerID: ecbc5648b5580f2900c63a00add6243c04ed7f3a5aedf1acf7bf81d8224e4ac3 Ports: map[]}
2024/07/09 13:26:53 [DEBUG]   received [terminated] signal
2024/07/09 13:26:53 [ERROR]   context cancellation: context canceled
2024/07/09 13:26:53 [DEBUG]   calling /services/forwarder/unexpose API for the following port binding: {HostIP:0.0.0.0 HostPort:443}
2024/07/09 13:26:53 [DEBUG]   sending a HTTP POST to /services/forwarder/unexpose API with expose request: &{0.0.0.0:443 }
2024/07/09 13:26:53 [DEBUG]   kubernetes watcher: context closed [error=context canceled]
2024/07/09 13:26:53 [DEBUG]   calling /services/forwarder/unexpose API for the following port binding: {HostIP:0.0.0.0 HostPort:80}
2024/07/09 13:26:53 [DEBUG]   sending a HTTP POST to /services/forwarder/unexpose API with expose request: &{0.0.0.0:80 }
2024/07/09 13:26:53 [DEBUG]   forwarding to wsl-proxy to remove port mapping: {Remove:true Ports:map[443/TCP:[{HostIP:0.0.0.0 HostPort:443}] 80/TCP:[{HostIP:0.0.0.0 HostPort:80}]] ConnectAddrs:[]}
2024/07/09 13:26:53 [DEBUG]   removing the following container [fbd8161e-709b-4281-b2b5-7f886b257407] port binding: map[443/TCP:[{HostIP:0.0.0.0 HostPort:443}] 80/TCP:[{HostIP:0.0.0.0 HostPort:80}]]
2024/07/09 13:26:53 [FATAL]   error watching services: context canceled
2024/07/09 13:27:18 [INFO]    Starting Rancher Desktop Agent in [AdminInstall=true] mode
2024/07/09 13:27:23 [DEBUG]   checking if container engine API is running at /var/run/docker.sock
2024/07/09 16:43:08 [DEBUG]   received [terminated] signal
2024/07/09 16:43:08 [ERROR]   context cancellation: context canceled
2024/07/09 16:43:08 [INFO]    Rancher Desktop Agent Shutting Down
2024/07/09 16:43:33 [INFO]    Starting Rancher Desktop Agent in [AdminInstall=true] mode
2024/07/09 16:43:38 [DEBUG]   checking if container engine API is running at /var/run/docker.sock
2024/07/09 17:04:21 [DEBUG]   received [terminated] signal
2024/07/09 17:04:21 [ERROR]   context cancellation: context canceled
2024/07/09 17:04:21 [INFO]    Rancher Desktop Agent Shutting Down
2024/07/09 17:04:49 [INFO]    Starting Rancher Desktop Agent in [AdminInstall=true] mode
2024/07/09 17:04:54 [DEBUG]   checking if container engine API is running at /var/run/docker.sock
2024/07/09 17:10:30 [DEBUG]   received [terminated] signal
2024/07/09 17:10:30 [ERROR]   context cancellation: context canceled
2024/07/09 17:10:30 [INFO]    Rancher Desktop Agent Shutting Down
2024/07/09 17:10:55 [INFO]    Starting Rancher Desktop Agent in [AdminInstall=true] mode
2024/07/09 17:11:00 [DEBUG]   checking if container engine API is running at /var/run/docker.sock

wsl-helper.log (again, the access-denied errors repeat forever; the long German message below translates to "A connection attempt failed because the connected party did not properly respond after a period of time, or the established connection failed because the connected host has failed to respond"):

Error: could not detect WSL2 VM: could not find WSL2 VM ID: could not dial VM 48B7EBAE-DA49-4B92-94EE-31F889238512: could not dial Hyper-V socket: connect(48b7ebae-da49-4b92-94ee-31f889238512:016a6eb7-facb-11e6-bd58-64006a7986d3) failed: Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat.
Error: could not detect WSL2 VM: could not find WSL2 VM ID: could not dial VM 48B7EBAE-DA49-4B92-94EE-31F889238512: could not dial Hyper-V socket: connect(48b7ebae-da49-4b92-94ee-31f889238512:016a6eb7-facb-11e6-bd58-64006a7986d3) failed: Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat.
Error: could not detect WSL2 VM: could not find WSL2 VM ID: could not dial VM 48B7EBAE-DA49-4B92-94EE-31F889238512: could not dial Hyper-V socket: connect(48b7ebae-da49-4b92-94ee-31f889238512:016a6eb7-facb-11e6-bd58-64006a7986d3) failed: Ein Verbindungsversuch ist fehlgeschlagen, da die Gegenstelle nach einer bestimmten Zeitspanne nicht richtig reagiert hat, oder die hergestellte Verbindung war fehlerhaft, da der verbundene Host nicht reagiert hat.
time="2024-07-09T13:24:00+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512
Error: could not listen on npipe:////./pipe/docker_engine: open //./pipe/docker_engine: Zugriff verweigert
time="2024-07-09T13:24:03+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512
Error: could not listen on npipe:////./pipe/docker_engine: open //./pipe/docker_engine: Zugriff verweigert
time="2024-07-09T13:24:06+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512

docker.log:

time="2024-07-09T13:27:24.010384700+02:00" level=info msg="Removing stale sandbox bbb070c05b637400e69c47c7493ac679171dd51a432046ee7b0fc8a35eba8cc8 (af4d41e0b147205deccec869c18530f52b832eae204ce56c26f623e96f170839)"
time="2024-07-09T13:27:24.017123100+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 8ddc0faa6f44fdb65a7ae180d0c4f858d8a2c13ee4bcc924c411c19d705dda7a], retrying...."
time="2024-07-09T13:27:24.035288500+02:00" level=info msg="Removing stale sandbox f7093513f19efc77b0b8dfc4404e4a3744df11aeae0e1a5270bb56324bb46981 (7f713d2db18c3f90e3c081ee8e22c62e3f8e1ab45932276d75b4d4ae487c3d19)"
time="2024-07-09T13:27:24.041503600+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 2b486cc6b2357221177c9f27402ed1f490101490f02db1179918d0e5126ade9d], retrying...."
time="2024-07-09T13:27:24.065178900+02:00" level=info msg="Removing stale sandbox 59f9d6d4996e9cc687db09aa8e5f7294220408a973b3066127926c263436f79f (3aa1a29f397e9d9c1bbc5fbb917e169674925791fce6304725058ae26d99f5c7)"
time="2024-07-09T13:27:24.071938600+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 3bbee5ca84600c68c90c09837728e86fa2f0dfb3a817b187a892feb90dd4760f], retrying...."
time="2024-07-09T13:27:24.090617600+02:00" level=info msg="Removing stale sandbox 805b38efbafcd2a22c36990813ea2ab7289eab07727dd9662d5970205da03feb (60be1572bfae16a716715592bb875fcca6e932233faca44dca4a9b0c3bc67899)"
time="2024-07-09T13:27:24.097089900+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 1659398e1e19b0fa880fe00a57d55d3e3055cb7a5a77a0c41b88523bda980e83], retrying...."
time="2024-07-09T13:27:24.116382800+02:00" level=info msg="Removing stale sandbox a6b7063afdb75768c9356be711d3743f3ff7d74e0d7befe195648711270eea62 (07b46af379ba11a7b8c511daf53115504c1de7fc057a23aaeaa6a7ffb422ef3e)"
time="2024-07-09T13:27:24.123336400+02:00" level=warning msg="Error (Unable to complete atomic operation, key modified) deleting object [endpoint 8a6fad1cd747d3e0e56a8bc5437421ebfde9040f4e10cd930af85d49152398c0 4f1d4e1884e1e47140952019e0579bff90b02e646873278715921c0bcf1a90a6], retrying...."
time="2024-07-09T13:27:24.175637900+02:00" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
time="2024-07-09T13:27:24.212149300+02:00" level=info msg="Loading containers: done."
time="2024-07-09T13:27:24.232408700+02:00" level=warning msg="WARNING: No blkio throttle.read_bps_device support"
time="2024-07-09T13:27:24.233105600+02:00" level=warning msg="WARNING: No blkio throttle.write_bps_device support"
time="2024-07-09T13:27:24.233627800+02:00" level=warning msg="WARNING: No blkio throttle.read_iops_device support"
time="2024-07-09T13:27:24.234107500+02:00" level=warning msg="WARNING: No blkio throttle.write_iops_device support"
time="2024-07-09T13:27:24.235068200+02:00" level=info msg="Docker daemon" commit=e63daec8672d77ac0b2b5c262ef525c7cf17fd20 containerd-snapshotter=false storage-driver=overlay2 version=25.0.5
time="2024-07-09T13:27:24.235990900+02:00" level=info msg="Daemon has completed initialization"
time="2024-07-09T13:27:24.255774500+02:00" level=info msg="API listen on /var/run/docker.sock"
time="2024-07-09T13:27:24.255777500+02:00" level=info msg="API listen on /mnt/wsl/rancher-desktop/run/docker.sock"
error accepting client connection: bad file descriptor
error accepting client connection: invalid argument
error accepting client connection: invalid argument
[repeated 1000 times]
error accepting client connection: bad file descriptor
[repeated 1000 times]
error accepting client connection: bad file descriptor
error accepting client connection: bad file descriptor
could not connect to docker: dial unix /mnt/wsl/rancher-desktop/run/docker.sock: connect: connection refused
time="2024-07-09T16:43:37.027830100+02:00" level=info msg="Starting up"

(Hmm, I may have duplicated part of the docker log here; my scrolling got messed up.)

wsl.log:


2024-07-09T15:04:23.489Z: Error trying to start wsl-proxy in default namespace: c [Error]: wsl.exe exited with code 1
    at ChildProcess.<anonymous> (C:\Program Files\Rancher Desktop\resources\app.asar\dist\app\background.js:2:156858)
    at ChildProcess.emit (node:events:513:28)
    at Process.onexit (node:internal/child_process:291:12) {
  command: [
    'wsl.exe',
    '--distribution',
    'rancher-desktop',
    '--exec',
    '/usr/local/bin/wsl-proxy',
    '-debug',
    'false'
  ],
  code: 1,
  [Symbol(child-process.command)]: 'wsl.exe --distribution rancher-desktop --exec /usr/local/bin/wsl-proxy -debug false'
}
2024-07-09T15:04:23.776Z: Registered distributions: Ubuntu,rancher-desktop-data,rancher-desktop
2024-07-09T15:04:24.062Z: Registered distributions: Ubuntu,rancher-desktop-data,rancher-desktop
2024-07-09T15:04:24.351Z: Registered distributions: Ubuntu,rancher-desktop-data,rancher-desktop
2024-07-09T15:04:24.351Z: data distro already registered
2024-07-09T15:04:30.935Z: Installing C:\Program Files\Rancher Desktop\resources\resources\linux\internal\trivy as /mnt/c/Program Files/Rancher Desktop/resources/resources/linux/internal/trivy into /usr/local/bin/trivy ...
2024-07-09T15:04:32.026Z: Installing C:\Program Files\Rancher Desktop\resources\resources\linux\internal\rancher-desktop-guestagent as /mnt/c/Program Files/Rancher Desktop/resources/resources/linux/internal/rancher-desktop-guestagent into /usr/local/bin//rancher-desktop-guestagent ...
2024-07-09T15:10:24.163Z: Registered distributions: Ubuntu,rancher-desktop
2024-07-09T15:10:31.238Z: /sbin/init exited gracefully.
2024-07-09T15:10:31.681Z: Registered distributions: Ubuntu,rancher-desktop
2024-07-09T15:10:31.802Z: WSL: executing: /usr/local/bin/wsl-proxy -debug false: Error: wsl.exe exited with code 1

2024-07-09T15:10:31.802Z: Error trying to start wsl-proxy in default namespace: c [Error]: wsl.exe exited with code 1
    at ChildProcess.<anonymous> (C:\Program Files\Rancher Desktop\resources\app.asar\dist\app\background.js:2:156858)
    at ChildProcess.emit (node:events:513:28)
    at Process.onexit (node:internal/child_process:291:12) {
  command: [
    'wsl.exe',
    '--distribution',
    'rancher-desktop',
    '--exec',
    '/usr/local/bin/wsl-proxy',
    '-debug',
    'false'
  ],
  code: 1,
  [Symbol(child-process.command)]: 'wsl.exe --distribution rancher-desktop --exec /usr/local/bin/wsl-proxy -debug false'
}
2024-07-09T15:10:32.015Z: Registered distributions: Ubuntu,rancher-desktop-data,rancher-desktop
2024-07-09T15:10:32.299Z: Registered distributions: Ubuntu,rancher-desktop-data,rancher-desktop
2024-07-09T15:10:32.638Z: Registered distributions: Ubuntu,rancher-desktop-data,rancher-desktop
2024-07-09T15:10:32.638Z: data distro already registered
2024-07-09T15:10:38.222Z: Installing C:\Program Files\Rancher Desktop\resources\resources\linux\internal\trivy as /mnt/c/Program Files/Rancher Desktop/resources/resources/linux/internal/trivy into /usr/local/bin/trivy ...
2024-07-09T15:10:38.718Z: Installing C:\Program Files\Rancher Desktop\resources\resources\linux\internal\rancher-desktop-guestagent as /mnt/c/Program Files/Rancher Desktop/resources/resources/linux/internal/rancher-desktop-guestagent into /usr/local/bin//rancher-desktop-guestagent ...

Edit: I just noticed some of the errors are about docker.exe, so those likely occur when WSL support is disabled and Rancher Desktop (apparently also unsuccessfully) tries to use the Windows docker.

Expected Behavior

Installation works? Or at least I can revert my decision to use Kubernetes, or somehow make this work?

Additional Information

Windows host PowerShell:

>  wsl -l -v
  NAME                    STATE           VERSION
* Ubuntu                  Running         2
  rancher-desktop-data    Stopped         2
  rancher-desktop         Running         2
  Ubuntu-20.04            Stopped         1
> wsl.exe --status
Default Distribution: Ubuntu
Default Version: 2

The Windows Subsystem for Linux was last updated on 13.06.2022
Automatic updates are enabled.

Kernel version: 5.10.102.1
> wsl cat /proc/version
Linux version 5.10.102.1-microsoft-standard-WSL2 (oe-user@oe-host) (x86_64-msft-linux-gcc (GCC) 9.3.0, GNU ld (GNU Binutils) 2.34.0.20200220) #1 SMP Wed Mar 2 00:30:59 UTC 2022
>  (get-item C:\windows\system32\wsl.exe).VersionInfo.FileVersion
10.0.19041.3636 (WinBuild.160101.0800)

Inside WSL (Ubuntu 20.04.6 LTS (Focal Fossa)):

$ uname -a
Linux **** 5.10.102.1-microsoft-standard-WSL2 #1 SMP Wed Mar 2 00:30:59 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux

As WSL is not installed from the Windows Store, it is hard to determine its version.

Rancher Desktop Version

1.14.1

Rancher Desktop K8s Version

N/A

Which container engine are you using?

dockerd

What operating system are you using?

Windows

Operating System / Build Version

Windows 10 Enterprise Version 22H2 build 19045.4529

What CPU architecture are you using?

x64

Linux only: what package format did you use to install Rancher Desktop?

None

Windows User Only

Yes, some VPN, but it is likely unrelated, as this is not a networking issue; things fail long before networking comes into play.

Rancher Desktop was also installed via Chocolatey.

@rklec rklec added the kind/bug Something isn't working label Jul 9, 2024
@rklec rklec changed the title Kubernets unable to start due to rdx-proxy.tar error Kubernetes unable to start due to rdx-proxy.tar error Jul 9, 2024
@mook-as
Contributor

mook-as commented Jul 9, 2024

Error: could not listen on npipe:////./pipe/docker_engine: open //./pipe/docker_engine: Access is denied
time="2024-07-09T15:16:59+02:00" level=info msg="Got WSL2 VM" guid=48b7ebae-da49-4b92-94ee-31f889238512

Hmm, that's probably the issue: we fail to set up the (Windows-side) docker socket listener, so running docker.exe load … fails because it couldn't talk to the server.

Given your woes of installing WSL and whatnot, I guess your IT people might have done some policy things to make this not work? You might be able to get something working by switching to containerd; that might be an acceptable solution depending on what you want to do with Rancher Desktop.

Either way, it would be useful to see if you can figure out why access was denied. If we have a good way of knowing that this is going to happen, we might be able to listen on a TCP port instead? (But that might take a while to implement.)
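
For reference, here is a quick way to check from code whether another daemon is already serving the pipe: a minimal sketch in Node/TypeScript (illustrative names, not what Rancher Desktop actually does). If a client connection is accepted, some other process is listening, which would explain the access-denied error when trying to create our own listener.

import * as net from "net";

const DOCKER_PIPE = "\\\\.\\pipe\\docker_engine";

// Attempt a client connection; success means another process is already
// listening on the pipe, so creating our own listener would fail.
function pipeInUse(path: string): Promise<boolean> {
  return new Promise((resolve) => {
    const socket = net.connect(path, () => {
      socket.destroy();
      resolve(true);
    });
    socket.once("error", () => resolve(false));
  });
}

pipeInUse(DOCKER_PIPE).then((inUse) => {
  if (inUse) {
    console.error(`${DOCKER_PIPE} is already served by another process`);
  }
});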

@rklec
Author

rklec commented Jul 9, 2024

You might be able to get something working by switching to containerd;

Surprisingly, this has worked. 😆

Either way, it would be useful to see if you can figure out why access was denied.

If I understood what is going on, maybe I could help. Perhaps it is related to the security products I have mentioned.
So it tries to access the pipe npipe:////./pipe/docker_engine to communicate? I have no idea why that fails.

@jandubois
Member

  • I do have docker already installed in the WSL manually before and I was able to start it with dockerd. Unfortunately, my tries making it a real service or so always failed so far (and it was not important).

I wonder if your experiments set up some kind of service that already creates the named pipe, making it unavailable to Rancher Desktop.

When Rancher Desktop is not running, could you run the following command in PowerShell:

[System.IO.Directory]::GetFiles("\\.\\pipe\\") | findstr docker

If it finds the pipe, it means there is a conflict with something else already running on your system.
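
The same enumeration also works from Node, since fs can list the named-pipe root on Windows; a sketch, assuming nothing beyond the standard fs module:

import * as fs from "fs";

// Listing "\\.\pipe\" enumerates every currently open named pipe on Windows.
const pipes = fs.readdirSync("\\\\.\\pipe\\");
const dockerPipes = pipes.filter((name) => name.toLowerCase().includes("docker"));

if (dockerPipes.length > 0) {
  console.warn(`docker named pipe(s) already present: ${dockerPipes.join(", ")}`);
}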

@rklec
Author

rklec commented Jul 9, 2024

Well, I use containerd now, but okay: I exited Rancher Desktop, and indeed it finds something:

> [System.IO.Directory]::GetFiles("\\.\\pipe\\") | findstr docker
\\.\\pipe\\docker_engine

That said, I would argue against the idea that this is something I set up myself:

  • There is a Windows docker.exe already installed, also as a Windows service; that may be it, but it is not related to WSL?
  • Otherwise, my statement above was more about setting it up as a service inside WSL, i.e. via systemd (which does not work). Nothing fancy like using pipes or whatever.

@jandubois
Member

  • There is a Windows docker.exe already installed, also as a Windows service; that may be it, but it is not related to WSL?

Do you have "Docker Desktop" installed on the machine? Maybe it is auto-starting or something?

Anyway, I think this is a local conflict on your machine; if you manage to resolve it, Rancher Desktop will likely start working even with moby instead of containerd.

@rklec
Author

rklec commented Jul 9, 2024

Docker Desktop is not installed. But the docker service on Windows is started:
(screenshot: the Windows Services list showing the docker service running)

And indeed, if I stop that service, it works: [System.IO.Directory]::GetFiles("\\.\\pipe\\") | findstr docker finds nothing anymore.
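
For anyone hitting the same conflict: stopping and disabling that service can also be scripted. A hedged sketch driving sc.exe from Node; it assumes the service is literally named "docker" (as in the screenshot) and an elevated shell:

import { execFileSync } from "child_process";

// Stop the conflicting Windows "docker" service and switch it to manual
// start so it no longer claims \\.\pipe\docker_engine on boot.
// Both calls require an elevated (administrator) shell.
execFileSync("sc.exe", ["stop", "docker"], { stdio: "inherit" });
execFileSync("sc.exe", ["config", "docker", "start=", "demand"], { stdio: "inherit" });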

@rklec
Author

rklec commented Jul 9, 2024

Ah, and indeed, Rancher Desktop now also seems to work even if I switch back to dockerd.

@rklec rklec closed this as completed Jul 9, 2024
@rklec
Author

rklec commented Jul 10, 2024

The only suggestion I would have is: couldn't you catch this error and show a more meaningful error message? From the error itself, one would not come up with "Ah yeah, this is a conflicting docker installation!". Maybe you could hint at that potential cause when the error happens, so one does not need to search GitHub, especially as I guess this could be a quite common error, isn't it?

@jandubois
Member

couldn't you catch this error and show a more meaningful error message?

Yes, I think we should detect if the named pipe already exists. I'll edit the issue subject line to reflect this.

I guess this could be a quite common error, isn't it?

I'm not sure about that; I don't remember seeing other GitHub issues or Slack threads about this. Normally this can only happen when people run Docker Desktop at the same time. It is quite rare that people try to set up the docker service on Windows without it.
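
To sketch what such a startup check could look like (hypothetical names, not Rancher Desktop's actual code; pipeInUse is the probe from the earlier comment):

// Hypothetical startup diagnostic: fail fast with an actionable message
// instead of the bare "access denied" from the pipe listener.
async function assertDockerPipeFree(): Promise<void> {
  if (await pipeInUse("\\\\.\\pipe\\docker_engine")) {
    throw new Error(
      "\\\\.\\pipe\\docker_engine is already in use. Another Docker " +
      "installation (Docker Desktop, or a Windows docker service) " +
      "appears to be running; stop it before starting Rancher Desktop.",
    );
  }
}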

@jandubois jandubois reopened this Jul 10, 2024
@jandubois jandubois changed the title Kubernetes unable to start due to rdx-proxy.tar error Detect when \.\pipe\docker_engine already exists during startup Jul 10, 2024
@jandubois jandubois added kind/enhancement New feature or request platform/windows runtime/moby and removed kind/bug Something isn't working labels Jul 10, 2024
@jandubois jandubois added triage/next-candidate Discuss if it should be moved to "Next" milestone area/diagnostics labels Sep 27, 2024