docker driver: add support for btrfs #7923
Comments
Any idea what that's about? I've never seen this error before. Any chance that you are using btrfs?
To help debug, do you mind sharing the result of:
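Something along these lines should show the storage driver (assuming docker info is what's being asked for here):

```shell
# print which storage driver the local docker daemon is using
docker info --format '{{.Driver}}'
```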
I suspect that we have an issue with btrfs here:
Yep - I am using btrfs.
For reference, I checked the btrfs mounts directly on my system.
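They can be listed with something like:

```shell
# show all btrfs filesystems and where they are mounted
findmnt -t btrfs
```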
@solarnz I noticed you are using Arch Linux.
@medyagh sure,
It looks like it has been loaded into the kernel. I also tried forcing docker to use the overlay2 storage driver, including removing the
@solarnz could you please paste the output of that command?
You would need to change the docker daemon settings on your system to use overlay2.
I have modified my docker daemon.json file to include the setting to use overlay2.
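For reference, the relevant setting (the standard Docker option; the file typically lives at /etc/docker/daemon.json) looks like:

```json
{
  "storage-driver": "overlay2"
}
```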
Minikube still couldn't start.
I grabbed the output as requested, and it does appear that it is using the overlay2 storage driver.
Hey @solarnz, I noticed your local docker is running. Could you try the command below, which will force docker in minikube to use systemd? (Sometimes conflicting cgroup managers can cause issues.)
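Presumably the force-systemd flag, something like:

```shell
# recreate the cluster with systemd as the cgroup manager inside minikube
minikube delete
minikube start --force-systemd=true
```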
@solarnz - can you add the output of
Hi there, I have similar issues. The relevant part of the log from the docker systemd output is this one (other errors are recovered by further attempts to start the kubelet service):
I entered the docker container's bash, and /dev/mapper/ contains only the control device, not the mapped partition from the host. Can this be a cgroup or volume-binding issue?
The issue is generated by workaround code in google/cadvisor at https://github.com/google/cadvisor/blob/366d59d3b625bd7761040ce152d5213fbf19c88a/fs/fs.go#L203 and https://github.com/google/cadvisor/blob/366d59d3b625bd7761040ce152d5213fbf19c88a/fs/fs.go#L540, which executes a stat on a /dev/mapper/xxx path that does not exist inside the docker container.
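A quick way to see the failure mode by hand (the container name "minikube" and device name "cryptroot" are examples):

```shell
# on the host, the device-mapper node exists
stat /dev/mapper/cryptroot

# inside the minikube container it does not, which is what cAdvisor trips over
docker exec minikube stat /dev/mapper/cryptroot
# stat: cannot stat '/dev/mapper/cryptroot': No such file or directory
```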
btrfs is not currently supported by minikube; we test against the overlayfs driver. I would be happy to accept PRs that add btrfs support to minikube's inner docker setup.
Using the hints from @marcominetti above, I observed that the device was missing when compared with my host.
Thanks @kppullin... The following commands worked for me:

```shell
export MISSING_MOUNT_BIND=nvme0n1p3_crypt
docker exec -ti lss /bin/bash -c "ln -s /dev/dm-0 /dev/mapper/$MISSING_MOUNT_BIND"
```

I execute them immediately after the logged task "Creating docker container (CPUs=4, Memory=16384MB) ..." has finished.
This task is available for anyone who would like to pick it up.
OS: openSUSE Leap 15.3 x86_64
I had two problems with minikube. Here's what I did to fix them:
I just wanted to say, the suggestions by @kppullin, @medyagh, and @marcoceppi worked for me. I can either link my volume:
Or run with:
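Presumably along these lines, per the earlier comments (container and device names are examples):

```shell
# option 1: recreate the missing /dev/mapper symlink inside the minikube container
docker exec minikube ln -s /dev/dm-0 /dev/mapper/nvme0n1p3_crypt

# option 2: start minikube forcing systemd as the cgroup manager
minikube start --force-systemd=true
```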
Based on the comments by @marcoceppi, it appears this would need to be fixed in cAdvisor?
@braderhart if you mean me rather than marcoceppi: yep, I think a good place to start is cAdvisor, at least to avoid the exception; they were open to receiving a PR. I don't know if the tentative/workaround code for btrfs is still there now. In any case, because our workaround is based on mounting devices into the minikube container, the real solution might be in the docker initialization code within minikube itself (creating the mapped symlinks to the devices, or better, bind-mounting them).
@marcominetti Do you have time to assist with this? I can confirm the solution you mentioned fixes the issue for me, where I have
Yes, sure, I'll try to delve into the code of minikube and cAdvisor. Can someone here review and accept an eventual PR against minikube? |
I'd be happy to review any PR that fixes this issue.
Kubernetes 1.23 will support btrfs.
It appears that Kubernetes 1.23 has been released: https://www.kubernetes.dev/resources/release/ As I run minikube v1.24.0, it seems that Kubernetes 1.22 is used. Is there a way to use Kubernetes 1.23 so I can use docker with btrfs? Or should I wait for minikube 1.25 to run with Kubernetes 1.23?
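One way, assuming the --kubernetes-version flag works here, is to pin the version explicitly:

```shell
# ask minikube for a specific Kubernetes version instead of its default
minikube start --kubernetes-version=v1.23.0
```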
I can confirm this works! (openSUSE tumbleweed, full disk encryption with cryptsetup / dm-crypt, btrfs)
❤️ |
Glad to hear @LeoniePhiline, thanks for testing!
I believe this has been fixed with k8s 1.23, so I'm going to close this. If it's not resolved, feel free to respond and I'll reopen the issue. Thanks!
Steps to reproduce the issue:

```shell
minikube start --driver=docker --v=5 --alsologtostderr
```
I'm at a loss as to how to proceed any further here; I'm not sure if this is related to my system configuration or if it's a bug in minikube.
Full output of failed command:
Optional: Full output of minikube logs command:

==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
==> describe nodes <==
E0428 17:25:06.962887 60236 logs.go:178] command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
stdout:
stderr:
The connection to the server localhost:8443 was refused - did you specify the right host or port?
output: "\n** stderr ** \nThe connection to the server localhost:8443 was refused - did you specify the right host or port?\n\n** /stderr **"
==> dmesg <==
...
==> kernel <==
07:25:06 up 4:03, 0 users, load average: 1.57, 1.39, 1.16
Linux minikube 5.6.7-arch1-1 #1 SMP PREEMPT Thu, 23 Apr 2020 09:13:56 +0000 x86_64 x86_64 x86_64 GNU/Linux
PRETTY_NAME="Ubuntu 19.10"
==> kubelet <==
-- Logs begin at Tue 2020-04-28 07:10:28 UTC, end at Tue 2020-04-28 07:25:07 UTC. --
Apr 28 07:25:03 minikube kubelet[25347]: I0428 07:25:03.360443 25347 state_mem.go:88] [cpumanager] updated default cpuset: ""
Apr 28 07:25:03 minikube kubelet[25347]: I0428 07:25:03.360460 25347 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Apr 28 07:25:03 minikube kubelet[25347]: I0428 07:25:03.360477 25347 policy_none.go:43] [cpumanager] none policy: Start
Apr 28 07:25:03 minikube kubelet[25347]: W0428 07:25:03.360515 25347 fs.go:540] stat failed on /dev/mapper/cryptroot with error: no such file or directory
Apr 28 07:25:03 minikube kubelet[25347]: F0428 07:25:03.360543 25347 kubelet.go:1383] Failed to start ContainerManager failed to get rootfs info: failed to get device for dir "/var/lib/kubelet": could not find device with major: 0, minor: 28 in cached partitions map
Apr 28 07:25:03 minikube systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
Apr 28 07:25:03 minikube systemd[1]: kubelet.service: Failed with result 'exit-code'.
...
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.312374 25556 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Apr 28 07:25:04 minikube kubelet[25556]: W0428 07:25:04.344205 25556 fs.go:206] stat failed on /dev/mapper/cryptroot with error: no such file or directory
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355387 25556 server.go:646] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355796 25556 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355815 25556 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355895 25556 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355903 25556 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355908 25556 container_manager_linux.go:306] Creating device plugin manager: true
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.355995 25556 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.356007 25556 client.go:92] Start docker client with request timeout=2m0s
Apr 28 07:25:04 minikube kubelet[25556]: W0428 07:25:04.360846 25556 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.360869 25556 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.365791 25556 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.371571 25556 docker_service.go:258] Docker Info: &{ID:JJU7:OSC4:67QH:5P6G:ZRID:BJZK:5B3A:SRU5:K4BX:YQBV:2H22:MGXF Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:11 Driver:overlay2 DriverStatus:[[Backing Filesystem btrfs] [Supports d_type true] [Native Overlay Diff false]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:36 SystemTime:2020-04-28T07:25:04.366616394Z LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.6.7-arch1-1 OperatingSystem:Ubuntu 19.10 (containerized) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc0001b2fc0 NCPU:4 MemTotal:16670576640 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:minikube Labels:[provider=docker] ExperimentalBuild:false ServerVersion:19.03.2 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster: Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:449e926990f8539fd00844b26c07e2f1e306c760 Expected:449e926990f8539fd00844b26c07e2f1e306c760} RuncCommit:{ID: Expected:} InitCommit:{ID: Expected:} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.371647 25556 docker_service.go:271] Setting cgroupDriver to cgroupfs
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378056 25556 remote_runtime.go:59] parsed scheme: ""
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378073 25556 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378104 25556 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378112 25556 clientconn.go:933] ClientConn switching balancer to "pick_first"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378163 25556 remote_image.go:50] parsed scheme: ""
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378173 25556 remote_image.go:50] scheme "" not registered, fallback to default scheme
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378184 25556 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock 0 }] }
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378192 25556 clientconn.go:933] ClientConn switching balancer to "pick_first"
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378223 25556 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Apr 28 07:25:04 minikube kubelet[25556]: I0428 07:25:04.378276 25556 kubelet.go:317] Watching apiserver
...