docker says You should always set the Memory limit when using Memoryswap limit, see usage. #10935

Closed
ojab opened this issue Mar 27, 2021 · 11 comments
Labels: co/docker-driver, kind/support, os/linux, priority/awaiting-more-evidence

Comments

ojab commented Mar 27, 2021

Steps to reproduce the issue:

  1. minikube start --driver=docker

Full output of failed command:
(without --alsologtostderr because it's 76KB of likely useless data)

$ minikube start --driver=docker
* minikube v1.18.1 on "Linux From Scratch" "SVN"
* Using the docker driver based on existing profile
* Starting control plane node minikube in cluster minikube
* docker "minikube" container is missing, will recreate.
* Creating docker container (CPUs=2, Memory=7500MB) ...
! StartHost failed, but will try again: recreate: creating host: create: creating: create kic node: create container: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --memory-swap=7500mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.18@sha256:ddd0c02d289e3a6fb4bba9a94435840666f4eb81484ff3e707b69c1c484aa45e: exit status 125
stdout:

stderr:
docker: Error response from daemon: You should always set the Memory limit when using Memoryswap limit, see usage.
See 'docker run --help'.

* docker "minikube" container is missing, will recreate.

The Docker docs say that --memory must also be passed if we pass --memory-swap.
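
The daemon enforces this pairing at container create time; conceptually the check looks like the sketch below (a hypothetical validateMemoryFlags helper for illustration, not Docker's actual source):

package main

import (
        "errors"
        "fmt"
)

// validateMemoryFlags sketches the pairing rule from the Docker docs:
// a swap limit is only meaningful relative to a memory limit, so a
// swap limit without a memory limit is rejected.
func validateMemoryFlags(memory, memorySwap int64) error {
        if memorySwap > 0 && memory == 0 {
                return errors.New("You should always set the Memory limit when using Memoryswap limit, see usage")
        }
        return nil
}

func main() {
        // Mirrors the failing docker run above: --memory-swap set, --memory unset.
        fmt.Println(validateMemoryFlags(0, 7500<<20))
        // Both limits set: accepted.
        fmt.Println(validateMemoryFlags(7500<<20, 7500<<20))
}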

$ docker info
Client:
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Build with BuildKit (Docker Inc., v0.5.1)

Server:
 Containers: 1
  Running: 0
  Paused: 0
  Stopped: 1
 Images: 52
 Server Version: 20.10.5
 Storage Driver: overlay2
  Backing Filesystem: extfs
  Supports d_type: true
  Native Overlay Diff: false
 Logging Driver: json-file
 Cgroup Driver: cgroupfs
 Cgroup Version: 2
 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
 Swarm: inactive
 Runtimes: io.containerd.runc.v2 io.containerd.runtime.v1.linux runc
 Default Runtime: runc
 Init Binary: docker-init
 containerd version: 269548fa27e0089a8b8278fc4fc781d7f65a939b
 runc version: 12644e614e25b05da6fd08a38ffa0cfe1903fdec
 init version: de40ad0
 Security Options:
  seccomp
   Profile: default
  cgroupns
 Kernel Version: 5.11.2-ojab
 OSType: linux
 Architecture: x86_64
 CPUs: 8
 Total Memory: 29.4GiB
 Name: ojab.ru
 ID: JFI3:KW2F:YLS7:CRFD:LCDN:QDBK:CU5S:5SXH:RHUO:2RTS:7UHE:PLNY
 Docker Root Dir: /var/lib/docker
 Debug Mode: false
 Registry: https://index.docker.io/v1/
 Labels:
 Experimental: false
 Insecure Registries:
  127.0.0.0/8
 Live Restore Enabled: false
 Product License: Community Engine

WARNING: API is accessible on http://localhost:2375 without encryption.
         Access to the remote API is equivalent to root access on the host. Refer
         to the 'Docker daemon attack surface' section in the documentation for
         more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: Support for cgroup v2 is experimental

afbjorklund commented Mar 27, 2021

I think we can get into this state when it says we have support for memory-swap but not for memory:

        memcgSwap := hasMemorySwapCgroup()
        memcg := HasMemoryCgroup()

It's a weird state, so we should check for it and avoid using swap in that case, but it's probably a cgroups v2 issue.

                if memcg {
                        runArgs = append(runArgs, fmt.Sprintf("--memory=%s", p.Memory))
                }
                if memcgSwap {
                        // Disable swap by setting the value to match
                        runArgs = append(runArgs, fmt.Sprintf("--memory-swap=%s", p.Memory))
                }

Apparently the bug was introduced in 7b0bf57

WARNING: Support for cgroup v2 is experimental
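
A minimal sketch of the guarded construction, assuming the fix is simply to gate the swap flag on memory support (an illustrative buildMemoryArgs helper, not minikube's actual code):

package main

import "fmt"

// buildMemoryArgs only emits --memory-swap when --memory is also
// emitted, so the bare --memory-swap that trips Docker's check can
// never be produced.
func buildMemoryArgs(memcg, memcgSwap bool, memory string) []string {
        var args []string
        if memcg {
                args = append(args, fmt.Sprintf("--memory=%s", memory))
                if memcgSwap {
                        // Disable swap by setting the value to match.
                        args = append(args, fmt.Sprintf("--memory-swap=%s", memory))
                }
        }
        return args
}

func main() {
        // The reported state: swap probe true, memory probe false.
        fmt.Println(buildMemoryArgs(false, true, "7500mb")) // [] instead of a bare --memory-swap
}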

afbjorklund added the os/linux, co/docker-driver, and priority/awaiting-more-evidence labels Mar 27, 2021

ojab commented Mar 27, 2021

Yeah, there is W0327 13:37:34.424561 35031 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted. in the full log, and I don't see /sys/fs/cgroup/memory/memsw.limit_in_bytes on my system.


afbjorklund commented Mar 27, 2021

What cgroupfs mounts do you have currently? mount | grep cgroup or something.

The regular (v1) setup looks like:

tmpfs on /sys/fs/cgroup type tmpfs (ro,nosuid,nodev,noexec,mode=755)
cgroup2 on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,nsdelegate)
cgroup on /sys/fs/cgroup/systemd type cgroup (rw,nosuid,nodev,noexec,relatime,xattr,name=systemd)
cgroup on /sys/fs/cgroup/rdma type cgroup (rw,nosuid,nodev,noexec,relatime,rdma)
cgroup on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,blkio)
cgroup on /sys/fs/cgroup/hugetlb type cgroup (rw,nosuid,nodev,noexec,relatime,hugetlb)
cgroup on /sys/fs/cgroup/net_cls,net_prio type cgroup (rw,nosuid,nodev,noexec,relatime,net_cls,net_prio)
cgroup on /sys/fs/cgroup/cpu,cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,cpu,cpuacct)
cgroup on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,devices)
cgroup on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,memory)
cgroup on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,cpuset)
cgroup on /sys/fs/cgroup/perf_event type cgroup (rw,nosuid,nodev,noexec,relatime,perf_event)
cgroup on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,pids)
cgroup on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,freezer)

A pure cgroups v2 setup would be cleaner.

https://www.kernel.org/doc/Documentation/cgroup-v2.txt

Unlike v1, cgroup v2 has only single hierarchy.  The cgroup v2
hierarchy can be mounted with the following mount command::

  # mount -t cgroup2 none $MOUNT_POINT
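
For detection, one option is to check the filesystem magic of /sys/fs/cgroup rather than grepping mounts; a sketch using golang.org/x/sys/unix (an assumption on my part, not what minikube does today):

package main

import (
        "fmt"

        "golang.org/x/sys/unix"
)

// isCgroupV2 reports whether /sys/fs/cgroup itself is a cgroup2 mount,
// i.e. the pure v2 layout described above, by comparing the
// filesystem magic number.
func isCgroupV2() (bool, error) {
        var st unix.Statfs_t
        if err := unix.Statfs("/sys/fs/cgroup", &st); err != nil {
                return false, err
        }
        return st.Type == unix.CGROUP2_SUPER_MAGIC, nil
}

func main() {
        v2, err := isCgroupV2()
        fmt.Println(v2, err)
}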


ojab commented Mar 27, 2021

On my desktop system:

cgroup on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime)

On my laptop (Fedora 34) system:

cgroup2 on /sys/fs/cgroup type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate,memory_recursiveprot)

cgroup-v1 is not mounted.

Looks like memory.limit_in_bytes is a cgroup v1 remnant and we should use memory.high.
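
On a pure v2 host the per-controller mount points are gone, so a memory probe has to read the root controller list rather than stat v1 files; a sketch of that idea (a hypothetical hasMemoryControllerV2 helper, not minikube's current check):

package main

import (
        "fmt"
        "os"
        "strings"
)

// hasMemoryControllerV2 checks whether the memory controller is
// enabled on a cgroup v2 host by reading the root controller list,
// instead of looking for v1 files like memory.limit_in_bytes.
func hasMemoryControllerV2() bool {
        data, err := os.ReadFile("/sys/fs/cgroup/cgroup.controllers")
        if err != nil {
                return false
        }
        for _, c := range strings.Fields(string(data)) {
                if c == "memory" {
                        return true
                }
        }
        return false
}

func main() {
        fmt.Println(hasMemoryControllerV2())
}

A kernel with memcg compiled out simply never lists memory there, which also covers that case.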


afbjorklund commented Mar 27, 2021

Yeah, it looks like the cgroups checks are broken, and memcgSwap should depend on memcg, like Docker says.

Basically, the only useful thing this code does is mimic Docker's output when running on an Ubuntu kernel:

WARNING: No swap limit support

https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities


afbjorklund commented Mar 27, 2021

Compare https://github.com/moby/moby/tree/master/pkg/sysinfo

v1

// applyMemoryCgroupInfo adds the memory cgroup controller information to the info.
func applyMemoryCgroupInfo(info *SysInfo, cgMounts map[string]string) []string {
        var warnings []string
        mountPoint, ok := cgMounts["memory"]
        if !ok {
                warnings = append(warnings, "Your kernel does not support cgroup memory limit")
                return warnings
        }
        info.MemoryLimit = ok

        info.SwapLimit = cgroupEnabled(mountPoint, "memory.memsw.limit_in_bytes")
        if !info.SwapLimit {
                warnings = append(warnings, "Your kernel does not support swap memory limit")
        }

v2

func applyMemoryCgroupInfoV2(info *SysInfo, controllers map[string]struct{}, _ string) []string {
        var warnings []string
        if _, ok := controllers["memory"]; !ok {
                warnings = append(warnings, "Unable to find memory controller")
                return warnings
        }

        info.MemoryLimit = true
        info.SwapLimit = true


ojab commented Mar 27, 2021

So, is something like #10936 enough, or is a more thoughtful rework needed?


afbjorklund commented Mar 27, 2021

It probably needs the same for docker (not only podman), and I'm not sure we need to check any files at all for v2.

But something like that. It would be nice to have some tests for cgroups v2 during minikube regression testing.


ojab commented Mar 27, 2021

memcg could be compiled out of the kernel, and then cgroup v2 will be mounted without the memory controller.

Unfortunately I don't have time right now to do a proper PR with tests and such. Either I can satisfy the bots to get it into a mergeable state as is (possibly with minor changes based on review comments), or I can drop it and wait for the proper fix; please advise what should be done here.

ojab added a commit to ojab/minikube that referenced this issue Mar 27, 2021
afbjorklund commented

There is no rush; this only affects v2. We can do a proper fix and see what went wrong in #10371.

sharifelgamal added the kind/support label Mar 30, 2021

medyagh commented May 5, 2021

@ojab I believe we fixed this in v1.19.0. Do you mind trying the latest release to see if you still have this issue? If yes, please reopen this.
