
docker-in-docker feature breaks with Codespaces+Kind and docker v27+ (docker's ipv6 breaking changes) #1023

Closed
jeremysf opened this issue Jun 28, 2024 · 4 comments · Fixed by #1068

@jeremysf

We have been using Visual Studio Code, GitHub Codespaces and the docker-in-docker feature with Kubernetes' kind project (https://kind.sigs.k8s.io/) for several years now.

Recently, with the release of docker version 27, things broke. When trying to use the kind command line utility to create a new kind cluster (i.e. launch the kind docker container which encapsulates a Kubernetes cluster) we get the following error:

Creating cluster "kind" ...
ERROR: failed to create cluster: failed to ensure docker network: command "docker network create -d=bridge -o com.docker.network.bridge.enable_ip_masquerade=true -o com.docker.network.driver.mtu=1500 --ipv6 --subnet fc00:f853:ccd:e793::/64 kind" failed with error: exit status 1
Command Output: Error response from daemon: Failed to Setup IP tables: Unable to enable NAT rule:  (iptables failed: ip6tables --wait -t nat -I POSTROUTING -s fc00:f853:ccd:e793::/64 ! -o br-94aea5e559a6 -j MASQUERADE: ip6tables v1.8.7 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
 (exit status 3))

I think the issue is related to this change in Docker:

https://docs.docker.com/engine/release-notes/27.0/#ipv6

I think what is needed is the ability to do what the release notes describe:

To restore the behavior of earlier releases, no ip6tables at all, set "ip6tables": false in daemon.json, or use the CLI option --ip6tables=false. Alternatively, leave ip6tables enabled, publish ports, and enable direct routing.
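For reference, the daemon.json override the release notes describe would look roughly like this (a sketch; inside the docker-in-docker container the file would live at /etc/docker/daemon.json):

```json
{
  "ip6tables": false
}
```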

Temporarily, I am able to get things working again with the following feature configuration in our devcontainer.json:

    "features": {
        "ghcr.io/devcontainers/features/common-utils:1": {
            "installZsh": true,
            "upgradePackages": false,
            "uid": "1000",
            "gid": "1000",
            "installOhMyZsh": "true",
            "nonFreePackages": "true"
        },
        "ghcr.io/devcontainers/features/docker-in-docker:2.2.1": {
            "version": "26.1.3",
            "enableNonRootDocker": true,
            "moby": false
        }
    },

Is it possible to extend the feature with an option to disable ip6tables, and/or to pass additional command-line options to the dockerd launch?

@samruddhikhandale
Member

Hi 👋

Thanks for raising this issue; this seems important to fix so that we can avoid pinning to an older Docker version.

@gauravsaini04 Can you help with the fix?

Suggestion:

  • Create a new Feature boolean option ip6tables (default: true)
  • (Preferable) Test whether we can conditionally pass --ip6tables=false when we start Docker with dockerd
    • If not, let's look at creating or modifying the daemon.json file to include the ip6tables setting
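A minimal sketch of what the first approach might look like in the Feature's startup logic. The `build_dockerd_args` helper and the `--host` argument are illustrative assumptions, not the Feature's actual code; only `--ip6tables=false` itself is the real dockerd flag from the release notes:

```shell
#!/bin/sh
# Hypothetical sketch: build the dockerd argument list from a Feature option.
# The option name (here passed as $1) is an assumption for illustration.
build_dockerd_args() {
    ip6tables="${1:-true}"
    args="--host=unix:///var/run/docker.sock"
    if [ "$ip6tables" = "false" ]; then
        # Restore the pre-v27 behavior: no ip6tables rules at all.
        args="$args --ip6tables=false"
    fi
    printf '%s\n' "$args"
}

build_dockerd_args false
```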

@gauravsaini04
Contributor

gauravsaini04 commented Aug 19, 2024

Have created a PR for it here.
I'm still working on it, as tests seem to fail for the new changes; we might need to go with the second method of adding the ip6tables key to daemon.json.

@gauravsaini04
Contributor

The second method, creating a daemon.json file at /etc/docker and adding an "ip6tables": <ip6tables_value> entry to it, has been added to this PR.
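A sketch of what this second method amounts to. The real Feature targets /etc/docker; a scratch directory is used here so the sketch runs without root, and `IP6TABLES_VALUE` is an assumed name, not the Feature's actual option:

```shell
#!/bin/sh
# Sketch of the daemon.json approach: write the "ip6tables" key into the
# daemon config before dockerd starts. Scratch dir stands in for /etc/docker.
DOCKER_DIR="$(mktemp -d)"
IP6TABLES_VALUE=false   # illustrative; would come from the Feature option

mkdir -p "$DOCKER_DIR"
# Minimal config containing only the ip6tables key; a real implementation
# would merge into any existing daemon.json rather than overwrite it.
printf '{\n  "ip6tables": %s\n}\n' "$IP6TABLES_VALUE" > "$DOCKER_DIR/daemon.json"
cat "$DOCKER_DIR/daemon.json"
```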

@gauravsaini04
Contributor

PR #1068, raised earlier, has been merged into the codebase.
