
Move podman CNI config to different directory #11194

Open
afbjorklund opened this issue Apr 25, 2021 · 11 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/backlog Higher priority than priority/awaiting-more-evidence.

Comments

@afbjorklund
Collaborator

afbjorklund commented Apr 25, 2021

Kubernetes has a problem handling third-party packages that install CNI configuration:

kubernetes/kubernetes#100309

Any installed config files will be used if present, in alphabetical order...
There is no way to select a specific config, especially one appearing later.
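The "alphabetical order" rule above can be reproduced with a couple of placeholder files (the first name is made up for illustration; only `87-podman-bridge.conflist` comes from this issue):

```shell
# Placeholder files only. With more than one config present, the
# lexicographically first one is picked, and there is no knob to
# prefer one appearing later:
dir=$(mktemp -d)
touch "$dir/05-other-cni.conflist" "$dir/87-podman-bridge.conflist"
ls "$dir" | LC_ALL=C sort | head -n 1   # -> 05-other-cni.conflist
rm -rf "$dir"
```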

Since containers/podman#2370

Podman now has a configuration option to select a different directory:

/etc/containers/containers.conf

# The network table contains settings pertaining to the management of
# CNI plugins.

[network]

# Path to directory where CNI plugin binaries are located.
#
# cni_plugin_dirs = ["/usr/libexec/cni"]

# The network name of the default CNI network to attach pods to.
# default_network = "podman"

# Path to the directory where CNI configuration files are located.
#
# network_config_dir = "/etc/cni/net.d/"

network.network_config_dir

Changing this to a different directory is the easiest way to fix kubeadm.

/etc/cni/net.d/87-podman-bridge.conflist

Another option would be to delete the file and use --network=host.
But that would require existing podman users to change their setup, breaking some of them.

Error: error configuring network namespace for container f56bea2ef5b840309583da9c1b18b416f94c750d9b30a0036e02a49622b653e6: CNI network "podman" not found

Podman is on the opposite side: its users don't normally install Kubernetes.
So there is no incentive to change the default podman CNI packaging.

# Path to the directory where CNI configuration files are located.
#
# network_config_dir = "/etc/cni/net.d/"
network_config_dir = "/etc/containers/net.d/"
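The change amounts to two steps: move the bridge config out of the shared CNI directory, and point Podman at the new location. A sketch of those steps, run against copies under a temporary root rather than the real /etc paths (the real paths are the ones quoted above):

```shell
# Set up a throwaway copy of the relevant files, so nothing real is touched.
root=$(mktemp -d)
mkdir -p "$root/etc/cni/net.d" "$root/etc/containers/net.d"
echo '{"name":"podman"}' > "$root/etc/cni/net.d/87-podman-bridge.conflist"
printf '[network]\n# network_config_dir = "/etc/cni/net.d/"\n' \
    > "$root/etc/containers/containers.conf"

# 1. Move the Podman bridge config out of the shared CNI directory.
mv "$root/etc/cni/net.d/87-podman-bridge.conflist" "$root/etc/containers/net.d/"

# 2. Uncomment and repoint network_config_dir in containers.conf.
sed -i 's|^# network_config_dir = .*|network_config_dir = "/etc/containers/net.d/"|' \
    "$root/etc/containers/containers.conf"
```

After this, kubelet no longer sees the Podman config in /etc/cni/net.d, while Podman still finds it via containers.conf.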
@afbjorklund
Collaborator Author

This would avoid having to do workarounds like in PR #10384

@afbjorklund afbjorklund added the kind/feature Categorizes issue or PR as related to a new feature. label Apr 25, 2021
@afbjorklund
Collaborator Author

afbjorklund commented Apr 25, 2021

This also needs to be done for any cri-o and for containerd packages.

/etc/crictl.yaml
/etc/cni/net.d/100-crio-bridge.conf
/etc/cni/net.d/200-loopback.conf
/etc/cni/net.d/10-containerd-net.conflist
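Sorting the file names listed above (here as empty placeholder files) shows which config would implicitly "win" when all the packages are installed side by side:

```shell
# Empty placeholders named after the configs listed above.
d=$(mktemp -d)
touch "$d/100-crio-bridge.conf" "$d/200-loopback.conf" \
      "$d/10-containerd-net.conflist" "$d/87-podman-bridge.conflist"
# Sorted order decides which config is found first:
ls "$d" | LC_ALL=C sort
rm -rf "$d"
```

In C-locale order, 10-containerd-net.conflist sorts first, so containerd's config would shadow the others.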

See #11184 (comment)

The container runtime and the container network should be configured.

criSocket
--cri-socket

Too bad that cniName can't be selected, only implicitly (lexicographic order)

@prezha
Contributor

prezha commented Apr 28, 2021

This also needs to be done for any cri-o and for containerd packages.

/etc/crictl.yaml
/etc/cni/net.d/100-crio-bridge.conf
/etc/cni/net.d/200-loopback.conf
/etc/cni/net.d/10-containerd-net.conflist

See #11184 (comment)

The container runtime and the container network should be configured.

criSocket
--cri-socket

Too bad that cniName can't be selected, only implicitly (lexicographic order)

crio and podman allow you to specify the cni name?

a thought: if just the podman is the issue, can we then just rename the /etc/cni/net.d/87-podman-bridge.conflist to something like /etc/cni/net.d/999-podman-bridge.conflist (assuming 999 would always be the last one in lexicographic order)?

then, the default_network = "podman" from /etc/containers/containers.conf (as given above in #11194 (comment)) should still pick the right (in this example: /etc/cni/net.d/999-podman-bridge.conflist) config?


to answer myself: no, that would not solve the problem, as coredns would still pick it up anyway right at the end of the last kubeadm phase (applying plugins), since it would probably be the only config found in /etc/cni/net.d

@afbjorklund
Collaborator Author

afbjorklund commented Apr 29, 2021

podman is not the problem here; the network works fine. We just need to "hide" it from Kubernetes.

/etc/containers/net.d/87-podman-bridge.conflist

@michaelhenkel
Contributor

as mentioned in #10384, cri-o has the same problem

@afbjorklund
Collaborator Author

afbjorklund commented May 3, 2021

as mentioned in #10384, cri-o has the same problem

That is true, but cri-o is actually a container runtime (for Kubernetes), so it can theoretically keep the configuration when selected.

Podman doesn't run as a CRI, so we can move the Podman configuration away from the Kubernetes configuration for good...
For CRI-O (and containerd, soon dockerd) we need to make sure that they don't install any CNI configuration when not chosen.

Unfortunately it comes this way in the .deb packaging.

See #11184 (comment)

Upstream knows it's broken, so they don't install both at once.

“The patient says, "Doctor, it hurts when I do this."
The doctor says, "Then don't do that!”

@afbjorklund
Collaborator Author

afbjorklund commented May 3, 2021

a thought: if just the podman is the issue, can we then just rename the /etc/cni/net.d/87-podman-bridge.conflist to something like /etc/cni/net.d/999-podman-bridge.conflist (assuming 999 would always be the last one in lexicographic order)?

It doesn't matter what we rename it to; since it's the only configuration file available, it will always be both the first and the last.

The kindnet configuration (and that of some other CNIs too, like flannel) is not created until after the Kubernetes cluster has booted...

@spowelljr spowelljr added the priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. label May 17, 2021
@mauilion

mauilion commented Jun 7, 2021

This is also breaking expected functionality like "minikube start --cni=false"

In this configuration I would expect that no cni is configured. What happens instead is that the podman cni is used.

@afbjorklund
Collaborator Author

This is also breaking expected functionality like "minikube start --cni=false"

In this configuration I would expect that no cni is configured. What happens instead is that the podman cni is used.

Kubernetes doesn't have any feature to choose a CNI; installing the Podman (or cri-o, or containerd) package will break it...

This "alphabetical" configuration means that no other system components are able to use CNI, without changing config directory.

@sharifelgamal sharifelgamal changed the title Remove CNI configuration from the podman installation Move podman CNI config to different directory Jun 14, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Sep 12, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Oct 12, 2021
@spowelljr spowelljr added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Oct 13, 2021
@spowelljr spowelljr added priority/backlog Higher priority than priority/awaiting-more-evidence. and removed priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Feb 16, 2022
@afbjorklund afbjorklund added this to the 1.27.0-candidate milestone Mar 2, 2022
@spowelljr spowelljr modified the milestones: 1.27.0-previous, 1.29.0 Nov 28, 2022
@spowelljr spowelljr modified the milestones: 1.31.0, 1.32.0 Jul 19, 2023
@spowelljr spowelljr removed this from the 1.32.0 milestone Jul 19, 2023

7 participants