Description
Is your feature request related to a problem? Please describe.
containerd versions 1.4 and older require adding any custom registry information via `registry.mirrors`/`registry.configs` sections in the main `etc/containerd/config.toml`. This mechanism has a few downsides: it requires editing the main containerd configuration in place (which cannot be done in K3s, since the file is automatically re-rendered), and it requires restarting containerd to take effect. This older mechanism is still available and is what K3s currently uses, but it has been deprecated.
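For reference, a minimal sketch of the old-style configuration (the registry name and paths here are hypothetical):

```toml
# containerd 1.4-style registry configuration, embedded directly in config.toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."registry.example.com"]
  endpoint = ["https://registry.example.com"]

[plugins."io.containerd.grpc.v1.cri".registry.configs."registry.example.com".tls]
  ca_file = "/etc/ssl/certs/registry-ca.crt"
```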
With 1.5, containerd added support for a new `config_path` configuration option, which imitates the behavior of dockerd's `certs.d` directory with some additional functionality on top. This new system resolves the problems of the 1.4-and-earlier mechanism, making it very easy for an operator to manage custom registry configurations.
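As a sketch of the new mechanism (the hostname and paths are hypothetical), `config.toml` only needs to point at a directory:

```toml
[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/etc/containerd/certs.d"
```

Each registry then gets its own drop-in directory containing a `hosts.toml` (plus any certs), which containerd reads without needing a restart:

```toml
# /etc/containerd/certs.d/registry.example.com/hosts.toml
server = "https://registry.example.com"

[host."https://registry.example.com"]
  ca = "/etc/containerd/certs.d/registry.example.com/ca.crt"
```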
Describe the solution you'd like
K3s already exposes a custom `registries.yaml` configuration to add container registry configurations into `containerd/config.toml`. I think the existing K3s configuration options could continue to be supported via the new `config_path` mechanism: K3s would just write files into the `config_path` structure, rather than directly into `containerd/config.toml`. Meanwhile, users who wish to manage custom certs on-the-fly would be able to do so by adding them to the `config_path` directly on the host filesystem - e.g. via a DaemonSet with a hostPath mount (see the sketch under "Additional context" below), or externally via separate configuration management tooling (ansible or similar).
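For illustration, a typical `registries.yaml` (hostname and paths are hypothetical) that K3s could translate into `config_path` drop-ins rather than `config.toml` sections:

```yaml
mirrors:
  registry.example.com:
    endpoint:
      - "https://registry.example.com"
configs:
  registry.example.com:
    tls:
      ca_file: /etc/ssl/certs/registry-ca.crt
```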
Implementation steps:
- The `containerd/config.toml` templates would be updated to specify a reasonable `config_path` by default. For example, this could be `etc/containerd/certs.d/` within the K3s install - i.e. a new `etc/containerd/certs.d` directory next to the existing K3s-rendered `etc/containerd/config.toml`. It would probably make sense to allow customizing the `config_path` location via a flag and/or envvar.
- The existing `containerd/config.toml` `PrivateRegistryConfig` templating would be migrated to writing files into the configured `config_path`. This would effectively migrate users of `registries.yaml` to using `config_path` (see the layout sketch after this list).
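For illustration, the rendered layout might look something like this (assuming the default K3s data dir and a hypothetical registry name):

```
/var/lib/rancher/k3s/agent/etc/containerd/
├── config.toml                # K3s-rendered; sets config_path
└── certs.d/
    └── registry.example.com/
        ├── hosts.toml         # rendered from registries.yaml
        └── ca.crt
```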
I think the K3s `config_path` output implementation should be permissive of any other "unknown" files that the user may have added to the directory. This has the potential downside of leaving "leftover" registry configurations lying around in `config_path` if they are removed from the user's `registries.yaml` configuration. The user would be able to delete these leftover files from `config_path` manually after removing them from `registries.yaml`.
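Cleanup would be a manual step along these lines (the path is hypothetical):

```sh
# Remove a leftover drop-in after deleting its entry from registries.yaml
rm -r /var/lib/rancher/k3s/agent/etc/containerd/certs.d/registry.example.com
```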
Also, it may be worth thinking about deprecating the current `registries.yaml` mechanism entirely in favor of having users add their custom registry configs to `config_path` directly. Given that in-place editing of the main `etc/containerd/config.toml` would no longer be necessary under the new system, the custom `registries.yaml` mechanism wouldn't be providing much value. But there would be no rush, since `registries.yaml` could continue to work on top of `config_path` as described above.
Describe alternatives you've considered
Alternative options:
- Using the current `containerd/config.toml`-based configuration mechanism exposed by K3s, requiring restarting the cluster after any changes.
- Configuring K3s to use dockerd and using its `certs.d` support.
- Providing a custom `config.toml.tmpl` which specifies `config_path`, effectively implementing the suggested config change manually (see the sketch below).
- Disabling K3s-provided containerd in favor of a separate 1.5+ containerd with `config_path` enabled (but at that point, why use K3s?)
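A rough sketch of the `config.toml.tmpl` option, using K3s's support for a user-supplied template next to the rendered config (in practice the template must reproduce the rest of the rendered configuration; only the added section is shown, and the paths are hypothetical):

```toml
# /var/lib/rancher/k3s/agent/etc/containerd/config.toml.tmpl
# ... the rest of the K3s-rendered config.toml, copied verbatim ...

[plugins."io.containerd.grpc.v1.cri".registry]
  config_path = "/var/lib/rancher/k3s/agent/etc/containerd/certs.d"
```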
Additional context
I encountered this when trying to set up a local Harbor registry. The most finicky part has been getting K3s's containerd-backed kubelets to trust the CA cert used by Harbor. After looking through the containerd docs, I noticed that they had recently added support for directory-based configuration, which would allow me to automatically add the CA certs to the nodes without reconfiguring and restarting the whole cluster.
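For instance, the DaemonSet approach mentioned above could look roughly like this under the proposed change (all names, images, and paths are hypothetical):

```yaml
# Copies a registry's hosts.toml and CA cert into containerd's config_path on
# every node via a hostPath mount; containerd picks the files up without a restart.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: harbor-registry-certs
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: harbor-registry-certs
  template:
    metadata:
      labels:
        app: harbor-registry-certs
    spec:
      initContainers:
        - name: install
          image: busybox:1.35
          command: ["sh", "-c", "cp /src/hosts.toml /src/ca.crt /dst/"]
          volumeMounts:
            - name: src
              mountPath: /src
            - name: dst
              mountPath: /dst
      containers:
        - name: pause
          image: rancher/mirrored-pause:3.6
      volumes:
        - name: src
          configMap:
            name: harbor-registry-config # holds hosts.toml and ca.crt
        - name: dst
          hostPath:
            path: /var/lib/rancher/k3s/agent/etc/containerd/certs.d/harbor.example.com
            type: DirectoryOrCreate
```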
Backporting
- Needs backporting to older releases