feat(container): support kubesolo in a container #106

stevensbkang wants to merge 5 commits into develop from
Conversation
Pull request overview
Adds a “container mode” runtime path so kubesolo can run reliably inside a privileged container (kubesolo-on-container / “kubesolo on a container”), adjusting kubelet/kube-proxy/CoreDNS behavior and providing container build artifacts.
Changes:
- Introduces a `container-mode` flag + auto-detection, propagating the setting through embedded config and services.
- Adjusts kubelet, kube-proxy, and CoreDNS configuration for container constraints (cgroups, conntrack, resolv.conf handling, resource limits), and adds a CoreDNS readiness wait.
- Adds container image build/publish support (Dockerfile + Makefile targets + dockerignore).
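The auto-detection half of the `container-mode` flag lives in `internal/system/host.go`, which is not shown in this summary. As a rough sketch only (the function names and the exact heuristic below are assumptions, not the PR's actual code), container detection commonly checks for a runtime marker file and inspects PID 1's cgroup path:

```go
package main

import (
	"fmt"
	"os"
	"strings"
)

// cgroupSuggestsContainer reports whether the contents of /proc/1/cgroup
// mention a container runtime. Kept as a pure function so the heuristic
// itself is testable without a real container.
func cgroupSuggestsContainer(data string) bool {
	for _, needle := range []string{"docker", "containerd", "kubepods", "libpod"} {
		if strings.Contains(data, needle) {
			return true
		}
	}
	return false
}

// runningInContainer is a hypothetical stand-in for the PR's detection
// helper: a marker-file check plus the cgroup heuristic above.
func runningInContainer() bool {
	// Docker and several other runtimes create this marker file.
	if _, err := os.Stat("/.dockerenv"); err == nil {
		return true
	}
	data, err := os.ReadFile("/proc/1/cgroup")
	if err != nil {
		return false
	}
	return cgroupSuggestsContainer(string(data))
}

func main() {
	fmt.Println("container:", runningInContainer())
}
```

An explicit `--container-mode` flag, as added here, is still useful because heuristics like this can misfire on cgroup v2 hosts where `/proc/1/cgroup` is just `0::/`.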
Reviewed changes
Copilot reviewed 16 out of 17 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| types/types.go | Adds Embedded.ContainerMode flag to propagate container-mode configuration. |
| pkg/kubernetes/kubeproxy/service.go | Extends kube-proxy service constructor to carry containerMode. |
| pkg/kubernetes/kubeproxy/flags.go | Alters conntrack-related flags when running in container mode. |
| pkg/kubernetes/kubelet/service.go | Stores containerMode on kubelet service based on embedded config. |
| pkg/kubernetes/kubelet/config.go | Generates container-aware kubelet config (cgroup driver, eviction, resolvConf selection, QoS/cgroup settings). |
| pkg/kubernetes/controller/flags.go | Updates controller-manager enabled controllers and GC threshold. |
| pkg/kubernetes/apiserver/flags.go | Updates admission plugin enable/disable lists. |
| pkg/components/portainer/service.go | Updates headless Service to publish not-ready addresses. |
| pkg/components/coredns/deployment.go | Makes CoreDNS deployment container-aware (DNSPolicy + resource limits behavior). |
| pkg/components/coredns/coredns.go | Adds containerMode plumbing + waits for CoreDNS deployment readiness. |
| pkg/components/coredns/configuration.go | Generates CoreDNS Corefile dynamically, differing behavior in container mode. |
| internal/system/mount_linux.go | Adds linux-only helper to set / mount propagation to rshared. |
| internal/system/host.go | Adds container detection + mount/cgroup setup helpers. |
| internal/runtime/network/ip.go | Adds host resolv.conf selection/validation and fallback generation; supports container-mode /dev/null. |
| internal/config/flags/flags.go | Adds --container-mode CLI flag. |
| cmd/kubesolo/main.go | Detects container mode, performs mount/cgroup setup, and passes containerMode into services/components. |
| Makefile | Bumps Go Alpine builder image and adds image build/push targets. |
| Dockerfile | Adds a runnable container image definition and usage instructions. |
| .gitignore | Adds .claude/ directory to ignored paths. |
| .dockerignore | Adds dockerignore tuned for building images with dist/kubesolo. |
Comments suppressed due to low confidence (1)
internal/runtime/network/ip.go:126
isValidResolvConf's docstring says it returns true when the file contains "at least one valid upstream nameserver", but the implementation returns false on the first invalid `nameserver` entry (even if a valid one exists later). This can incorrectly reject resolv.conf files that contain a mix of loopback and real upstream resolvers. Either update the comment to match the stricter behavior, or change the logic to ignore invalid nameservers and succeed when at least one valid upstream is present.
```go
// isValidResolvConf checks whether a resolv.conf file exists and contains
// at least one valid upstream nameserver (global unicast address).
func isValidResolvConf(path string) bool {
	file, err := os.Open(path)
	if err != nil {
		return false
	}
	defer file.Close()

	nameserver := regexp.MustCompile(`^nameserver\s+([^\s]*)`)
	scanner := bufio.NewScanner(file)
	foundNameserver := false
	for scanner.Scan() {
		ipMatch := nameserver.FindStringSubmatch(scanner.Text())
		if len(ipMatch) == 2 {
			if !isValidNameserver(ipMatch[1]) {
				return false
			}
			foundNameserver = true
		}
	}
	return foundNameserver
}
```
```go
// SetupContainerMounts ensures mount propagation is set to rshared on the root
// filesystem. Without this, kubelet cannot propagate volume mounts (including
// projected service account tokens) into pod containers.
func SetupContainerMounts() error {
	if err := mountMakeRShared(); err != nil {
		return err
	}
	log.Info().Str("component", "mount").Msg("set root filesystem to rshared propagation")
	return nil
}
```
SetupContainerMounts calls mountMakeRShared(), but that helper only exists in mount_linux.go (linux build tag). Since host.go has no build tags, non-linux builds will fail with an undefined symbol. Consider either adding a non-linux stub implementation (e.g., mount_other.go) or guarding SetupContainerMounts with a linux build tag / runtime check so the package compiles on all platforms.
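One way to satisfy this (a hypothetical sketch, not the PR's code): keep `mountMakeRShared` behind the linux build tag, add a `mount_other.go` stub with `//go:build !linux` that returns an error so the symbol exists on every platform, and let the caller fail gracefully. The single-file illustration below shows the resulting behavior; `setupContainerMounts` takes `goos` as a parameter only so the branch is testable.

```go
package main

import (
	"errors"
	"fmt"
	"runtime"
)

// errUnsupported is what a non-linux stub of mountMakeRShared (a
// hypothetical mount_other.go guarded by `//go:build !linux`) would
// return, so the package compiles on all platforms.
var errUnsupported = errors.New("rshared mount propagation requires linux")

// setupContainerMounts sketches SetupContainerMounts with the platform
// dependency made explicit.
func setupContainerMounts(goos string) error {
	if goos != "linux" {
		return errUnsupported
	}
	// On linux the real helper would remount / recursively shared, e.g.
	// something like: unix.Mount("", "/", "", unix.MS_SHARED|unix.MS_REC, "")
	return nil
}

func main() {
	// Platform-dependent: nil on linux, errUnsupported elsewhere.
	fmt.Println(setupContainerMounts(runtime.GOOS))
}
```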
```go
	for _, entry := range enableList {
		if err := os.WriteFile(filepath.Join(cgroupRoot, "cgroup.subtree_control"), []byte(entry), 0o644); err != nil {
			log.Warn().Str("component", "cgroup").Err(err).Str("controller", entry).Msg("failed to enable controller")
		}
	}
}
log.Info().Str("component", "cgroup").Str("controllers", subtreeControl).Msg("enabled controller delegation on root cgroup")
```
In SetupContainerCgroups, if enabling controllers on cgroup.subtree_control fails (either the bulk write or some per-controller writes), the code still logs "enabled controller delegation on root cgroup" unconditionally. This message can be misleading when delegation is only partially enabled or not enabled at all; consider logging success only when the write succeeds, and otherwise logging what controllers actually ended up enabled (or returning an error if delegation is required).
Suggested change — replace:

```go
	for _, entry := range enableList {
		if err := os.WriteFile(filepath.Join(cgroupRoot, "cgroup.subtree_control"), []byte(entry), 0o644); err != nil {
			log.Warn().Str("component", "cgroup").Err(err).Str("controller", entry).Msg("failed to enable controller")
		}
	}
}
log.Info().Str("component", "cgroup").Str("controllers", subtreeControl).Msg("enabled controller delegation on root cgroup")
```

with:

```go
	var enabledControllers []string
	for _, entry := range enableList {
		if err := os.WriteFile(filepath.Join(cgroupRoot, "cgroup.subtree_control"), []byte(entry), 0o644); err != nil {
			log.Warn().Str("component", "cgroup").Err(err).Str("controller", entry).Msg("failed to enable controller")
			continue
		}
		enabledControllers = append(enabledControllers, entry)
	}
	if len(enabledControllers) > 0 {
		enabledStr := strings.Join(enabledControllers, " ")
		log.Info().Str("component", "cgroup").Str("controllers", enabledStr).Msg("enabled controller delegation on root cgroup (partial)")
	} else {
		log.Warn().Str("component", "cgroup").Msg("no controllers enabled on root cgroup")
	}
} else {
	log.Info().Str("component", "cgroup").Str("controllers", subtreeControl).Msg("enabled controller delegation on root cgroup")
}
```
```shell
# docker exec kubesolo cat /var/lib/kubesolo/pki/admin/admin.kubeconfig > kubeconfig.json
# sed -i 's|https://[^"]*:6443|https://127.0.0.1:6443|' kubeconfig.json
# export KUBECONFIG=$(pwd)/kubeconfig.json
```
The Dockerfile instructions for exporting kubeconfig look JSON-specific (filename kubeconfig.json and a sed pattern matching "..."). However, clientcmd.WriteToFile writes kubeconfig in YAML format, so that sed command likely won’t match and users may end up with an unmodified server URL. Consider adjusting the example to treat the file as YAML (or explicitly convert to JSON before applying a JSON-specific sed).
Suggested change — replace:

```shell
# docker exec kubesolo cat /var/lib/kubesolo/pki/admin/admin.kubeconfig > kubeconfig.json
# sed -i 's|https://[^"]*:6443|https://127.0.0.1:6443|' kubeconfig.json
# export KUBECONFIG=$(pwd)/kubeconfig.json
```

with:

```shell
# docker exec kubesolo cat /var/lib/kubesolo/pki/admin/admin.kubeconfig > kubeconfig.yaml
# sed -i 's|server: https://.*:6443|server: https://127.0.0.1:6443|' kubeconfig.yaml
# export KUBECONFIG=$(pwd)/kubeconfig.yaml
```
No description provided.