k3s is not working with IPv6 or dualstack setting #3578
Comments
Flannel does not support IPv6, either single- or dual-stack. If you want to use IPv6, you must disable both flannel and the network policy controller. See: #3212
@brandond I have already added the params below (--flannel-backend=none, --disable-network-policy) when starting k3s; otherwise the log reminds me that flannel and the network policy controller need to be disabled. The issues posted here occur with those params already configured in the k3s system service. Any idea about this?
For dual-stack:
Upstream Kubernetes doesn't support single-stack IPv6 yet. The in-cluster apiserver service, for example, only supports IPv4 endpoints. You might be able to get workload services to work with IPv6 only, but as far as I know the cluster as a whole can only be dual-stack at best.
@brandond Thanks for the comments, Brad. How about dual-stack? I saw the log "Jul 06 18:08:09 ipv6invv.local k3s[527571]: F0706 18:08:09.958990 527571 node_ipam_controller.go:110] Controller: Invalid --cluster-cidr, mask size of cluster CIDR must be less than or equal to --node-cidr-mask-size configured for CIDR family" with dual-stack, and then the process crashed. It seems the cluster-cidr "fd00:db8:0:0:0:0:1::/112" I configured may have too large a mask size, and we need to make sure the size is less than or equal to "--node-cidr-mask-size", but I can't find a "--node-cidr-mask-size" setting in k3s at all. Any idea about this?
After changing the cluster-cidr from fd00:db8:0:0:0:0:1::/112 to fd42::/48 and the service-cidr from fd00:db8:0:0:0:0:2::/112 to fd43::/112, as used in #3212, the dual-stack crash issue is gone. Interesting. Anyway, let me run more tests with the dual-stack setting; thanks for the help on this.
Both fd00:db8:0:0:0:0:1::/112 and fd00:db8:0:0:0:0:2::/112 are not plain IPv6 addresses but IPv6 CIDRs with a mask size of 112. I double-checked the Kubernetes docs, and they say the default node CIDR mask size for IPv6 is 64 (k8s-dualstack). Since k3s does not expose this setting, we need to set the CIDR mask size to 64 or smaller, not 112. That seems to be the reason. Anyway, it works now. Thanks again for your comments and help on this. @brandond
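The constraint above can be demonstrated with a small sketch. The controller requires the cluster CIDR's prefix length to be less than or equal to the per-node mask size (64 by default for IPv6), so /112 fails while /48 passes. The function and variable names below are illustrative, not k3s flags:

```shell
# Illustrative check mirroring the validation in the error message:
# cluster CIDR prefix must be <= the per-node mask size (IPv6 default: 64).
check_cidr() {
  cidr="$1"
  node_mask_size=64
  prefix=${cidr##*/}   # strip everything up to the last '/'
  if [ "$prefix" -le "$node_mask_size" ]; then
    echo "$cidr: ok (/$prefix <= /$node_mask_size)"
  else
    echo "$cidr: invalid (/$prefix > /$node_mask_size)"
  fi
}

check_cidr "fd00:db8:0:0:0:0:1::/112"   # the failing value from this issue
check_cidr "fd42::/48"                  # the working value from #3212
```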
Environmental Info:
K3s Version:
k3s version v1.21.2+k3s1 (5a67e8d)
go version go1.16.4
Node(s) CPU architecture, OS, and Version:
Linux ipv6invv.local 5.8.0-50-generic #56~20.04.1-Ubuntu SMP Mon Apr 12 21:46:35 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Cluster Configuration:
1 server
IPv6 single-stack k3s system service configuration:
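(The original configuration block was not captured on this page. A representative single-stack IPv6 invocation, reconstructed as an assumption from the flags and CIDRs quoted in this thread, might look like:)

```shell
# Hypothetical reconstruction -- flags taken from the discussion above.
# Single-stack IPv6: flannel and the network policy controller disabled,
# IPv6-only cluster and service CIDRs.
k3s server \
  --flannel-backend=none \
  --disable-network-policy \
  --cluster-cidr=fd00:db8:0:0:0:0:1::/112 \
  --service-cidr=fd00:db8:0:0:0:0:2::/112
```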
Dual-stack k3s system service configuration:
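(This configuration block was also not captured. A representative dual-stack invocation, assuming the working IPv6 values from #3212 noted above and the default k3s IPv4 ranges, might look like:)

```shell
# Hypothetical reconstruction -- IPv6 values are the working ones from #3212.
# Dual-stack: an IPv4 CIDR plus an IPv6 CIDR for both cluster and service
# ranges. 10.42.0.0/16 and 10.43.0.0/16 are the k3s defaults, assumed here.
k3s server \
  --flannel-backend=none \
  --disable-network-policy \
  --cluster-cidr=10.42.0.0/16,fd42::/48 \
  --service-cidr=10.43.0.0/16,fd43::/112
```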
Describe the bug:
For IPv6 single-stack, the k3s service fails to become active, with the following log (cannot configure IPv4 cluster-cidr: no IPv4 CIDRs found):
For dual-stack, the k3s system service can become active, but the process crashes with the following log (Controller: Invalid --cluster-cidr, mask size of cluster CIDR must be less than or equal to --node-cidr-mask-size configured for CIDR family):
Steps To Reproduce:
Expected behavior:
K3s installation succeeds, the k3s service is active, and the node becomes ready.
Actual behavior:
The k3s Linux system service does not become active with the IPv6 single-stack configuration.
The k3s Linux system service is active with the dual-stack configuration, but the k3s process crashes and the cluster does not work at all.
Additional context / logs: