CNTRLPLANE-2683: Add network policies to openshift-etcd pods #1544

dusk125 wants to merge 3 commits into `openshift:main`
Conversation
@dusk125: This pull request references CNTRLPLANE-2683 which is a valid jira issue. Warning: The referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Details

In response to this:

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Walkthrough

Adds a new Kubernetes NetworkPolicy manifest for etcd operands in the openshift-etcd namespace defining five NetworkPolicy resources: a namespace-wide default-deny, a documentation-only hostNetwork note, and three allow rules for installer/pruner → API server, guard → etcd, and operand helpers → DNS.

Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes
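For context on the walkthrough above, a namespace-wide default-deny policy is conventionally written as a selector that matches every pod with no allow rules. The following is an illustrative sketch, not the manifest from this PR (the name and annotation set here are placeholders):

```yaml
# Illustrative sketch of a namespace-wide default-deny NetworkPolicy,
# as described in the walkthrough; the actual manifest may differ.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny            # placeholder name
  namespace: openshift-etcd
spec:
  podSelector: {}               # empty selector matches every pod in the namespace
  policyTypes:
  - Ingress
  - Egress                      # no ingress/egress rules listed, so all traffic is denied
```

Any traffic the operands need then has to be re-allowed by the more specific policies in the same manifest.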
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: dusk125

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Details

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing
Actionable comments posted: 1
🤖 Fix all issues with AI agents
In `@manifests/0000_12_etcd-operand_networkpolicies.yaml`:
- Around line 96-131: The NetworkPolicy allow-operand-helpers-to-dns currently
only permits egress to pods selected in openshift-dns, which will fail if the
DNS daemonset runs with hostNetwork: true; add a second, ports-only egress rule
to the spec (after the existing to:{...} rule) that allows TCP/UDP ports 53 and
5353 with no namespaceSelector/podSelector so DNS queries succeed to node IPs as
a fallback.
```yaml
# Allow guard, installer, and pruner pods to access DNS
# These pods may need to resolve service names.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-operand-helpers-to-dns
  namespace: openshift-etcd
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
spec:
  podSelector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - guard
      - installer
      - pruner
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-dns
      podSelector:
        matchLabels:
          dns.operator.openshift.io/daemonset-dns: default
    ports:
    - protocol: TCP
      port: 5353
    - protocol: UDP
      port: 5353
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
```
🧩 Analysis chain
🏁 Script executed:

```shell
# Explore repo structure and find DNS-related configs
find . -type f -name "*.yaml" -o -name "*.yml" | head -20
```

Repository: openshift/cluster-etcd-operator
Length of output: 802
🏁 Script executed:

```shell
# Search for openshift-dns references in the codebase
rg "openshift-dns" -t yaml
```

Repository: openshift/cluster-etcd-operator
Length of output: 179
🏁 Script executed:

```shell
# Look for other NetworkPolicy files to understand DNS handling patterns
fd "network" -t f | grep -i "policy\|network"
```

Repository: openshift/cluster-etcd-operator
Length of output: 5518
🏁 Script executed:

```shell
# Check if there are any comments or documentation about DNS networking
rg "hostNetwork" -t yaml -A 3 -B 3
```

Repository: openshift/cluster-etcd-operator
Length of output: 8866
🏁 Script executed:

```shell
cat -n manifests/0000_12_etcd-operand_networkpolicies.yaml
```

Repository: openshift/cluster-etcd-operator
Length of output: 5017
🏁 Script executed:

```shell
# Search for openshift-dns pod definitions or daemonset
rg "daemonset-dns|openshift-dns" -A 10 -B 2 --all-types
```

Repository: openshift/cluster-etcd-operator
Length of output: 488
🏁 Script executed:

```shell
# Check if there are any direct references to DNS pod specs
rg "dns\.operator\.openshift\.io" -A 5 -B 5 --all-types
```

Repository: openshift/cluster-etcd-operator
Length of output: 488
🏁 Script executed:
rg "daemonset-dns|openshift-dns" -A 5 -B 2Repository: openshift/cluster-etcd-operator
Length of output: 929
🌐 Web query:
Kubernetes NetworkPolicy podSelector hostNetwork pods destination matching behavior
💡 Result:
- `spec.podSelector` selects the pods the policy applies to (the "protected" pods) in the policy's namespace. (kubernetes.io)
- In ingress rules, `from: podSelector/namespaceSelector` matches source pods (by their pod IPs). In egress rules, `to: podSelector/namespaceSelector` matches destination pods (by their pod IPs). (kubernetes.io)

What changes with hostNetwork: true pods

- Kubernetes explicitly says NetworkPolicy behavior for hostNetwork pods is undefined, and in the most common implementation the CNI cannot distinguish hostNetwork pod traffic, so it ignores hostNetwork pods for `podSelector`/`namespaceSelector` matching. Their traffic is treated like ordinary traffic to/from the node IP. (kubernetes.io)
- Because a hostNetwork pod uses the node's IP, it typically won't match "destination pod" (`to: podSelector`) or "source pod" (`from: podSelector`) the way a normal pod does; instead, match it via `ipBlock` (node CIDR / node IPs) if you need to allow/deny it. (kubernetes.io)
Practical “destination matching” implication (egress)
If you write an egress rule like to: { podSelector: ... }, that rule generally matches only destinations that are reachable at pod IPs. A hostNetwork destination is reached at a node IP, so it’s commonly treated as “node traffic” and does not match that podSelector; use ipBlock for the node IP(s) instead. (kubernetes.io)
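As a concrete illustration of the `ipBlock` alternative described above, an egress rule that reaches hostNetwork destinations at node IPs might look like the following sketch. The policy name and the CIDR `10.0.0.0/16` are placeholders, not values from this PR:

```yaml
# Hypothetical sketch: allow DNS egress to hostNetwork destinations via
# ipBlock (node IPs), since a podSelector won't match hostNetwork pods.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-dns-to-node-ips   # placeholder name
  namespace: openshift-etcd
spec:
  podSelector: {}               # applies to all pods in the namespace
  egress:
  - to:
    - ipBlock:
        cidr: 10.0.0.0/16       # placeholder: substitute the cluster's node CIDR
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  policyTypes:
  - Egress
```

A ports-only rule with no `to:` clause at all is the broader variant: it allows those ports to any destination, which is what the review below ultimately suggests.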
Citations:
- kubernetes.io, "Network Policies": https://kubernetes.io/docs/concepts/services-networking/network-policies/
🌐 Web query:
OpenShift DNS operator daemonset hostNetwork configuration
💡 Result:
OpenShift’s DNS Operator (cluster-dns-operator) manages the CoreDNS DaemonSet (openshift-dns/dns-default) and the node resolver DaemonSet (openshift-dns/node-resolver). The supported configuration surface is the DNS custom resource (dns.operator.openshift.io/default)—there is no supported/GA knob to set hostNetwork: true/false for those DaemonSets. If you manually edit the DaemonSet, the operator will typically reconcile it back. [1]
What is supported:

- Control where CoreDNS (`dns-default`) runs via `spec.nodePlacement` (nodeSelector/tolerations) on `dns.operator/default`. [1]
- Configure forwarding/zones via `spec.servers`, etc. (not shown here). [1]
If your real goal is "my workload uses hostNetwork and I want it to use cluster DNS":

- By default, host-network pods use the node's `/etc/resolv.conf` (not the cluster DNS service IP). [2]
- Set the pod's DNS policy to `ClusterFirstWithHostNet` so it uses the cluster DNS service:

```yaml
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet
```

Sources:
[1] OKD DNS Operator docs (daemonsets and spec.nodePlacement): https://docs.okd.io/4.21/networking/networking_operators/dns-operator.html
[2] cluster-dns-operator README (hostNetwork pods use node /etc/resolv.conf): https://github.com/openshift/cluster-dns-operator
DNS egress rule may not work if DNS daemonset uses hostNetwork.
The policy currently uses podSelector to match DNS destination pods, but if openshift-dns/dns-default runs with hostNetwork: true (standard practice), the podSelector destination won't match because hostNetwork pods are reachable at node IPs, not pod IPs. This mirrors the pattern already established in the same file for etcd (lines 73–74): guard traffic to hostNetwork etcd uses ports-only egress without destination selectors.
Add a ports-only fallback rule to ensure DNS queries succeed regardless of DNS pod networking:
Add fallback DNS egress rule

```diff
   egress:
   - to:
     - namespaceSelector:
         matchLabels:
           kubernetes.io/metadata.name: openshift-dns
       podSelector:
         matchLabels:
           dns.operator.openshift.io/daemonset-dns: default
     ports:
     - protocol: TCP
       port: 5353
     - protocol: UDP
       port: 5353
     - protocol: TCP
       port: 53
     - protocol: UDP
       port: 53
+  - ports:
+    - protocol: TCP
+      port: 5353
+    - protocol: UDP
+      port: 5353
+    - protocol: TCP
+      port: 53
+    - protocol: UDP
+      port: 53
   policyTypes:
   - Egress
```
/retest-required

1 similar comment

/retest-required
@dusk125: The following tests failed, say
Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
This PR adds network policies for the pods in the openshift-etcd namespace that are managed by the cluster-etcd-operator: the guard, installer, and revision-pruner pods.

It was suggested to also include an empty policy for etcd (as it's a hostNetwork pod and therefore ignores network policies) to act as documentation and as proof that etcd wasn't accidentally missed.
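A documentation-only policy of that kind could be written as an object with no policy types at all, so it is deliberately inert. The following is a sketch only; the actual name and label in the PR's manifest may differ:

```yaml
# Hypothetical sketch of the documentation-only etcd policy described above.
# etcd runs with hostNetwork: true, so NetworkPolicy selectors never match
# its traffic; this object exists purely to record that etcd was considered.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: etcd-hostnetwork-note   # placeholder name
  namespace: openshift-etcd
spec:
  podSelector:
    matchLabels:
      app: etcd                 # placeholder label
  policyTypes: []               # no Ingress or Egress: the policy restricts nothing
```

An empty `policyTypes` list makes the intent explicit: the object restricts neither ingress nor egress, and serves only as a marker in the manifest.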