133 changes: 133 additions & 0 deletions manifests/0000_12_etcd-operand_networkpolicies.yaml
@@ -0,0 +1,133 @@
# NetworkPolicy for etcd operand pods in the openshift-etcd namespace.
#
# The etcd static pods use hostNetwork: true and bypass NetworkPolicy entirely.
# The guard, installer, and pruner pods run on the pod network and are subject to NetworkPolicy.
---
# Documentation-only policy for etcd static pods (hostNetwork bypasses NetworkPolicy)
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: etcd-host-network-documentation
  namespace: openshift-etcd
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
    description: "Documentation-only policy - etcd pods use hostNetwork and bypass NetworkPolicy"
spec:
  podSelector:
    matchLabels:
      app: etcd
  policyTypes: []
---
# Allow installer and pruner pods to access the API server
# These pods manage etcd static pod manifests and need API access.
# We allow all egress because the API server IP address and port can vary.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-installer-pruner-to-apiserver
  namespace: openshift-etcd
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
spec:
  podSelector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - installer
      - pruner
  egress:
  - {}
  policyTypes:
  - Egress
Comment on lines +41 to +44
⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

find . -name "0000_12_etcd-operand_networkpolicies.yaml" -type f

Repository: openshift/cluster-etcd-operator



🌐 Web query:

Kubernetes NetworkPolicy egress empty object {} allow all traffic semantics

💡 Result:

In Kubernetes NetworkPolicy, the egress list is a list of allow rules. An empty rule object means “no constraints”, so:

  • egress: - {} allows all egress traffic (all destinations, all ports / protocols supported by the plugin) from the pods selected by podSelector. (v1-34.docs.kubernetes.io)
  • By contrast, a policy that selects pods and has policyTypes: [Egress] but provides no egress rules (for example, egress: [] or omitting egress) is a “default deny egress” policy and allows no egress traffic from those selected pods. (v1-34.docs.kubernetes.io)

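The distinction the query result draws can be illustrated with two minimal spec fragments (not part of the manifest under review, just a sketch of the two spellings):

```yaml
# Spelling 1: allow-all egress. A single EMPTY RULE means "no constraints",
# so all egress traffic from the selected pods is permitted.
spec:
  podSelector: {}
  egress:
  - {}
  policyTypes:
  - Egress
---
# Spelling 2: default-deny egress. An EMPTY RULE LIST (or omitting egress
# entirely while listing Egress in policyTypes) allows nothing.
spec:
  podSelector: {}
  egress: []
  policyTypes:
  - Egress
```

The one-character difference between `- {}` and `[]` flips the policy from allow-all to deny-all, which is why the reviewer flags the rule below.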

🏁 Script executed:

cat -n ./manifests/0000_12_etcd-operand_networkpolicies.yaml | sed -n '20,50p'



egress: - {} is allow-all egress and overly broad.

Lines 41-42 permit installer/pruner pods unrestricted egress to any destination and port. While the manifest comment (line 24) states that the API server IP can vary, the API server ports are well defined (443, 6443). Constrain egress to these ports to prevent unintended outbound traffic while maintaining functionality.

🔧 Tighten to API-server ports
  spec:
    podSelector:
      matchExpressions:
      - key: app
        operator: In
        values:
        - installer
        - pruner
    egress:
-  - {}
+  - ports:
+    - protocol: TCP
+      port: 443
+    - protocol: TCP
+      port: 6443
    policyTypes:
    - Egress
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@manifests/0000_12_etcd-operand_networkpolicies.yaml` around lines 41 - 44, replace the open-ended egress rule `egress: - {}` with explicit egress rules that only allow TCP traffic to the Kubernetes API server ports (443 and 6443); specifically, change the egress section (keeping policyTypes: - Egress) to include two egress rules, or one rule with two ports, that target the API server IP/CIDR via to.ipBlock.cidr (or the known API server IP(s)) with protocol TCP, and remove the allow-all entry. Add a short manifest comment noting that the API server IP may vary and should be populated with the cluster's API server CIDR or IP list.

---
# Allow guard pods to access etcd for health checks
# Guard pods probe etcd health endpoints. Since etcd uses hostNetwork,
# we allow egress to etcd's health check ports without destination restrictions.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-guard-to-etcd
  namespace: openshift-etcd
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
spec:
  podSelector:
    matchLabels:
      app: guard
  egress:
  - ports:
    - protocol: TCP
      port: 2379
    - protocol: TCP
      port: 9980
  policyTypes:
  - Egress
---
# Allow guard, installer, and pruner pods to access DNS
# These pods may need to resolve service names.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-operand-helpers-to-dns
  namespace: openshift-etcd
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
spec:
  podSelector:
    matchExpressions:
    - key: app
      operator: In
      values:
      - guard
      - installer
      - pruner
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-dns
      podSelector:
        matchLabels:
          dns.operator.openshift.io/daemonset-dns: default
    ports:
    - protocol: TCP
      port: 5353
    - protocol: UDP
      port: 5353
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
Comment on lines +70 to +105

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

# Explore repo structure and find DNS-related configs
find . -type f -name "*.yaml" -o -name "*.yml" | head -20



🏁 Script executed:

# Search for openshift-dns references in the codebase
rg "openshift-dns" -t yaml



🏁 Script executed:

# Look for other NetworkPolicy files to understand DNS handling patterns
fd "network" -t f | grep -i "policy\|network"



🏁 Script executed:

# Check if there are any comments or documentation about DNS networking
rg "hostNetwork" -t yaml -A 3 -B 3



🏁 Script executed:

cat -n manifests/0000_12_etcd-operand_networkpolicies.yaml



🏁 Script executed:

# Search for openshift-dns pod definitions or daemonset
rg "daemonset-dns|openshift-dns" -A 10 -B 2 --all-types



🏁 Script executed:

# Check if there are any direct references to DNS pod specs
rg "dns\.operator\.openshift\.io" -A 5 -B 5 --all-types



🏁 Script executed:

rg "daemonset-dns|openshift-dns" -A 5 -B 2



🌐 Web query:

Kubernetes NetworkPolicy podSelector hostNetwork pods destination matching behavior

💡 Result:

  • spec.podSelector selects the pods the policy applies to (the “protected” pods) in the policy’s namespace. (kubernetes.io)
  • In ingress rules, from: podSelector/namespaceSelector matches source pods (by their pod IPs). In egress rules, to: podSelector/namespaceSelector matches destination pods (by their pod IPs). (kubernetes.io)

What changes with hostNetwork: true pods

  • Kubernetes explicitly says NetworkPolicy behavior for hostNetwork pods is undefined, and in the most common implementation the CNI cannot distinguish hostNetwork pod traffic, so it ignores hostNetwork pods for podSelector / namespaceSelector matching. Their traffic is treated like ordinary traffic to/from the node IP. (kubernetes.io)
  • Because a hostNetwork pod uses the node’s IP, it typically won’t match “destination pod” (to: podSelector) or “source pod” (from: podSelector) the way a normal pod does; instead, match it via ipBlock (node CIDR / node IPs) if you need to allow/deny it. (kubernetes.io)

Practical “destination matching” implication (egress)

If you write an egress rule like to: { podSelector: ... }, that rule generally matches only destinations that are reachable at pod IPs. A hostNetwork destination is reached at a node IP, so it’s commonly treated as “node traffic” and does not match that podSelector; use ipBlock for the node IP(s) instead. (kubernetes.io)
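Following that guidance, allowing egress to a hostNetwork destination typically means matching node addresses with `ipBlock` instead of a pod selector. A hypothetical sketch (the CIDR is purely a placeholder and would need to be the cluster's actual machine network):

```yaml
# Sketch: hostNetwork pods answer at node IPs, so match them with ipBlock
# rather than podSelector. 10.0.0.0/16 is a PLACEHOLDER for the cluster's
# real machine network CIDR, not a value taken from this repository.
egress:
- to:
  - ipBlock:
      cidr: 10.0.0.0/16
  ports:
  - protocol: TCP
    port: 9980
```

The manifest's `allow-guard-to-etcd` policy sidesteps this by using a ports-only rule with no destination selector at all, which is the same pattern the reviewer proposes for DNS below.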


🌐 Web query:

OpenShift DNS operator daemonset hostNetwork configuration

💡 Result:

OpenShift’s DNS Operator (cluster-dns-operator) manages the CoreDNS DaemonSet (openshift-dns/dns-default) and the node resolver DaemonSet (openshift-dns/node-resolver). The supported configuration surface is the DNS custom resource (dns.operator.openshift.io/default)—there is no supported/GA knob to set hostNetwork: true/false for those DaemonSets. If you manually edit the DaemonSet, the operator will typically reconcile it back. [1]

What is supported:

  • Control where CoreDNS (dns-default) runs via spec.nodePlacement (nodeSelector/tolerations) on dns.operator/default. [1]
  • Configure forwarding/zones via spec.servers, etc. (not shown here). [1]

If your real goal is “my workload uses hostNetwork and I want it to use cluster DNS”:

  • By default, host-network pods use the node’s /etc/resolv.conf (not the cluster DNS service IP). [2]
  • Set the pod’s DNS policy to ClusterFirstWithHostNet so it uses the cluster DNS service:
spec:
  hostNetwork: true
  dnsPolicy: ClusterFirstWithHostNet

Sources:
[1] OKD DNS Operator docs (daemonsets and spec.nodePlacement) https://docs.okd.io/4.21/networking/networking_operators/dns-operator.html
[2] cluster-dns-operator README (hostNetwork pods use node /etc/resolv.conf) https://github.com/openshift/cluster-dns-operator


DNS egress rule may not work if DNS daemonset uses hostNetwork.

The policy currently uses podSelector to match DNS destination pods, but if openshift-dns/dns-default runs with hostNetwork: true (standard practice), the podSelector destination won't match because hostNetwork pods are reachable at node IPs, not pod IPs. This mirrors the pattern already established in the same file for etcd (lines 73–74): guard traffic to hostNetwork etcd uses ports-only egress without destination selectors.

Add a ports-only fallback rule to ensure DNS queries succeed regardless of DNS pod networking:

Add fallback DNS egress rule
  egress:
  - to:
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: openshift-dns
      podSelector:
        matchLabels:
          dns.operator.openshift.io/daemonset-dns: default
    ports:
    - protocol: TCP
      port: 5353
    - protocol: UDP
      port: 5353
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
+  - ports:
+    - protocol: TCP
+      port: 5353
+    - protocol: UDP
+      port: 5353
+    - protocol: TCP
+      port: 53
+    - protocol: UDP
+      port: 53
  policyTypes:
  - Egress
🤖 Prompt for AI Agents
In `@manifests/0000_12_etcd-operand_networkpolicies.yaml` around lines 96 - 131, the NetworkPolicy allow-operand-helpers-to-dns currently only permits egress to pods selected in openshift-dns, which will fail if the DNS daemonset runs with hostNetwork: true; add a second, ports-only egress rule to the spec (after the existing to:{...} rule) that allows TCP/UDP ports 53 and 5353 with no namespaceSelector/podSelector, so DNS queries to node IPs succeed as a fallback.

  policyTypes:
  - Egress
---
# Default-deny policy for the openshift-etcd namespace.
# This policy selects all pods in the namespace and enables default-deny for both
# ingress and egress by specifying policyTypes without any allow rules.
#
# NetworkPolicies are additive (use OR logic):
# - This policy enables default-deny for all pods
# - Subsequent policies add specific allow rules
# - If any policy allows traffic, that traffic is permitted
# - Policies cannot override or block traffic allowed by other policies
#
# Note: etcd static pods use hostNetwork: true and bypass all NetworkPolicy rules.
# This policy only affects pods running on the pod network (guard, installer, pruner).
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: openshift-etcd
  annotations:
    include.release.openshift.io/self-managed-high-availability: "true"
    include.release.openshift.io/single-node-developer: "true"
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress