L7FlowExporter doesn't work as expected when the annotation 'visibility.antrea.io/l7-export' is egress or ingress only #6902
Description
Describe the bug
According to the guide https://github.com/antrea-io/antrea/blob/main/docs/network-flow-visibility.md#layer-7-network-flow-exporter, L7FlowExporter should work with any of the following three annotation values:
- `visibility.antrea.io/l7-export=ingress`
- `visibility.antrea.io/l7-export=egress`
- `visibility.antrea.io/l7-export=both`
However, when I set up a Kind cluster with Antrea 2.1 and enabled L7FlowExporter, I noticed that L7 flows are generated only when the annotation is `visibility.antrea.io/l7-export=both`. No L7 flows are exported when the Pod annotation is `visibility.antrea.io/l7-export=egress` or `visibility.antrea.io/l7-export=ingress`.
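Per the linked guide, the annotation is applied per Pod. A minimal sketch of the three variants, using the `sample-pod` name from the reproduction steps below (add `--overwrite` when changing an existing value):

```bash
# Export only this Pod's ingress L7 flows
kubectl annotate pod sample-pod visibility.antrea.io/l7-export=ingress

# Export only this Pod's egress L7 flows
kubectl annotate pod sample-pod visibility.antrea.io/l7-export=egress

# Export L7 flows in both directions
kubectl annotate pod sample-pod visibility.antrea.io/l7-export=both
```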
To Reproduce
- Create a Kind cluster with Antrea installed, with the FlowExporter and L7FlowExporter feature gates enabled and an ipfix-collector deployed. Sample configs and YAMLs are below (a sketch of the commands to apply them follows the manifests):
```yaml
antrea-agent.conf: |
  featureGates:
    FlowExporter: true
    L7FlowExporter: true
  flowExporter:
    enable: true
    flowCollectorAddr: "kube-system/ipfix-collector:4739:tcp"
```
```yaml
---
# Source: ipfix-collector/templates/ipfix-collector.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: ipfix-collector
  name: ipfix-collector
  namespace: kube-system
spec:
  selector:
    app: ipfix-collector
  ports:
    - name: ipfix-udp
      port: 4739
      protocol: UDP
      targetPort: 4739
    - name: ipfix-tcp
      port: 4739
      protocol: TCP
      targetPort: 4739
    - name: http-tcp
      port: 8080
      protocol: TCP
      targetPort: 8080
---
# Source: ipfix-collector/templates/ipfix-collector.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: ipfix-collector
  name: ipfix-collector
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ipfix-collector
  template:
    metadata:
      labels:
        app: ipfix-collector
    spec:
      containers:
        - args:
            - --ipfix.port=4739
            - --ipfix.transport=tcp
          image: antrea/ipfix-collector:latest
          imagePullPolicy: IfNotPresent
          name: ipfix-collector
          ports:
            - containerPort: 4739
            - containerPort: 8080
      nodeSelector:
        kubernetes.io/os: linux
        kubernetes.io/arch: amd64
```
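The exact install commands are not part of the report, but applying the configs above might look roughly like this (a sketch; the cluster name and the file names `antrea.yml` and `ipfix-collector.yaml` are assumptions):

```bash
# Create a Kind cluster (name is illustrative; a config with worker Nodes was presumably used).
kind create cluster --name test

# Install Antrea after merging the antrea-agent.conf snippet above into the
# antrea-config ConfigMap data in the manifest.
kubectl apply -f antrea.yml

# Deploy the ipfix-collector Service and Deployment shown above.
kubectl apply -f ipfix-collector.yaml

# Restart the agents if the ConfigMap was edited after the initial install.
kubectl -n kube-system rollout restart ds/antrea-agent
```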
- Create two Pods named `sample-pod` and `sample-pod-1` with the following YAML (see the note after the manifest for `sample-pod-1`):
  - `sample-pod` Pod IP: 172.2.1.6
  - `sample-pod-1` Pod IP: 172.2.1.7
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod
spec:
  containers:
    - name: tool-container
      image: nicolaka/netshoot
      command: ['tail', '-f', '/dev/null']
      imagePullPolicy: IfNotPresent
```
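The report only shows the manifest for `sample-pod`; presumably `sample-pod-1` uses the same spec with just the name changed, i.e. something like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: sample-pod-1
spec:
  containers:
    - name: tool-container
      image: nicolaka/netshoot
      command: ['tail', '-f', '/dev/null']
      imagePullPolicy: IfNotPresent
```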
- Run `kubectl annotate pod sample-pod visibility.antrea.io/l7-export=egress` to add the annotation to the `sample-pod` Pod.
- Run a simple HTTP server in the other Pod, `sample-pod-1`, via Python: `python3 -m http.server 8000`.
- Run a curl command inside the `sample-pod` Pod, `curl http://172.2.1.7:8000`, to generate egress traffic.
- Run a proxy for the `ipfix-collector` Service with `kubectl port-forward service/ipfix-collector 8080:8080 -n kube-system &` and execute `curl http://localhost:8080/records?format=json` to get the flow records. The expected HTTP flow records do not show up (see the consolidated command sketch below).
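For reference, the traffic-generation and query steps above can be driven from outside the Pods roughly as follows (a sketch, not necessarily the exact invocations used; it assumes `kubectl exec` access to both Pods):

```bash
# Start the HTTP server in sample-pod-1 (kept running in the background).
kubectl exec sample-pod-1 -- python3 -m http.server 8000 &

# Generate egress HTTP traffic from the annotated sample-pod.
kubectl exec sample-pod -- curl -s http://172.2.1.7:8000

# Expose the collector locally and fetch the exported records.
kubectl -n kube-system port-forward service/ipfix-collector 8080:8080 &
curl -s "http://localhost:8080/records?format=json"
```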
For comparison, an expected flow record similar to the following should be reported:
"\nIPFIX-HDR:\n version: 10, Message Length: 378\n Exported Time: 1736237192 (2025-01-07 08:06:32 +0000 UTC)\n Sequence No.: 2515, Observation Domain ID: 776858148\nDATA SET:\n DATA RECORD-0:\n flowStartSeconds: 1736237185 \n flowEndSeconds: 1736237187 \n flowEndReason: 3 \n sourceTransportPort: 52406 \n destinationTransportPort: 8000 \n protocolIdentifier: 6 \n packetTotalCount: 6 \n octetTotalCount: 397 \n packetDeltaCount: 6 \n octetDeltaCount: 397 \n sourceIPv4Address: 172.2.1.6 \n destinationIPv4Address: 172.2.1.7 \n reversePacketTotalCount: 6 \n reverseOctetTotalCount: 830 \n reversePacketDeltaCount: 6 \n reverseOctetDeltaCount: 830 \n sourcePodName: sample-pod \n sourcePodNamespace: default \n sourceNodeName: test-worker \n destinationPodName: sample-pod-1 \n destinationPodNamespace: default \n destinationNodeName: test-worker \n destinationServicePort: 0 \n destinationServicePortName: \n ingressNetworkPolicyName: \n ingressNetworkPolicyNamespace: \n ingressNetworkPolicyType: 0 \n ingressNetworkPolicyRuleName: \n ingressNetworkPolicyRuleAction: 0 \n egressNetworkPolicyName: \n egressNetworkPolicyNamespace: \n egressNetworkPolicyType: 0 \n egressNetworkPolicyRuleName: \n egressNetworkPolicyRuleAction: 0 \n tcpState: TIME_WAIT \n flowType: 1 \n egressName: \n egressIP: \n appProtocolName: http \n httpVals: {\"0\":{\"hostname\":\"172.2.1.7\",\"url\":\"/\",\"http_user_agent\":\"curl/8.7.1\",\"http_content_type\":\"text/html\",\"http_method\":\"GET\",\"protocol\":\"HTTP/1.1\",\"status\":200,\"length\":355}} \n egressNodeName: \n destinationClusterIPv4: 0.0.0.0 \n",
I captured the traffic on the interfaces `antrea-l7-tap0` and `antrea-l7-tap1`: when the annotation is 'ingress' or 'egress', traffic is captured in only one direction. I suspect Suricata can't handle the traffic when only one direction of the connection is forwarded to it.
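For anyone trying to reproduce the capture, the tap interfaces can be inspected roughly like this (a sketch; the agent Pod name is a placeholder, and it assumes tcpdump is available in the antrea-agent container or in a debug container attached to it):

```bash
# Find the antrea-agent Pod on the Node hosting sample-pod (test-worker here).
kubectl -n kube-system get pods -l component=antrea-agent -o wide

# Capture on the L7 tap interfaces inside that agent Pod.
kubectl -n kube-system exec -it <antrea-agent-pod> -c antrea-agent -- \
  tcpdump -ni antrea-l7-tap0 tcp port 8000
kubectl -n kube-system exec -it <antrea-agent-pod> -c antrea-agent -- \
  tcpdump -ni antrea-l7-tap1 tcp port 8000
```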
cc @antoninbas
Versions:
- Antrea version (Docker image tag): 2.1.0
- Kubernetes version: 1.32 (Kind cluster)