IPFIX start and stop timestamps are wrong/too large #1417

Closed
robcowart opened this issue Oct 20, 2020 · 28 comments · Fixed by #1479, #1582 or #2308
Labels: kind/bug

@robcowart

Describe the bug
IPFIX start and stop timestamps are wrong/too large.

To Reproduce
I simply installed a new Kubernetes lab environment, and installed Antrea.

Expected
Start and Stop timestamps sent in the IPFIX record should be the actual start and stop time.

Actual behavior
As seen in the PCAP output below, the timestamps are ~85 1/2 years in the future.

StartTime: Feb  7, 2106 07:28:01.000000000 CET
EndTime: Feb  7, 2106 07:28:01.000000000 CET

The hex value is... ff ff ff f1 88 6e 09 00
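Reading just the first four of those bytes as an unsigned 32-bit seconds count reproduces the date shown above, which suggests a small negative value wrapped into the uint32 range. A minimal decode sketch, assuming network byte order:

package main

import (
	"encoding/binary"
	"fmt"
	"time"
)

func main() {
	b := []byte{0xff, 0xff, 0xff, 0xf1, 0x88, 0x6e, 0x09, 0x00}
	secs := binary.BigEndian.Uint32(b[:4])       // 4294967281, i.e. 2^32 - 15
	fmt.Println(time.Unix(int64(secs), 0).UTC()) // 2106-02-07 06:28:01 +0000 UTC
	fmt.Println(int32(secs))                     // -15: a small negative value wrapped into uint32
}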

Versions:

  • Antrea v0.10.1
  • Kubernetes 1.19.3 (fresh install)
  • Container runtime: containerd 1.3.7
  • Linux kernel 5.4.0-51-generic (all nodes are Ubuntu 20.04 fresh install)
  • Open vSwitch (installed using apt)
~# modinfo openvswitch
filename:       /lib/modules/5.4.0-51-generic/kernel/net/openvswitch/openvswitch.ko
alias:          net-pf-16-proto-16-family-ovs_ct_limit
alias:          net-pf-16-proto-16-family-ovs_meter
alias:          net-pf-16-proto-16-family-ovs_packet
alias:          net-pf-16-proto-16-family-ovs_flow
alias:          net-pf-16-proto-16-family-ovs_vport
alias:          net-pf-16-proto-16-family-ovs_datapath
license:        GPL
description:    Open vSwitch switching datapath
srcversion:     7E9C4A3E7257126A845CF56
depends:        nf_conntrack,nf_nat,nf_conncount,libcrc32c,nf_defrag_ipv6,nsh
retpoline:      Y
intree:         Y
name:           openvswitch
vermagic:       5.4.0-51-generic SMP mod_unload
sig_id:         PKCS#7
signer:         Build time autogenerated kernel key
sig_key:        56:AA:1E:D4:49:F6:D7:44:76:6B:68:17:21:DC:0C:4D:B1:40:9F:BB
sig_hashalgo:   sha512
signature:      94:5E:58:BE:D0:77:E2:B3:C6:A8:70:F8:70:1D:21:2A:3F:FA:BD:FA:
		0D:79:30:F4:22:53:D9:4F:74:6A:AA:7E:41:BA:87:2F:AB:77:D3:12:
		88:73:BC:C5:06:4D:B0:74:F2:E2:86:31:C5:29:C0:B8:3F:35:47:0C:
		27:B5:92:72:0F:B3:4C:2B:68:2C:EA:FA:A1:B5:A6:60:43:2B:46:41:
		12:B2:05:0A:13:27:A5:88:75:28:F6:1A:36:3F:19:38:23:A9:C5:CD:
		42:78:8B:32:BA:21:4E:09:65:FA:81:28:A6:3A:2F:A9:69:F5:39:B8:
		B9:9A:49:C8:1B:CE:82:97:01:26:BB:55:DA:88:E9:62:7F:83:4A:0F:
		21:DC:3A:BF:97:90:09:3C:00:75:8F:98:33:EA:F8:48:F3:54:FC:89:
		FB:04:0A:6E:02:0A:2A:E0:BE:E7:96:27:17:76:83:7A:E9:EF:E3:33:
		7D:F5:7F:E5:FA:34:B1:33:58:B0:60:00:AA:15:66:0C:CB:E2:62:7F:
		F7:35:9B:16:EF:8A:5C:23:78:16:1F:73:F5:BE:0F:81:47:18:B7:E7:
		BA:F9:C6:99:74:4D:F4:22:E7:CA:C5:7C:8C:0C:7E:59:BA:83:3B:7E:
		03:AF:46:1A:F0:BE:1A:00:CA:13:FF:BA:A4:89:D4:72:3C:F2:99:53:
		9A:51:80:2E:70:B3:4C:B7:36:D5:68:1F:A6:E2:C0:C4:45:52:08:8A:
		0D:33:EC:B9:86:7D:1C:4F:D6:F0:97:AE:0C:BA:82:5C:69:20:88:F4:
		84:FA:7E:CB:6B:64:4E:5E:EE:FF:85:40:F3:36:2F:48:A0:8C:44:F8:
		2B:7D:02:AB:24:BB:39:8B:45:57:12:00:AB:96:FC:45:0A:B4:CB:6E:
		87:C8:EE:96:2A:81:08:BC:7F:0D:48:93:21:F9:DE:20:13:D5:75:B5:
		5E:74:F8:54:75:24:E8:3D:07:80:98:DA:17:34:6A:67:B0:F0:AA:59:
		67:6F:35:3F:B5:52:E9:DC:F7:06:3E:09:B4:BB:32:A4:59:81:33:24:
		00:83:D7:6A:C9:0A:E0:C8:7B:D0:8A:0F:82:9A:E3:70:88:9F:52:FB:
		44:49:3F:B5:64:D2:F5:04:A3:44:2E:23:3E:03:25:4B:19:E0:AB:5D:
		92:36:BC:E0:B4:56:64:FF:FC:B1:4A:0A:3B:EB:4F:FD:36:A1:2F:68:
		69:7D:F4:00:4F:B7:A5:8C:E4:6A:9F:E6:46:83:B8:F0:62:20:2E:A5:
		CF:0E:B1:92:59:5E:9C:B3:8C:B3:C0:30:03:66:95:EE:DC:43:20:F6:
		2E:0D:DF:6A:16:61:F2:02:B6:C0:5A:BF

Additional context

Cisco NetFlow/IPFIX
    Version: 10
    Length: 190
    Timestamp: Oct 20, 2020 17:55:28.000000000 CEST
        ExportTime: 1603209328
    FlowSequence: 0
    Observation Domain Id: 4036327851
    Set 1 [id=256] (1 flows)
        FlowSet Id: (Data) (256)
        FlowSet Length: 174
        [Template Frame: 4]
        Flow 1
            [Duration: 0.000000000 seconds (seconds)]
                StartTime: Feb  7, 2106 07:28:01.000000000 CET
                EndTime: Feb  7, 2106 07:28:01.000000000 CET
            SrcAddr: 10.244.0.2
            DstAddr: 192.168.8.11
            SrcPort: 45684
            DstPort: 6443
            Protocol: TCP (6)
            Permanent Packets: 0
            Permanent Octets: 0
            Packets: 0
            Octets: 0
            Permanent Packets: 0 (Reverse Type 86 PACKETS_TOTAL)
            Permanent Octets: 0 (Reverse Type 85 BYTES_TOTAL)
            Packets: 0 (Reverse Type 2 PKTS)
            Octets: 0 (Reverse Type 1 BYTES)
            Enterprise Private entry: (Arthur18) Type 101: Value (hex bytes): 63 6f 72 65 64 6e 73 2d 66 39 66 64 39 37 39 64 …
            Enterprise Private entry: (Arthur18) Type 100: Value (hex bytes): 6b 75 62 65 2d 73 79 73 74 65 6d
            Enterprise Private entry: (Arthur18) Type 104: Value (hex bytes): 6b 6d 61 73 74 65 72 31
            Enterprise Private entry: (Arthur18) Type 103: Value (hex bytes): 
            Enterprise Private entry: (Arthur18) Type 102: Value (hex bytes): 
            Enterprise Private entry: (Arthur18) Type 105: Value (hex bytes): 
            Enterprise Private entry: (Arthur18) Type 106: Value (hex bytes): 0a 60 00 01
            Enterprise Private entry: (Arthur18) Type 108: Value (hex bytes): 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e 65 74 …

And the hex dump...

0000   00 0a 00 be 5f 8f 08 70 00 00 00 00 f0 95 79 ab   ...._..p......y.
0010   01 00 00 ae ff ff ff f1 88 6e 09 00 ff ff ff f1   .........n......
0020   88 6e 09 00 0a f4 00 02 c0 a8 08 0b b2 74 19 2b   .n...........t.+
0030   06 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
0040   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
0050   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
0060   00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00   ................
0070   00 17 63 6f 72 65 64 6e 73 2d 66 39 66 64 39 37   ..coredns-f9fd97
0080   39 64 36 2d 76 67 34 34 78 0b 6b 75 62 65 2d 73   9d6-vg44x.kube-s
0090   79 73 74 65 6d 08 6b 6d 61 73 74 65 72 31 00 00   ystem.kmaster1..
00a0   00 0a 60 00 01 18 64 65 66 61 75 6c 74 2f 6b 75   ..`...default/ku
00b0   62 65 72 6e 65 74 65 73 3a 68 74 74 70 73         bernetes:https
robcowart added the kind/bug label on Oct 20, 2020
@antoninbas
Contributor

@dreamtalen do you think you could take a look?

@dreamtalen
Contributor

do you think you could take a look?

Sure, I will try. @robcowart Could you please share the steps you used to capture this packet with the wrong start and stop times?

@robcowart
Author

robcowart commented Oct 21, 2020

The environment is a new lab, consisting of...

  • 1 x control plane/master node and 3 x worker node on bare metal
  • All are running the software version mentioned above.
  • Time is sync'd on all nodes using NTP and the timezone is UTC.
  • The cluster was setup using kubeadm init --pod-network-cidr=10.244.0.0/16

Antrea was deployed by...

  • downloaded https://github.com/vmware-tanzu/antrea/releases/download/v0.10.1/antrea.yml
  • modified section for antrea-agent.conf changing the following values...
AntreaProxy: true
FlowExporter: true
flowCollectorAddr: "192.168.9.21:4738:udp" <--- the port is different than the standard 4739 and is on purpose
flowPollInterval: "5s"
flowExportFrequency: 12
  • applied using... kubectl apply -f antrea.yml

That's it. Just a really simple starting point. When setup in this manner, the dates I am receiving are as shown above.

Interestingly ExportTime: 1603209328 is correct, which shows that it isn't the nodes themselves. I suspect that the issue is related to a missed or improper conversion happening somewhere. flowStartSeconds and flowEndSeconds should be unsigned values according to the IPFIX specifications. However, Golang's time library uses signed values. Personally, I would consider converting the timestamps to uint64 values when they are first fetched, and then handling them as uint64 throughout the rest of the code rather than as Time types. Just my $.02
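A minimal sketch of that approach, converting once at fetch time and carrying the unsigned value through the rest of the pipeline (the function name is illustrative, not Antrea's actual code):

package main

import (
	"fmt"
	"time"
)

// toFlowSeconds converts a fetched timestamp to an unsigned value once,
// so the rest of the pipeline never touches Go's signed time.Unix().
func toFlowSeconds(t time.Time) (uint64, error) {
	s := t.Unix() // Go's time package returns a signed int64
	if s < 0 {
		return 0, fmt.Errorf("timestamp before the UNIX epoch: %v", t)
	}
	return uint64(s), nil
}

func main() {
	secs, err := toFlowSeconds(time.Now())
	if err != nil {
		panic(err)
	}
	fmt.Println(secs)
}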

@robcowart
Author

It may be completely unrelated to this issue, but I have also noticed that when flow export is enabled, the antrea-agent will consume 100% of a CPU core. It returns to normal after disabling flow export.

@antoninbas
Contributor

I deployed a cluster with Antrea and the same config as you (except for flowCollectorAddr of course), but didn't observe high CPU usage. I even deployed some Pods to generate a lot of iperf traffic, but Antrea resource usage stayed pretty low:

 110208 systemd+  20   0   13076    584    544 S  62.8   0.0   0:08.93 iperf
 110326 systemd+  20   0   13076    604    564 S  55.8   0.0   0:05.23 iperf
 108606 systemd+  20   0   24064   3548    608 S  52.8   0.0   0:43.18 iperf
 110099 systemd+  20   0   13076    524    484 S  15.6   0.0   0:06.36 iperf
   6092 root      20   0 1340004  98204  67632 S   4.0   1.2   4:19.20 kubelet
   6117 root      20   0 1345216 111984  40696 S   3.0   1.4   2:55.19 dockerd
     16 root      20   0       0      0      0 S   1.7   0.0   0:25.35 ksoftirqd/1
 108006 root      20   0  970844  52520  35500 S   1.0   0.7   0:01.97 antrea-agent
   6171 root      20   0 1391676  48896  25156 S   0.7   0.6   0:32.51 containerd
  95107 ubuntu    20   0 4825808   1.2g  28968 S   0.7  16.1   0:37.64 java

I generated a lot of traffic (but not that many connections). I left it running for a while but things were stable. Is there anything useful in the antrea-agent logs? Anything specific I should be doing to reproduce?

On a side note, we should probably have a mechanism to enable Go profiling and the pprof HTTP server in the Antrea components, so that when a user runs into an issue like this one it is easy to collect information (goroutine backtraces, CPU profiles, etc.). I'll open an issue to this effect.

@robcowart
Author

@antoninbas I am going to walk through the whole process again and will document each step for repeatability. If the CPU utilization issue remains, I feel like I should open a separate issue, unless you feel it should stay here.

@dreamtalen
Contributor

Hi Rob, I failed to reproduce the wrong start and stop timestamps on my side. I'm not sure if it's related to the different IPFIX collectors we are using. Would you please try our ELK Flow Collector at https://github.com/vmware-tanzu/antrea/blob/master/docs/network-flow-visibility.md? There will be a Kibana dashboard which is easy to use and recommended for Antrea flow visualization. Also, please tell me the tool you used to capture this IPFIX record so I can try it too. Thanks!

@antoninbas
Contributor

@robcowart Thanks! Yes please open a separate issue when you get the steps documented.

@dreamtalen @zyiou could we take a deeper look at the timestamp issue? I know that the timestamps look correct (at least flowStartSeconds does) when looking at the records using the ELK stack configuration we provide. But there is something fishy when looking at the records in Wireshark. https://tools.ietf.org/html/rfc7011#section-6.1.7 does state:

The dateTimeSeconds data type is an unsigned 32-bit integer in
network byte order containing the number of seconds since the UNIX
epoch, 1 January 1970 at 00:00 UTC, as defined in [POSIX.1].
dateTimeSeconds is encoded identically to the IPFIX Message Header
Export Time field. It can represent dates between 1 January 1970 and
7 February 2106 without wraparound; see Section 5.2 for wraparound
considerations.

Whereas in our case we are using a signed 64-bit integer.

The RFC says that the encoding should be the same as the ExportTime field, for which we use a uint32 (https://github.com/vmware/go-ipfix/blob/9ed57c1a11c1df64715bc23ee9ff4d7730d148a1/pkg/entities/message.go#L29). So I think there is definitely something wrong here. Please take a look. And thanks @robcowart for finding this.
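A minimal sketch of the encoding the RFC calls for, matching the 4-byte layout of the Export Time header field (the function name is illustrative):

package main

import (
	"encoding/binary"
	"fmt"
	"time"
)

// encodeDateTimeSeconds lays the value out exactly like the IPFIX message
// header's Export Time field: an unsigned 32-bit big-endian count of
// seconds since the UNIX epoch.
func encodeDateTimeSeconds(t time.Time) []byte {
	b := make([]byte, 4)
	binary.BigEndian.PutUint32(b, uint32(t.Unix()))
	return b
}

func main() {
	// The ExportTime from the capture above encodes to 5f 8f 08 70,
	// matching the bytes right after version/length in the hex dump.
	fmt.Printf("% x\n", encodeDateTimeSeconds(time.Unix(1603209328, 0)))
}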

@robcowart
Author

The collector is irrelevant in this case as I was seeing this on the wire via a PCAP before it reached the collector. Seeing it also in the collector just confirms what was seen on the wire.

Can you share anything else about your environment? As I mentioned above I am on Ubuntu 20.04, K8S 1.19.3 via kubeadm init. How does this compare to what you are testing?

I will walk through my environment setup from scratch again, but I will be a few days.

@zyiou
Contributor

zyiou commented Oct 26, 2020

@antoninbas I remember that flowStartSeconds and flowEndSeconds were originally uint32, but we manually modified the data type to int64 when matching the Logstash fields and the exporter fields. @srikartati I will change them back to uint32 if there is no further concern. Thanks!

@srikartati
Member

srikartati commented Oct 27, 2020

Thanks @robcowart for raising this issue.

Currently, we use an unsigned 64-bit data type for dateTimeSeconds, which is a bug.
https://github.com/vmware/go-ipfix/blob/master/pkg/entities/ie.go#L370

As @antoninbas correctly pointed out, according to the RFC, dateTimeSeconds should be unsigned 32-bit, while dateTimeMilliseconds remains unsigned 64-bit.
The bug is in the format for dateTimeSeconds: it should be changed to the uint32 type and the size should be 4 bytes. I think the confusion arose because of the int64 format for Unix seconds in the Go time package.
@zyiou Agreed. Let's fix this in go-ipfix and do a patch release. Then we should fix this in Antrea. Please change the type of the Logstash field accordingly too. Thanks.

In addition, I concur with Rob's suggestion that we should let users of the go-ipfix library provide values directly as uint32 for dateTimeSeconds and uint64 for dateTimeMilliseconds, rather than expecting the time.Unix() format provided by the Go time package.
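A sketch of what the caller-facing change might look like, with the user doing the conversion up front (these helper names are hypothetical, not go-ipfix's actual API):

package main

import (
	"fmt"
	"time"
)

// Hypothetical helpers a go-ipfix user would call before handing values
// to the library, rather than passing time.Time / time.Unix() around.
func asDateTimeSeconds(t time.Time) uint32 { return uint32(t.Unix()) }

func asDateTimeMilliseconds(t time.Time) uint64 {
	return uint64(t.UnixNano() / int64(time.Millisecond))
}

func main() {
	start := time.Unix(1603209328, 0) // the ExportTime from the capture above
	fmt.Println(asDateTimeSeconds(start), asDateTimeMilliseconds(start))
}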

@srikartati
Member

srikartati commented Oct 27, 2020

An orthogonal point: Right now, flowEndTime is not properly encoded in the flow record. Conntrack does not set the stopTime until the conntrack flow is deleted.
https://github.com/torvalds/linux/blob/9ff9b0d392ea08090cd1780fb196f36dbb586529/net/netfilter/nf_conntrack_core.c#L664

Currently, we use the conntrack stopTime as flowEndTime for every flow record of a conntrack flow, so we see some arbitrary values.
One option is to use the current time as flowEndTime for every flow record.
The second option is to encode it as 0 in each flow record while the flow is active, and put the timestamp in the last flow record when the flow becomes inactive, that is, when the flow is deleted from the conntrack table.

@antoninbas @robcowart Any suggestions?
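A minimal sketch of the second option, with illustrative types rather than Antrea's actual connection struct:

package main

import (
	"fmt"
	"time"
)

// connection is an illustrative stand-in for Antrea's conntrack-derived
// connection state; only the fields needed for the example are shown.
type connection struct {
	active   bool
	stopTime time.Time // set only once conntrack deletes the entry
}

// flowEndSeconds implements the second option: 0 while the flow is
// active, the real stop time only in the final record.
func flowEndSeconds(c connection) uint32 {
	if c.active {
		return 0 // collector can treat 0 as "flow still open"
	}
	return uint32(c.stopTime.Unix())
}

func main() {
	fmt.Println(flowEndSeconds(connection{active: true}))
	fmt.Println(flowEndSeconds(connection{stopTime: time.Unix(1603209328, 0)}))
}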

@antoninbas
Contributor

I would suggest using 0 to make it clear that the value is not valid yet. I couldn't find any guidance for flowEnd* fields in the IPFIX RFCs, though I may have missed it. On the one hand, it makes you wonder whether you should only export data after a flow "terminates". On the other hand, the existence of fields like octetDeltaCount would imply that it is not the case and that exporting information half-way through a flow's lifetime is ok.

BTW, a related question: when a flow ends, are we guaranteed to export correct final packet / byte count values? Or does this guarantee only hold if the polling interval is less than the time-wait delay (120s by default IIRC)? Have we considered listening to conntrack events as an alternative / complementary implementation to polling, at least for Linux?

@robcowart
Author

Most network devices send flow records based on one or more configurable timeout settings. While they may track a flow for a longer period of time, they will export records for it more frequently. This is necessary for a number of reasons.

  • Many server-to-server TCP connections can live for weeks, months or longer (e.g. the connection from Kibana or Logstash to Elasticsearch). They essentially never end unless stopped or there is some error. Without periodic updates the user could literally wait months for the information. What good is the rest of their data until the picture is complete? Not much.

  • Now consider that the above long-lived flow was a hacker exfiltrating data from your database. The flow record that could reveal their activity is never sent until the damage is already done, and you lose the opportunity to shut them down. Not good.

  • If all of the information about a flow is included in a single record at the end of the flow, how do you distribute it over the life of the flow? All at the beginning? All at the end? Unfortunately most visualization tools, like Kibana, and even many purpose-built flow applications, do not provide the ability to smooth the data into buckets over the life of the flow. Even if they did, each query would still have to look into the future to determine whether any flows had start times within the currently visualized window. This is very tricky at best, if not close to impossible. However, when the device sends regular updates about its flows it is much easier.

If I explained those well, you should see that it is very desirable to send information about a long-lived flow over the life of the flow. Most network devices will provide various options to control this. Some are simple:

inactive_timeout = 15 - export flow after it is inactive for 15 seconds.
active_timeout = 60 - export flow every 60 seconds while it is active.

Others provide more granular control:

expiry-interval 10 - the interval between expiry checks, i.e. every 10 secs it is checked which flows are ready for export
icmp 60 - export icmp flows every 60 seconds
tcp-generic 60 - export tcp flows every 60 seconds
tcp-fin 20 - export flow 20 seconds after observing a TCP finish flag.
tcp-rst 20 - export flow 20 seconds after observing a TCP reset flag.
udp 60 - export udp flows every 60 seconds
flow-generic 60 - applies to any IP protocol not specified by other timeouts
max-active-life 604800 - track no flows longer than 1 week (although it can pickup tracking again if the session is still active, usually with a new flow ID)

I explain this difference just to show the options that are common. The expectation is that it should be possible to at least provide the former. The only detail I would add is that the inactive and active periods should be tracked per flow, not globally. For example, if the 60 second export period were global, i.e. every 60 seconds all active flows are exported, you could end up with 59 seconds of no data followed by a flood of records from across the whole infrastructure arriving simultaneously. This can easily overwhelm some collection systems, especially Linux systems with default networking kernel parameters. Tracking the inactive and active timeouts per flow (as in the sketch below) will better distribute the load on the collecting system and provide more timely and accurate reporting. In 2020 I would say that it is generally accepted that 1 min is a good compromise between data granularity and the overall volume of records generated, with short-lived flows being exported quickly due to the inactive timeout.
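A minimal sketch of such per-flow timeout tracking, under the simple two-timeout model above (names are illustrative, not Antrea's implementation):

package main

import (
	"fmt"
	"time"
)

const (
	inactiveTimeout = 15 * time.Second
	activeTimeout   = 60 * time.Second
)

// flow carries per-flow export state; the field names are illustrative.
type flow struct {
	lastActive   time.Time // last time packets were seen for this flow
	lastExported time.Time // last time a record was exported for it
}

// shouldExport checks both timeouts against this one flow's own clock,
// so exports stay staggered instead of firing for every flow at once.
func shouldExport(f flow, now time.Time) bool {
	if now.Sub(f.lastActive) >= inactiveTimeout {
		return true // idle: export and expire the flow
	}
	return now.Sub(f.lastExported) >= activeTimeout // active: periodic update
}

func main() {
	now := time.Now()
	f := flow{lastActive: now, lastExported: now.Add(-90 * time.Second)}
	fmt.Println(shouldExport(f, now)) // true: the active timeout has expired
}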

Now that we have established that records should be sent periodically over the lifetime of a network flow, the next question is which fields, or "information elements" (IEs), should be included. While there are both delta and total IEs for things like bytes and packets, almost every vendor sends only the delta values, where the delta is the quantity since the previous record for this flow was exported (or since the flow started, if sending the initial record).

Sending only delta values is usually not an issue, as it is easy enough at query time to sum the deltas when a total is needed. I have seen a few examples of vendors sending the total values as well, but it isn't really important. What should be avoided is sending only total values. That makes it very challenging to work with the data later, especially in combination with records from other sources that provide only delta values. There is nothing wrong with including the total values, but I would simply remove them and gain a bit of efficiency.
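A sketch of deriving delta counts from running totals at export time, assuming the exporter remembers the totals it last reported (illustrative names):

package main

import "fmt"

// counters holds the running totals a meter keeps per flow.
type counters struct{ packets, octets uint64 }

// deltaSince returns the per-record delta given the totals reported in
// the previous record for the same flow (zeros for the first record).
func deltaSince(curr, prev counters) counters {
	return counters{curr.packets - prev.packets, curr.octets - prev.octets}
}

func main() {
	prev := counters{packets: 85, octets: 6565} // totals at the last export
	curr := counters{packets: 97, octets: 7301} // totals now
	fmt.Println(deltaSince(curr, prev))         // {12 736}
}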

Regarding the start and end timestamps, I have to admit that the RFCs aren't really clear here. My feeling is that for active flows flowEndSeconds should be set to the time that the last measurement was taken. However, I am going to spend some time going through my collection of PCAPs from different vendors and try to determine whether there is any consensus best practice. I will get back to you on this.

The last thing I will mention is UDP vs. TCP. I noticed in the Antrea repo examples that the flow exporter was configured to use TCP. However, I set mine up to use UDP. Like many other network-related data sources (SNMP, syslog, etc.), NetFlow was designed to be sent via UDP. The logic is that establishing a TCP session has much more overhead and requires much more network traffic (especially in the reverse direction, for ACKs and such) than UDP. During an outage, it is generally undesirable for the overhead of management traffic attempting to re-establish sessions to compete with applications trying to do the same. This is so embedded in the networking mindset that many devices, as well as collectors, do not even support TCP. In fact, IPFIX is the only flow standard to even specify optional support for TCP. Since the Antrea exporter supports both, this isn't an issue. However, you should be aware that if people follow the documented example config exactly, some of them might encounter issues because their collector doesn't support TCP.

Hopefully this was helpful. Let me know if there are any questions that I can help answer.

@antoninbas
Contributor

@srikartati @zyiou @dreamtalen could we investigate the possibility of implementing inactive_timeout and active_timeout as @robcowart suggested to limit burstiness and maybe have a more compliant implementation? As part of this it may be interesting to add flowEndReason to the records (https://www.iana.org/assignments/ipfix/ipfix.xhtml#ipfix-flow-end-reason), to indicate whether the record was sent because of the active_timeout, inactive_timeout, or something else. Looking at flowEndReason, it seems that flowEndSeconds should indeed be the time at which the flow was last measured to be active.
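For reference, the IANA registry linked above defines these flowEndReason (IE 136) values, reproduced here as Go constants:

package main

import "fmt"

// flowEndReason values from the IANA IPFIX registry (IE 136).
const (
	idleTimeout       uint8 = 0x01 // flow considered idle (inactive timeout)
	activeTimeout     uint8 = 0x02 // flow still active, exported for reporting
	endOfFlowDetected uint8 = 0x03 // e.g. a TCP FIN flag was observed
	forcedEnd         uint8 = 0x04 // external event, e.g. exporter shutdown
	lackOfResources   uint8 = 0x05 // metering/exporting process out of resources
)

func main() {
	fmt.Println(idleTimeout, activeTimeout, endOfFlowDetected, forcedEnd, lackOfResources)
}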

@srikartati
Member

Thanks @robcowart for detailed comments.

@antoninbas Yes, an IPFIX field based on the status flag in polled connections was planned to be implemented, which is similar to flowEndReason. The active_timeout and inactive_timeout suggestion is definitely interesting; currently we have only a single export frequency for all flows, and we do not differentiate active and inactive flows.
Considering flowEndSeconds as the time of the last poll before exporting the flow makes sense.

BTW, a related question: when a flow ends, are we guaranteed to export correct final packet / byte count values? Or does this guarantee only hold if the polling interval is less than the time-wait delay (120s by default IIRC)? Have we considered listening to conntrack events as an alternative / complementary implementation to polling, at least for Linux?

This is a good point. Currently, we do not have any maximum for the polling interval, so the final counts are accurate only if it is less than the time-wait delay. Regarding event-based tracking of connections, there are some ideas to use eBPF scripts to get the flow information in addition to polling. We do not have any concrete proposal for now.

@zyiou
Contributor

zyiou commented Nov 13, 2020

Closing this issue (fixed in PR #1479).

@zyiou zyiou closed this as completed Nov 13, 2020
@robcowart
Author

@zyiou, @srikartati this issue should be reopened. I have tested v0.11.1, and while the issue is partially fixed, a problem remains. The flow start time is now correct. However, the flow end time is still far in the future. It is not as far in the future as when I originally reported this issue, but it is still wrong.

Cisco NetFlow/IPFIX
    Version: 10
    Length: 184
    Timestamp: Nov 22, 2020 12:27:29.000000000 CET
    FlowSequence: 2
    Observation Domain Id: 4241174506
    Set 1 [id=256] (1 flows)
        FlowSet Id: (Data) (256)
        FlowSet Length: 168
        [Template Frame: 36]
        Flow 1
            [Duration: 682868247.000000000 seconds (seconds)]
                StartTime: Nov 22, 2020 12:26:33.000000000 CET
                EndTime: Jul 14, 2042 03:04:00.000000000 CEST
            SrcAddr: 10.244.2.2
            DstAddr: 192.168.8.11
            SrcPort: 43058
            DstPort: 6443
            Protocol: TCP (6)
            Permanent Packets: 85
            Permanent Octets: 6565
            Packets: 0
            Octets: 0
            Permanent Packets: 88 (Reverse Type 86 PACKETS_TOTAL)
            Permanent Octets: 49145 (Reverse Type 85 BYTES_TOTAL)
            Packets: 0 (Reverse Type 2 PKTS)
            Octets: 0 (Reverse Type 1 BYTES)
            Enterprise Private entry: ((Unknown)) Type 101: Value (hex bytes): 63 6f 72 65 64 6e 73 2d 66 39 66 64 39 37 39 64 36 2d 72 62 67 6c 78
            Enterprise Private entry: ((Unknown)) Type 100: Value (hex bytes): 6b 75 62 65 2d 73 79 73 74 65 6d
            Enterprise Private entry: ((Unknown)) Type 104: Value (hex bytes): 6b 6e 6f 64 65 32
            Enterprise Private entry: ((Unknown)) Type 103: Value (hex bytes): 
            Enterprise Private entry: ((Unknown)) Type 102: Value (hex bytes): 
            Enterprise Private entry: ((Unknown)) Type 105: Value (hex bytes): 
            Enterprise Private entry: ((Unknown)) Type 106: Value (hex bytes): 0a 60 00 01
            Enterprise Private entry: ((Unknown)) Type 109: Value (hex bytes): 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e 65 74 65 73 3a 68 74 74 70 73
            Enterprise Private entry: ((Unknown)) Type 110: Value (hex bytes): 
            Enterprise Private entry: ((Unknown)) Type 111: Value (hex bytes): 
            Enterprise Private entry: ((Unknown)) Type 112: Value (hex bytes): 
            Enterprise Private entry: ((Unknown)) Type 113: Value (hex bytes): 

@srikartati
Member

@robcowart You are right. Just checked the code. We are getting the stopTime from the conntrack table; instead, per the discussion in this issue, we should record the stopTime as the time when the flow record is created (time.Now).
https://github.com/vmware-tanzu/antrea/blob/master/pkg/agent/flowexporter/exporter/exporter.go#L252

@zyiou Is it possible for you to add this in one of the PRs you already opened? Thanks.
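A minimal sketch of that change, with illustrative types (the actual code is at the link above):

package main

import (
	"fmt"
	"time"
)

// flowRecord holds just the two timestamp IEs relevant here;
// the struct is illustrative, not Antrea's actual record type.
type flowRecord struct {
	flowStartSeconds uint32
	flowEndSeconds   uint32
}

// buildRecord stamps flowEndSeconds with the record creation time
// instead of the (possibly unset) conntrack stop time.
func buildRecord(start time.Time) flowRecord {
	return flowRecord{
		flowStartSeconds: uint32(start.Unix()),
		flowEndSeconds:   uint32(time.Now().Unix()),
	}
}

func main() {
	fmt.Println(buildRecord(time.Now().Add(-time.Minute)))
}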

@srikartati srikartati reopened this Nov 23, 2020
@zyiou
Contributor

zyiou commented Nov 23, 2020

Sure. I can add this in #1582 when I finish the upgrade to go-ipfix v0.3.1.

srikartati pushed a commit that referenced this issue Dec 3, 2020
* Move to go-ipfix package v0.3.1
- Changes for set and information element struct and interfaces
- Fixes #1417 by changing the flowEndSeconds to using time.Now()
- Move to using antrea/ipfix-collector v0.3.1 (based on #1523)
dreamtalen pushed a commit to dreamtalen/antrea that referenced this issue Dec 10, 2020
* Move to go-ipfix package v0.3.1
- Changes for set and information element struct and interfaces
- Fixes antrea-io#1417 by changing the flowEndSeconds to using time.Now()
- Move to using antrea/ipfix-collector v0.3.1 (based on antrea-io#1523)
antoninbas pushed a commit that referenced this issue Dec 23, 2020
* Move to go-ipfix package v0.3.1
- Changes for set and information element struct and interfaces
- Fixes #1417 by changing the flowEndSeconds to using time.Now()
- Move to using antrea/ipfix-collector v0.3.1 (based on #1523)
@robcowart
Author

I have tested v0.12.0 and there are still issues. Previously (see my comment from 22 Nov 2020) the "start" time was fixed, but the "end" time was wrong. Now the opposite is true. As you can see in the PCAP below, "end" is now correct, but "start" is again wrong. If one of the two had to be wrong, it is most important that the end time is correct. However, it would be best for both to be correct.

Cisco NetFlow/IPFIX
    Version: 10
    Length: 186
    Timestamp: Jan 14, 2021 05:35:33.000000000 CET
    FlowSequence: 6
    Observation Domain Id: 4241174507
    Set 1 [id=256] (1 flows)
        FlowSet Id: (Data) (256)
        FlowSet Length: 170
        [Template Frame: 407]
        Flow 1
            [Duration: -678313707.000000000 seconds (seconds)]
                StartTime: Jul 14, 2042 03:04:00.000000000 CEST
                EndTime: Jan 14, 2021 05:35:33.000000000 CET
            SrcPort: 57442
            DstPort: 6443
            Protocol: TCP (6)
            Permanent Packets: 0
            Permanent Octets: 0
            Packets: 0
            Octets: 0
            SrcAddr: 10.244.3.2
            DstAddr: 192.168.8.11
            Permanent Packets: 0 (Reverse Type 86 PACKETS_TOTAL)
            Permanent Octets: 0 (Reverse Type 85 BYTES_TOTAL)
            Packets: 0 (Reverse Type 2 PKTS)
            Octets: 0 (Reverse Type 1 BYTES)
            Enterprise Private entry: (Project Antrea) Type 101: Value (hex bytes): 63 6f 72 65 64 6e 73 2d 66 39 66 64 39 37 39 64 36 2d 63 70 36 66 62
            Enterprise Private entry: (Project Antrea) Type 100: Value (hex bytes): 6b 75 62 65 2d 73 79 73 74 65 6d
            Enterprise Private entry: (Project Antrea) Type 104: Value (hex bytes): 6b 6e 6f 64 65 33
            Enterprise Private entry: (Project Antrea) Type 103: Value (hex bytes): 
            Enterprise Private entry: (Project Antrea) Type 102: Value (hex bytes): 
            Enterprise Private entry: (Project Antrea) Type 105: Value (hex bytes): 
            Enterprise Private entry: (Project Antrea) Type 108: Value (hex bytes): 01 bb
            Enterprise Private entry: (Project Antrea) Type 109: Value (hex bytes): 64 65 66 61 75 6c 74 2f 6b 75 62 65 72 6e 65 74 65 73 3a 68 74 74 70 73
            Enterprise Private entry: (Project Antrea) Type 110: Value (hex bytes): 
            Enterprise Private entry: (Project Antrea) Type 111: Value (hex bytes): 
            Enterprise Private entry: (Project Antrea) Type 112: Value (hex bytes): 
            Enterprise Private entry: (Project Antrea) Type 113: Value (hex bytes): 
            Enterprise Private entry: (Project Antrea) Type 106: Value (hex bytes): 0a 60 00 01

@srikartati
Member

@robcowart Thanks for reporting this. Did you use the Flow Aggregator service when you tested, or did you export flow records directly to the flow collector from the Flow Exporter?

@robcowart
Author

Directly exported from flow exporter.

@zyiou
Contributor

zyiou commented Jan 15, 2021

@robcowart Does it happen every time you export flow records, or does it happen occasionally?
@srikartati I checked all the recently exported flow records and the flowStartSeconds values are correct. Since the startTime in the case above seems to be the default value for an empty time, is it possible that the startTime from the conntrack table is not filled in? If so, do we need to check and set a value when startTime is invalid?

@srikartati
Member

@zyiou Yes, that was going to be my follow-up comment. If this case is not happening for every flow, then we need to look at the conntrack dump API from the github.com/ti-mo/conntrack package, from which we get the startTime timestamp. It is possible that the start timestamp is not filled in; if so, we should check whether it is invalid and set it the first time we convert the netlink connection to an Antrea connection here.
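A sketch of that guard at conversion time (types are illustrative, not the ti-mo/conntrack structs):

package main

import (
	"fmt"
	"time"
)

// netlinkFlow stands in for the flow returned by the conntrack dump;
// only the field needed for the example is shown.
type netlinkFlow struct {
	start time.Time // may be the zero value if conntrack did not fill it
}

// startTime falls back to the time of the first poll when the dump
// returned no start timestamp for the connection.
func startTime(f netlinkFlow, now time.Time) time.Time {
	if f.start.IsZero() {
		return now
	}
	return f.start
}

func main() {
	fmt.Println(startTime(netlinkFlow{}, time.Now()))
}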

@robcowart I just want to make sure the antrea configmap parameter is the default value: ovsDatapathType: system and not netdev. Could you confirm that?

@robcowart
Author

@zyiou I checked the PCAP of records that I made. It isn't all of the records, but it is at least 80% or more. If it is relevant... this environment is not very busy. It has only a few example deployments to test the IPFIX exporter.

@srikartati ovsDatapathType is commented out, so I assume that the default of system is in effect.

@srikartati
Member

@robcowart Thanks for confirming. Could you please check if the following sysctl config is set in your system? /proc/sys/net/netfilter/nf_conntrack_timestamp

@robcowart
Author

Every node in the cluster has the same value...

knode2:~# cat /proc/sys/net/netfilter/nf_conntrack_timestamp
1
