doc/analyticsBlog.asciidoc (+2 -2)

@@ -25,7 +25,7 @@ endif::backend-xhtml11[]
 = How do we track TRex performance Using ElasticSearch, Grafana and Pandas
 The ability to monitor TRex performance on many setups/configurations on a daily basis may have a large impact on our ability to identify TRex performance degradation.
-For a long time our monitoring method was based on hard coded boundaries, we have defined maximum and minimum values for each test/'s result and any exeception triggered a notification.
+For a long time our monitoring method was based on hard coded boundaries, we have defined maximum and minimum values for each test/'s result and any exception triggered a notification.
 This monitoring method had a lot of false positives which in turn increased the investigation time. Moreover this method introduced more complexity through the need to maintain the golden results for many test cases on various platforms.
 A new method was required which would enable us to:

@@ -104,7 +104,7 @@ image:images/blog/figure5.jpg[title="figure5",align="left",width={p_width}, link
 [small]#Figure 5# +
 https://github.com/cisco-system-traffic-generator/trex-core/blob/master/doc/AnalyticsConnect.py/[View this on GitHub] +
-More on: https://developers.google.com/analytics/devguides/reporting/core/v3/quickstart/service-py/[Google API on Documantation] [4]
+More on: https://developers.google.com/analytics/devguides/reporting/core/v3/quickstart/service-py/[Google API on Documentation] [4]
doc/packet_builder_yaml.asciidoc (+7 -7)

@@ -113,7 +113,7 @@ The format should be *YAML*
 | uint64 | sub fields of this header | 64
 | other class type | name of other class. for example, "c-mac-addr"; take fields from there, optionally overload them later | The size taken from that class
 | Payload | xref:Payload[Payload] | total packet size - all header until now
-| vlen_t | in case of varible size header this include the size to the end of varible size header see example xref:IpvOption[Ipv4Option] |total size of the object
+| vlen_t | in case of variable size header this include the size to the end of variable size header see example xref:IpvOption[Ipv4Option] |total size of the object
 |=================

@@ -189,7 +189,7 @@ The format should be *YAML*
 | next_headers | string or type | a name of class that define the next or just an array | "none" | xref:Next_headers[Next_headers] |

 * There would be a spare field in the Stream object so GUI could add more metadata for reconstructing the builder types
 for example in this example Ethernet/IP/TCP/IP/TCP you can't extrac from buffer alone that Payload is IP/TCP only the builder known that in build time.
 * Ip total length need to keep the total_pkt_size - this ip header . this should work for internal header too.
-* When GUI add header ("external") the total size of this header should be calculated ( varible size should be given a default - ipv4)
+* When GUI add header ("external") the total size of this header should be calculated ( variable size should be given a default - ipv4)
doc/release_notes_old.asciidoc (+6 -6)

@@ -663,7 +663,7 @@ which was released on v2.16
 == Release 2.13 ==
 * Significantly improve performance and scale for stateful case with high active flows (70%-300% better) see here link:trex_manual.html#_more_active_flows[More active flows]
-* Stateful with low active repeatable flows - an optimization was removed, for example `cap2/imix_fast_1g.yaml`. users that want to get this in high performance are adviced to move to Stateles mode. Removing this support improved the common case.
+* Stateful with low active repeatable flows - an optimization was removed, for example `cap2/imix_fast_1g.yaml`. users that want to get this in high performance are adviced to move to Stateless mode. Removing this support improved the common case.
 * Support NAT without IPv4.option and UDP flows- for ASA link:https://trex-tgn.cisco.com/youtrack/issue/trex-274[trex-274]
 * Scapy server is restart automatically - for future Stateless GUI link:https://trex-tgn.cisco.com/youtrack/issue/trex-291[trex-291]
 * Add minimum ipg for remote pcap link:https://trex-tgn.cisco.com/youtrack/issue/trex-281[trex-281]

@@ -691,7 +691,7 @@ which was released on v2.16
 * Improve support for Mellanox ConnectX-4 cards (100/50/25GbE)
 ** Only CentOs/RedHat 7.2 and up is supported due to OFED issues
 * Stateless neighboring protocols infra and first protocols support/Python API - link:trex_stateless.html#_neighboring_protocols[Neighboring Protocols]

@@ -799,7 +799,7 @@ See link:trex_manual.html#_configuration_yaml_parameter_of_cfg_option[here] and
 * DPDK 16.07
 * ASYNC ZMQ is compressed by default. It improves response time see link:http://trex-tgn.cisco.com/youtrack/issue/trex-232[trex-232]
 ** You will need to update the GUI
-* Support Ubuntu 16.01 - Stateful serverr is Python 3.0 and Python 3.5 for ZMQ library
+* Support Ubuntu 16.01 - Stateful server is Python 3.0 and Python 3.5 for ZMQ library
 * XL710/X710 low latency was improved - see link:http://trex-tgn.cisco.com/youtrack/issue/trex-214[trex-214]
 * Support graceful shutdown command
 * Console - support L1 BPS using `-m 10bpsl1` see link:http://trex-tgn.cisco.com/youtrack/issue/trex-230[trex-230]

@@ -906,7 +906,7 @@ For XL710/X710 there is a need to upgrade the firmware to 5.04 (or later)
 ** Add tx/rx graphs
 * Python API: add an API for reading events as warning/errors
 * HLTAPI support for per stream stats
-* support VALN mode for per stream stats for 82599 using `--vlan` switch at server invocation
+* support VLAN mode for per stream stats for 82599 using `--vlan` switch at server invocation
 * A peek into TRex stateless GUI version for evaluation still without many features like packet builder, advance packet builder, per stream stats link:https://www.dropbox.com/s/vs9gojtdc5ewv05/setupCiscoTrex1.96-SNAPSHOT.exe?dl=0[TRex Stateless GUI Download]
 ** Only pcap file packet builder is supported in this version

@@ -1103,7 +1103,7 @@ optional arguments:
   -s SPEEDUP, --speedup SPEEDUP
                         Factor to accelerate the injection. effectively means
                         IPG = IPG / SPEEDUP
-  --force               Set if you want to stop active ports before appyling
+  --force               Set if you want to stop active ports before applying
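The `--speedup` help text above reduces to a one-line formula. A minimal sketch of that arithmetic, using a hypothetical helper name (`effective_ipg` is not part of TRex):

```python
def effective_ipg(ipg_usec: float, speedup: float) -> float:
    """Inter-packet gap after applying --speedup: IPG = IPG / SPEEDUP.
    Hypothetical helper for illustration only, not a TRex API."""
    if speedup <= 0:
        raise ValueError("speedup must be positive")
    return ipg_usec / speedup

# Replaying a pcap whose packets are 100 usec apart with --speedup 4
# injects them 25 usec apart.
print(effective_ipg(100.0, 4.0))  # -> 25.0
```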
doc/trex_appendix_asa_5585.asciidoc (+1 -1)

@@ -8,7 +8,7 @@ TRex with ASA 5585
 include::trex_ga.asciidoc[]
-When running TRex aginst ASA 5585, you have to notice following things:
+When running TRex against ASA 5585, you have to notice following things:
 * ASA can't forward ipv4 options, so there is a need to use --learn-mode 1 (or 3) in case of NAT. In this mode, bidirectional UDP flows are not supported.
 --learn-mode 1 support TCP sequence number randomization in both sides of the connection (client to server and server client). For this to work, TRex must learn

-One use case which shows the performance gain that can be acheived by using SR-IOV is when a user wants to create a pool of TRex VMs that tests a pool of virtual DUTs (e.g. ASAv,CSR etc.)
+One use case which shows the performance gain that can be achieved by using SR-IOV is when a user wants to create a pool of TRex VMs that tests a pool of virtual DUTs (e.g. ASAv,CSR etc.)
 When using newly supported SR-IOV, compute, storage and networking resources can be controlled dynamically (e.g by using OpenStack)
doc/trex_astf.asciidoc (+5 -5)

@@ -1280,7 +1280,7 @@ We might change the JSON format in the future as this is a first version
 *Goal*:: Use the TRex ASTF advanced simulator.
-It is like the simple simulator but simulates multiple templates and flows exacly like TRex server would do with one DP core.
+It is like the simple simulator but simulates multiple templates and flows exactly like TRex server would do with one DP core.
 [source,bash]
 ----

@@ -1552,7 +1552,7 @@ This example will delay the server response by 500 msec.
 prog_s = ASTFProgram()
 prog_s.recv(len(http_req))
-prog_s.delay_rand(100000,500000); # delay random number betwean 100msec-500msec
+prog_s.delay_rand(100000,500000); # delay random number between 100msec-500msec
 prog_s.send(http_response)
 ----
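The semantics of `delay_rand(100000, 500000)` in the hunk above can be sketched in plain Python: the arguments are microseconds, and the emulated server pauses for a uniformly random interval in that range before responding. This is an illustrative stand-in (`pick_server_delay_usec` is a hypothetical name), not the TRex implementation:

```python
import random

def pick_server_delay_usec(min_usec: int, max_usec: int) -> float:
    # delay_rand(min, max) takes microseconds; the server-side program
    # pauses for a uniform random interval in [min, max] before sending.
    return random.uniform(min_usec, max_usec)

d = pick_server_delay_usec(100000, 500000)
assert 100000 <= d <= 500000  # i.e. somewhere between 100 msec and 500 msec
```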
@@ -1646,7 +1646,7 @@ Usually in case of very long flows there is need to cap the number of active flo
 By default `send()` command waits for the ACK on the last byte. To make it non-blocking, especially in case big BDP (large window is required) it is possible to work in non-blocking mode, this way to achieve full pipeline.
-Have a look at `astf/htttp_eflow2.py` example.
+Have a look at `astf/http_eflow2.py` example.
 .Non-blocking send
 [source,python]

@@ -1691,9 +1691,9 @@ Have a look at `astf/htttp_eflow2.py` example.
 See link:cp_astf_docs/api/profile_code.html#astfprogram-class[astf-program] for more info.
 ==== Tutorial: L7 emulation - Elephant flows with non-blocking send with tick var
-Same as the previous example, only instead of using `loop count` we are using time as a measurement. In the prevous example we calculated the received bytes in advance and use the rcv command with the right bytes values. However, sending & receiving data according to time is tricky and errors/dtops might occur (see stats from runnnig example below).
+Same as the previous example, only instead of using `loop count` we are using time as a measurement. In the prevous example we calculated the received bytes in advance and use the rcv command with the right bytes values. However, sending & receiving data according to time is tricky and errors/dtops might occur (see stats from running example below).
doc/trex_astf_vs_nginx.asciidoc (+1 -1)

@@ -442,7 +442,7 @@ NGINX installed on a 2-socket setup with 8 cores/socket (total of 16 cores/32 th
 The total number of packets was approximately 600KPPS (Tx+Rx). The number of active flows was 12K.
 TRex with one core could scale to about 25Gb/sec, 3MPPS of the same HTTP profile.
-The main issue with NGINX and Linux setup is the tunning. It is very hard to let the hardware utilizing the full server resource (half of the server was idel in this case and still experiance a lot of drop)
+The main issue with NGINX and Linux setup is the tunning. It is very hard to let the hardware utilizing the full server resource (half of the server was idel in this case and still experience a lot of drop)
 TRex is not perfect too, we couldn't reach 100% CPU utilization without a drop (CPU was 84%) need to look if we can improve this but at least we are in the right place for optimization.
doc/trex_book.asciidoc (+8 -8)

@@ -1004,7 +1004,7 @@ TRex port 0 ( server VLAN1) <-> | DUT | <-> TRex port 1 ( client-VLAN1)
 In this case, traffic on vlan0 is sent as before, while for traffic on vlan1, the order is reversed (client traffic sent on port1 and server traffic on port0).
 TRex divides the flows evenly between the vlans. This results in an equal amount of traffic on each port.

@@ -1435,7 +1435,7 @@ For example, in this case 16.0.0.1->48.0.0.1 ICMP will be the flow for latency.
 ==== Clustering example
 In this example we have one DUT with four 10gb interfaces and one TRex with two 40Gb/sec interfaces and we want to convert the traffic from 2 TRex interfaces to 4 DUT Interfaces.

@@ -1550,7 +1550,7 @@ DUT should have a static route to move packets from client to server and vice ve
 An example of one flow generation
-1. next hop resolotion. TRex resolve all the next hop option e.g. 11.10.0.1/4050 11.11.0.1/4051
+1. next hop resolution. TRex resolve all the next hop option e.g. 11.10.0.1/4050 11.11.0.1/4051
 2. Choose template by CPS, 50% probability for each. take template #1
 3. SRC_IP=12.1.1.2, DEST_IP=13.1.1.2
 4. Allocate src_port for 12.1.1.2 ==>src_port=1025 for the first flow of client=12.1.1.2
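The numbered flow-generation steps in the hunk above can be sketched as code. This is a hypothetical illustration of the sequence (resolve next hops up front, pick a template weighted by CPS, fix the endpoints, allocate the next source port per client starting from 1025); the names and data structures are invented, not the TRex implementation:

```python
import random

random.seed(1)  # deterministic for the example

# step 1: next-hop resolution happens once, before flows are generated
next_hops = {"11.10.0.1": 4050, "11.11.0.1": 4051}

# two templates with equal CPS -> 50% probability each
templates = [{"name": "t1", "cps": 50.0}, {"name": "t2", "cps": 50.0}]

src_port = {}  # per-client source-port allocator

def generate_flow(client_ip: str, server_ip: str) -> dict:
    # step 2: choose a template with probability proportional to its CPS
    t = random.choices(templates, weights=[x["cps"] for x in templates])[0]
    # step 3: the 5-tuple endpoints are fixed by the chosen client/server
    # step 4: allocate the next free source port for this client (first is 1025)
    port = src_port.get(client_ip, 1024) + 1
    src_port[client_ip] = port
    return {"template": t["name"], "src_ip": client_ip,
            "dst_ip": server_ip, "src_port": port}

f = generate_flow("12.1.1.2", "13.1.1.2")
print(f["src_port"])  # first flow of client 12.1.1.2 -> 1025
```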
@@ -1688,7 +1688,7 @@ groups:
 ----
-<1> We added more clusters beacuse more IPs will be generated (+mask)
+<1> We added more clusters because more IPs will be generated (+mask)

@@ -1827,7 +1827,7 @@ to the command line options, where <sample> is the sample rate. The number of fl
 [NOTE]
 ============
-This feature changes the TTL of the sampled flows to 255 and expects to receive packets with TTL 254 or 255 (one routing hop). If you have more than one hop in your setup, use `--hops` to change it to a higher value. More than one hop is possible if there are number of routers betwean TRex client side and TRex server side.
+This feature changes the TTL of the sampled flows to 255 and expects to receive packets with TTL 254 or 255 (one routing hop). If you have more than one hop in your setup, use `--hops` to change it to a higher value. More than one hop is possible if there are number of routers between TRex client side and TRex server side.