- Introduction
- Linux network queues overview
- Fitting the sysctl variables into the Linux network flow
- Ingress - they're coming
- Egress - they're leaving
- How to check - perf
- What, Why and How - network and sysctl parameters
- Ring Buffer - rx,tx
- Interrupt Coalescence (IC) - rx-usecs, tx-usecs, rx-frames, tx-frames (hardware IRQ)
- Interrupt Coalescing (soft IRQ) and Ingress QDisc
- Egress QDisc - txqueuelen and default_qdisc
- TCP Read and Write Buffers/Queues
- Honorable mentions - TCP FSM and congestion algorithm
- Network tools
- References
Sometimes people are looking for sysctl cargo cult values that bring high throughput and low latency with no trade-offs and work in every situation. That's not realistic, although we can say that the newer kernel versions are very well tuned by default. In fact, you might hurt performance if you mess with the defaults.
This brief tutorial shows where some of the most used and quoted sysctl/network parameters fit into the Linux network flow. It was heavily inspired by the illustrated guide to the Linux networking stack and many of Marek Majkowski's posts.
- Packets arrive at the NIC
- NIC will verify `MAC` (if not on promiscuous mode) and `FCS` and decide to drop or to continue
- NIC will DMA packets to RAM, in a region previously prepared (mapped) by the driver
- NIC will enqueue references to the packets at the receive ring buffer queue `rx` until the `rx-usecs` timeout or `rx-frames` is reached
- NIC will raise a `hard IRQ`
- CPU will run the `IRQ handler` that runs the driver's code
- The driver will `schedule NAPI`, clear the `hard IRQ` and return
- The driver raises a `soft IRQ (NET_RX_SOFTIRQ)`
- NAPI will poll data from the receive ring buffer until `netdev_budget_usecs` timeout or `netdev_budget` and `dev_weight` packets
- Linux will also allocate memory to `sk_buff`
- Linux fills the metadata: protocol, interface, sets `mac` header, removes the ethernet header
- Linux will pass the skb to the kernel stack (`netif_receive_skb`)
- It will set the network header, clone `skb` to taps (i.e. tcpdump) and pass it to tc ingress
- Packets are handled to a qdisc sized `netdev_max_backlog` with its algorithm defined by `default_qdisc`
- It calls `ip_rcv` and packets are handed to IP
- It calls netfilter (`PREROUTING`)
- It looks at the routing table, if forwarding or local
- If it's local it calls netfilter (`LOCAL_IN`)
- It calls the L4 protocol (for instance `tcp_v4_rcv`)
- It finds the right socket
- It goes to the TCP finite state machine
- Enqueues the packet to the receive buffer, sized as per `tcp_rmem` rules
  - If `tcp_moderate_rcvbuf` is enabled, the kernel will auto-tune the receive buffer
- Kernel will signal that there is data available to apps (epoll or any polling system)
- Application wakes up and reads the data
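The last two ingress steps (the kernel signals data availability, the application wakes up and reads) can be sketched from the application side. A minimal sketch using Python's `selectors` module (backed by epoll on Linux) over a local socket pair standing in for a real network socket:

```python
import selectors
import socket

# A connected local pair stands in for a real network socket.
reader, writer = socket.socketpair()
reader.setblocking(False)

sel = selectors.DefaultSelector()  # epoll on Linux
sel.register(reader, selectors.EVENT_READ)

writer.send(b"hello")  # data lands in the socket's receive buffer

# The application sleeps in the polling system until the kernel
# signals that data is available, then wakes up and reads.
events = sel.select(timeout=1)
for key, _ in events:
    data = key.fileobj.recv(4096)
    print(data)  # b'hello'

sel.close()
reader.close()
writer.close()
```

The same pattern — register, sleep in `select()`, read on wake-up — is what epoll-based servers (nginx, etc.) do at scale.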
- Application sends message (`sendmsg` or other)
- TCP send message allocates `skb_buff`
- It enqueues skb to the socket write buffer of `tcp_wmem` size
- Builds the TCP header (src and dst port, checksum)
- Calls L3 handler (in this case `ipv4` on `tcp_write_xmit` and `tcp_transmit_skb`)
- L3 (`ip_queue_xmit`) does its work: builds the IP header and calls netfilter (`LOCAL_OUT`)
- Calls output route action
- Calls netfilter (`POST_ROUTING`)
- Fragments the packet (`ip_output`)
- Calls L2 send function (`dev_queue_xmit`)
- Feeds the output (QDisc) queue of `txqueuelen` length with its algorithm `default_qdisc`
- The driver code enqueues the packets at the `ring buffer tx`
- The driver will do a `soft IRQ (NET_TX_SOFTIRQ)` after `tx-usecs` timeout or `tx-frames`
- Re-enable hard IRQ to NIC
- Driver will map all the packets (to be sent) to some DMA'ed region
- NIC fetches the packets (via DMA) from RAM to transmit
- After the transmission NIC will raise a `hard IRQ` to signal its completion
- The driver will handle this IRQ (turn it off)
- And schedule (`soft IRQ`) the NAPI poll system
- NAPI will handle the receive packets signaling and free the RAM
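The first egress step (sendmsg enqueues into the socket write buffer) is visible from userspace: a non-blocking send succeeds immediately while the write buffer has room and raises EWOULDBLOCK once it is full. A sketch over a local socket pair (an AF_UNIX pair, so its buffer is sized by `wmem_default` rather than `tcp_wmem`, but the mechanics are the same):

```python
import socket

a, b = socket.socketpair()
a.setblocking(False)

# Keep writing without the peer ever reading: send() just enqueues
# into the socket write buffer until it fills up.
sent = 0
try:
    while True:
        sent += a.send(b"x" * 4096)
except BlockingIOError:
    # Write buffer full: a blocking socket would sleep here until
    # the kernel drains (transmits) some of it.
    pass

print("bytes buffered by the kernel:", sent)
a.close()
b.close()
```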
If you want to see the network tracing within Linux, you can use perf:

```bash
docker run -it --rm --cap-add SYS_ADMIN --entrypoint bash ljishen/perf
apt-get update
apt-get install iputils-ping
# this is going to trace all events (not syscalls) to the subsystem net:* while performing the ping
perf trace --no-syscalls --event 'net:*' ping globo.com -c1 > /dev/null
```
- What - the driver receive/send queue, a single or multiple queues with a fixed size, usually implemented as FIFO, located in RAM
- Why - a buffer to smoothly accept bursts of connections without dropping them; you might need to increase these queues when you see drops or overruns, i.e. more packets are coming in than the kernel is able to consume; the side effect might be increased latency.
- How:
  - Check command: `ethtool -g ethX`
  - Change command: `ethtool -G ethX rx value tx value`
  - How to monitor: `ethtool -S ethX | grep -e "err" -e "drop" -e "over" -e "miss" -e "timeout" -e "reset" -e "restar" -e "collis" -e "over" | grep -v "\: 0"`
- What - number of microseconds/frames to wait before raising a hard IRQ; from the NIC perspective, it will DMA data packets until this timeout/number of frames is reached
- Why - reduces CPU usage and the number of hard IRQs; might increase throughput at the cost of latency.
- How:
  - Check command: `ethtool -c ethX`
  - Change command: `ethtool -C ethX rx-usecs value tx-usecs value`
  - How to monitor: `cat /proc/interrupts`
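Watching the hard IRQ rate is how you judge whether coalescing settings help. A hedged sketch that sums the per-CPU counters of one `/proc/interrupts` row — the sample line and device name below are made up; real files vary per machine:

```python
def irq_counts(proc_interrupts_line, num_cpus):
    """Sum the per-CPU counters of one /proc/interrupts row."""
    fields = proc_interrupts_line.split()
    # fields[0] is the IRQ number ("129:"), followed by one counter
    # per CPU, then the interrupt controller and the device name.
    return sum(int(c) for c in fields[1:1 + num_cpus])

# A made-up row in the usual 2-CPU layout (device name is hypothetical):
line = "129:    1243567     987654   PCI-MSI 524288-edge   eth0-TxRx-0"
print(irq_counts(line, num_cpus=2))  # 2231221
```

Sampling this total twice a second apart gives the interrupt rate; higher `rx-usecs`/`rx-frames` should visibly lower it.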
- What - maximum number of microseconds in one NAPI polling cycle. Polling will exit when either `netdev_budget_usecs` have elapsed during the poll cycle or the number of packets processed reaches `netdev_budget`.
- Why - instead of reacting to tons of soft IRQs, the driver keeps polling data; keep an eye on `dropped` (# of packets that were dropped because `netdev_max_backlog` was exceeded) and `squeezed` (# of times ksoftirqd ran out of `netdev_budget` or time slice with work remaining).
- How:
  - Check command: `sysctl net.core.netdev_budget_usecs`
  - Change command: `sysctl -w net.core.netdev_budget_usecs value`
  - How to monitor: `cat /proc/net/softnet_stat`; or a better tool
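The `dropped` and `squeezed` counters live in `/proc/net/softnet_stat`, one hex-encoded row per CPU. A small parser sketch — the column layout assumed here (processed, dropped, time_squeeze as the first three columns) matches current kernels, and the sample row is made up:

```python
def parse_softnet_line(line):
    """Decode one /proc/net/softnet_stat row (one row per CPU).

    Assumed layout: the first three hex columns are packets processed,
    packets dropped (netdev_max_backlog exceeded), and time_squeeze
    (# of times the poll loop ran out of netdev_budget/budget_usecs).
    """
    cols = [int(c, 16) for c in line.split()]
    return {"processed": cols[0], "dropped": cols[1], "squeezed": cols[2]}

# A sample row (values are hypothetical):
row = ("0000a3c2 00000001 0000000f 00000000 00000000 00000000 "
       "00000000 00000000 00000000 00000000 00000000")
print(parse_softnet_line(row))
# {'processed': 41922, 'dropped': 1, 'squeezed': 15}
```

On a real box you would read the file per CPU, e.g. `for line in open("/proc/net/softnet_stat"): ...`, and alert on growing `dropped`/`squeezed`.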
- What - `netdev_budget` is the maximum number of packets taken from all interfaces in one polling cycle (NAPI poll). In one polling cycle interfaces which are registered to polling are probed in a round-robin manner. Also, a polling cycle may not exceed `netdev_budget_usecs` microseconds, even if `netdev_budget` has not been exhausted.
- How:
  - Check command: `sysctl net.core.netdev_budget`
  - Change command: `sysctl -w net.core.netdev_budget value`
  - How to monitor: `cat /proc/net/softnet_stat`; or a better tool
- What - `dev_weight` is the maximum number of packets that the kernel can handle on a NAPI interrupt; it's a per-CPU variable. For drivers that support LRO or GRO_HW, a hardware-aggregated packet is counted as one packet in this context.
- How:
  - Check command: `sysctl net.core.dev_weight`
  - Change command: `sysctl -w net.core.dev_weight value`
  - How to monitor: `cat /proc/net/softnet_stat`; or a better tool
- What - `netdev_max_backlog` is the maximum number of packets queued on the INPUT side (the ingress qdisc), when the interface receives packets faster than the kernel can process them.
- How:
  - Check command: `sysctl net.core.netdev_max_backlog`
  - Change command: `sysctl -w net.core.netdev_max_backlog value`
  - How to monitor: `cat /proc/net/softnet_stat`; or a better tool
- What - `txqueuelen` is the maximum number of packets queued on the OUTPUT side.
- Why - a buffer/queue to face connection bursts and also to apply tc (traffic control).
- How:
  - Check command: `ip link show dev ethX`
  - Change command: `ip link set dev ethX txqueuelen N`
  - How to monitor: `ip -s link`
- What - `default_qdisc` is the default queuing discipline to use for network devices.
- Why - each application has a different load and needs traffic control; it is also used to fight bufferbloat.
- How:
  - Check command: `sysctl net.core.default_qdisc`
  - Change command: `sysctl -w net.core.default_qdisc value`
  - How to monitor: `tc -s qdisc ls dev ethX`
The policy that defines what memory pressure means is specified by `tcp_mem` and `tcp_moderate_rcvbuf`.
- What - `tcp_rmem` - min (size used under memory pressure), default (initial size), max (maximum size) - size of the receive buffer used by TCP sockets.
- Why - the application buffer/queue for read/receive data; understanding its consequences can help a lot.
- How:
  - Check command: `sysctl net.ipv4.tcp_rmem`
  - Change command: `sysctl -w net.ipv4.tcp_rmem="min default max"`; when changing the default value, remember to restart your user space app (i.e. your web server, nginx, etc)
  - How to monitor: `cat /proc/net/sockstat`
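From the application side, a fresh TCP socket starts with the `tcp_rmem`/`tcp_wmem` default sizes, and `setsockopt` can override them per socket — which, for `SO_RCVBUF` on Linux, also disables the receive-buffer auto-tuning. A quick sketch to inspect them (the exact numbers printed depend on your sysctl settings):

```python
import socket

# A fresh TCP socket starts with the default buffer sizes
# (on Linux, taken from tcp_rmem[1] / tcp_wmem[1]).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
rcv = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
snd = s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF)
print("receive buffer:", rcv, "send buffer:", snd)

# Overriding per socket: on Linux the kernel doubles the requested
# value to account for bookkeeping overhead, and setting SO_RCVBUF
# turns off tcp_moderate_rcvbuf auto-tuning for this socket.
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 65536)
print("after setsockopt:", s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
s.close()
```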
- What - `tcp_wmem` - min (size used under memory pressure), default (initial size), max (maximum size) - size of the send buffer used by TCP sockets.
- How:
  - Check command: `sysctl net.ipv4.tcp_wmem`
  - Change command: `sysctl -w net.ipv4.tcp_wmem="min default max"`; when changing the default value, remember to restart your user space app (i.e. your web server, nginx, etc)
  - How to monitor: `cat /proc/net/sockstat`
- What - `tcp_moderate_rcvbuf` - if set, TCP performs receive buffer auto-tuning, attempting to automatically size the buffer.
- How:
  - Check command: `sysctl net.ipv4.tcp_moderate_rcvbuf`
  - Change command: `sysctl -w net.ipv4.tcp_moderate_rcvbuf value`
  - How to monitor: `cat /proc/net/sockstat`
Accept and SYN queues are governed by `net.core.somaxconn` and `net.ipv4.tcp_max_syn_backlog`. Nowadays `net.core.somaxconn` caps both queue sizes.

- `sysctl net.core.somaxconn` - provides an upper limit on the value of the backlog parameter passed to the `listen()` function, known in userspace as `SOMAXCONN`. If you change this value, you should also change your application to a compatible value (i.e. nginx backlog).
- `cat /proc/sys/net/ipv4/tcp_fin_timeout` - specifies the number of seconds to wait for a final FIN packet before the socket is forcibly closed. This is strictly a violation of the TCP specification but required to prevent denial-of-service attacks.
- `cat /proc/sys/net/ipv4/tcp_available_congestion_control` - shows the available congestion control choices that are registered.
- `cat /proc/sys/net/ipv4/tcp_congestion_control` - sets the congestion control algorithm to be used for new connections.
- `cat /proc/sys/net/ipv4/tcp_max_syn_backlog` - sets the maximum number of queued connection requests which have still not received an acknowledgment from the connecting client; if this number is exceeded, the kernel will begin dropping requests.
- `cat /proc/sys/net/ipv4/tcp_syncookies` - enables/disables SYN cookies, useful for protecting against SYN flood attacks.
- `cat /proc/sys/net/ipv4/tcp_slow_start_after_idle` - enables/disables TCP slow start after idle.
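The interaction between `listen()` and `somaxconn` can be seen in a few lines: the kernel silently caps whatever backlog the application requests. A sketch (note that `socket.SOMAXCONN` in Python is the compile-time constant, not necessarily the running kernel's `net.core.somaxconn`):

```python
import socket

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))  # let the OS pick a free port

# Ask for a huge accept queue; the kernel silently caps it at
# net.core.somaxconn, so this call still succeeds.
srv.listen(1_000_000)
print("userspace constant SOMAXCONN:", socket.SOMAXCONN)

# Connections that complete the 3-way handshake wait in the accept
# queue until the application calls accept(); overflow of this queue
# shows up as ListenOverflows in /proc/net/netstat.
cli = socket.create_connection(srv.getsockname())
conn, addr = srv.accept()
print("accepted from", addr)
conn.close()
cli.close()
srv.close()
```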
How to monitor:

- `netstat -atn | awk '/tcp/ {print $6}' | sort | uniq -c` - summary by state
- `ss -neopt state time-wait | wc -l` - counters by a specific state: `established`, `syn-sent`, `syn-recv`, `fin-wait-1`, `fin-wait-2`, `time-wait`, `closed`, `close-wait`, `last-ack`, `listening`, `closing`
- `netstat -st` - tcp stats summary
- `nstat -a` - human-friendly tcp stats summary
- `cat /proc/net/sockstat` - summarized socket stats
- `cat /proc/net/tcp` - detailed stats, see each field meaning at the kernel docs
- `cat /proc/net/netstat` - `ListenOverflows` and `ListenDrops` are important fields to keep an eye on
- `cat /proc/net/netstat | awk '(f==0) { i=1; while ( i<=NF) {n[i] = $i; i++ }; f=1; next} (f==1){ i=2; while ( i<=NF){ printf "%s = %d\n", n[i], $i; i++}; f=0} ' | grep -v "= 0"` - a human-readable `/proc/net/netstat`
Source: https://commons.wikimedia.org/wiki/File:Tcp_state_diagram_fixed_new.svg
- iperf3 - network throughput
- vegeta - HTTP load testing tool
- netdata - system for distributed real-time performance and health monitoring
- prometheus + grafana + node exporter full dashboard - monitoring stack to graph detailed system behaviour
- https://www.kernel.org/doc/Documentation/sysctl/net.txt
- https://www.kernel.org/doc/Documentation/networking/ip-sysctl.txt
- https://www.kernel.org/doc/Documentation/networking/scaling.txt
- https://www.kernel.org/doc/Documentation/networking/proc_net_tcp.txt
- https://www.kernel.org/doc/Documentation/networking/multiqueue.txt
- http://man7.org/linux/man-pages/man7/tcp.7.html
- http://man7.org/linux/man-pages/man8/tc.8.html
- http://cseweb.ucsd.edu/classes/fa09/cse124/presentations/TCPlinux_implementation.pdf
- https://netdevconf.org/1.2/papers/bbr-netdev-1.2.new.new.pdf
- https://blog.cloudflare.com/how-to-receive-a-million-packets/
- https://blog.cloudflare.com/how-to-achieve-low-latency/
- https://people.redhat.com/pladd/MHVLUG_2017-04_Network_Receive_Stack.pdf
- https://blog.packagecloud.io/eng/2016/06/22/monitoring-tuning-linux-networking-stack-receiving-data/
- https://www.youtube.com/watch?v=6Fl1rsxk4JQ
- https://oxnz.github.io/2016/05/03/performance-tuning-networking/
- https://www.intel.com/content/dam/www/public/us/en/documents/reference-guides/xl710-x710-performance-tuning-linux-guide.pdf
- https://access.redhat.com/sites/default/files/attachments/20150325_network_performance_tuning.pdf
- https://medium.com/@matteocroce/linux-and-freebsd-networking-cbadcdb15ddd
- https://blogs.technet.microsoft.com/networking/2009/08/12/where-do-resets-come-from-no-the-stork-does-not-bring-them/
- https://www.intel.com/content/dam/www/public/us/en/documents/white-papers/multi-core-processor-based-linux-paper.pdf
- http://syuu.dokukino.com/2013/05/linux-kernel-features-for-high-speed.html
- https://www.bufferbloat.net/projects/codel/wiki/Best_practices_for_benchmarking_Codel_and_FQ_Codel/
- https://software.intel.com/en-us/articles/setting-up-intel-ethernet-flow-director
- https://courses.engr.illinois.edu/cs423/sp2014/Lectures/LinuxDriver.pdf
- https://www.coverfire.com/articles/queueing-in-the-linux-network-stack/
- http://vger.kernel.org/~davem/skb.html
- https://www.missoulapubliclibrary.org/ftp/LinuxJournal/LJ13-07.pdf
- https://opensourceforu.com/2016/10/network-performance-monitoring/
- https://www.yumpu.com/en/document/view/55400902/an-adventure-of-analysis-and-optimisation-of-the-linux-networking-stack
- https://lwn.net/Articles/616241/
- https://medium.com/@duhroach/tools-to-profile-networking-performance-3141870d5233
- https://www.lmax.com/blog/staff-blogs/2016/05/06/navigating-linux-kernel-network-stack-receive-path/
- https://fasterdata.es.net/host-tuning/linux/100g-tuning/
- http://tcpipguide.com/free/t_TCPOperationalOverviewandtheTCPFiniteStateMachineF-2.htm
- http://veithen.github.io/2014/01/01/how-tcp-backlog-works-in-linux.html
- https://people.cs.clemson.edu/~westall/853/tcpperf.pdf
- http://tldp.org/HOWTO/Traffic-Control-HOWTO/classless-qdiscs.html
- https://fasterdata.es.net/assets/Papers-and-Publications/100G-Tuning-TechEx2016.tierney.pdf
- https://www.kernel.org/doc/ols/2009/ols2009-pages-169-184.pdf
- https://devcentral.f5.com/articles/the-send-buffer-in-depth-21845
- http://packetbomb.com/understanding-throughput-and-tcp-windows/
- https://www.speedguide.net/bdp.php
- https://www.switch.ch/network/tools/tcp_throughput/
- https://www.ibm.com/support/knowledgecenter/en/SSQPD3_2.6.0/com.ibm.wllm.doc/usingethtoolrates.html
- https://blog.tsunanet.net/2011/03/out-of-socket-memory.html
- https://unix.stackexchange.com/questions/12985/how-to-check-rx-ring-max-backlog-and-max-syn-backlog-size
- https://serverfault.com/questions/498245/how-to-reduce-number-of-time-wait-processes
- https://unix.stackexchange.com/questions/419518/how-to-tell-how-much-memory-tcp-buffers-are-actually-using
- https://eklitzke.org/how-tcp-sockets-work
- https://www.linux.com/learn/intro-to-linux/2017/7/introduction-ss-command
- https://staaldraad.github.io/2017/12/20/netstat-without-netstat/
- https://loicpefferkorn.net/2016/03/linux-network-metrics-why-you-should-use-nstat-instead-of-netstat/
- http://assimilationsystems.com/2015/12/29/bufferbloat-network-best-practice/
- https://wwwx.cs.unc.edu/~sparkst/howto/network_tuning.php
- https://medium.com/@tom_84912/the-alphabet-soup-of-receive-packet-steering-rss-rps-rfs-and-arfs-c84347156d68

