compare FreeBSD and Linux TCP Congestion Control algorithms in VMs over an emulated 1Gbps x 40ms WAN


Linux VM receiver

test config

Virtual machines (VMs) are hosted by bhyve on two separate physical boxes (Beelink SER5 AMD Mini PCs) running FreeBSD 14.1-RELEASE.
The physical boxes are connected through a 1Gbps hub, which is in turn connected to a 1Gbps router.
In each test, only one data sender and one data receiver are used, and both are VMs.
The FreeBSD VM n1fbsd and the Linux VM n1linuxvm send TCP data traffic over the same physical path to the Linux VM receiver n2linuxvm; since only one sender is active per test, the senders do not share bandwidth.
The goal is to compare the TCP congestion control performance of CUBIC and NewReno in a VM environment, with 40ms of delay added at the Linux receiver.

The minimum bandwidth-delay product (BDP) is 1000 Mbps x 40 ms = 5 MBytes.
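As a quick sanity check, the BDP can be computed from the link rate and the added delay with a one-liner (a sketch; bc is assumed to be available on the sender):

root@n1fbsd:~ # echo "1000 * 1000 * 1000 * 0.040 / 8 / 1000000" | bc -l
5.00000000000000000000
root@n1fbsd:~ #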

root@n2linuxvm:~ # tc qdisc add dev enp0s5 root netem delay 40ms
root@n2linuxvm:~ # tc qdisc show dev enp0s5
qdisc netem 8001: root refcnt 2 limit 1000 delay 40ms
root@n2linuxvm:~ #
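Once a test series is done, the added delay can be removed again on the receiver (assuming the same interface name as above):

root@n2linuxvm:~ # tc qdisc del dev enp0s5 root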

root@n1fbsd:~ # ping -c 5 -S 192.168.50.37 192.168.50.89
PING 192.168.50.89 (192.168.50.89) from 192.168.50.37: 56 data bytes
64 bytes from 192.168.50.89: icmp_seq=0 ttl=64 time=44.003 ms
64 bytes from 192.168.50.89: icmp_seq=1 ttl=64 time=44.837 ms
64 bytes from 192.168.50.89: icmp_seq=2 ttl=64 time=43.978 ms
64 bytes from 192.168.50.89: icmp_seq=3 ttl=64 time=43.513 ms
64 bytes from 192.168.50.89: icmp_seq=4 ttl=64 time=43.631 ms

--- 192.168.50.89 ping statistics ---
5 packets transmitted, 5 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 43.513/43.993/44.837/0.463 ms
root@n1fbsd:~ #

root@n1linuxvm:~ # ping -c 5 -I 192.168.50.154 192.168.50.89
PING 192.168.50.89 (192.168.50.89) from 192.168.50.154 : 56(84) bytes of data.
64 bytes from 192.168.50.89: icmp_seq=1 ttl=64 time=43.9 ms
64 bytes from 192.168.50.89: icmp_seq=2 ttl=64 time=44.0 ms
64 bytes from 192.168.50.89: icmp_seq=3 ttl=64 time=43.7 ms
64 bytes from 192.168.50.89: icmp_seq=4 ttl=64 time=44.0 ms
64 bytes from 192.168.50.89: icmp_seq=5 ttl=64 time=43.7 ms

--- 192.168.50.89 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4031ms
rtt min/avg/max/mdev = 43.706/43.862/44.015/0.130 ms
root@n1linuxvm:~ # 

root@n1fbsd:~ # cat /etc/sysctl.conf
...
# Don't cache ssthresh from previous connection
net.inet.tcp.hostcache.enable=0
# Increase FreeBSD maximum socket buffer size up to 128MB
kern.ipc.maxsockbuf=134217728
# Increase FreeBSD maximum size of automatic send/receive buffers up to 128MB
net.inet.tcp.sendbuf_max=134217728
net.inet.tcp.recvbuf_max=134217728
root@n1fbsd:~ #
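These settings are applied at boot; on a running system they can be re-applied without rebooting by re-reading the file (a minimal sketch using sysctl's -f option):

root@n1fbsd:~ # sysctl -f /etc/sysctl.conf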

root@n2linuxvm:~ # cat /etc/sysctl.conf
...
net.core.rmem_max = 134217728 
net.core.wmem_max = 134217728 
# Increase Linux autotuning TCP buffer max up to 128MB buffers
net.ipv4.tcp_rmem = 4096 131072 134217728
net.ipv4.tcp_wmem = 4096 16384 134217728
# Don't cache ssthresh from previous connection
net.ipv4.tcp_no_metrics_save = 1
root@n2linuxvm:~ #
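The Linux equivalent is sysctl -p, which re-reads /etc/sysctl.conf on the running receiver:

root@n2linuxvm:~ # sysctl -p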

FreeBSD sender kernel info

FreeBSD 15.0-CURRENT (GENERIC) #166 main-n273322-22429a464a5f-dirty: with receive-side scaling (RSS) enabled
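The congestion control modules available on the FreeBSD sender can be listed, and the system default switched, via sysctl (shown for reference; in the runs below the algorithm is selected per connection with iperf3's -C option instead):

root@n1fbsd:~ # sysctl net.inet.tcp.cc.available
root@n1fbsd:~ # sysctl net.inet.tcp.cc.algorithm=newreno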

Linux sender kernel info

Ubuntu 22.04.5 LTS (GNU/Linux 5.15.0-124-generic x86_64)
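On the Linux sender, the corresponding knobs live under net.ipv4; note that Linux names its NewReno-style algorithm reno:

root@n1linuxvm:~ # sysctl net.ipv4.tcp_available_congestion_control
root@n1linuxvm:~ # sysctl net.ipv4.tcp_congestion_control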

Linux receiver kernel info

Ubuntu 22.04.3 LTS (GNU/Linux 5.15.0-124-generic x86_64)

iperf3 -B ${src} --cport ${tcp_port} -c ${dst} -l 1M -t 100 -i 1 -f m -VC ${name}
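As a concrete illustration, a single 100-second FreeBSD CUBIC run against the receiver above might look like the following (the client port is illustrative; ${name} would be cubic or newreno on the FreeBSD sender, and cubic or reno on the Linux sender):

root@n2linuxvm:~ # iperf3 -s
root@n1fbsd:~ # iperf3 -B 192.168.50.37 --cport 54321 -c 192.168.50.89 -l 1M -t 100 -i 1 -f m -VC cubic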

test result

kern.hz value   TCP congestion control algo     iperf3 100-second average bitrate
100 (default)   FreeBSD stock CUBIC (default)   455 Mbits/sec (-46.9%)
100 (default)   FreeBSD stock newreno           497 Mbits/sec (-37.4%)
100 (default)   Linux stock CUBIC (default)     857 Mbits/sec (base)
100 (default)   Linux stock newreno             794 Mbits/sec (base)

Each FreeBSD result is compared against the Linux run using the same congestion control algorithm as its base.
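kern.hz is a loader tunable on the FreeBSD sender: it can be read at runtime but only changed via /boot/loader.conf plus a reboot (a sketch, in case other clock rates are compared later):

root@n1fbsd:~ # sysctl kern.hz
kern.hz: 100
root@n1fbsd:~ # echo 'kern.hz="1000"' >> /boot/loader.conf    # takes effect after reboot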

TCP throughput: attachment:throughput_chart_freebsd.png TCP congestion window: attachment:cwnd_chart_freebsd.png

TCP throughput: attachment:throughput_chart_linux.png TCP congestion window: attachment:cwnd_chart_linux.png
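For reference, one way to sample the sender-side congestion window during a Linux run is to poll ss, which reports cwnd in segments (a hypothetical helper loop, not necessarily how the charts above were generated):

root@n1linuxvm:~ # while sleep 1; do ss -tin dst 192.168.50.89 | grep -o 'cwnd:[0-9]*'; done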
