
Performance Measurements of Linux, DanOS, VYOS, VPP, and Linux XDP at 100GE

Tests still being performed - check back at the end of March 2022 for final results

Results:

The tables below show the rate, in Millions of Packets Per Second (MPPS), at which packets were sent for forwarding through the router software, versus the packet loss measured at the end destination of the expected packets.
Note:
- in the drop test, the % shown is the loss of the legitimate (non-filtered) packets
- some higher rates were not tested once significant loss was demonstrated at lower rates
- no optimisations were performed on these routers unless otherwise noted below

Single-flow
MPPS   |  1.5 |    3 |  4.5 |    6 |  7.5 |    9 |    10 |    12 |    15 |    18 |    20 |    30
Danos  | 0.0% | 0.0% | 0.0% | 0.1% | 0.1% | 0.0% | 11.2% | 27.3% | 42.0% |       |       | 72.0%
VPP
Linux
LinXDP
VYOS
Single-flow w/ 5 blocking rules
MPPS   |  1.5 |    3 |  4.5 |    6 |   7.5 |    9 |    10 |    12 |    15 |    18 |    20 |    30
Danos  | 0.0% | 0.0% | 0.0% | 0.0% | 12.4% |      |       |       |       |       |       |
VPP
Linux
LinXDP
VYOS
Single-flow w/ 50 blocking rules
MPPS   |  1.5 |    3 |  4.5 |    6 |   7.5 |     9 |    10 |    12 |    15 |    18 |    20 |    30
Danos  | 0.0% | 0.0% | 0.1% | 0.1% | 13.7% | 31.1% | 40.3% |       |       |       |       |
VPP
Linux
LinXDP
VYOS
Single-flow w/ 50 blocking rules, GRE tunnel forwarding outcome
MPPS   |  1.5 |    3 |   4.5 |     6 |   7.5 |     9 |    10 |    12 |    15 |    18 |    20 |    30
Danos  | 0.0% | 0.0% | 10.8% | 34.3% | 47.8% | 56.7% | 63.1% |       |       |       |       |
VPP
Linux
LinXDP
VYOS
Multiple-flow w/ 50 blocking rules
MPPS   |  1.5 |    3 |  4.5 |    6 |  7.5 |    9 |   10 |   12 |   15 |    18 |    20 |    30
Danos  | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 15.0% | 27.2% |
VPP
Linux
LinXDP
VYOS
Multiple-flow w/ 50 blocking rules, 900K routes
MPPS   |  1.5 |    3 |  4.5 |    6 |  7.5 |    9 |   10 |   12 |   15 |    18 |    20 |    30
Danos  | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.1% | 15.1% | 27.0% | 49.2%
VPP
Linux
LinXDP
VYOS
Multiple-flow w/ 50 blocking rules, 900K routes, 50% of traffic dropped
MPPS   |  1.5 |    3 |  4.5 |    6 |  7.5 |    9 |   10 |   12 |   15 |    18 |    20 |    30
Danos  | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 0.0% | 10.6% | 22.5% | 47.2%
VPP
Linux
LinXDP
VYOS

 

Test Environment & pktgen tool

Network Card:

mlx5_core 0000:3b:00.1: firmware version: 16.27.6120
mlx5_core 0000:3b:00.1: 126.016 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x16 link at 0000:3a:00.0 (capable of 252.048 Gb/s with 16.0 GT/s PCIe x16 link)
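
The figures the driver logs follow from the PCIe link parameters: a Gen3 (8.0 GT/s) x16 link with 128b/130b encoding tops out around 126 Gb/s, and a Gen4 (16.0 GT/s) x16 link around 252 Gb/s. A quick sanity check (our arithmetic, not from the driver; the driver's figures subtract slightly more overhead):

```shell
# Approximate usable PCIe bandwidth for an x16 link with 128b/130b encoding.
awk 'BEGIN {
  printf "Gen3 x16: %.2f Gb/s\n", 8  * 16 * 128/130;   # ~126 Gb/s, the "limited by" figure
  printf "Gen4 x16: %.2f Gb/s\n", 16 * 16 * 128/130;   # ~252 Gb/s, the "capable of" figure
}'
```

So the card is sitting in a Gen3 slot and is bandwidth-limited to roughly 126 Gb/s, which is still comfortably above a single 100GE port.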

We won't go into building pktgen as there's plenty of doco out there on this. Just for reference purposes, here is how we ran pktgen:

LD_LIBRARY_PATH=/usr/local/lib64/ /root/pktgen-dpdk/usr/local/bin/pktgen -l 2,4,6 -n 2 -a 3b:00.1 -d librte_net_mlx5.so  -- -p 0x1 -P -m "[4:6].0"

Traffic generated:
Static destination MAC (The test Target)
pktgen:

set 0 rate 10   (rate is a % of 100GE line rate, adjusted accordingly; 1% = ~1.5 MPPS at 64-byte frames)
set 0 size 64
set 0 count 50000000
set 0 proto udp
set 0 dst ip 10.22.23.102
set 0 src ip 10.22.22.101/24
set 0 dst mac XX:XX:XX:XX:d1:7b
set 0 src mac XX:XX:XX:XX:36:75
set 0 type ipv4
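
The 1% = ~1.5 MPPS conversion used above comes from the per-frame wire overhead at 64-byte frames (our arithmetic, assuming the standard 20 bytes of preamble + inter-frame gap per frame):

```shell
# Line-rate PPS for 64-byte frames at 100GE:
# each frame occupies 64B + 20B (preamble + inter-frame gap) = 84B = 672 bits on the wire.
awk 'BEGIN {
  pps = 100e9 / ((64 + 20) * 8);
  printf "line rate: %.1f MPPS\n", pps / 1e6;        # ~148.8 MPPS at 100%%
  printf "1%% rate:   %.2f MPPS\n", pps / 1e6 / 100;  # ~1.49 MPPS, i.e. the 1%% = 1.5 MPPS rule of thumb
}'
```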

Single flow:
Single target IP address
UDP traffic, 64 bytes per packet, same src/dst ports
No firewall policies

Single flow, 5 firewall policies:
Single target IP address
UDP traffic, 64 bytes per packet, same src/dst ports
5 firewall policies

Single flow, 50 firewall policies:
Single target IP address
UDP traffic, 64 bytes per packet, same src/dst ports
50 firewall policies

Single flow, GRE tunnel:
Single target IP address
UDP traffic, 64 bytes per packet, same src/dst ports
50 firewall policies
GRE encap traffic and forward to static destination

Multiple flow:
254 destination IP addresses (multi-flow)
UDP traffic, 64 bytes per packet, random src ports (multi-flow)
50 firewall policies
pktgen:

range 0 src port 53 53 1000 1
range 0 dst ip 10.22.23.1 10.22.23.1 10.22.23.254 0.0.0.1
range 0 src ip 10.22.22.101 10.22.22.101 10.22.22.101 0.0.0.0
range 0 src mac XX:XX:XX:XX:36:75 XX:XX:XX:XX:36:75 XX:XX:XX:XX:36:75 00:00:00:00:00:00
range 0 dst mac XX:XX:XX:XX:d1:7b XX:XX:XX:XX:d1:7b XX:XX:XX:XX:d1:7b 00:00:00:00:00:00
enable 0 range
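
With those ranges, the flow count works out as follows (our arithmetic, on the assumption that pktgen's range command steps the source port from 53 to 1000 inclusive and the destination IP across .1 to .254):

```shell
# Flow count implied by the range configuration above (assumed range semantics).
awk 'BEGIN {
  ports = 1000 - 53 + 1;   # src ports 53..1000
  ips   = 254;             # dst IPs 10.22.23.1 .. 10.22.23.254
  printf "%d ports x %d IPs = %d distinct flows\n", ports, ips, ports * ips
}'
```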

900K Route Test:
254 destination IP addresses
UDP traffic, 64 bytes per packet, random src/dst ports
50 firewall policies
900K loaded route table

Drop Test:

1 destination IP address, multiple ports (multi-flow)
UDP traffic, 64 bytes per packet, random src/dst ports
50 firewall policies, default deny
900K loaded route table
Half test traffic configured to be dropped

DanOS

Version: 2105
Built on:  Fri Jun 11 11:58:32 UTC 2021
HW Model: PowerEdge R440
CPU: Intel Xeon Silver 4210R CPU @ 2.4GHz

set protocols static arp 10.22.22.101 hwaddr 'XX:XX:XX:XX:XX:XX'
set protocols static arp 10.22.22.101 interface dp0p59s0f1
set protocols static route 10.22.23.0/24 next-hop 10.22.22.101

Firewall policies:
set security ip-packet-filter group ipv4 ip-version ipv4
set security ip-packet-filter group ipv4 rule 1 action drop
set security ip-packet-filter group ipv4 rule 1 match source ipv4 host 1.1.1.1
set security ip-packet-filter group ipv4 rule 2 action drop
set security ip-packet-filter group ipv4 rule 2 match source ipv4 host 1.1.2.1
set security ip-packet-filter group ipv4 rule 3 action drop
set security ip-packet-filter group ipv4 rule 3 match source ipv4 host 1.1.3.1
...etc...
set security ip-packet-filter interface dp0p59s0f1 in ipv4
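
The elided rules follow the same pattern (rule N drops source host 1.1.N.1). A loop like the following can emit the full 50-rule set; this is a convenience sketch, not taken from the actual test scripts:

```shell
# Emit the 50 drop rules following the pattern shown above (rule N -> source 1.1.N.1).
for i in $(seq 1 50); do
  echo "set security ip-packet-filter group ipv4 rule $i action drop"
  echo "set security ip-packet-filter group ipv4 rule $i match source ipv4 host 1.1.$i.1"
done
```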

For GRE test:
set interfaces tunnel tun0 address 10.90.4.102/24
set interfaces tunnel tun0 encapsulation gre
set interfaces tunnel tun0 local-ip 10.22.22.102
set interfaces tunnel tun0 remote-ip 10.22.22.101