Poor 40 GbE NIC performance

Discussions related to using VirtualBox on Linux hosts.
Lapsio
Posts: 13
Joined: 24. Jun 2017, 17:15

Poor 40 GbE NIC performance

Post by Lapsio »

I'm using a quad 10G Intel X710-DA4 network card on a VirtualBox host. Unfortunately my motherboard doesn't support SR-IOV, so I can't really use PCI passthrough. Instead I tried to play with macvtap and virtio to get a poor man's VEPA config (the physical switch forwards traffic between VMs).
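To be concrete, the macvtap devices are created roughly like this, one per physical port, in VEPA mode so the physical switch does the forwarding:

ip link add link enp2s0f0 name macvtap0 type macvtap mode vepa
ip link set macvtap0 up
# same for enp2s0f1..f3 -> macvtap1..macvtap3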

On the bare-metal host I get 10G bidirectional without any problems (using a router and NAT to prevent local traffic forwarding), but the VMs' network performance is way below expectations. For some reason all of the VM networking seems to be capped at around 10G of total traffic (i.e. 5G up / 5G down). If I fire up more than 2 VMs the speed is simply split among them, so I get one of the following scenarios:

2 VMs, single direction: 5G up, 5G down
4 VMs, single direction: 2 x 2.5G up, 2 x 2.5G down
4 VMs, bidirectional: 1.25G up, 1.25G down each

In every case the traffic sums up to around 10G at best. I tested those 4 VMs attached to separate 10G interfaces in the following config:
VM -> virtio (bridged to interface) -> macvtap0 -> enp2s0f0
VM -> virtio (bridged to interface) -> macvtap1 -> enp2s0f1
VM -> virtio (bridged to interface) -> macvtap2 -> enp2s0f2
VM -> virtio (bridged to interface) -> macvtap3 -> enp2s0f3
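On the VirtualBox side this is just the stock bridged attachment pointed at the macvtap device, something along these lines (VM names are placeholders):

VBoxManage modifyvm "VM1" --nic1 bridged --bridgeadapter1 macvtap0 --nictype1 virtio
VBoxManage modifyvm "VM2" --nic1 bridged --bridgeadapter1 macvtap1 --nictype1 virtio
# ...and likewise for the other two VMs on macvtap2 / macvtap3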

I tried using --nicspeed 10 000 000 (exact commands below) but it didn't help. I also experimented with the number of virtual cores, but it didn't really change much. Usually a single core performed best, though two cores were also quite fine. When all 4 VMs had 4 virtual cores each, performance went out the window and tanked to 100 Mbps, so I guess the CPU was overloaded since it's only a quad core with HT. When running iperf the CPU load is significant (around 30-50% per core) but not 100%. 5G for a single VM is not bad at all, but when more VMs are using the card the situation is not so good. 1 Gbps per VM on a 40 GbE NIC with 4 VMs running is pretty terrible.
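For clarity, the tweaks above were along these lines (VM name is a placeholder; if I read the units right, --nicspeed takes kbit/s, so 10 000 000 = 10 Gbit/s):

VBoxManage modifyvm "VM1" --nicspeed1 10000000   # link speed hint, in kbit/s (I assume)
VBoxManage modifyvm "VM1" --cpus 1               # single vCPU performed best here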

I came across this article: https://www.linux-kvm.org/page/10G_NIC_ ... _vs_virtio It's about KVM, but since VBox uses KVM and virtio as a backend I guess it's relevant. It would make sense, since I'm getting 5G with UDP traffic, and when using iperf in TCP mode the bandwidth caps at around the 3.6 Gbps mentioned in the article (rough test commands below). Has anyone tried using VBox with 10G+ NICs?
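For reference, the tests were roughly of this shape (iperf3 syntax, though classic iperf takes the same flags; the server address is a placeholder):

iperf3 -c 10.0.0.1 -u -b 10G -t 30   # UDP: tops out around 5 Gbit/s per VM
iperf3 -c 10.0.0.1 -t 30             # TCP, single stream: caps near 3.6 Gbit/s
iperf3 -c 10.0.0.1 -P 4 -t 30        # TCP, 4 parallel streams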
Steffen M.
Posts: 17
Joined: 12. Sep 2013, 16:56

Re: Poor 40 GbE NIC performance

Post by Steffen M. »

Just out of interest: Did you find a solution to your problem? I am currently investigating a performance problem with Solaris 11.4 hosts and 10 Gbit/s NICs; maybe there is a common cause…

Kind regards,
Steffen