Network performance with bridged is half that with NAT

Discussions about using Windows guests in VirtualBox.
madbrain
Posts: 26
Joined: 26. Mar 2011, 00:52
Primary OS: MS Windows 7
VBox Version: OSE other
Guest OSses: CentOS OS/2

Network performance with bridged is half that with NAT

Post by madbrain »

This is something I have noticed for a while. Bridged networking is slower than NAT, by a good chunk.

When I set up my NIC as bridged, the Speedtest by Ookla native Windows app reports an Internet download speed of 643 Mbps.

When I set up my NIC as NAT, the Speedtest by Ookla native Windows app reports an Internet download speed of 1235 Mbps. This is almost twice as much as with bridged.

On the host, also Windows, Speedtest reports about 1430 Mbps. The host is wired to the LAN with a Marvell 10 Gbps NIC to a 10 Gbps Trendnet switch. The Comcast XB7 router is using a 2.5 Gbps NIC for the LAN interface.

Is there a reason why the bridged performance in the guest is so much worse than NAT? And is there a way to fix it?

Edit: I attached two VM logs where I just started the VM, ran Speedtest, and shut it down. In one case, I ran with bridged. In the other, with NAT.

There are also major performance differences running the iperf3 client on the guest, against an iperf3 server on the host, when running NAT vs bridged. And some of those differences go in the other direction, interestingly.

But for the purposes of this thread, I thought it easier to use Speedtest, which is a tool readily available to everyone and doesn't need any command-line options or iperf server configuration.
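
That said, for anyone who wants to reproduce the iperf3 comparison instead, the basic commands are just the following (the IP address is only a placeholder for the host's LAN address; everything else is at the defaults):

rem On the host: start the iperf3 server (listens on TCP port 5201 by default)
iperf3 -s

rem In the guest: send to the host for 10 seconds, first with 1 stream, then with 4 parallel streams
iperf3 -c 192.168.1.10 -t 10
iperf3 -c 192.168.1.10 -t 10 -P 4

rem Add -R to reverse the direction, so the guest receives instead of sends
iperf3 -c 192.168.1.10 -t 10 -R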
Attachments
network.zip
(69.69 KiB) Downloaded 6 times
Last edited by madbrain on 10. Oct 2021, 00:41, edited 1 time in total.
fth0
Volunteer
Posts: 5668
Joined: 14. Feb 2019, 03:06
Primary OS: Mac OS X other
VBox Version: PUEL
Guest OSses: Linux, Windows 10, ...
Location: Germany

Re: Network performance with bridged is half that with NAT

Post by fth0 »

In my own setup, all variations result in 125 Mbps. Obviously, my Internet access is not fast enough. ;)

FWIW, the VirtualBox User Manual contains some information regarding network performance, for example in 6.11. Improving Network Performance, 8.8.2. Networking Settings and 9.8.3. Tuning TCP/IP Buffers for NAT.
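
For the NAT buffer tuning in particular, if I read that last section correctly, it boils down to a single VBoxManage call with the VM powered off (replace "VM name" with your VM's name):

rem --natsettings<N> = MTU, socket send buffer, socket receive buffer, initial TCP send window, initial TCP receive window (0 = default)
VBoxManage modifyvm "VM name" --natsettings1 16000,128,128,0,0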
madbrain
Posts: 26
Joined: 26. Mar 2011, 00:52
Primary OS: MS Windows 7
VBox Version: OSE other
Guest OSses: CentOS OS/2

Re: Network performance with bridged is half that with NAT

Post by madbrain »

fth0 wrote:In my own setup, all variations result in 125 Mbps. Obviously, my Internet access is not fast enough. ;)
Well, in that case, you would have to run a local iperf3 server on your host, and client on your guest, in order to measure network performance. I have written some scripts to do that, which I will post.
FWIW, the VirtualBox User Manual contains some information regarding network performance, for example in 6.11. Improving Network Performance, 8.8.2. Networking Settings and 9.8.3. Tuning TCP/IP Buffers for NAT.
Thanks, that was a helpful read, particularly the performance section.

The first recommendation was to use the virtio adapter instead of e1000g. I just tried virtio. I got 322 Mbps download speed in bridged mode, and 632 Mbps in NAT mode. This is roughly half the speed I get with the e1000g driver :-(

The second recommendation was to use bridged instead of NAT. But actually, the opposite is true on my system. NAT performs about twice as fast as bridged, for both e1000g and virtio drivers.

The third one was to enable segmentation offloading in the host NIC. I am not sure my Aquantia AQN-107 10gig NIC supports this. It may just be known by another name. The GUI exposes settings for IPv4 checksum offload, Large send offload v1 (IPv4), Large send offload v2 (IPv4), Large send offload v2 (IPv6), NS offload, Recv segment coalescing (IPv4), Recv segment coalescing (IPv6), TCP/UDP checksum offload (IPv4), and TCP/UDP checksum offload (IPv6). All of these are enabled, which is the default. I haven't tried messing with these driver settings.
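
If I do end up messing with them, this is roughly how I'd check and toggle Large send offload from an elevated command prompt (the 'Ethernet*' adapter name pattern is just a placeholder for whatever the Aquantia NIC is called on a given system):

rem List the advanced properties, including the various offload entries
powershell -Command "Get-NetAdapterAdvancedProperty -Name 'Ethernet*' | Format-Table DisplayName, DisplayValue"

rem Show the current Large Send Offload state, and disable it for a test
powershell -Command "Get-NetAdapterLso -Name 'Ethernet*'"
powershell -Command "Disable-NetAdapterLso -Name 'Ethernet*' -IPv4 -IPv6"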

I had promiscuous mode disabled, and didn't run Wireshark in the guest, or host.
madbrain
Posts: 26
Joined: 26. Mar 2011, 00:52
Primary OS: MS Windows 7
VBox Version: OSE other
Guest OSses: CentOS OS/2

Re: Bridged network performance half with bridged as with NAT

Post by madbrain »

madbrain wrote:
fth0 wrote:In my own setup, all variations result in 125 Mbps. Obviously, my Internet access is not fast enough. ;)
Well, in that case, you would have to run a local iperf3 server on your host, and client on your guest, in order to measure network performance. I have written some scripts to do that, which I will post.
I'm attaching the scripts here. They are actually batch files. They need to be run with the iperf3 binaries for Windows x64, which are not included, but can be easily found at https://iperf.fr/iperf-download.php#windows .

testall.bat needs to be edited to include the hostnames of your host(s) that are running the iperf3 server (just start it with iperf3 -s). In my case, higgs is the Win10 VM host with the 5820k, server10g is a bare-metal Ubuntu server using a 6600k, and htpc-ryzen is a bare-metal AMD Ryzen 2700 freshly upgraded to Win11 yesterday. All have the same 10 Gbps Ethernet NIC, an AQN-107. All have been running the iperf3 daemon on startup for a long time, since I started measuring and optimizing 10 Gbit LAN performance on bare metal last year.

The results are in the *.txt files. The batch files are simple and don't try to collect results into tables.
The Win11 VM was running on higgs (the Win10 host with the Intel 5820k). I configured the VM with 2 network interfaces. One is NAT with e1000g, the other is bridged with virtio. I just enabled/disabled them in Device Manager and ran the tests twice.
testall.bat is how to start the tests. It takes an argument in seconds for a pause between runs. Yes, I know the batch could use improvement.
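
The attached zip is the authoritative version, but the core of testall.bat is basically just a nested loop like this (hostnames, stream counts and output file names here are simplified placeholders):

@echo off
rem Usage: testall.bat <pause_seconds>
set PAUSE_SECS=%1

for %%H in (higgs server10g htpc-ryzen) do (
    for %%P in (1 2 4) do (
        rem Guest sends to server %%H with %%P parallel streams
        iperf3.exe -c %%H -t 30 -P %%P >> %%H.result.txt
        rem Same test with -R, so the guest receives instead of sends
        iperf3.exe -c %%H -t 30 -P %%P -R >> %%H.result.txt
        rem Pause between runs, as passed on the command line
        timeout /t %PAUSE_SECS% /nobreak > nul
    )
)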

Anyway, reading the results manually, I can see:
- in the NAT case, the VM couldn't resolve the hostname of the VM host, higgs, so there are no results in higgs.e1000g.nat.txt. The VM correctly resolved the hostnames of the 2 other systems on my LAN, server10g and htpc-ryzen. This is very strange and points to a bug somewhere, but I have no idea whether it's in VirtualBox, Windows, or my router's DNS server. I will have to re-run the test with a hard-coded IP instead of the hostname.
- increasing the number of TCP streams sometimes decreases network throughput. This is the opposite of what happens on bare metal. On bare metal, 4 TCP streams always yields higher throughput than 1 TCP stream. It may be that running each test for 10 seconds isn't enough on a VM. On bare metal, it is long enough to get consistent results.
- there are huge differences between the send and receive sides with virtio going against the VM host. For example, in higgs.virtio.bridged.txt, send-side throughput is from 612 Mbps to 765 Mbps as a function of the number of streams, but receive-side throughput is between 1.15 and 1.38 Gbps, again as a function of the number of streams. That's about twice as fast for the receive side vs the send side.
- there are huge differences between the send and receive sides with virtio going against server10g, a physical Ubuntu 20 server on the LAN. For example, in server10g.virtio.bridged.txt, send-side throughput is from 2.0 Gbps to 2.19 Gbps as a function of the number of streams, but receive-side throughput is between 908 Mbps and 1.51 Gbps, again as a function of the number of streams. So here the send side is faster than the receive side, the reverse of the VM host case, though the difference is smaller. But it really doesn't make a lot of sense that hitting a remote server, going over a switch, is faster than hitting the VM host.
When comparing the higgs.virtio and server10g.virtio results, it may point to a performance bug in the switch, where things become slower when the host and the VM are communicating over the same Ethernet port instead of 2 separate Ethernet ports.
- when looking at server10g.e1000g.nat, perf is 320-617 Mbps on the send side, and 1.22-1.73 Gbps on the receive side. So, with NAT, when hitting a server on a different Ethernet port, the send side is much slower than the receive side. This is the opposite of the bridged case. Peak performance for the send side is achieved with virtio.bridged (2.0 to 2.19 Gbps), but peak performance for the receive side is achieved with e1000g.nat (1.22 to 1.73 Gbps).

This is just some quick analysis. I wish I were better with scripts and could put all this data into easy-to-read tables or spreadsheets.
I guess I should also run the tests on the other 2 combinations, e1000g.bridged and virtio.nat. Fortunately, VirtualBox allows up to 4 virtual interfaces, so this will work ...
Attachments
iperf batches & results.zip
(7.34 KiB) Downloaded 7 times
fth0
Volunteer
Posts: 5668
Joined: 14. Feb 2019, 03:06
Primary OS: Mac OS X other
VBox Version: PUEL
Guest OSses: Linux, Windows 10, ...
Location: Germany

Re: Network performance with bridged is half that with NAT

Post by fth0 »

Thank you for your tests and scripts and your valuable contribution in general. :) Investigating VirtualBox's network performance is on my long-term ToDo list, but I never really got to that. I'll take a look at your results later ...

A word of caution: I'm not affiliated with Oracle or the VirtualBox development, so take the following with a grain of salt. Searching the VirtualBox change log for the term "network", I only found information regarding bugfixes, but neither the term "performance" nor anything with respect to 10 Gbit/s network adapters. In the VirtualBox forums, network performance is a rare topic also. I've heard that the VirtualBox developers accept patches sometimes, so if you're a developer ...

Nonetheless, I'm happy to discuss the topic with you further, as far as I can contribute. For starters, a simple one:
madbrain wrote:VirtualBox allows up to 4 virtual interfaces
If you use the VBoxManage command, you can configure at least 8 virtual network adapters.
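
For example, something like this with the VM powered off (the VM name, slot numbers and host adapter name are only examples):

rem Adapter 5: bridged virtio, bound to the physical NIC
VBoxManage modifyvm "VM name" --nic5 bridged --bridgeadapter5 "Aquantia AQtion 10Gbit Network Adapter" --nictype5 virtio

rem Adapter 6: NAT with the emulated Intel PRO/1000 MT Desktop (the e1000 family)
VBoxManage modifyvm "VM name" --nic6 nat --nictype6 82540EM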
madbrain
Posts: 26
Joined: 26. Mar 2011, 00:52
Primary OS: MS Windows 7
VBox Version: OSE other
Guest OSses: CentOS OS/2

Re: Network performance with bridged is half that with NAT

Post by madbrain »

fth0 wrote:Thank you for your tests and scripts and your valuable contribution in general. :) Investigating VirtualBox's network performance is on my long-term ToDo list, but I never really got to that. I'll take a look at your results later ...

A word of caution: I'm not affiliated with Oracle or the VirtualBox development, so take the following with a grain of salt. Searching the VirtualBox change log for the term "network", I only found information regarding bugfixes, but neither the term "performance" nor anything with respect to 10 Gbit/s network adapters. In the VirtualBox forums, network performance is a rare topic also. I've heard that the VirtualBox developers accept patches sometimes, so if you're a developer ...

Nonetheless, I'm happy to discuss the topic with you further, as far as I can contribute. For starters, a simple one:
madbrain wrote:VirtualBox allows up to 4 virtual interfaces
If you use the VBoxManage command, you can configure at least 8 virtual network adapters.
Thanks. Are the developers reading these forums? Yes, I'm a developer. I used to work for Sun and Oracle, in a previous life, even. But never on VirtualBox.

I have rerun my batches for longer (30 s for each case vs 5 s), and made sure I used the proper high-performance power plan in the VM.
I typed my results into a spreadsheet, which you can see here:
https://docs.google.com/spreadsheets/d/ ... edit#gid=0

It's the same VM, tested with 4 different network interfaces, against 3 different servers: one being the Win VM host, HIGGS, and 2 being remote. Server10g is on the same switch as HIGGS. HTPC-RYZEN is in another room and behind 2 switches.

Going by the totals on line 29, the fastest combination overall is e1000g-bridged. The slowest is virtio-nat. There is a more than 2:1 difference in performance between those two cases.
However, no single combination emerges as the fastest for every test. For example, on lines 12, 13, 14 and 15, 20, 21, 22 and 23, virtio-bridged is the fastest, by a huge margin.
But on lines 8, 9, 10, 11, 16, 17, 18, 19, 24, 25, 26 and 27, e1000g-bridged is the fastest, also by a huge margin ...

Anyway, overall, my measurements confirm that bridged is faster than NAT, so the doc is correct in this sense.
But between e1000g-bridged and virtio-bridged, it's much less clear.
When talking to the remote hosts, HTPC-RYZEN and SERVER10G, e1000g-bridged is much faster at receiving data. Up to 5x faster at receiving vs sending, when talking to the same host with the same number of streams!
For virtio-bridged, it's the opposite: it's faster at sending than receiving, by about 1.5 to 2x, everything else being equal ...
It doesn't make a whole lot of sense that there would be such huge differences between the send and the receive side.
(Things are inverted when talking to HIGGS, but that is the VM host and things are probably bottlenecked at the switch when talking to/from the same ethernet port.)

Differences between the send and receive sides are not specific to VMs. I have seen differences on bare metal too. See this spreadsheet from last year:
https://docs.google.com/spreadsheets/d/ ... sp=sharing

However, the differences there are never more than about 20% between send/receive side when talking to the same host with the same number of streams.
I have never seen a 5x factor before.

Anyway, I need to get back to measuring my toaster oven's preheat time and then go to sleep. I think it's going back to Costco tomorrow ;)
scottgus1
Site Moderator
Posts: 20965
Joined: 30. Dec 2009, 20:14
Primary OS: MS Windows 10
VBox Version: PUEL
Guest OSses: Windows, Linux

Re: Network performance with bridged is half that with NAT

Post by scottgus1 »

Just to toss in a couple things:
madbrain wrote:Are the developers reading these forums ?
Very rarely.
fth0 wrote:If you use the VBoxManage command, you can configure at least 8 virtual network adapters.
Using the ICH9 chipset, one can get 36 network adapters in a VM. :shock:
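
With the VM powered off, that's a one-liner (VM name is an example):

rem Switch the VM to the ICH9 chipset, which raises the network adapter limit
VBoxManage modifyvm "VM name" --chipset ich9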
fth0
Volunteer
Posts: 5668
Joined: 14. Feb 2019, 03:06
Primary OS: Mac OS X other
VBox Version: PUEL
Guest OSses: Linux, Windows 10, ...
Location: Germany

Re: Network performance with bridged is half that with NAT

Post by fth0 »

madbrain wrote:Are the developers reading these forums ?
The VirtualBox forums are user-to-user forums, and it is unknown if any VirtualBox developer is reading them regularly. Posts by VirtualBox developers are very rare, which might give an indication. You can create a ticket in the Bugtracker, though.

Another expert tip: The statistics at the end of the VBox.log file show two groups of detailed information about the E1000 driver (/Devices/e1000#0/*; E1000#0). If you look up their meaning within Intel's documentation (see deep link on OSDev.org Wiki Intel_8254x), you can get further insights.
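
If you don't want to wait for a shutdown to read VBox.log, I believe the same counters can also be queried from a running VM, for example (VM name is an example):

rem Dump the E1000 statistics of a running VM
VBoxManage debugvm "VM name" statistics --pattern "/Devices/e1000*"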