Guest 3x dbench performance of Host
Posted: 28. Jul 2013, 19:55
I'm trying to evaluate several different possibilities for virtualising a Linux server. One disk benchmark, dbench, is giving me really weird results: the guest has a throughput 3x that of the host (~ 50 MB/s vs ~ 13 MB/s).
For straightforward sequential read throughput benchmarks, the results are as expected: the VirtualBox guest has close to but slightly-lower-than-host performance. It looks like the reason dbench in particular shows such a difference is that, as part of its simulated file server load, it writes a lot of small files (specifically, 4k ones). The writes themselves are not what's faster - but the sync operations are taking much, much less time. The problem is demonstrable with a small C program:
Code: Select all
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>

int main(void)
{
    /* O_CREAT requires a third (mode) argument */
    int fd = open("testfile", O_RDWR | O_CREAT, 0644);
    char *buf = malloc(4096);
    write(fd, buf, 4096);
    fsync(fd);   /* this is the call being timed */
    close(fd);
    return 0;
}
Then using:
Code: Select all
ltrace -T ./test_sync
it reports fsync times of ~ 0.002 s on the guest and ~ 0.02 s on the host.
Also strange is the amount of writes it causes on the host. Putting the write and fsync in a while(1) loop, iostat on the guest reports ~ 3 MB/s write throughput, whereas on the host it causes ~ 8 MB/s of write throughput on the actual disk.
Both of these problems only occur with small files. For larger writes, the gulf disappears.
I see it says under the 'Host I/O Caching' section of the user manual:
If you decide to disable host I/O caching for the above reasons, VirtualBox uses its own small cache to buffer writes, but no read caching since this is typically already performed by the guest OS. In addition, VirtualBox fully supports asynchronous I/O for its virtual SATA, SCSI and SAS controllers through multiple I/O threads.
Is it possible this small cache is what's causing the above behaviour? Why is it used (besides the obvious - to improve performance)? Is there any way to manually disable it?
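For context, host I/O caching is toggled per storage controller with VBoxManage; this is roughly how it was disabled here (the VM name "MyServer" and controller name "SATA" are placeholders for this setup):

```shell
# Disable the host page cache for the guest's storage controller.
VBoxManage storagectl "MyServer" --name "SATA" --hostiocache off
```

As far as I can tell, this switch only controls the host page cache, not the internal write-buffering cache the manual quote above describes.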
Details of setup are:
- Both guest and host are Ubuntu 12.04, fully updated
- 3.2.0 kernel
- VirtualBox 4.2.16
- raw disk used for guest (an LVM volume on the same disk as the host)
- Core 2 Duo E8500
- host IO caching disabled