Oracle Linux disk I/O performance
Posted: 17. Dec 2012, 23:24
Hello,
According to the documentation, Oracle VM VirtualBox can achieve near-native performance of the virtual machine guest system, provided the guest code was written for the same target hardware and computer architecture as the host computer system. https://www.virtualbox.org/wiki/Virtualization
I was wondering about the VirtualBox I/O subsystem performance and created a number of tests, shown below. In addition to flushing the buffer cache and using direct I/O (iflag=direct) with the dd command, I also used different transfer sizes to test the effect of the kernel buffer cache. The tests are reproducible when performed in the same sequence, with little variation (±1%) on subsequent runs. Running the host kernel in 32-bit or 64-bit mode did not affect performance.
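For completeness, every measurement followed the same pattern. A minimal sketch of the procedure (using the fixed size disk's mount point from the tests below):
# Flush dirty pages and drop the page cache, dentries and inodes,
# so that the first read is served from the virtual disk, not RAM.
sync; echo 3 > /proc/sys/vm/drop_caches
# Sequential write: 4 GiB of zeros in 4 MiB blocks.
dd if=/dev/zero of=/mnt/sdd1/testfile bs=4M count=1k
# Sequential read, repeated to watch the buffer cache warm up,
# then once more with the cache bypassed (O_DIRECT).
dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=300
dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=300 iflag=direct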
My conclusion from the tests is that there is a huge performance difference between fixed size and dynamically allocated disks. The tests consistently show a difference of 300 to 500 percent. Is this normal?
I also wonder why it always takes 3 executions of the same command before the kernel buffer cache fully kicks in when the disk is dynamically allocated, while it works as expected (from the 2nd run) when the disk is fixed size.
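A quick way to watch the cache warm up between runs (not part of the numbered results below) is to check the guest's cached memory before and after each execution:
# The "cached" column should grow by roughly the amount of file data
# read; once it holds the whole test range, reads stop hitting the disk.
free -m
dd if=/mnt/sde1/testfile of=/dev/zero bs=4M count=300
free -m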
The performance of the fixed size disk seems to be about 2/3 of the performance of the underlying hardware (roughly 64 MB/s versus the 96 MB/s measured on the host below). With the dynamically allocated disk, however, the performance is about 3 times faster than the host. Perhaps the difference is due to how disk blocks are allocated in the two image formats (my guess). Any other explanations for the strange performance findings?
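One way to test the block allocation guess would be to compare the logical size of the dynamic image with the space it actually occupies on the host (the image path here is a placeholder, not my real one):
saturn:~ dude$ VBoxManage showhdinfo /path/to/sde.vdi
The output reports the logical size and the current size on disk; a dynamic image that has not been fully written should occupy less space on disk than its logical size.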
Kind regards!
Test Environment:
Virtual machine:
Oracle Linux 6.3 x86_64, 2048 MB RAM
2 extra virtual disks connected to the SATA controller
Host I/O cache and solid-state drive options are not enabled.
a) /dev/sdd (fixed size)
b) /dev/sde (dynamically allocated)
Host computer:
Mac OS X Snow Leopard (10.6.8)
Mac Pro 4.1 (8 Core Nehalem), 24 GB RAM, Apple RAID Card (RAID-5) 6 TB.
VirtualBox 4.2.4
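For anyone reproducing the setup, the two disk variants can be created and attached roughly like this (the VM name, controller name, sizes and file names are placeholders for my actual configuration):
VBoxManage createhd --filename sdd-fixed.vdi --size 20480 --variant Fixed
VBoxManage createhd --filename sde-dynamic.vdi --size 20480 --variant Standard
VBoxManage storageattach "OL6" --storagectl "SATA" --port 3 --device 0 --type hdd --medium sdd-fixed.vdi
VBoxManage storageattach "OL6" --storagectl "SATA" --port 4 --device 0 --type hdd --medium sde-dynamic.vdi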
Performance results of /dev/sdd (fixed size)
# sync; echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=/mnt/sdd1/testfile bs=4M count=1k
4294967296 bytes (4.3 GB) copied, 56.2638 s, 76.3 MB/s
# dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=300 (1st run)
1258291200 bytes (1.3 GB) copied, 15.6361 s, 80.5 MB/s
# dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=300 (2nd run)
1258291200 bytes (1.3 GB) copied, 0.392249 s, 3.2 GB/s
# dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=300 (3rd run)
1258291200 bytes (1.3 GB) copied, 0.334007 s, 3.8 GB/s
# dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=300 iflag=direct
1258291200 bytes (1.3 GB) copied, 8.1513 s, 154 MB/s
# dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=1k (1st run)
4294967296 bytes (4.3 GB) copied, 67.3149 s, 63.8 MB/s
# dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=1k (2nd run)
4294967296 bytes (4.3 GB) copied, 67.5066 s, 63.6 MB/s
# dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=1k (3rd run)
4294967296 bytes (4.3 GB) copied, 66.8631 s, 64.2 MB/s
# dd if=/mnt/sdd1/testfile of=/dev/zero bs=4M count=1k iflag=direct
4294967296 bytes (4.3 GB) copied, 42.6383 s, 101 MB/s
Performance results of /dev/sde (dynamically allocated)
# sync; echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/zero of=/mnt/sde1/testfile bs=4M count=1k
4294967296 bytes (4.3 GB) copied, 20.7655 s, 207 MB/s
# dd if=/mnt/sde1/testfile of=/dev/zero bs=4M count=300 (1st run)
1258291200 bytes (1.3 GB) copied, 5.69587 s, 221 MB/s
# dd if=/mnt/sde1/testfile of=/dev/zero bs=4M count=300 (2nd run)
1258291200 bytes (1.3 GB) copied, 1.45271 s, 866 MB/s
# dd if=/mnt/sde1/testfile of=/dev/zero bs=4M count=300 (3rd run)
1258291200 bytes (1.3 GB) copied, 0.352501 s, 3.6 GB/s
# dd if=/mnt/sde1/testfile of=/dev/zero bs=4M count=300 iflag=direct
1258291200 bytes (1.3 GB) copied, 3.68386 s, 342 MB/s
# dd if=/mnt/sde1/testfile of=/dev/zero bs=4M count=1k (1st run)
4294967296 bytes (4.3 GB) copied, 13.6296 s, 315 MB/s
# dd if=/mnt/sde1/testfile of=/dev/zero bs=4M count=1k (2nd run)
4294967296 bytes (4.3 GB) copied, 13.8362 s, 310 MB/s
# dd if=/mnt/sde1/testfile of=/dev/zero bs=4M count=1k (3rd run)
4294967296 bytes (4.3 GB) copied, 16.1571 s, 266 MB/s
# dd if=/mnt/sde1/testfile of=/dev/zero bs=4M count=1k iflag=direct
4294967296 bytes (4.3 GB) copied, 12.9898 s, 331 MB/s
Other Performance results:
Virtual Machine (memory-to-memory baseline, no disk involved):
# dd if=/dev/zero of=/dev/zero bs=4M count=1k
4294967296 bytes (4.3 GB) copied, 0.458887 s, 9.4 GB/s
Host System:
saturn:~ dude$ dd if=/dev/zero of=./testfile bs=4096k count=1k
4294967296 bytes transferred in 44.833756 secs (95797624 bytes/sec)
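For a more direct comparison with the guest read numbers, the host read path could be measured the same way; a sketch, assuming the purge command (which clears the OS X file system cache) is available from the developer tools:
saturn:~ dude$ purge
saturn:~ dude$ dd if=./testfile of=/dev/null bs=4096k count=1k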
I created several large files to avoid the effect of possible disk fragmentation.
Update: I checked again and had actually confused the fixed size and dynamically allocated disks. I corrected the mistake above.