Raid0/Raid1/No Raid??
Posted: 7. Aug 2008, 08:44
by illa
Something a few of us have been discussing for our VM machines is how to configure the disks most effectively...
On my machine I have two 160 GB SATA drives.
I want to run as many virtual machines as possible with the best performance possible. Each machine only needs about 20 GB of space total. My machine has one of those half-assed RAID controllers, and with 2 GB of RAM I doubt I can run more than 4 VMs. (RAID 0 would give me more disk, which I don't need, so I went with the fault tolerance.)
This is what I have done (I'm still setting everything up):
- disabled the fake RAID in the BIOS and elected to use Linux software RAID (this will probably take some extra CPU, but on my machines running VMs, RAM and hard disk are the common bottlenecks, almost never CPU)
- created one 156 GB RAID 1 partition at the start of the drives
- created one 8 GB RAID 0 partition at the end of each drive
- installed Ubuntu 8.04 Server
- installed VirtualBox 1.6.4
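For anyone wanting to reproduce a layout like this, here is a minimal sketch of the mdadm side of it. The device names (/dev/sda, /dev/sdb) and partition numbers are assumptions; adjust them for your own system, and note these commands destroy whatever is on those partitions:

```shell
# Assumes each drive already carries two partitions: sdX1 (large, at the
# start, for the RAID 1 mirror) and sdX2 (small, at the end, for the
# RAID 0 stripe). Device names here are illustrative only.

# Mirror the big partitions for the VM storage (fault tolerance):
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

# Stripe the small end partitions (e.g. for swap or scratch space):
mdadm --create /dev/md1 --level=0 --raid-devices=2 /dev/sda2 /dev/sdb2

# Put a filesystem on the mirror and persist the array config so it
# assembles on boot:
mkfs.ext3 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```

The small stripe at the end of the drives is a reasonable place for swap, since losing it on a drive failure costs nothing you care about.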
I previously had only one HDD but decided to redo my box. Anyone else want to share how they set up their hard drives for best performance?
Posted: 7. Aug 2008, 19:47
by travishein
I found the most you can do to increase disk performance is use faster hard drives. It's not so important what kind of RAID it is, or whether the RAID is hardware or software based.
At my old company I used those Seagate Cheetah drives with 15K RPM spindle speeds. The machine also had a U320 SCSI bus and a hot-swap backplane.
I had three drives in a RAID 5 and three in a RAID 0. Running about eight virtual machines, four on the RAID 5 and four on the RAID 0,
we couldn't tell for sure which was better; we kept hitting the bottleneck of having only two dual-core processors on the host machine.
So I later changed it to a six-element RAID 5 array, mostly for the redundancy, but also because we found more elements in the array make RAID 5 faster. I think we chose RAID 5 because at the time it was one of the only hot-fixable kinds of RAID.
The trade-off is of course the price of having to buy these high-end drives. Now that I am out of the company, I run a smaller system with only 7200 RPM eSATA drives (and software RAID 5). I'm still used to working with those faster Cheetah drives, but the Barracuda drives are affordable.
Raid setup
Posted: 10. Aug 2008, 14:38
by punkybouy
I would have stuck with the onboard hardware RAID, which, assuming you had a Linux driver for the chip, would probably be faster. With RAID 5 you start to consider IOPS per spindle, which at 15K RPM might be about 150-200 IOPS, so a five-drive setup at maybe ~1000 IOPS total would be pretty speedy.
Forget RAID 0, too scary. Lose one drive and the whole volume is gone.
Re: Raid setup
Posted: 10. Aug 2008, 17:22
by hege
punkybouy wrote:I would have stuck with the onboard hardware RAID, which, assuming you had a Linux driver for the chip, would probably be faster.
I suggest you google "fake raid".
Posted: 18. Aug 2008, 04:43
by illa
From my research, "hardware" RAID that isn't a real RAID controller does not help a lot. But I have never seen anyone say, "okay, here is my system,
I ran the following stress tests in both configs, and here are the results." It's mostly just, "yeah, it probably doesn't help."
I lost a 320 GB drive with personal data over the weekend. I'm all for RAID 1 this week.
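If you do go RAID 1, it's worth checking array health regularly so a failed member doesn't sit unnoticed until the second drive dies too. A quick sketch (the array name /dev/md0 is an assumption; all of these need root):

```shell
# One-line view of every md array and its sync/rebuild state:
cat /proc/mdstat

# Detailed status of a single array; look for "degraded" or a
# member marked faulty/removed:
mdadm --detail /dev/md0

# Run mdadm's monitor as a daemon so failure events get mailed to root
# (Ubuntu's mdadm package can also set this up for you at install time):
mdadm --monitor --scan --daemonise --mail=root
```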
Posted: 18. Aug 2008, 13:00
by TerryE
Given that an extra HDD costs about $60 for 500 GB these days, RAID-1 isn't much of a price burden. It gives you (i) data security against HDD failure, and (ii) double the read throughput (unlike RAID-5).
Posted: 19. Aug 2008, 00:18
by MasterChief
RAID-1 (mirroring) is cheap and can save you from trouble, but it will be slightly slower for write actions. However, adding a second controller (duplexing) will eliminate this bottleneck.
Posted: 19. Aug 2008, 00:57
by TerryE
MasterChief, can I gently push back on your statement. I feel that this slowdown is almost negligible for two reasons: (i) most HDD systems now buffer writes at the device, and current SATA channels are concurrent anyway; (ii) yes, there is increased write latency on RAID-1 since each write must complete on both devices, but this is the flip side of the doubled throughput on read ops. The overall mix still has higher throughput.
Posted: 19. Aug 2008, 02:37
by illa
I know this is more of a Linux question than a VBox/hardware question... well, still hardware.
How do you benchmark reads/writes in Linux? I'm using Ubuntu 8.04 with no GUI. In Windows I would just use perfmon and look at the counters. (I'm sure I could find this through Google...) But this leads into my second question: let's say I capture this data, what stats are we looking at?
I.e., I set up my system, run my VMs for a day and record the performance; the next day I remove one drive from the RAID 1 and benchmark the performance again.
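On a GUI-less box, the usual stand-ins for perfmon are iostat/vmstat (from the sysstat package) for ongoing counters, and dd or hdparm -t for quick throughput numbers. A minimal sketch of a dd run; the test-file path and sizes are just illustrative, and you would point TESTFILE at the array you want to measure:

```shell
# Sequential write test: 256 MB of zeros, with fsync at the end so the
# reported rate reflects the disk rather than the page cache.
TESTFILE=/tmp/ddtest.bin
dd if=/dev/zero of="$TESTFILE" bs=1M count=256 conv=fsync

# Drop caches (needs root) so the read test hits the disk, not RAM:
# sync; echo 3 > /proc/sys/vm/drop_caches

# Sequential read test:
dd if="$TESTFILE" of=/dev/null bs=1M

rm -f "$TESTFILE"
```

For your degraded-array comparison, running `iostat -x 5` while the VMs work gives per-device throughput, average I/O wait (await) and %util; those are the stats worth comparing between the two days, more so than raw MB/s.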
Posted: 19. Aug 2008, 02:56
by TerryE