Hello,
Running VB 4.2.16.rcxx with windows 7 as the host OS. One of the VMs is an ubuntu zfs file server with six sata disks used in a zfs raidz2. Things are running pretty well, but I recently noticed the host OS C: drive is just about completely full. I traced it down to one of the six sata disks used by the ubuntu file server being set up with the wrong location. That disk has its location on the C: drive of the host windows OS, while all of the other disks have their location set on the actual physical drive, with their vhd files there, not on the host windows system C: drive. The settings/storage menu for the ubuntu file server in VB manager shows this, and I can see the vhd file for the disk in the VM folder on the C: drive. All the sata disks for the zfs raidz2 pool were created in VB manager as type: normal, with dynamically allocated storage. It must have been either a default value I didn't change, a typo, or maybe just experimenting with locations and forgetting to move this disk's location back before putting the system into production.
I would like to know the best way to go about moving this virtual disk off the C: drive to the correct location on the physical disk. Searching the forum turned up threads on how to move an entire VM from one location (drive) to another, but I don't see anything about keeping the VM intact and just moving one virtual disk used by the system.
Thanks,
moving vm storage disks
Re: moving vm storage disks
I tried copying the vhd file for the disk from the C: drive to the actual physical disk (in this case K:). The file was about 430GB and the copy took over five hours. Then I released, and then removed, the virtual disk in the Virtual Media Manager and the settings/storage menu. I tried to add a new virtual disk using the vhd file copied over to the K: drive, and the ubuntu file server VM started once, but then it would fail with virtual disk errors on subsequent reboots/starts of the VM. Since the process I used would not work, I decided to just remove the virtual disk and its vhd file, and start over by creating the virtual disk with a new vhd file. That did work, and the zfs pool is now resilvering the new virtual disk in the ubuntu file server VM. It is going to take a while, since virtual box is growing the dynamic vhd file back to the original 430GB.
-
mpack
- Site Moderator
- Posts: 39134
- Joined: 4. Sep 2008, 17:09
- Primary OS: MS Windows 10
- VBox Version: VirtualBox+Oracle ExtPack
- Guest OSses: Mostly XP
Re: moving vm storage disks
Moving a disk is very easy, but you must do it in this order: you go into File | Virtual Media Manager and there you Release (detach from VM) and then Remove (unregister) the drive. Answer "No" when it asks if you want to delete the physical file. Now shut down VirtualBox completely and move the VDI/VHD (whatever). Run VBox again and use the settings|Storage section to reattach the drive in its new location.
Your procedure didn't work because VirtualBox has a bug which causes problems if you release and then reregister a drive with the same path/name or UUID in the same session. You have to shut VBox down after releasing. This bug is fixed in the forthcoming new 4.3 release.
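If you prefer the command line, the same steps can be done with VBoxManage. This is only a rough sketch - the VM name, controller name, port and paths below are placeholders, so check your own with "VBoxManage showvminfo" first:

    rem 1. With the VM powered off, detach the disk from the VM
    VBoxManage storageattach "ubuntu-fileserver" --storagectl "SATA" --port 5 --device 0 --medium none

    rem 2. Unregister the medium without deleting the file
    VBoxManage closemedium disk "C:\VMs\ubuntu-fileserver\disk6.vhd"

    rem 3. Close VirtualBox completely, then move the file (Explorer or the command line)
    move "C:\VMs\ubuntu-fileserver\disk6.vhd" "K:\VMs\disk6.vhd"

    rem 4. Start VirtualBox again and reattach the disk from its new location
    VBoxManage storageattach "ubuntu-fileserver" --storagectl "SATA" --port 5 --device 0 --type hdd --medium "K:\VMs\disk6.vhd"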
Re: moving vm storage disks
Thanks mpack. I will refer to the order you list in case the need to move a disk comes up again in the future. Still waiting on that zpool resilver to finish, 65% done at about 9 hrs. The idea behind using a copy instead of deleting the disk was to avoid the added time of a resilver, since the data in a copied vhd file should have been in sync with the rest of the zfs pool of disks in the VM. It looks like 70% of the resilver time is for the host OS to create the 430GB vhd file; at the current rate, using the proper sequence to copy might have saved about 6 hrs. A 1.6TB file transfer to the VM is what caused the C: drive to fill up because of the incorrect vhd location. The degraded transfer time (50% slower than normal for the LAN) when writing that 1.6TB file to the VM has piqued my interest in how using fixed vs. dynamic vhd files would affect the performance of my setup. Once the resilver completes, I am going to run a few more tests to get some more data points. If anything interesting turns up I will open a new thread to discuss the results and get feedback on fixed vs dynamic files with this setup.
-
mpack
- Site Moderator
- Posts: 39134
- Joined: 4. Sep 2008, 17:09
- Primary OS: MS Windows 10
- VBox Version: VirtualBox+Oracle ExtPack
- Guest OSses: Mostly XP
Re: moving vm storage disks
mm_half3 wrote: It looks like 70% of the resilver time is for the host OS to create the 430GB vhd file
Creating a dynamic VHD or VDI should be essentially instant. I take it you're actually using the fixed size variant?
There should be little or no performance difference in dynamic vs fixed, though if anything dynamic will have the edge. I've memorized more about VDI, so I'll discuss that (VHD is the same, though with a different granularity that I don't remember offhand).
Imagine the VDI disk surface divided into logical 1MB pages. There is a one time performance cost each time a new page is allocated. In fixed size disks you pay all of those performance costs up front, in dynamic disks you pay it on demand. Thereafter there is essentially no difference performance wise. Dynamic disks will tend to go through a growth phase where a majority of writes will tend to be to new areas of the disk. During this time it may appear that a dynamic disk is slightly slower, but it's really just an illusion - the cost of delaying the page allocation until it's needed. It isn't costing you more to be dynamic, you just pay the cost at a different time. Very quickly however the dynamic disk size will stabilise and there will be virtually no difference in I/O performance. Dynamic continues to have a theoretical advantage because the cost of pages which are never allocated never has to be paid.
The only real difference between the two is that fixed sized drives hog more disk space, and as a consequence take longer to backup and restore.
There's a lot of BS talked about dynamic vs fixed, but if you look at the format definition then you'll see that they're essentially identical, so it's all about usage patterns.
One thing: I would not recommend using dynamic VHD. Some idiot decided to put the "header" as a footer on the end of the file, so if the file grows the footer has to be moved, and if something nasty happens in that moment, e.g. you run out of disk space, then you end up with a dead VHD. VDI is the preferred format in VirtualBox.
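For reference, both variants can also be created from the command line. The filenames and the 1TB size here are only examples:

    rem dynamically allocated VDI (the default variant), size given in MB
    VBoxManage createhd --filename "K:\VMs\disk1.vdi" --size 1048576 --format VDI

    rem fixed size VDI of the same size - this is the one that takes a long time to create
    VBoxManage createhd --filename "K:\VMs\disk1-fixed.vdi" --size 1048576 --format VDI --variant Fixed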
Re: moving vm storage disks
mpack wrote:
mm_half3 wrote: It looks like 70% of the resilver time is for the host OS to create the 430GB vhd file
Creating a dynamic VHD or VDI should be essentially instant. I take it you're actually using the fixed size variant?
No, I used dynamic, and it was created instantly. The resilver time I referred to was the time to sync the new vdisk to the other vdisks in the raidz2 data pool in the ubuntu guest system. Since I ended up removing the vhd file, I had to replace the original 430GB virtual disk seen in the ubuntu guest with the newly created virtual disk. The resilvering process (an oracle zfs term analogous to resync in most other raid software) is recreating the striped data that was in the original vhd file onto the new vhd file. I can see virtual box dynamically increasing the size of the vhd file in the host OS as the resilver in the guest proceeds.
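For anyone following along, the replacement inside the guest was just the normal zfs workflow, roughly like this (the pool name "tank" and the device name are placeholders for my real ones):

    # in the ubuntu guest: replace the removed vdisk with the new one at the same device node
    sudo zpool replace tank /dev/sdf
    # check resilver progress
    zpool status tank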
mpack wrote: There should be little or no performance difference in dynamic vs fixed, though if anything dynamic will have the edge. I've memorized more about VDI, so I'll discuss that (VHD is the same, though with a different granularity that I don't remember offhand).
That is what my initial research turned up when I created the VM. What had me investigate a little deeper into the differences was the poor transfer rate this setup (host OS win 7 with 7 sata II disks: one 500GB disk to hold the host OS & VM system disks, and six 1TB disks used for a 5.5TB raidz2 zfs storage pool in an ubuntu guest VM) showed when transferring a 1.7TB file from a native ubuntu system with a similar zfs zpool configuration to the ubuntu VM zfs server. The transfer rate on the 1.7TB ftp transfer was a poor 204.8 Mb/s. The LAN those two systems are connected to routinely gets above 500 Mb/s on file transfers between systems, so while the resilvering process was running I decided to test a 600GB file transfer from the native ubuntu zfs file server to my desktop iMac. That transfer took about two and a half hours, at an average rate of 560 Mb/s.
My guess is the degraded transfer rate was mostly because the amount of data striped across the raidz2 zfs data pool before the file transfer was a fraction of the 1.7TB file being transferred to it. The data pool held only 360GB before the transfer and grew to 1.4TB after it. The dynamic virtual disks in the ubuntu VM had to grow by about 270GB, and the windows 7 host OS had to allocate 270GB over six disks concurrently while the file transfer was taking place. The degraded performance could also come from writing such a large file to a double-parity raidz2 data pool. The additional data points referred to in my last post are meant to determine which of the two might be causing the degraded transfer speeds. I am hoping that once the host OS dynamically allocates the space required for a virtual disk, that allocation persists even if data is later removed in the guest OS using the virtual disks. If that is indeed true, then the plan is to transfer some large files to virtual disks that don't have to be dynamically grown to hold them, and see how the transfer rates turn out. Any suggestions on if/when defragging should be done in the windows 7 host OS during the testing? I have read some articles stating a defrag should be done after a dynamically allocated virtual disk grows, but I have also read that NTFS file systems don't really need to be defragged, which leaves me not exactly sure about when, or if, to defrag at this point.
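One cheap data point I can collect first is to let windows just analyze the volume holding the vhd files before deciding whether a full defrag is worth it. Something like this, with the drive letter being only an example:

    rem analysis only, nothing is moved; /V prints the full fragmentation report
    defrag K: /A /V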
mpack wrote: Imagine the VDI disk surface divided into logical 1MB pages. There is a one time performance cost each time a new page is allocated. In fixed size disks you pay all of those performance costs up front, in dynamic disks you pay it on demand. Thereafter there is essentially no difference performance wise. Dynamic disks will tend to go through a growth phase where a majority of writes will tend to be to new areas of the disk. During this time it may appear that a dynamic disk is slightly slower, but it's really just an illusion - the cost of delaying the page allocation until it's needed. It isn't costing you more to be dynamic, you just pay the cost at a different time. Very quickly however the dynamic disk size will stabilise and there will be virtually no difference in I/O performance. Dynamic continues to have a theoretical advantage because the cost of pages which are never allocated never has to be paid.
I have seen articles/postings in microsoft hyper-v tech docs, and from programmers/techs at a few storage-specific companies (netapp, HP, etc.), on how dynamic virtual disks can cause misaligned writes and could be detrimental in VM environments, but I was never sure they really applied to my setup or would show any real performance increases. Your explanation is a solid one as to why there should be minimal performance differences, and I am betting that once I run the file transfer tests again on virtual disks that are already large enough to hold the data being transferred, the transfer speeds are going to be more in line with what the LAN normally produces.
mpack wrote: There's a lot of BS talked about dynamic vs fixed, but if you look at the format definition then you'll see that they're essentially identical, so it's all about usage patterns.
In my setup, where I have 6 disks dedicated for exclusive use in a raid 6 type of data pool, and where most of the data use is going to be reading/writing large video files (20GB and up), I guess I will need to decide which cost is preferred: the time it is going to take to create six 1TB fixed virtual disk files (going to take days for sure) vs. the degraded transfer speeds of writes that require the host OS to grow the virtual disk files. Those crazy long creation times for the fixed disk files were one of the main reasons I went with dynamic. As long as the system produces respectable transfer speeds when the virtual disks don't need to be grown, it seems like a no brainer to stick with the dynamically allocated virtual disks.
mpack wrote: One thing: I would not recommend using dynamic VHD. Some idiot decided to put the "header" as a footer on the end of the file, so if the file grows the footer has to be moved, and if something nasty happens in that moment, e.g. you run out of disk space, then you end up with a dead VHD. VDI is the preferred format in VirtualBox.
I was not aware that vdi was preferred in virtualbox; I thought vhd was preferred for windows based host OSs, but maybe that was for windows hosts using hyper-v. After I get things straightened out with the primary zfs file server, and can find somewhere to put the archived video library file, I think I may recreate the virtual disks and have them use vdi files. Once the archived library is removed, the 360GB of data will be easy to recreate.
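If I do go that route, my understanding is that an existing vhd can be cloned straight to a vdi instead of being recreated and resilvered from scratch. Roughly like this, with placeholder paths, and with the source disk released from the VM first:

    rem clone the existing VHD into a new dynamically allocated VDI
    VBoxManage clonehd "K:\VMs\disk6.vhd" "K:\VMs\disk6.vdi" --format VDI
    rem or make the copy fixed size at the same time
    VBoxManage clonehd "K:\VMs\disk6.vhd" "K:\VMs\disk6-fixed.vdi" --format VDI --variant Fixed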
Thanks,
John
-
mpack
- Site Moderator
- Posts: 39134
- Joined: 4. Sep 2008, 17:09
- Primary OS: MS Windows 10
- VBox Version: VirtualBox+Oracle ExtPack
- Guest OSses: Mostly XP
Re: moving vm storage disks
No, the only reason VHD is used on PCs is because Microsoft likes to have ownership of important file formats on its OS platforms. If Microsoft didn't push it then I expect it would have been replaced long ago by VMDK, a vastly superior format. VDI is the native format in VBox, having a robustness similar to VMDK, but it's a simpler format, which I like - and it's fully supported in VBox, which VMDK and VHD are not.
Don't get me wrong: recent versions of Windows bundle tools to manipulate VHDs which make VHDs attractive in a sense, but I could never accept that risk of corruption, and 3rd party equivalents of those tools are available for formats other than VHD.
Re: moving vm storage disks
To follow up on what happened with this problem: a few months back I removed all the virtual disks and recreated them as dynamic vdi disks. In my setup, with a six disk zfs raidz2 pool as described in earlier posts, using dynamic vdi disks has not worked out very well with respect to write performance. This system is used as a media file server and does many large (20GB and up) writes. From time to time I have moved groups of media files between this virtual ubuntu file server and another native ubuntu file server; those writes can be multiple TB in size. As expected, when the virtual ubuntu system had to dynamically allocate additional space to accommodate a large write, performance slowed to a crawl, below 10MB/s. This is the delayed "cost" of doing business when using dynamic vdi's, rather than paying it all up front as with fixed vdi's.
The unexpected consequence of using dynamic vdi disks was the varying write performance after media files were deleted from the dynamic disks. My take on the posts about dynamic vdi's was that once the price was paid to create space in a dynamic vdi, that space was permanently there, even if the data was removed and actual usage fell below the space allocated in the dynamic drive. My setup did not see this consistently in practice. Sometimes write performance would be on par with what is expected for this system (50-80MB/s), but most of the time it would be much slower and drop to below 10MB/s. Poor write performance was seen even after removing five times the space required for the new file from the zfs pool of dynamic vdi disks. In those cases, I could actually see the disks reallocating space for the new file by watching them in the host OS (win 7).
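As a side note, my understanding is that VirtualBox can compact a dynamic vdi after deletions, but only for blocks the guest has zeroed out, which a copy-on-write filesystem like zfs doesn't normally do, so it probably would not have helped much here. For the record (file name is a placeholder):

    rem run on the host with the VM powered off and the disk not attached to a running VM
    VBoxManage modifyhd "K:\VMs\disk1.vdi" --compact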
Given the unexpected "double cost" I was seeing using dynamic vdi disks, I decided to just take the time to recreate them as fixed vdi disks. After a couple of days I got through all six 2TB disks, set up the zfs raidz2 pool again with the fixed disks, and things seem to be going as expected. All writes perform at the same 50-80MB/s speeds. It looks like the usage patterns for this type of media file server are better suited to fixed vdi disks.
John
-
mpack
- Site Moderator
- Posts: 39134
- Joined: 4. Sep 2008, 17:09
- Primary OS: MS Windows 10
- VBox Version: VirtualBox+Oracle ExtPack
- Guest OSses: Mostly XP
Re: moving vm storage disks
That can't be correct. Fixed and Dynamic VDIs use the same format, the only difference is when the allocation happens. I suspect your disks were simply fragmented at some level, or something else is causing the observed I/O hit. Still, if you're happy then what the hey.