I/O errors after importing raw eSATA disks with a ZFS pool to Solaris guest

Posted: 19. Jul 2020, 22:59
by Rob_Thurlow
Hi folks, I'm posting this here after also posting a version to the "Solaris guests" forum.

I have a 5-bay eSATA box (a Sans Digital TR5M-BP) with an existing ZFS pool that's been run from an old homebrew AMD Phenom II machine running Solaris 11.3. I would like to access the ZFS pool from a Solaris VBox instance and retire the old box.

I have a Windows 10 host (Dell XPS 8930) with a Mediasonic ProBox HP1-SS3 eSATA card and the ASMedia 3.3.3 eSATA driver. It appears to work, but I don't fully trust it - the 3.3.2 version of the driver rendered Windows 10 unbootable.

I created raw disk vmdks with a series of commands like this (run as admin, and I have to run VBox as admin as well):

VBoxManage internalcommands createrawvmdk -filename "C:\Users\rthur\VirtualBox VMs\sol-11_4-vbox\z1.vmdk" -rawdisk \\.\PhysicalDrive2
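(For anyone trying this: the \\.\PhysicalDriveN numbers can be double-checked beforehand from an admin command prompt with the stock Windows tool - the serial numbers make it easy to tell the eSATA drives apart from the host's internal ones:)

wmic diskdrive get index,model,serialnumber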

Then I added a SATA controller to my Solaris 11.4 VBox config and attached these vmdks to it. 'format' saw the disks, 'zpool import' saw the pool, and 'zpool import <pool>' worked :-)
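(Side note: the same attachment can be done from the command line instead of the GUI. Roughly like this - assuming the VM is actually named "sol-11_4-vbox" and the controller "SATA", which may not match your setup:)

VBoxManage storagectl "sol-11_4-vbox" --name "SATA" --add sata --portcount 5
VBoxManage storageattach "sol-11_4-vbox" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "C:\Users\rthur\VirtualBox VMs\sol-11_4-vbox\z1.vmdk"

(and the same storageattach line for z2.vmdk through z5.vmdk on ports 1 to 4)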

Then I got cocky. I ran a 'zpool scrub', which ran for a few minutes and then hit an I/O error (sorry, details lost) - Windows had detected a problem, the Solaris guest was halted, and the restarted guest had lost access to three of the five drives. It looked dire for a while there. I had to power down the Solaris VBox and reboot Windows 10 to get the drives back online, and then found that the guest was still missing a drive because Windows had renumbered the eSATA drives. I edited the five vmdks to point at the correct PhysicalDrive numbers, and I am back to having a full pool again. Renumbering is not my favorite thing, but at least I know to look out for it now.

Having I/O errors makes me nervous, and I am not sure how to track this down. Windows seems to have raised the alarm - what can I look at on the Windows side to find out more? I can imagine there could be issues in the eSATA card+driver, and I might try to find a Silicon Image-based card that is more trustworthy. But I also wonder if I could improve things by setting the caching policy for those drives so that Windows stays further out of the way of the eSATA traffic. Does anyone know of anything in that direction that might help? I did find this page in a search, and a response on it seems relevant: https://superuser.com/questions/289189/ ... in-windows

"ZFS in virtual machine can work just fine if follow one simple rule never ever lie to ZFS. ZFS goes to great length to keep your data from getting corrupted (checksums, copy-on-write, dittoblocks, mirrors or raid-z, etc) so you should do everything in your power to let ZFS directly access your disks. All the horror stories of virtualized ZFS issues come from some level of buffered IO from virtualization software buffers, disk controller cache or even windows with writethrough cache if you're dumb enough to use virtual disks instead of whole raw disks."

Finally, I have a USB 3.0 4-bay box which I am thinking of deploying a new pool on - any thoughts on whether that might be more successful?

Thanks,
Rob T

Re: I/O errors after importing raw eSATA disks with a ZFS pool to Solaris guest

Posted: 20. Jul 2020, 15:22
by scottgus1
In the main VirtualBox window, open the guest's Storage settings, select the controller that your raw disks are attached to, and toggle "Use Host I/O cache" to the opposite of its current state.
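The same toggle can be flipped with VBoxManage while the guest is powered off - for example, assuming your VM and controller are named "sol-11_4-vbox" and "SATA" (substitute your actual names):

VBoxManage storagectl "sol-11_4-vbox" --name "SATA" --hostiocache off

(use --hostiocache on to go the other way)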

I strongly suggest having at least two FC-confirmed backup copies of the data on the pool if you wish to preserve it during your experiments - or experiment on different drives. Though things should stabilize, re-enumeration of the drive numbering on the host remains a possibility, and with raw-disk access it can destroy data. You should write a host script that double-checks the enumeration before starting the guest.
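Something like this batch file, run as Admin, would do as a rough sketch - the drive number and serial below are made up, so substitute the real serial recorded for each vmdk's disk and repeat the check for all five drives:

@echo off
rem Hypothetical check: z1.vmdk was created against \\.\PhysicalDrive2, and we
rem recorded its serial number as WD-EXAMPLE12345 (replace with your real serial).
wmic diskdrive where "Index=2" get SerialNumber | findstr /i "WD-EXAMPLE12345" >nul
if errorlevel 1 (
    echo PhysicalDrive2 no longer matches the expected serial - fix the vmdks before starting the guest.
    exit /b 1
)
VBoxManage startvm "sol-11_4-vbox"

(the startvm name is a guess based on the vmdk path above - use your VM's actual name)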

Run as Admin is required for everything involving raw disk access on a Windows host, and running VirtualBox as Admin opens vectors for malware. Keep that host clean!