I have this same issue, running 5.2.16 on both Solaris 11.3 and Mac OS. I've not yet tested a Windows host, but I presume it behaves the same, as the issue appears cross-platform.
UPDATE: I have just tested Dev snapshot 124007 on both Mac OS and Solaris 11.3 and it resolves this issue.
I expect the report below is now superfluous, except to note that the issue still affects the available 5.2.x stable releases. Hopefully the fix will soon make it into a full 5.2.x release?
I had hoped to simply start using Dev 124007 on my Solaris server, but it introduces a major bug that prevents me from doing so: VBoxHeadless segmentation-faults every time I open an RDP connection to a guest. So I need to revert to 5.2.16.
(That bug is reported here.)
Original problem report below, applying to 5.2.14 and 5.2.16, but fixed in Dev 124007:
----------------------------------------------------------------------------------------------------------
My symptoms:
- Windows 10 Pro x64 guests using the HyperV paravirtualiser and CPUs > 8 will hang early in startup, locking on the Windows logo without the "spinning wheel" progress bar ever appearing;
- This happens both when booting from an installed Win10 HDD and when booting the Windows 10 installer DVD, so it is not related to the contents of any particular OS installation;
- Looking at the VBoxHeadless process on the host, I see the affected guest using only a few hundred MB of RAM while stuck at 100% of one logical CPU (4.1% of my host, which has 24 logical CPUs from 2 sockets * 6 cores * 2 threads; 100% / 24 =~ 4.1%);
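For anyone wanting to check the same thing, this is roughly how I observe the stuck process (a sketch only; it uses Solaris prstat when available and falls back to plain ps elsewhere):

```shell
#!/bin/sh
# Look for the VBoxHeadless process of the (possibly stuck) guest.
PID=$(pgrep -x VBoxHeadless 2>/dev/null | head -n 1)
if [ -z "$PID" ]; then
    STATUS="no VBoxHeadless process running on this host"
else
    # Solaris has prstat; fall back to ps on other hosts (macOS, Linux).
    if command -v prstat >/dev/null 2>&1; then
        STATUS=$(prstat -p "$PID" 1 1)
    else
        STATUS=$(ps -o pid,pcpu,rss,comm -p "$PID")
    fi
fi
echo "$STATUS"
```

A hung guest shows up as one pegged logical CPU (%CPU near 100 on one thread) with low resident memory.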
- Switching to paravirt=Minimal allows a Windows10_64 guest with cpus > 8 to boot. However, with Minimal, guest performance is unusably bad regardless of the number of guest CPUs (another bug?)
I boot my VMs with VBoxHeadless and then access them through VRDP. All have 3d acceleration turned off.
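For completeness, this is roughly how I start and reach a guest (a sketch; the VM name and RDP port match the repro commands below, and the check makes it a no-op on hosts without VirtualBox):

```shell
#!/bin/sh
VM=Win10-g2   # VM name from the repro below
if command -v VBoxHeadless >/dev/null 2>&1; then
    # Start headless; the VM's VRDE settings (--vrde on, TCP/Ports=3345)
    # expose RDP, so a client then connects with e.g.: rdesktop <host>:3345
    nohup VBoxHeadless --startvm "$VM" >/dev/null 2>&1 &
    STARTED=yes
else
    STARTED="no (VirtualBox not installed on this host)"
fi
echo "started: $STARTED"
```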
Host hardware:
Both the hosts I have confirmed the issue on (Solaris 11.3 server and Mac OS workstation) are running Intel X5670 Xeon processors, Westmere architecture.
I can recreate the issue in a new VM using the following CLI commands:
Code: Select all
VBoxManage createvm --name Win10-g2 --basefolder /system/vbox/vm --ostype Windows10_64 --register
VBoxManage modifyvm Win10-g2 --memory 8192 --nic1 bridged --bridgeadapter1 "global1g0 - Ethernet" --audio none --vrde on --vrdeproperty TCP/Ports=3345 --accelerate3d off
VBoxManage modifyvm Win10-g2 --cpus 12
VBoxManage storagectl Win10-g2 --name "SATA" --add sata --controller IntelAhci --portcount 2 --bootable on
VBoxManage storageattach Win10-g2 --storagectl SATA --port 1 --device 0 --type dvddrive --medium /data/software/Win10_1803.iso
VBoxManage storageattach Win10-g2 --storagectl SATA --port 2 --device 0 --type hdd --medium /system/vbox/disks/Win10-g2/Win10-g2.disk.vdi
The above will always hang on startup, with the error "HyperV: Guest indicates a fatal condition!" appearing in the log. Changing the VM as follows allows it to boot (but run extremely slowly):
Code: Select all
VBoxManage modifyvm Win10-g2 --paravirtprovider minimal
Guests configured with HyperV and 8 or fewer CPUs run fine. That covers most of my needs, but I do occasionally want to run Windows VMs that can use more of my host's resources.
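As a stopgap I cap affected guests at 8 CPUs while keeping the HyperV provider (a sketch; the VM must be powered off first, and the check makes it a no-op on hosts without VirtualBox):

```shell
#!/bin/sh
VM=Win10-g2   # VM name from the repro above; must be powered off
if command -v VBoxManage >/dev/null 2>&1; then
    # Keep the HyperV paravirt provider but stay at the 8 CPUs that work.
    VBoxManage modifyvm "$VM" --cpus 8 --paravirtprovider hyperv
    RESULT=$(VBoxManage showvminfo "$VM" --machinereadable \
        | grep -E '^(cpus|paravirtprovider)=')
else
    RESULT="VBoxManage not found; nothing changed"
fi
echo "$RESULT"
```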
Thanks in advance.
Attached:
1. Vbox configuration file of a guest demonstrating the issue (created with the CLI commands listed above);
2. A log of a guest booted with HyperV and cpus = 12, showing the "HyperV: Guest indicates a fatal condition!" error and the subsequent power-off. The guest was booted with VBoxHeadless --vrde off (to simplify the log).