Nested virtualization on windows host

Discussions related to using VirtualBox on Windows hosts.
fb
Posts: 9
Joined: 9. Jun 2021, 19:49

Nested virtualization on windows host

Post by fb »

Hello,

I hope to find some help here. My setup is as follows:

- CPU: i7-10750H
- Guest: Debian 10, kernel 5.10.0-0.bpo.5-amd64
- Host: Windows 10
VirtualBox 6.1.18, nested virtualization enabled via the command line (the option is greyed out in the GUI): ./vboxmanage modifyvm Debian --nested-hw-virt on
I verified it in the guest via "egrep --color -i 'svm|vmx' /proc/cpuinfo"; vmx is displayed (as seen in the image).

I want to use qemu inside my debian guest system. I enabled nested virtualization of VirtualBox as I wrote above.

So it's like this: Windows 10 VirtualBox[ Debian 10 qemu[ Debian+custom kernel ] ]

When I use qemu with the flag -enable-kvm, VirtualBox crashes. I don't really understand why, as nested virtualization should be possible with my version of VirtualBox, right?
Using qemu without the -enable-kvm flag works, as expected.
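For reference, the two invocations look roughly like this (a sketch; the disk image name debian-nested.qcow2 is an assumption):

```shell
# Sketch only; "debian-nested.qcow2" is a placeholder image name.

# With KVM acceleration (the case that crashes the outer VirtualBox VM here):
qemu-system-x86_64 -enable-kvm -hda debian-nested.qcow2

# Without KVM, QEMU falls back to pure emulation (TCG): slow, but independent
# of nested virtualization support:
qemu-system-x86_64 -hda debian-nested.qcow2
```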

Attaching the log file does not seem to work, as it's > 128 KiB. Embedding pictures also does not work.

Picture of vm-state: ibb.co/F3fkh9v
Log-file: ufile.io/d0zc7log
scottgus1
Site Moderator
Posts: 20965
Joined: 30. Dec 2009, 20:14
Primary OS: MS Windows 10
VBox Version: PUEL
Guest OSses: Windows, Linux

Re: Nested virtualization on windows host

Post by scottgus1 »

There are prerequisites in the CPU capabilities for nested virtualization to work well.

This implies your CPU may not have them all:
fb wrote:is greyed out in GUI
We can check with a vbox.log. Start the Debian VM from full normal shutdown, not save-state. Get logged in, then shut down the VM from within the VM's OS.

Right-click the VM in the main VirtualBox window's VM list and choose Show Log. Save the far-left tab's log, zip it, and post the zip file using the forum's Upload Attachment tab.

FWIW, the best-supported configuration for nested virtualization is VirtualBox on both layers. Other hypervisors can be tried, but they may not work.
fb
Posts: 9
Joined: 9. Jun 2021, 19:49

Re: Nested virtualization on windows host

Post by fb »

Alright, I did what you said and uploaded the log.
Thank you in advance!
Attachments
Debian-2021-06-09-23-58-37.zip
log
(38.72 KiB) Downloaded 12 times
scottgus1
Site Moderator
Posts: 20965
Joined: 30. Dec 2009, 20:14
Primary OS: MS Windows 10
VBox Version: PUEL
Guest OSses: Windows, Linux

Re: Nested virtualization on windows host

Post by scottgus1 »

Yep, you're missing one:
00:00:07.228268 Full Name: "Intel(R) Core(TM) i7-10750H CPU @ 2.60GHz"
00:00:07.228299 VMX - Virtual-Machine Extensions = 1 (1)
00:00:07.228320 Ept - Extended Page Tables = 0 (1)
00:00:07.228324 UnrestrictedGuest - Unrestricted guest = 0 (1)
00:00:07.228328 VmcsShadowing - VMCS shadowing = 0 (0)
For optimum nested virtualization, all four of the trailing values in parentheses should be 1. The last one, "VMCS shadowing", is (0), so no good nested virtualization with that CPU.
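If you want to pull those capability lines out of a VBox.log yourself, a grep along these lines works (a sketch; the sample file below just mimics the quoted log format):

```shell
# Create a sample file that mimics the capability lines from the VBox.log:
printf '%s\n' \
  '00:00:07.228299 VMX - Virtual-Machine Extensions = 1 (1)' \
  '00:00:07.228320 Ept - Extended Page Tables = 0 (1)' \
  '00:00:07.228324 UnrestrictedGuest - Unrestricted guest = 0 (1)' \
  '00:00:07.228328 VmcsShadowing - VMCS shadowing = 0 (0)' > VBox.log

# Extract the four nested-virtualization capability lines:
grep -E 'VMX |Ept |UnrestrictedGuest|VmcsShadowing' VBox.log
```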
fb wrote:When i use qemu with the flag -enable-kvm VirtualBox crashes
Does nested work on your setup if you don't enable this flag?
fb
Posts: 9
Joined: 9. Jun 2021, 19:49

Re: Nested virtualization on windows host

Post by fb »

It runs without the flag. If my understanding is correct, without that flag the hardware gets 100% emulated, so it's totally independent of nesting and should always work.
The guest system in qemu then just runs extremely slowly. It took me almost 3 hours to install a small Debian OS without that flag. The system in qemu is just for verification of some new kernel features, so it does not need to be extremely fast; it does not even need a graphical user interface. But I fear connecting to it with gdb and debugging could be too slow.

Edit: My understanding (I did not look that much into it) of VMCS is that it's "just" for performance and not an absolute show-stopper for nested virtualization; is that not correct?
scottgus1
Site Moderator
Posts: 20965
Joined: 30. Dec 2009, 20:14
Primary OS: MS Windows 10
VBox Version: PUEL
Guest OSses: Windows, Linux

Re: Nested virtualization on windows host

Post by scottgus1 »

fb wrote:VMCS is, that it's "just" for performance and not an absolute show-stopper
I picked up all I know about it from our hex whisperer 'fth0', who put together the list of CPU features needed. It appears that VMCS shadowing isn't strictly necessary, but it greatly increases performance. Without it you'd have performance equivalent to a block of concrete, but it might still tick over. With it...

It appears that without it, that flag is not compatible. Whether this is expected behavior or a bug would be something for the devs on the Bugtracker, unless fth0 drops by and explains things here.
fth0
Volunteer
Posts: 5668
Joined: 14. Feb 2019, 03:06
Primary OS: Mac OS X other
VBox Version: PUEL
Guest OSses: Linux, Windows 10, ...
Location: Germany

Re: Nested virtualization on windows host

Post by fth0 »

First of all, my technical summary in Nested VT-x and VMCS Shadowing gives some background info.

Just like QEMU without KVM, and just like (non-nested) VirtualBox without EPT, using nested virtualization with VirtualBox inside a VirtualBox VM without VMCS Shadowing is expected to be extremely slow (by today's standards). QEMU/KVM inside a VirtualBox VM will probably not be faster when it works. Speaking of which:

QEMU/KVM inside a VirtualBox VM is generally expected to work, according to 9.33. Nested Virtualization. But I know about at least two setups, reported by users in the VirtualBox forums, where it didn't work and crashed: the GNS3 network simulator when simulating Cisco devices with QEMU/KVM, and some Android emulators as soon as they utilize QEMU/KVM. I haven't seen any positive feedback yet, but that doesn't tell much about a rarely used functionality, where you typically do not get much feedback, if at all.

Debian-2021-06-09-23-58-37.log wrote:
00:00:25.201136 VMMDev: Guest Log: vboxguest: host-version: 6.1.18r142142 0x8000000f
00:00:25.201193 VMMDev: Guest Additions information report: Version 6.0.0 r127566 '6.0.0'
[...]
00:00:33.488835 VMMDev: Guest Log: 21:57:48.982688 main     VBoxService 6.1.18 r142142 (verbosity: 0) linux.amd64 (Jan  7 2021 17:26:51) release log
[...]
00:01:14.763990 VMMDev: Guest Log: A fatal guest X Window error occurred.  This may just mean that the Window system was shut down while the client was still runnTerminated with signal 15
The Debian-2021-06-09-23-58-37.log file does not show any crash, but rather the typical signs of a partial installation of the VirtualBox Guest Additions (GA), where the VirtualBox kernel modules could not be built during installation (the 6.0.0 ones come pre-installed with many Linux distributions). Please re-install the GA and especially pay attention to any error messages in the terminal output.
fb
Posts: 9
Joined: 9. Jun 2021, 19:49

Re: Nested virtualization on windows host

Post by fb »

First of all, thank you for your answers.
fth0 wrote:
Debian-2021-06-09-23-58-37.log wrote:
00:00:25.201136 VMMDev: Guest Log: vboxguest: host-version: 6.1.18r142142 0x8000000f
00:00:25.201193 VMMDev: Guest Additions information report: Version 6.0.0 r127566 '6.0.0'
[...]
00:00:33.488835 VMMDev: Guest Log: 21:57:48.982688 main     VBoxService 6.1.18 r142142 (verbosity: 0) linux.amd64 (Jan  7 2021 17:26:51) release log
[...]
00:01:14.763990 VMMDev: Guest Log: A fatal guest X Window error occurred.  This may just mean that the Window system was shut down while the client was still runnTerminated with signal 15
The Debian-2021-06-09-23-58-37.log file does not show any crash, but rather the typical signs of a partial installation of the VirtualBox Guest Additions (GA), where the VirtualBox kernel modules could not be built during installation (the 6.0.0 ones come pre-installed with many Linux distributions). Please re-install the GA and especially pay attention to any error messages in the terminal output.
That log was without VirtualBox crashing, just for checking the capabilities of my system. I attached a log with crash to this post.

I will try to re-install the Guest Additions. Do you think that could have something to do with my VM crashing when starting qemu with KVM enabled? Otherwise my VM runs perfectly.

Edit: I uninstalled and reinstalled the Guest Additions. This is the output; it seems that everything worked:

fb@fb-debian:/media/fb/VBox_GAs_6.1.18$ sudo sh ./VBoxLinuxAdditions.run
Verifying archive integrity... All good.
Uncompressing VirtualBox 6.1.18 Guest Additions for Linux........
VirtualBox Guest Additions installer
Copying additional installer modules ...
Installing additional modules ...
VirtualBox Guest Additions: Starting.
VirtualBox Guest Additions: Building the VirtualBox Guest Additions kernel
modules. This may take a while.
VirtualBox Guest Additions: To build modules for other installed kernels, run
VirtualBox Guest Additions: /sbin/rcvboxadd quicksetup <version>
VirtualBox Guest Additions: or
VirtualBox Guest Additions: /sbin/rcvboxadd quicksetup all
VirtualBox Guest Additions: Building the modules for kernel
5.10.0-0.bpo.5-amd64.
update-initramfs: Generating /boot/initrd.img-5.10.0-0.bpo.5-amd64
I: The initramfs will attempt to resume from /dev/sda2
I: (UUID=cb32d0dd-84b3-40f2-be2c-508a7f28d554)
I: Set the RESUME variable to override this.
VirtualBox Guest Additions: Running kernel modules will not be replaced until
the system is restarted
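To confirm the freshly built modules are actually loaded after the restart, something along these lines can help (a sketch; vboxguest and vboxsf are the standard Guest Additions module names):

```shell
# Sketch: verify the Guest Additions kernel modules after a reboot.
lsmod | grep -i vbox                  # vboxguest, vboxsf should be listed
modinfo vboxguest | grep -i version   # should report 6.1.18, not 6.0.0
```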

Edit2: The problem persists with the newly installed Guest Additions.
Attachments
log-with-crash.rar
(83.51 KiB) Downloaded 5 times
fth0
Volunteer
Posts: 5668
Joined: 14. Feb 2019, 03:06
Primary OS: Mac OS X other
VBox Version: PUEL
Guest OSses: Linux, Windows 10, ...
Location: Germany

Re: Nested virtualization on windows host

Post by fth0 »

I now think the GA installation and the crash are two separate topics:

Regarding the GA installation, the log-with-crash.log file doesn't give any useful info, because it shows the VM being resumed from a saved state. This is like starting to view a video from your last pausing position, where you usually aren't told what happened at the beginning of the video. The output from the GA installation looks fine, and I hope you restarted the guest as requested therein.

Regarding the crash, the log-with-crash.log file shows similar information as in the other cases I mentioned (GNS3, Android). Since nested virtualization is a rare use case of VirtualBox, and the VirtualBox developers are very busy all the time, I wouldn't hold my breath.
fb
Posts: 9
Joined: 9. Jun 2021, 19:49

Re: Nested virtualization on windows host

Post by fb »

fth0 wrote:I now think the GA installation and the crash are two separate topics:
Regarding the GA installation, the log-with-crash.log file doesn't give any useful info, because it shows the VM being resumed from a saved state. This is like starting to view a video from your last pausing position, where you usually aren't told what happened at the beginning of the video. The output from the GA installation looks fine, and I hope you restarted the guest as requested therein.
I did restart, everything except the nesting problem seems to be OK.
fth0 wrote: Regarding the crash, the log-with-crash.log file shows similar information as in the other cases I mentioned (GNS3, Android). Since nested virtualization is a rare use case of VirtualBox, and the VirtualBox developers are very busy all the time, I wouldn't hold my breath.
Well, I just saw there's a new version of VirtualBox (6.1.22); mine is 6.1.18. There's this bug ticket, too: virtualbox/org/ticket/20199. It seems to be my problem (same VirtualBox version), which was presumably fixed by the test build from that time (4 months ago). So I will try the new version and hope that the test build's fix made it into the new release.


Edit: Updated it, and qemu with the -enable-kvm flag now seems to correctly boot the nested guest system. Had I just checked this earlier... oh well.
fth0 wrote: Just like QEMU without KVM, and just like (non-nested) VirtualBox without EPT, using nested virtualization with VirtualBox inside a VirtualBox VM without VMCS Shadowing is expected to be extremely slow (by today's standards). QEMU/KVM inside a VirtualBox VM will probably not be faster when it works.
Just to clarify for me: could using qemu with nested virtualization via KVM, without VMCS shadowing, really be slower than full emulation without KVM?
There would not even be a graphical user interface in the nested machine, just a terminal. The application that will run on it is just a small server which stores a bit of data (I assume under 50 MB in total) and communicates with one client. Additionally, gdb may run in the nested machine, so it's just used for verification and debugging purposes. Do you think the performance will be sufficient, or can I expect a ridiculously slow, lagging system on which the work described above is not possible?
Qemu will get 2 cores and 4 GB memory.
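With that sizing, the invocation might look roughly like this (a sketch; the image name is an assumption):

```shell
# Sketch: nested guest with 2 vCPUs, 4 GiB RAM, and no graphical output.
# "debian-nested.qcow2" is a placeholder image name.
qemu-system-x86_64 \
  -enable-kvm \
  -smp 2 \
  -m 4096 \
  -nographic \
  -hda debian-nested.qcow2
```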
fth0
Volunteer
Posts: 5668
Joined: 14. Feb 2019, 03:06
Primary OS: Mac OS X other
VBox Version: PUEL
Guest OSses: Linux, Windows 10, ...
Location: Germany

Re: Nested virtualization on windows host

Post by fth0 »

fb wrote:Well, I just saw there's a new version of VirtualBox (6.1.22); mine is 6.1.18. There's this bug ticket, too: virtualbox/org/ticket/20199. It seems to be my problem (same VirtualBox version), which was presumably fixed by the test build from that time (4 months ago). So I will try the new version and hope that the test build's fix made it into the new release.
Right, I'm getting old and forgot that one (20199). The bug fix is part of VirtualBox 6.1.20 and newer.
fb wrote:Just to clarify it for me, using qemu with nested virtualization via kvm without VMCS could really be slower than full emulation without kvm?
No, I don't think so. I just wanted to convey that all three disadvantageous combinations (QEMU without KVM, VirtualBox without EPT, nested VirtualBox without VMCS Shadowing) will be considerably slower than a typical VirtualBox VM. Whether it is slower by a factor of 5, 10 or more, I don't know. But it usually is not just a few percent slower.

For example, I once took one of my Windows VMs and one of my Linux VMs on a dual-boot Linux/Windows host and disabled System > Acceleration > Nested Paging, which effectively disables EPT. All 4 VMs needed 5 to 10 times the usual time for booting, and were no fun to use.
fb wrote:The application which will run on it is just a small server which stores a bit of data (I assume under 50 MB in total) and communicates with one client. Additionally the gdb may run in then nested machine, so it's just used for verification and debugging purposes. Do you think the performance will be sufficient or can I expect a ridiculously slow lagging system on which doing the work as described above is not possible?
It is always difficult to assess such a situation, since most of the time you don't really know what performance you need from which resources (e.g. CPU, RAM, disk; latency, throughput). So I'd say just give it a try. I'm interested in hearing about your experiences with that, please report back. :)
fb
Posts: 9
Joined: 9. Jun 2021, 19:49

Re: Nested virtualization on windows host

Post by fb »

fth0 wrote: For example, I once took one of my Windows VMs and on of my Linux VMs on a dual-boot Linux/Windows host, and disabled System > Acceleration > Nested Paging, which effectively disables EPT. All 4 VMs needed 5 to 10 times the usual time for booting, and were no fun to use.
That sounds awful to work with. I just booted a small Debian OS with nested qemu, kvm enabled. It took about ~30 seconds, and at least the terminal did not show any sign of lag, copying a file at ~15-20 MB/s. Using apt was noticeably slow, though.
fth0 wrote:
fb wrote:The application which will run on it is just a small server which stores a bit of data (I assume under 50 MB in total) and communicates with one client. Additionally the gdb may run in then nested machine, so it's just used for verification and debugging purposes. Do you think the performance will be sufficient or can I expect a ridiculously slow lagging system on which doing the work as described above is not possible?
It is always difficult to assess such a situation, since most of the time you don't really know what performance you need from which resources (e.g. CPU, RAM, disk; latency, throughput). So I'd say just give it a try. I'm interested in hearing about your experiences with that, please report back. :)
Alright, I will give it a try. Running qemu on my Windows host in parallel with VirtualBox is the fallback, but there could be other nasty problems I don't have the time and nerves to deal with.

It could take up to 2 months till I'm fully developing with it, but I will report my experiences here then.

Thanks for your time!
fb
Posts: 9
Joined: 9. Jun 2021, 19:49

Re: Nested virtualization on windows host

Post by fb »

fth0 wrote: It is always difficult to assess such a situation, since most of the time you don't really know what performance you need from which resources (e.g. CPU, RAM, disk; latency, throughput). So I'd say just give it a try. I'm interested in hearing about your experiences with that, please report back. :)
Alright, so I already did some minor kernel debugging with my nested qemu system. I connect with gdb via VS Code (first time using this, went surprisingly smoothly) to the kgdb of a modified Linux kernel that runs in the nested qemu. I then run a small program in qemu via a folder shared with the VirtualBox system. The kernel remote debugging is relatively slow: it takes about 1-2 seconds per step without -enable-kvm. With -enable-kvm it's even noticeably slower, about ~4 seconds. I'm really surprised by that. So either I continue to use qemu without -enable-kvm and debug more slowly than I'm used to, or I switch to a native installation of Debian. Running qemu in parallel with my VirtualBox is just going to produce more problems, I'm afraid: connecting to kgdb through my VirtualBox, sharing folders between VirtualBox and qemu running in parallel, and so on.

Is qemu with 100% emulation on systems without VMCS really faster than with KVM enabled, or could I have something unfavorable configured in my BIOS?
Could I use HAXM in Linux as an accelerator for qemu? It surely would hit the same problem with VMCS, right?
fth0
Volunteer
Posts: 5668
Joined: 14. Feb 2019, 03:06
Primary OS: Mac OS X other
VBox Version: PUEL
Guest OSses: Linux, Windows 10, ...
Location: Germany

Re: Nested virtualization on windows host

Post by fth0 »

I have no practical experience with what you're trying, but I can offer some thoughts:

QEMU/KVM uses more CPU hardware features than QEMU alone, so I could imagine that it causes a lot of additional VM exits from the outer VirtualBox VM that would not be needed if VMCS Shadowing were available. You can check this speculation: in the menu of the VM window, select Machine > Session Information... > Performance Monitor, and watch the bottom graph, which shows the VM exits per second. Do you see a considerable difference between QEMU/KVM and QEMU alone? You could also try whether running QEMU with -enable-kvm -cpu host makes it better or worse (or try the oldest virtual QEMU CPU providing the features you need).
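The CPU model experiment would look roughly like this (a sketch; the image name is a placeholder, and Nehalem is just one example of an older virtual CPU model):

```shell
# Pass the host CPU model through to the nested guest:
qemu-system-x86_64 -enable-kvm -cpu host -hda debian-nested.qcow2

# Or try an older, simpler virtual CPU model (example: Nehalem):
qemu-system-x86_64 -enable-kvm -cpu Nehalem -hda debian-nested.qcow2
```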

Regarding running QEMU in parallel to VirtualBox, note that while a VirtualBox VM is running, you can only use QEMU without KVM, because they cannot share VT-x (hardware virtualization).
fb
Posts: 9
Joined: 9. Jun 2021, 19:49

Re: Nested virtualization on windows host

Post by fb »

fth0 wrote: QEMU/KVM uses more CPU hardware features than QEMU alone, so I could imagine that it causes a lot of additional VM-exits from the outer VirtualBox VM that would not be needed if VMCS Shadowing was available. You can check this speculation: In the menu of the VM window, select Machine > Session Information... > Performance Monitor, and watch the bottom graph that shows the VM Exits per second. Do you see a considerable difference between using QEMU/KVM and QEMU alone?
Without -enable-kvm it's ~850k exits; with -enable-kvm it's ~550k exits. Funnily, it's considerably faster with more exits.
fth0 wrote: You could also try if running QEMU with -enable-kvm -cpu host makes it better or worse (or try the oldest virtual QEMU CPU providing the features you need).
I tried it, unfortunately it stays the same.
fth0 wrote: Regarding running QEMU in parallel to VirtualBox, note that while a VirtualBox VM is running, you can only use QEMU without KVM, because they cannot share VT-x (hardware virtualization).
Do you know about Intel HAXM (https://github.com/intel/haxm)? I can run qemu with HAXM as the accelerator and VirtualBox with VT-x in parallel on Windows. I prepared my Debian system that way, because it was too slow in the nested qemu, and qemu ran absolutely smoothly. I tried to do the same in the Linux system in my VirtualBox and installed HAXM from that GitHub link, but it does not seem to work. Starting qemu with "-accel hax" (which works on Windows) produces the following error:

qemu-system-x86_64: -machine accel=hax: No accelerator found

Even though the module is loaded:

lsmod | grep haxm
haxm 241664 0

There's a warning in the HAXM debug log too:

sudo dmesg | grep haxm
[148537.177588] haxm_warning: Host CPU does not support APM
[148537.177590] haxm_warning: -------- HAXM v7.7.0 Start --------
[148562.591307] haxm_warning: -------- HAXM v7.7.0 End --------
[148607.443768] haxm_warning: Host CPU does not support APM
[148607.443770] haxm_warning: -------- HAXM v7.7.0 Start --------

But I don't know what APM means in this context, and Google was not helpful in this case.
Post Reply