Hello,
I'm trying to virtualise my CentOS 7 Linux as a nested guest machine L2 (i.e. L0=Win10, L1=Win7 or Win10, L2=CentOS 7).
One of my programs, immediately after I start it, gives "VCPU0: Guru Meditation 1155 (VINF_EM_TRIPLE_FAULT)" on the L2 machine and stops.
The interesting thing is that when the L2 machine runs as L1, the triple fault never happens, so nested virtualisation must have an impact on this.
Can you please analyse the log and put me on the right track as to what could be wrong, and why it works as L1 but not as a nested L2?
Things I have already tried:
- L0 machine VirtualBox release VirtualBox-6.1.42-155177 (newest 6.1); also tried release 7.0, but the same.
- L1 machine VirtualBox release VirtualBox-5.2.44-139111 (newest 5.2); also tried release 6.1, but the same.
- Changed the L1 machine OS from Win7 to Win10, but no difference.
- Reinstalled/removed the extension pack on L2, but the same.
- Changed the L1 settings, probably every possible combination, but no difference.
Usually the guest machine L2 prints no error and just changes state to Guru Meditation, but fortunately it sometimes prints the attached error (bad RIP value).
VINF_EM_TRIPLE_FAULT while using nested L2 guest machine.
- Site Moderator
- Posts: 20945
- Joined: 30. Dec 2009, 20:14
- Primary OS: MS Windows 10
- VBox Version: PUEL
- Guest OSses: Windows, Linux
Re: VINF_EM_TRIPLE_FAULT while using nested L2 guest machine.
The log from L0 might help the forum gurus as well.
Nested Virtualization came out in 6.0.0. Though this apparently happens with 6.1 as well in the L1 level, it might be good to test 6.1.42 in both levels, then provide both levels' logs when the L2 guru meditates.
The log doesn't show anything between 0:33 and the guru meditation at 2:12:

00:00:33.489730 GUI: UIMachineLogicNormal::sltCheckForRequestedVisualStateType: Requested-state=0, Machine-state=5
00:02:12.091642 Changing the VM state from 'RUNNING' to 'GURU_MEDITATION'

So the problem isn't something VirtualBox is programmed to notice. A triple fault error is VirtualBox's "whoa, I have no idea what that was!" error message.
Re: VINF_EM_TRIPLE_FAULT while using nested L2 guest machine.
I just started the guest machine L2, waited about 2 minutes (for a cleaner log), logged in and ran the program.
I will capture the logs using the same VirtualBox release.
Re: VINF_EM_TRIPLE_FAULT while using nested L2 guest machine.
Attached are the logs from L0 and L1. Some serial numbers are hidden by replacing them with "AAAA" to avoid exposing them on the open internet.
Both levels are running the same VirtualBox release.
- Attachments
- VBox-L0L1logs.7z (105.76 KiB)
- Volunteer
- Posts: 5677
- Joined: 14. Feb 2019, 03:06
- Primary OS: Mac OS X other
- VBox Version: PUEL
- Guest OSses: Linux, Windows 10, ...
- Location: Germany
Re: VINF_EM_TRIPLE_FAULT while using nested L2 guest machine.
Thanks for the log files! VirtualBox 6.1.42 writes much more information than VirtualBox 5.2.44, which somewhat indicates the type of problem:
Knowing that the Linux kernel code is located in the virtual-memory area between 0xffffffff80000000 and 0xffffffffa0000000, I'd guess that a SYSENTER call to 0x00000000816b6f40 is missing a sign extension of its address.

VBox-L1_forum.log wrote:
00:01:58.614819 !!
00:01:58.614819 !! {cpumguest, verbose}
00:01:58.614819 !!
00:01:58.614823 Guest CPUM (VCPU 0) state:
00:01:58.614825 rax=00000000000000ae rbx=0000000000000006 rcx=00000000ffcdd184 rdx=00000000ffcdd210
00:01:58.614826 rsi=0000000000000008 rdi=00000000f7767000 r8 =0000000000000000 r9 =0000000000000000
00:01:58.614827 r10=0000000000000000 r11=0000000000000000 r12=0000000000000000 r13=0000000000000000
00:01:58.614827 r14=0000000000000000 r15=0000000000000000
00:01:58.614828 rip=00000000816b6f40 rsp=0000000000000000 rbp=00000000ffcdd16c iopl=0 rf nv up di nt zr na pe nc
[...]
00:01:58.614834 SysEnter={cs=0010 eip=00000000816b6f40 esp=0000000000000000}
[...]
00:01:58.614871 CSTAR =ffffffff816b7170
00:01:58.614871 LSTAR =ffffffff816b4f50
[...]
00:01:58.618368 !!
00:01:58.618368 !! {exits}
00:01:58.618368 !!
00:01:58.618370 CPU[0]: VM-exit history:
00:01:58.618370   Exit No.:  TSC timestamp / delta      RIP (Flat/*)             Exit Name
00:01:58.618383   2251225: 0x0000013a16e6e6ff/+0        00000000816b6f40 0x01000 VMX_EXIT_XCPT_OR_NMI - 0 - Exception or non-maskable interrupt (NMI).
00:01:58.618386   2251224: 0x0000013a16e6931d/-21474    00000000816b6f40 0x01000 VMX_EXIT_XCPT_OR_NMI - 0 - Exception or non-maskable interrupt (NMI).
00:01:58.618387   2251223: 0x0000013a16e63920/-23037    00000000816b6f40 0x01000 VMX_EXIT_XCPT_OR_NMI - 0 - Exception or non-maskable interrupt (NMI).
I'd suggest creating a ticket in the Bugtracker.