Hello,
I have been having a consistent problem running VirtualBox on a new server I built. The server is a dual quad-core Xeon with 16 GB of RAM, an 80 GB SATA drive for the OS and ISOs, and a 750 GB SATA drive for VMs. The server is running CentOS 5. I built it so several of my students could run remote VMs and work on software and labs from home. When more than half a dozen VMs are running at the same time, the OS locks up and has to be rebooted. It doesn't seem to matter whether the guest OS is Linux, Windows XP, or Server 2k3. I tried to duplicate the problem on a dual-core machine, and it crashed with only 5 VMs. All VMs were started with VBoxVRDP. It almost seems to me like it is trying to run everything on one CPU core, but that is just a guess. Any ideas? If I can't get this problem fixed, I'm going to have to ditch the project.
Thanks
VRDP sessions crash server
I will have to wait until tomorrow morning to check the hardware virtualization settings. This is what is in the /var/log/messages file.
Thanks
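When I do get to the machine, I plan to check whether the CPU flags advertise hardware virtualization (vmx for Intel VT-x, svm for AMD-V) with something like the sketch below. The count_vt_cpus helper name is my own; on the server itself I'll point it at /proc/cpuinfo.

```shell
#!/bin/sh
# count_vt_cpus: counts logical CPUs whose "flags" line in a
# cpuinfo-format file includes vmx (Intel VT-x) or svm (AMD-V).
count_vt_cpus() {
    grep -c -E '^flags[[:space:]]*:.*(vmx|svm)' "$1"
}

# On the live server:
#   count_vt_cpus /proc/cpuinfo
```

If that prints 0, hardware virtualization is either absent or disabled in the BIOS.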
I have seen this in the log file each time after the system crashed (I remember the last line). The next entry in the log file is four hours later, when I drove to work and pressed the reset button. For what it is worth, I am running the latest CentOS PAE kernel. I haven't checked the CentOS forums, but I will.
Code: Select all
Oct 4 08:06:58 vmserver kernel: BUG: soft lockup detected on CPU#5!
Oct 4 08:06:58 vmserver kernel: [<c0447ecf>] softlockup_tick+0x98/0xa6
Oct 4 08:06:58 vmserver kernel: [<c042d138>] update_process_times+0x39/0x5c
Oct 4 08:06:58 vmserver kernel: [<c04176f0>] smp_apic_timer_interrupt+0x5c/0x64
Oct 4 08:06:58 vmserver kernel: [<c04049bf>] apic_timer_interrupt+0x1f/0x24
Oct 4 08:06:58 vmserver kernel: [<c041e0a7>] try_to_wake_up+0x371/0x37b
Oct 4 08:06:58 vmserver kernel: [<f98e1510>] SUPR0ObjRelease+0xe2/0x105 [vboxdrv]
Oct 4 08:06:58 vmserver kernel: [<c041c880>] __wake_up_common+0x2f/0x53
Oct 4 08:06:58 vmserver kernel: [<f98e2a48>] supdrvIOCtl+0xda1/0x1194 [vboxdrv]
Oct 4 08:06:58 vmserver kernel: [<c045f02b>] __vmalloc_area_node+0x103/0x124
Oct 4 08:06:58 vmserver kernel: [<f98e00e0>] VBoxSupDrvDeviceControl+0xbc/0x15d [vboxdrv]
Oct 4 08:06:58 vmserver kernel: [<c0478a53>] do_ioctl+0x47/0x5d
Oct 4 08:06:58 vmserver kernel: [<c0478cb3>] vfs_ioctl+0x24a/0x25c
Oct 4 08:06:58 vmserver kernel: [<c0478d0d>] sys_ioctl+0x48/0x5f
Oct 4 08:06:58 vmserver kernel: [<c0403eff>] syscall_call+0x7/0xb
Oct 4 08:06:58 vmserver kernel: =======================
Thanks
Just to inform other interested parties:
It could be a bug in the Linux kernel that is triggered by VBox.
There are similar reports here:
- http://lkml.org/lkml/2007/2/7/288
- http://lkml.org/lkml/2006/5/2/273
You might want to experiment with disabling soft lockups in the kernel config or applying the patch mentioned in the links above.
We'll see if we can find a way to work around it.
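For reference, in 2.6.18-era kernels the soft-lockup watchdog lives behind a single build option; disabling it and rebuilding removes the detector entirely. The option name below is from memory (look under "Kernel hacking" in menuconfig to confirm for your exact version):

Code: Select all
# In the kernel .config:
# CONFIG_DETECT_SOFTLOCKUP is not set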
temporary workaround for kernel bug
As a temporary workaround that avoids recompiling the kernel, you can add the nosoftlockup parameter to your kernel boot parameters. I use GRUB, so my updated config looks like this:
Code: Select all
title CentOS (2.6.18-8.1.14.el5PAE)
	root (hd0,0)
	kernel /vmlinuz-2.6.18-8.1.14.el5PAE ro root=LABEL=/ rhgb quiet nosoftlockup
	initrd /initrd-2.6.18-8.1.14.el5PAE.img
So far, I have had no more soft lockups on the system, although it is still a little flaky under heavy load. A definite step in the right direction, though! Thanks for the tip!
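After rebooting, I double-checked that the running kernel actually received the flag. Something like the sketch below works; the has_param helper is just a throwaway name of mine, and on a live system you feed it the contents of /proc/cmdline.

```shell
#!/bin/sh
# has_param: returns success if the given kernel command line ($1)
# contains the given parameter ($2) as a whole word.
has_param() {
    case " $1 " in
        *" $2 "*) return 0 ;;
        *)        return 1 ;;
    esac
}

# On a live system:
#   has_param "$(cat /proc/cmdline)" nosoftlockup && echo "nosoftlockup active"
```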