Re: Discuss the 5.1.18 release
Posted: 29. Mar 2017, 17:35
by frank
klaus wrote:Not today

At least a Xeon E5-1650 v3 should not be used in anything but a single-socket server. The log message is misleading: it shows the logical CPU count, which includes hyperthreading.
This is indeed a bug in the FreeBSD code. It's using the generic implementation of RTMpGetCoreCount(), which simply returns the logical CPU count rather than the number of physical cores. Someone with FreeBSD knowledge should provide a fix (our team has no time to actively maintain the FreeBSD sources).
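To illustrate the distinction frank describes (logical CPU count vs. physical core count), here is a small Python sketch. It is not VirtualBox code, just a Linux-only illustration based on /proc/cpuinfo, and both helper names are made up:

```python
import os

def logical_cpu_count():
    # Logical CPUs, hyperthreads included -- effectively what the
    # misleading log message was reporting.
    return os.cpu_count()

def physical_core_count():
    # Hypothetical helper: count unique (package, core) pairs on Linux.
    # Returns None if /proc/cpuinfo lacks topology lines (e.g. some ARM boxes).
    cores = set()
    phys = core = None
    try:
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("physical id"):
                    phys = line.split(":", 1)[1].strip()
                elif line.startswith("core id"):
                    core = line.split(":", 1)[1].strip()
                elif not line.strip():
                    if phys is not None and core is not None:
                        cores.add((phys, core))
                    phys = core = None
    except OSError:
        return None
    if phys is not None and core is not None:
        cores.add((phys, core))  # handle a trailing block with no blank line
    return len(cores) or None

if __name__ == "__main__":
    print("logical:", logical_cpu_count(), "physical:", physical_core_count())
```

On a hyperthreaded host the two numbers typically differ by a factor of two, which is exactly the kind of discrepancy behind the misleading log line.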
Re: Discuss the 5.1.18 release
Posted: 30. Mar 2017, 04:45
by Selin
frank, could you explain why a guest should not use hyperthreaded cores, only physical ones?
From the manual:
"You should not, however, configure virtual machines to use more CPU cores than you have available physically (real cores, no hyperthreads)"
Thanks
Re: Discuss the 5.1.18 release
Posted: 30. Mar 2017, 12:05
by klaus
Executive summary of "why can't VirtualBox use HT for VMs": VirtualBox can't control how the host OS schedules the threads that implement VCPUs, so it can't make their overall behavior look like HT (i.e. there's no way to ask any popular OS to schedule 2 host threads on a single host CPU core). The uncontrollable scheduling results in unpredictable performance, usually much worse than not reporting HT and sticking to physical cores.
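klaus's advice to stick to physical cores can be approximated by hand on a Linux host. The sketch below is a hypothetical helper of my own, not something VirtualBox does internally: it reads the sysfs CPU topology and picks one logical CPU per physical core, a set you could then pass to os.sched_setaffinity:

```python
import glob
import os

def one_thread_per_core():
    """Pick one logical CPU per physical core via Linux sysfs topology.

    Illustration only: returns a set of CPU ids containing at most one
    hyperthread sibling per (package, core) pair.
    """
    chosen = set()
    seen = set()
    for path in sorted(glob.glob(
            "/sys/devices/system/cpu/cpu[0-9]*/topology/core_id")):
        cpu = int(path.split("/")[-3][3:])  # ".../cpu7/topology/core_id" -> 7
        with open(path) as f:
            core = f.read().strip()
        with open(path.replace("core_id", "physical_package_id")) as f:
            pkg = f.read().strip()
        if (pkg, core) not in seen:
            seen.add((pkg, core))
            chosen.add(cpu)
    return chosen

if __name__ == "__main__":
    cpus = one_thread_per_core()
    print("one logical CPU per core:", sorted(cpus))
    # To actually pin the current process (Linux only):
    # os.sched_setaffinity(0, cpus)
```

Note that pinning like this only constrains one process; it still gives you no control over what the host scheduler runs on the sibling hyperthread, which is klaus's core point.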
Re: Discuss the 5.1.18 release
Posted: 30. Mar 2017, 12:23
by socratis
Selin wrote:could you explain why a guest should not use hyperthreaded cores, only physical ones?
Here is another answer that I keep handy and I quote whenever this question comes up. The last couple of sentences are really interesting:
Ramshankar in a [url=https://forums.virtualbox.org/viewtopic.php?f=1&t=79734#p373129]recent post[/url] wrote:
Why is it a bad idea to allocate as many VCPUs as there are physical CPUs?
You cannot have the best of both worlds. Most modern Intel CPUs have VT-x preemption timers, which VirtualBox has been using for years now. This lets us run chunks of guest code and still get interrupted to run host code depending on how we program the preemption timer. However, the question is not whether we can or cannot interrupt guest code, we normally can. The problem is that there are tasks that require to be run in reasonable frequency & amount of time both on the host *and* the guest. If you starve the host or guest of interrupts or introduce latency because there simply isn't enough compute power available, you will be creating bottlenecks.
Getting in and out of executing guest code via VT-x is still quite an expensive operation. We call it a world-switch or world round-trip (i.e. VM-entry, execute guest, VM-exit). This is done in ring-0 (kernel) on the host; sometimes (especially on Windows hosts) we are forced to return all the way to ring-3 in order to satisfy DPC (Deferred Procedure Call) latency. Overall, you're going to have strange latencies introduced in unexpected places if you "overcommit". It is totally possible to run a 4 VCPU VM on a 4 CPU host (I do it on my own Linux dev box sometimes), but it is not something you should be doing if you care about reasonable performance; in extreme cases of overcommitment you may encounter program misbehavior (like disk requests timing out) that programs were never designed to handle. In less severe cases you may end up with some strange timeouts, but no fatal errors.
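The scheduling latency Ramshankar describes is easy to observe from user space. This sketch (my own illustration, unrelated to VirtualBox's internals) measures how far time.sleep() overshoots the requested interval; on an overcommitted host the worst-case overshoot grows, which is the same mechanism that makes guest timers and disk requests appear to stall:

```python
import time

def worst_sleep_overshoot_ms(interval=0.01, samples=50):
    # Request `interval` seconds of sleep repeatedly and record the worst
    # overshoot; the scheduler is free to wake us later than requested.
    worst = 0.0
    for _ in range(samples):
        t0 = time.perf_counter()
        time.sleep(interval)
        elapsed = time.perf_counter() - t0
        worst = max(worst, elapsed - interval)
    return worst * 1000.0

if __name__ == "__main__":
    print("worst overshoot: %.3f ms" % worst_sleep_overshoot_ms())
```

On an idle host the overshoot stays small; run it alongside a CPU-saturating workload and the number climbs, mirroring the latency a starved VCPU thread would see.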
Re: Discuss the 5.1.18 release
Posted: 10. Apr 2017, 00:39
by Selin
frank wrote:I think I know the problem. Could you try
Code:
VBoxManage setextradata VM_NAME VBoxInternal/Devices/acpi/0/Config/PciPref64Enabled 0
It helps

Thank you very much!