Some basic terminology:
- CPU = Central Processing Unit, Cores = the processing units inside it, and Threads = the logical processors a hyper-threaded core presents to the OS (two per core).
A CPU is everything contained in a single die/package and can have 2, 4, 6, or 8 cores; more than that, at the time of this writing, would require an additional physical CPU (another die/socket).
Cores behave like logical processors in that each one works the same as a separate physical CPU, but they are all contained in a single die.
Hyper-threaded cores expose two threads that share the same core's execution resources, and those extra threads only help with programs/apps that are written to run multiple threads in parallel.
Code: Select all
00:00:01.481329 CPUM: Logical host processors: 12 present, 12 max, 12 online, online mask: 0000000000000fff
00:00:01.481511 CPUM: Physical host cores: 6
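As a sketch, here is one way to pull both counts out of CPUM lines like the ones above with standard tools. The log excerpt is embedded in the script so it is self-contained; in practice you would point the same `sed` commands at your VM's actual VBox.log:

```shell
#!/bin/sh
# Sketch: extract host CPU topology from VBox.log-style CPUM lines.
log='00:00:01.481329 CPUM: Logical host processors: 12 present, 12 max, 12 online, online mask: 0000000000000fff
00:00:01.481511 CPUM: Physical host cores: 6'

# Capture the number that follows each label.
logical=$(printf '%s\n' "$log" | sed -n 's/.*Logical host processors: \([0-9]*\) present.*/\1/p')
physical=$(printf '%s\n' "$log" | sed -n 's/.*Physical host cores: \([0-9]*\).*/\1/p')

echo "Logical processors: $logical"   # prints 12
echo "Physical cores:     $physical"  # prints 6
```

Note that the two numbers differ on hyper-threaded hosts: 12 logical processors here, but only 6 real cores.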
One other thing to consider: always keep at least one core (not just a thread) free for the host, so the host-side VMM code and the host OS itself do not become overburdened. So in the above example I would never give more than 5 vCPUs to a guest if I wanted stability in the host and guest. Yes, you can assign all of the vCPUs, even into the red zone of the slider, but you will have issues and end up complaining of performance problems, hangs, and crashes.
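A minimal sketch of that rule of thumb, ceiling = physical cores minus one (the VM name "myvm" below is only a placeholder):

```shell
#!/bin/sh
# Rule of thumb from above: give a guest at most (physical cores - 1) vCPUs,
# so one full core stays free for the host.
physical=6                    # from the "Physical host cores" line in VBox.log
max_vcpus=$((physical - 1))

echo "Recommended vCPU ceiling: $max_vcpus"   # prints 5

# Applying it (commented out; "myvm" is a placeholder VM name):
# VBoxManage modifyvm "myvm" --cpus "$max_vcpus"
```

The `VBoxManage modifyvm --cpus` command only takes effect while the VM is powered off.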
Ramshankar (VirtualBox Developer) wrote:
Why is it a bad idea to allocate as many VCPUs as there are physical CPUs?
You cannot have the best of both worlds. Most modern Intel CPUs have a VT-x preemption timer, which VirtualBox has been using for years now. This lets us run chunks of guest code and still get interrupted to run host code, depending on how we program the preemption timer. However, the question is not whether we can or cannot interrupt guest code; we normally can. The problem is that there are tasks that need to run at a reasonable frequency, and within a reasonable amount of time, on both the host *and* the guest. If you starve the host or guest of interrupts, or introduce latency because there simply isn't enough compute power available, you will be creating bottlenecks.
Getting in and out of executing guest code via VT-x is still quite an expensive operation. We call it a world switch or world round-trip (i.e. VM-entry, execute guest, VM-exit). This is done in ring-0 (kernel) on the host, but sometimes (especially on Windows hosts) we are forced to return all the way to ring-3 in order to satisfy DPC (Deferred Procedure Call) latency. Overall, you're going to have strange latencies introduced in unexpected places if you "overcommit". It is totally possible to run a 4-VCPU VM on a 4-CPU host (I do it on my own Linux dev box sometimes), but it is not something you should be doing if you care about reasonable performance; in extreme cases of overcommitment you may encounter program misbehavior (such as disk requests timing out) which programs are never designed to handle. In less severe cases you may end up with some strange timeouts but not fatal errors.