
CPU Cores versus threads

Posted: 26. Apr 2016, 16:46
by Perryg
When selecting the number of cores to give to a guest, you must understand the difference between cores and threads.

Some basic terminology:
  • CPU = Central Processing Unit, Cores = logical processors, and Threads are essentially two data paths from a single core.
  • A CPU is everything contained in a single die and can have 2, 4, 6, or 8 cores. More than that, at the time of this writing, would require an additional physical CPU or die.
  • Cores are effectively logical processors because they work the same as a physical CPU but are contained in a single die.
  • Hyper-threaded cores have two threads that share the same core, and threads are only of use to programs/apps that are designed to take advantage of hyper-threading.
Now, some OSes will count threads as CPUs, but remember these are not cores; they are virtual CPUs, or vCPUs. VirtualBox calls these Logical host processors, as seen in the VBox.log excerpt below:

Code: Select all

00:00:01.481329 CPUM: Logical host processors: 12 present, 12 max, 12 online, online mask: 0000000000000fff
00:00:01.481511 CPUM: Physical host cores: 6
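If you want to check the same two numbers on your host yourself, here is a minimal sketch. It assumes the third-party Python package psutil is installed (it is not part of VirtualBox); any task manager or CPU-info tool will show the same counts.

Code: Select all

# Minimal sketch: query the same counts the VBox.log lines above report.
# Assumes the third-party psutil package is installed (pip install psutil).
import os
import psutil

logical = os.cpu_count()                       # threads / logical processors
physical = psutil.cpu_count(logical=False)     # real physical cores

print(f"Logical host processors: {logical}")   # e.g. 12 on an i7-5820K with HT on
print(f"Physical host cores:     {physical}")  # e.g. 6 on the same CPU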
Example: the Intel Core i7-5820K CPU has 6 physical cores, and if hyper-threading is enabled in your BIOS it will have 12 threads, or virtual CPUs. Looking at the screen shot below, you will see that the slider can safely go up to 6 vCPUs as VirtualBox sees it, but that would max out the real cores, leaving none for the host to actually use.
[Screenshot: processor_setting_gui.png — the Processor slider in the VM settings]
You can test this by assigning 2 vCPUs to the guest and watching the host CPUs in any monitoring program/app. Set the guest to perform an operation that actually uses all of its cores and you should see 4 host CPUs in use, since each vCPU occupies a full core and the host monitor counts each core as two threads.
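If you prefer a scripted check over a GUI monitor, the sketch below shows one way to sample per-CPU load on the host while the 2-vCPU guest is busy. It again assumes the psutil package, and the 50% "busy" threshold is an arbitrary choice for illustration.

Code: Select all

# Minimal sketch of the host-side check described above: sample per-CPU
# load for ~10 seconds while the guest runs its workload.
# Assumes the third-party psutil package is installed.
import psutil

for _ in range(10):
    loads = psutil.cpu_percent(interval=1, percpu=True)  # one value per logical CPU
    busy = sum(1 for pct in loads if pct > 50)           # crude "in use" threshold
    print(f"logical CPUs above 50% load: {busy}  ({loads})")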

One other thing you must consider: always keep a core (not a thread) free for the host, so that the host-side code and the host itself do not become overburdened. So in the above example I would never give more than 5 vCPUs to a guest if I wanted stability in both the host and the guest. Yes, you can assign all of the vCPUs, even into the red, but you will have issues such as performance problems, hangs, and crashes.
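That rule of thumb is just "physical cores minus one". A quick sketch to compute it on your own host, again assuming psutil is available:

Code: Select all

# Minimal sketch of the rule of thumb above: leave at least one full core
# (not just a thread) for the host. Assumes psutil is installed.
import psutil

physical_cores = psutil.cpu_count(logical=False) or 1  # None if undetectable
safe_vcpus = max(1, physical_cores - 1)                # e.g. 6 cores -> at most 5 vCPUs
print(f"Physical cores: {physical_cores}, conservative vCPU cap: {safe_vcpus}")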
Ramshankar (VirtualBox Developer) wrote: Why is it a bad idea to allocate as many VCPUs as there are physical CPUs?

You cannot have the best of both worlds. Most modern Intel CPUs have VT-x preemption timers, which VirtualBox has been using for years now. These let us run chunks of guest code and still get interrupted to run host code, depending on how we program the preemption timer. However, the question is not whether we can interrupt guest code; we normally can. The problem is that there are tasks that need to run with reasonable frequency and within a reasonable amount of time both on the host *and* in the guest. If you starve the host or guest of interrupts, or introduce latency because there simply isn't enough compute power available, you will be creating bottlenecks.

Getting in and out of executing guest code via VT-x is still quite an expensive operation. We call it a world switch or world round-trip (i.e. VM-entry, execute guest, VM-exit). This is done in ring-0 (kernel) on the host; sometimes (especially on Windows hosts) we are forced to return all the way to ring-3 in order to satisfy DPC (Deferred Procedure Call) latency. Overall, you're going to have strange latencies introduced in unexpected places if you "overcommit". It is entirely possible to run a 4-VCPU VM on a 4-CPU host (I do it on my own Linux dev box sometimes), but it is not something you should be doing if you care about reasonable performance; in extreme cases of overcommitment you may encounter program misbehavior (like when disk requests time out) which the programs were never designed to handle. In less severe cases you may end up with some strange timeouts, but not fatal errors.