Excellent! Thank you...
Now to the diffs between the two runs. The red entries are from the "10.10" run; the rest are from the "generic" run. The first two entries do not exist in the "10.10" run at all:
00:00:02.606124 MaxIntelFamilyModelStep <integer> = 0x0000000000061701 (399 105)
...
00:00:02.614165 CPU: CPUID(0).EAX 0x906e9 -> 0x10671 (uMaxIntelFamilyModelStep=0x61701, uCurIntelFamilyModelStep=0x69e09
...
00:00:02.628081 IEM: TargetCpu=CURRENT, Microarch=Intel_Core2_Penryn
00:00:01.555697 IEM: TargetCpu=CURRENT, Microarch=Intel_Core7_KabyLake
...
00:00:02.814622 Model: 7 Extended: 1 Effective: 23
00:00:02.814623 Stepping: 1
00:00:01.670271 Model: 14 Extended: 9 Effective: 158
00:00:01.670272 Stepping: 9
I think I know where this is going... faking the CPU.
After a little bit of searching, that is indeed the case for guests up to 10.9. The "culprit" is in '/src/VBox/Main/src-client/ConsoleImpl2.cpp'. Pay close attention to the comments:
[quote]
    if (fOsXGuest)
    {
        /* Expose extended MWAIT features to Mac OS X guests. */
        LogRel(("Using MWAIT extensions\n"));
        InsertConfigInteger(pIsaExts, "MWaitExtensions", true);

        /* Fake the CPU family/model so the guest works. This is partly
           because older mac releases really doesn't work on newer cpus,
           and partly because mac os x expects more from systems with newer
           cpus (MSRs, power features, whatever). */
        uint32_t uMaxIntelFamilyModelStep = UINT32_MAX;
        if (   osTypeId == "MacOS"
            || osTypeId == "MacOS_64")
            uMaxIntelFamilyModelStep = RT_MAKE_U32_FROM_U8(1, 23, 6, 0); /* Penryn / X5482. */
        else if (   osTypeId == "MacOS106"
                 || osTypeId == "MacOS106_64")
            uMaxIntelFamilyModelStep = RT_MAKE_U32_FROM_U8(1, 23, 6, 0); /* Penryn / X5482 */
        else if (   osTypeId == "MacOS107"
                 || osTypeId == "MacOS107_64")
            uMaxIntelFamilyModelStep = RT_MAKE_U32_FROM_U8(1, 23, 6, 0); /* Penryn / X5482 */ /** @todo figure out what is required here. */
        else if (   osTypeId == "MacOS108"
                 || osTypeId == "MacOS108_64")
            uMaxIntelFamilyModelStep = RT_MAKE_U32_FROM_U8(1, 23, 6, 0); /* Penryn / X5482 */ /** @todo figure out what is required here. */
        else if (   osTypeId == "MacOS109"
                 || osTypeId == "MacOS109_64")
            uMaxIntelFamilyModelStep = RT_MAKE_U32_FROM_U8(1, 23, 6, 0); /* Penryn / X5482 */ /** @todo figure out what is required here. */
        if (uMaxIntelFamilyModelStep != UINT32_MAX)
            InsertConfigInteger(pCPUM, "MaxIntelFamilyModelStep", uMaxIntelFamilyModelStep);
    }
[/quote]
which, in conjunction with your VBox.log, pretty much tells me that if you set the template to "OSX", your CPU is presented as a Xeon "Harpertown" X5482, whereas if you set the template to "OSX 10.10", your i5-7600 is presented unfiltered.
Apparently this code needs to be updated. There's also a ticket, #13118: Presenting compatible CPU architecture to Mac OS X guest VMs, which was closed as obsolete (and which I just re-opened), both because of your crash and because my 10.5 and 10.6 clients crash as well if I don't set the following. You might want to do the same for your 10.10 guest:
I might open a thread that summarizes the situation with different hardware/guest combinations. I'll let you know...