Just out of curiosity: kernel 5.7, 1 vCPU (6.1.12), total time for 1000 1ms sleeps:
1360ms nohz
1360ms nohz+hpet
1125ms 1khz
1060ms 1khz+hpet
Interesting that there's a ~30% improvement available, even on a uniprocessor configuration.
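For reference, a minimal sketch of the kind of benchmark behind these numbers (hypothetical; written here in Python, the original measurement may have used a different harness):

```python
import time

def bench_sleeps(n=1000, interval=0.001):
    """Total wall-clock time (ms) for n back-to-back sleeps of `interval` seconds."""
    start = time.monotonic()
    for _ in range(n):
        time.sleep(interval)
    return (time.monotonic() - start) * 1000.0

if __name__ == "__main__":
    elapsed = bench_sleeps()
    # Ideal result is 1000ms; anything above that is timer/scheduler overhead.
    print(f"{elapsed:.0f}ms for 1000 x 1ms sleeps")
```

Since `sleep()` never returns early, the measured total can only exceed the ideal 1000ms; the excess is the wakeup latency being discussed here.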
Unusual hangs during sleep() calls
Re: Unusual hangs during sleep() calls
Yeah, that's a pretty meaningful improvement.
It's strange though. 16.04 / 4.15 / whatever is old, but nowhere near old enough that it predates CFS (which is the current "favored" general-purpose scheduler). HPET, and the 1KHz scheduler tick, both come with potentially significant overheads (many more wakeups simply to return without doing any work, lots of time aggressively spinning on spinlocks, etc.), so they're probably a poor choice if the machine is near capacity. But if it's mostly (or even just non-trivially) idle anyway, it's a small price to pay for the massively better accuracy available.
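On Linux, the active clocksource (e.g. `tsc` vs `hpet`) can be inspected through sysfs. A small sketch, assuming the standard sysfs path (the helper names here are just illustrative):

```python
from pathlib import Path

# Standard sysfs location for the kernel's clocksource on Linux
CLOCKSOURCE_DIR = Path("/sys/devices/system/clocksource/clocksource0")

def current_clocksource():
    """Return the active clocksource name (e.g. 'tsc', 'hpet'), or None if unavailable."""
    p = CLOCKSOURCE_DIR / "current_clocksource"
    return p.read_text().strip() if p.exists() else None

def available_clocksources():
    """Return the list of clocksources the kernel offers, or [] if unavailable."""
    p = CLOCKSOURCE_DIR / "available_clocksource"
    return p.read_text().split() if p.exists() else []
```

Switching clocksources is done by writing a name into `current_clocksource` as root, or at boot with the `clocksource=` kernel parameter; the tick mode (`nohz` vs a periodic 1kHz tick) is a build/boot-time choice (`CONFIG_NO_HZ_*` / `CONFIG_HZ`), not a runtime knob.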
As you say, building custom kernels is a PITA. I suspect that you may well be able to get the same (or at least, significantly better) results with any of several schedulers, but I also agree with you that it's not really worth the trouble for most scenarios when you have a "good enough" workaround available.
> So, it's an unwanted interaction between vbox and nohz scheduler, but I have no idea who to point the finger at.
I think that realistically the virtual blame falls on the kernel devs, since this is technically a regression. I suspect that they're unlikely to care much though, since it clearly doesn't impact Xen/KVM (or it would have been resolved already).