I am evaluating VirtualBox running inside a KVM Debian guest, which in turn runs on a Debian hypervisor.
The Debian hypervisor is one of my company's big machines and runs several unrelated guests. The guest in question is a Debian system that acts as the Jenkins host, and VirtualBox is then used inside it to create the build and testing environments.

This setup generally works, but we hit performance issues when running unit tests. A test suite that completes in 20 minutes inside VirtualBox on my laptop takes hours in the Jenkins setup. Actually, it never fully completes there: the test-php processes go wild after a few hours (presumably due to performance issues, since the same test suites run fine on all local development machines using the same VM as the build server).
The Jenkins host (the KVM guest) has all virtualisation CPU flags passed through (on an Intel host: vmx, ept, vpid), and /sys/module/kvm_intel/parameters/nested on the hypervisor reports 'Y', i.e. nested virtualisation is enabled. For the testing phase we have 8 cores and plenty of RAM; neither CPU nor RAM hits any limit.
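For reference, this is roughly how I verified the flags; a minimal sketch, assuming an Intel host with the kvm_intel module loaded (the exact flag list in /proc/cpuinfo varies by CPU model):

```shell
#!/bin/sh
# Inside the KVM guest: check whether VT-x (vmx) and related flags
# were passed through to the virtual CPU.
for flag in vmx ept vpid; do
    if grep -qm1 "\b${flag}\b" /proc/cpuinfo; then
        echo "${flag}: present"
    else
        echo "${flag}: missing"
    fi
done

# On the hypervisor: check whether nested virtualisation is enabled
# for the kvm_intel module ('Y' or '1' means enabled).
cat /sys/module/kvm_intel/parameters/nested 2>/dev/null \
    || echo "kvm_intel not loaded (or AMD host: check kvm_amd instead)"
```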
Long story short: the setup is quite atypical, and I am looking for arguments to convince our operations department to move the Jenkins host to a bare-metal machine, which should easily handle the VirtualBox setup.
So my question is: does anyone have experience running VirtualBox inside a KVM guest (or a guest of any other hypervisor)? What were your performance results like? Why do you use, or avoid, an approach like this? Does it make sense at all to run a build server in a virtualised environment? If not, what about setting up Jenkins in the cloud? After all, it is virtualised there, too.
Looking forward to your opinions.
Thanks!