I'm currently trying to emulate the setup of our servers with VirtualBox, which is nearly working, but only for disks that aren't PCIe NVMes (i.e. SATA works without issues). Primarily, we want to test some fairly complicated disk setup workflows (involving reboots, partprobes etc.), which makes meaningful testing by manually faking the drive nodes via mknod de facto impossible.
It's possible to add two NVMes, but only on the same controller. This leads to the Linux guest recognizing them as /dev/nvme0n1 and /dev/nvme0n2. Our servers all have two separate NVMe controllers with one NVMe each, which then appear as /dev/nvme0n1 and /dev/nvme1n1.
Is there a way to make the Linux guest recognize the two disks as /dev/nvme0n1 and /dev/nvme1n1?
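For reference, this is how I check the resulting device naming inside the guest (a quick sketch; nvme list needs the nvme-cli package, lsblk works out of the box):
Code:
# list NVMe namespaces and the controller each belongs to (nvme-cli)
nvme list
# alternatively, list all block devices and look at the nvmeXnY names
lsblk -o NAME,MODEL,SIZE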
My research so far:
You can add a second NVMe controller if you set the chipset to ICH9; the default PIIX3 chipset refuses to add multiple controllers of the same family. Did that:
Code:
# switch the VM to the ICH9 chipset so a second NVMe controller can be added
vboxmanage modifyvm "$NAME" --chipset ich9
# first NVMe controller with one disk
vboxmanage storagectl "$NAME" --name "NVMe Controller 1" --add pcie --controller NVMe --portcount 1 --bootable on
vboxmanage storageattach "$NAME" --storagectl "NVMe Controller 1" --device 0 --port 0 --type hdd --medium "${VHD1}" --nonrotational on
# second NVMe controller with the other disk
vboxmanage storagectl "$NAME" --name "NVMe Controller 2" --add pcie --controller NVMe --portcount 1
vboxmanage storageattach "$NAME" --storagectl "NVMe Controller 2" --device 0 --port 0 --type hdd --medium "${VHD2}" --nonrotational on
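To make sure both controllers were really created as separate instances (and not silently merged), I sanity-check the VM configuration before booting, using vboxmanage's machine-readable output:
Code:
# both controllers should appear as separate storagecontroller entries
vboxmanage showvminfo "$NAME" --machinereadable | grep -i 'storagecontroller'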
The VM boots, but the guest kernel rejects the second NVMe controller during probing. Guest dmesg:
Code:
[Mon Aug 24 21:35:37 2020] nvme nvme1: I/O 28 QID 0 timeout, completion polled
[Mon Aug 24 21:35:37 2020] nvme nvme1: Duplicate cntlid 0 with nvme0, rejecting
[Mon Aug 24 21:35:37 2020] nvme nvme1: Removing after probe failure status: -22
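The "Duplicate cntlid" message suggests both emulated controllers present the same controller ID within the same NVMe subsystem, which the kernel refuses. From the guest, the surviving controller's ID and subsystem NQN can be inspected like this (a sketch, assuming nvme-cli is installed):
Code:
# cntlid and subsystem NQN as reported by the first (working) controller
nvme id-ctrl /dev/nvme0 | grep -E 'cntlid|subnqn'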
Maybe that's the wrong approach, or maybe it's a bug (I've seen ICH9 described as experimental). I have no idea what to try next.