XFS filesystem corruption using raw disk partitions
Posted: 3. Apr 2009, 22:03
Let me preface this by saying I have the same issue running under VMware, although it doesn't seem as prevalent there as in VirtualBox.
I'm trying to build a new "root" partition on an existing disk. I've tried a number of times now, and every time I've ended up with a corrupt filesystem. I'm trying to do it this way because I can't shut the server down in order to create the new system.
My existing disk is /dev/hda, which has the following partitions: /dev/hda1 is an active xfs filesystem, /dev/hda2 is unused, and /dev/hda4 is Linux swap.
In my VM I added the hard disk as a raw disk ("rawdisk"), restricted to partition 2. I also added two virtual disks.
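For reference, the raw VMDK was created along these lines (the .vmdk path is just an example):

    VBoxManage internalcommands createrawvmdk -filename ~/hda2.vmdk \
        -rawdisk /dev/hda -partitions 2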
The following sequence nearly always results in a corrupted filesystem on /dev/hda2 (the host-side commands are sketched after the list):
1. Boot the VM from a Slackware DVD image.
2. Install Slackware on /dev/hda2, which includes formatting /dev/hda2 as xfs.
3. Power off the VM.
4. On the host, run xfs_check /dev/hda2; it finds no errors.
5. Mount /dev/hda2 on the existing system and copy two directory trees from /dev/hda1 to /dev/hda2.
6. Unmount /dev/hda2.
7. Run xfs_check /dev/hda2 again; it still finds no errors.
8. Boot the VM, again from the Slackware DVD image, but add the kernel parameters "root=/dev/hda2 rdinit= ro".
9. Power off the VM.
10. Run xfs_check /dev/hda2; it now complains that there are errors on the filesystem. If the corruption hasn't happened at this point, it will almost certainly occur the next time I boot and power off.
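For clarity, the host-side steps between VM runs boil down to something like this (the mount point and the two tree names are just placeholders):

    xfs_check /dev/hda2                      # clean after the install
    mount /dev/hda2 /mnt/new
    cp -a /srv/tree1 /srv/tree2 /mnt/new/    # copy the two trees from the live hda1 root
    umount /mnt/new
    xfs_check /dev/hda2                      # still clean before booting the VM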
The partition is never mounted on the host and in the VM at the same time: it is always unmounted from the host before starting the VM, and the VM is always powered down before the partition is mounted on the host again. Both are running the same kernel version and the same version of the xfs tools.
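For anyone trying to reproduce this, the version match and the check result can be cross-validated like this (xfs_repair -n is a no-modify check, so it is safe to run on the unmounted partition):

    uname -r                 # compare kernel versions on the host and in the VM
    xfs_repair -V            # compare xfsprogs versions on both
    xfs_repair -n /dev/hda2  # read-only consistency check, as a second opinion to xfs_check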
Does anyone have any idea why this could be happening? I'm running VirtualBox 2.1.4.
Cheers,
Eddie