successful FreeBSD 7 guest install -- shared folders?

Discussions about using guests other than Windows and Linux, such as FreeBSD, DOS, OS/2, OpenBSD, etc.
bsc
Posts: 5
Joined: 10. Jul 2008, 20:10

successful FreeBSD 7 guest install -- shared folders?

Post by bsc »

I asked this before with 1.6.2 and got no answer, so I'm trying again with 1.6.4.

Host is SLES10SP2.

I downloaded the 1.6.2 source hoping there'd be an easy and obvious way to build the FreeBSD vboxvfs driver and the guest tools, or even the whole thing.

Bottom line is I want/need shared folders. Any hints on how to get started?

Thanks,
TerryE
Volunteer
Posts: 3572
Joined: 28. May 2008, 08:40
Primary OS: Ubuntu other
VBox Version: PUEL
Guest OSses: Ubuntu 10.04 & 11.10, both Svr&Wstn, Debian, CentOS

Post by TerryE »

Well, another way is to use a loop device to create a shared disk. Let's assume you only need 50 MBytes; use VBoxManage to create and register a VDI container:
  • VBoxManage createvdi -filename vdiName -size 50 -static -register -type writethrough
Note that you may need to specify an absolute path for vdiName, because relative paths are resolved relative to the default VDI folder.
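
For example, to place it somewhere specific (the path here is just a placeholder, not a recommendation):

    VBoxManage createvdi -filename /home/me/vdi/shared.vdi -size 50 -static -register -type writethrough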

Then attach it to your FreeBSD system as a second or third HDD. The easiest way is to boot your VM and use FreeBSD to partition it (preferably as a DOS-partitioned device) and initialise the filesystem. In this sort of scenario I would strongly recommend sticking to a single partition per VDI, formatted as VFAT or ext3.

Once the guest has initialised the FS on the VDI, you can shut it down and then use a loop device on the host to mount it with an offset of 33280 (for a <128 MB, DOS-partitioned disk).
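
On the host that looks something like this (a minimal sketch; the VDI path and mount point are placeholders, and 33280 is the offset figure from above):

    # mount the first partition inside the VDI through a loop device
    mount -o loop,offset=33280 /home/me/vdi/shared.vdi /mnt/shared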

You now have a FS that can be mounted in the host and in the VM and used to pass data between them. The one thing you must remember is to mount it in only one system at a time, because if you mount it in both you can trash the FS. One way to enforce this is to put wrapper scripts around mount and umount that maintain an interlock file in the FS, e.g. "/owned_by_HOST": the mount wrapper creates this file and the umount wrapper deletes it. In the mount wrapper, first mount the FS read-only and check for the absence of the other side's file before remounting it read-write. This interlock prevents both systems from mounting the FS at the same time.
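
A rough host-side sketch of such a wrapper (the paths are placeholders; the guest side would do the same with the HOST/GUEST names swapped):

    #!/bin/sh
    # Interlock wrapper: mount RO, check the guest has not claimed the
    # FS, then remount RW and claim it for the host.
    VDI=/home/me/vdi/shared.vdi
    MNT=/mnt/shared

    case "$1" in
    mount)
        mount -o ro,loop,offset=33280 "$VDI" "$MNT" || exit 1
        if [ -e "$MNT/owned_by_GUEST" ]; then
            echo "shared FS is claimed by the guest -- aborting" >&2
            umount "$MNT"
            exit 1
        fi
        mount -o remount,rw "$MNT"
        touch "$MNT/owned_by_HOST"
        ;;
    umount)
        rm -f "$MNT/owned_by_HOST"
        umount "$MNT"
        ;;
    esac
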
Read the Forum Posting Guide
Google your Q site:VirtualBox.org or search for the answer before posting.
bsc
Posts: 5
Joined: 10. Jul 2008, 20:10

Post by bsc »

TerryE wrote:Well, another way is to use a loop device to create a shared disk. Let's assume you only need 50 MBytes; use VBoxManage to create and register a VDI container:
  • VBoxManage createvdi -filename vdiName -size 50 -static -register -type writethrough
Note that you may need to specify an absolute path for vdiName, because relative paths are resolved relative to the default VDI folder.
Sounds good. Thanks for the reply.

I actually have several (raw) Linux MD RAID0 slices on the host. What I really want is to configure them as drives in the guest. Absent the ability to use them directly, I suppose I could create an ext3 FS on each volume, mount it under Linux, then use createvdi to make (big) container files on each volume. But I'm limited to just two additional "drives" this way! (If I enable the SATA controller and try to associate the VDI files with the SATA ports, I get FAILUREs and TIMEOUTs for READ_DMA. Maybe that's because the box I'm playing with doesn't have a real SATA controller, although the box I ultimately plan to use does.)
TerryE wrote: Then attach it to your FreeBSD system as a second or third HDD. The easiest way is to boot your VM and use FreeBSD to partition it (preferably as a DOS-partitioned device) and initialise the filesystem. In this sort of scenario I would strongly recommend sticking to a single partition per VDI, formatted as VFAT or ext3.
I don't actually need to be able to see the files from the host. I just want to have Linux MD RAID0 underneath.

If I use FreeBSD's fdisk to create a single DOS slice, then FreeBSD's disklabel to create d, e, f, g, and h partitions, I'm probably happy. Do you think that's safe?
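
Concretely, I mean something like this inside the guest (ad1 is a placeholder for whatever the VDI shows up as):

    fdisk -I ad1          # one slice covering the whole disk
    bsdlabel -w ad1s1     # write a standard label on the slice
    bsdlabel -e ad1s1     # edit it to add the d, e, f, g, h partitions
    newfs /dev/ad1s1d     # then newfs each partition
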
TerryE wrote: Once the guest has initialised the FS on the VDI, you can shut it down and then use a loop device on the host to mount it with an offset of 33280 (for a <128 MB, DOS-partitioned disk).

You now have a FS that can be mounted in the host and in the VM and used to pass data between them. The one thing you must remember is to mount it in only one system at a time, because if you mount it in both you can trash the FS. One way to enforce this is to put wrapper scripts around mount and umount that maintain an interlock file in the FS, e.g. "/owned_by_HOST": the mount wrapper creates this file and the umount wrapper deletes it. In the mount wrapper, first mount the FS read-only and check for the absence of the other side's file before remounting it read-write. This interlock prevents both systems from mounting the FS at the same time.
Thanks for your help.

-b
TerryE
Volunteer
Posts: 3572
Joined: 28. May 2008, 08:40
Primary OS: Ubuntu other
VBox Version: PUEL
Guest OSses: Ubuntu 10.04 & 11.10, both Svr&Wstn, Debian, CentOS

Post by TerryE »

bsc, I need to put a stop sign up here. Reading your last post, I think we are talking at cross purposes and your first question was the wrong one. Let me pick out two of your statements:
In your first post you wrote:Bottom line is I want/need shared folders
In your last post you wrote: I don't actually need to be able to see the files from the host. I just want to have Linux MD RAID0 underneath.
In the first you are asking for shared folders (which are difficult to realise in BSD because of the lack of Guest Additions support), which is why I started talking about VDI-encapsulated loop devices.

In the second you say you don't need shared access; you just want the drives backed by your RAID-0 subsystem. In that case all you need are bog-standard VDIs: no sharing, no loop devices, nothing but bog standard, and we don't need to have this discussion.
:wink:
Read the Forum Posting Guide
Google your Q site:VirtualBox.org or search for the answer before posting.
bsc
Posts: 5
Joined: 10. Jul 2008, 20:10

Post by bsc »

TerryE wrote:bsc, I need to put a stop sign up here. Reading your last post, I think we are talking at cross purposes and your first question was the wrong one. Let me pick out two of your statements:
In your first post you wrote:Bottom line is I want/need shared folders
In your last post you wrote: I don't actually need to be able to see the files from the host. I just want to have Linux MD RAID0 underneath.
In the first you are asking for shared folders (which are difficult to realise in BSD because of the lack of Guest Additions support), which is why I started talking about VDI-encapsulated loop devices.

In the second you say you don't need shared access; you just want the drives backed by your RAID-0 subsystem. In that case all you need are bog-standard VDIs: no sharing, no loop devices, nothing but bog standard, and we don't need to have this discussion.
:wink:
Well, it's good to know it's bog standard. My mistake was following the User Manual links in Section 9, page 121, to Section 5 and Section 3 instead of going to the next page, where setting up raw disk access is documented.
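
For anyone else who lands here, the raw-disk step boils down to something like this (my device and the .vmdk path are placeholders):

    VBoxManage internalcommands createrawvmdk -filename /home/me/vdi/md0.vmdk -rawdisk /dev/md0 -register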

But if I have SATA enabled and I associate my raw VMDK with sataport1 (or 2, 3, or 4), the guest sees the drive but gets TIMEOUT errors. If I use sataport5 or above, the guest doesn't see the drive at all. Obviously I'd prefer to use SATA so I can have more than the two drives I'm limited to otherwise. I'm currently groveling around in the FreeBSD kernel source to see if there is some hard-coded limit on the number of SATA drives it probes, but the TIMEOUTs may prove to be the more important issue.

FWIW, that's on an experimental system using /dev/loop0 over a file, but on the system I intend to use I'll have "real" Linux MD (RAID0) volumes.
TerryE
Volunteer
Posts: 3572
Joined: 28. May 2008, 08:40
Primary OS: Ubuntu other
VBox Version: PUEL
Guest OSses: Ubuntu 10.04 & 11.10, both Svr&Wstn, Debian, CentOS

Post by TerryE »

A couple of points:

First, benchmarks done by guys here on the forum indicate that the claim that "SATA drives are far more efficient" seems to be hot air. The only real benefit is that you can have more than 3 drives attached to your VM, but if you need 4+ drives on a VM I would personally question the reasons.

Even if you have a SATA RAID host setup, you can still present your VDIs to the guest as IDE drives. You still get all the performance and functional benefits of the underlying host system.
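
For instance, attaching a VDI as the primary slave IDE drive is just (the VM name and path are placeholders):

    # hdc is reserved for the CD/DVD drive, so -hda, -hdb and -hdd are available
    VBoxManage modifyvm FreeBSD7 -hdb /home/me/vdi/data.vdi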

Second, what I still don't understand is why you don't just use VDIs. I am genuinely interested in your reasoning: what do you think you will gain from raw partitions?
Read the Forum Posting Guide
Google your Q site:VirtualBox.org or search for the answer before posting.
bsc
Posts: 5
Joined: 10. Jul 2008, 20:10

Post by bsc »

TerryE wrote: Second, what I still don't understand is why you don't just use VDIs. I am genuinely interested in your reasoning: what do you think you will gain from raw partitions?
Well, among other reasons, if I use a VDI on top of an ext3 FS then the embedded UFS filesystem will be fscked by the guest in addition to the host ext3 fsck that Linux already did, but, at the risk of stating the obvious, the raw partitions underneath won't be fscked by the host at all.

And ext3 isn't exactly known for being fast at fsck even when nothing has gone wrong.

-b
TerryE
Volunteer
Posts: 3572
Joined: 28. May 2008, 08:40
Primary OS: Ubuntu other
VBox Version: PUEL
Guest OSses: Ubuntu 10.04 & 11.10, both Svr&Wstn, Debian, CentOS

Post by TerryE »

bsc wrote:Well, among other reasons, if I use a VDI on top of an ext3 FS then the embedded UFS filesystem will be fscked by the guest in addition to the host ext3 fsck that Linux already did, but, at the risk of stating the obvious, the raw partitions underneath won't be fscked by the host at all.
It doesn't help that much. You still have all of the other files used by the VMM on a file system somewhere that needs checking. Surely what you want is to move your VirtualBox hierarchy onto its own dedicated partition, run tune2fs -i 0 -c 0 on it, and schedule a routine weekend maintenance slot which does a savestate of any guests and bounces the file system through an fsck.
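
Sketching that out (the device, mount point and VM name are placeholders):

    # one-off: disable the periodic/boot-time fsck triggers on the dedicated partition
    tune2fs -i 0 -c 0 /dev/sdb1

    # weekend maintenance slot
    VBoxManage controlvm FreeBSD7 savestate
    umount /vbox && fsck -f /dev/sdb1 && mount /vbox
    VBoxManage startvm FreeBSD7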

If you do use raw partitions then I assume that you are going to use ZFS or LVM, of course. You will also need to create a service account to run the VMs, since it needs to be a member of the disk group (or the ZFS equivalent), and that is something you do not want to do with a normal interactive account.
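
e.g. something like this on the host (the account name is a placeholder):

    # dedicated account with raw-disk access for running the VMs
    useradd -m -G disk,vboxusers vboxrun
    su - vboxrun -c "VBoxManage startvm FreeBSD7"
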
Read the Forum Posting Guide
Google your Q site:VirtualBox.org or search for the answer before posting.