Ubuntu 16.04.1 LTS (Xenial) fails to configure host-only interface, if started in headless mode

Discussions about using Linux guests in VirtualBox.
iliv
Posts: 3
Joined: 7. Jan 2014, 08:33

Ubuntu 16.04.1 LTS (Xenial) fails to configure host-only interface, if started in headless mode

Post by iliv »

Hi,

I know the subject sounds very odd but it is what happens.

The host is Ubuntu 16.04.1 LTS (Xenial) running VirtualBox 5.0.24. All of my VM's were imported from VirtualBox 4.x installation. Most importantly I did not have this problem using VirtualBox 4.x. It only became a problem after upgrade to 5.x.

So, networking is configured properly on this guest VM (Xenial LTS). Again, it used to work just fine in VirtualBox 4.x and it does so when I start the VM in GUI mode using VirtualBox 5.0.24. Every time. However, it fails to configure networking (just the host-only adapter) if VM is started in headless mode.
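For reference, these are the two ways I start this VM (assuming it is registered under the name "xenial" — substitute whatever VBoxManage list vms shows for your setup):

```shell
# Start with the GUI -- the host-only interface comes up fine:
VBoxManage startvm "xenial"

# Start headless -- the host-only interface fails to configure:
VBoxManage startvm "xenial" --type headless
```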

When the guest VM finishes the boot process, I cannot ping its Host-Only interface from the host machine:

Code: Select all

iliv@sega ~ $ ping xenial
PING xenial.localdomain (10.0.3.4) 56(84) bytes of data.
From 10.0.3.1 icmp_seq=1 Destination Host Unreachable
From 10.0.3.1 icmp_seq=2 Destination Host Unreachable
From 10.0.3.1 icmp_seq=3 Destination Host Unreachable
^C
--- xenial.localdomain ping statistics ---
6 packets transmitted, 0 received, +3 errors, 100% packet loss, time 5031ms
pipe 3
When I launch the GUI and click the Show button, I see the usual login prompt:

[Screenshot: guest login prompt]

I can log in and see that networking is configured correctly:

[Screenshot: guest interface configuration]

but the host-only interface never works until I restart networking:

[Screenshot: restarting networking in the guest]

At which point I can also ping this guest VM from the host:

Code: Select all

iliv@sega ~ $ ping xenial
PING xenial.localdomain (10.0.3.4) 56(84) bytes of data.
64 bytes from xenial.localdomain (10.0.3.4): icmp_seq=1 ttl=64 time=4.43 ms
64 bytes from xenial.localdomain (10.0.3.4): icmp_seq=2 ttl=64 time=13.7 ms
^C
--- xenial.localdomain ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1004ms
rtt min/avg/max/mdev = 4.438/9.086/13.735/4.649 ms
This never happens if I start the guest in GUI mode. To make things even worse, occasionally — maybe 1% of the time — the guest VM does start headless with working networking. It happens very rarely, and the circumstances do not seem to differ from all the other times I start it in headless mode.

Interestingly, I have a similar Ubuntu Trusty LTS guest VM that doesn't have this problem. In fact, the Xenial guest is a clone (properly reconfigured) of that Trusty VM, and, as I said above, it didn't have this problem either until I upgraded VirtualBox from 4.x to 5.x.

I tried running tcpdump on both the host and the VM, but it's as if the VM were down:

10.0.3.4 is the Xenial guest VM.
10.0.3.1 is vboxnet0 interface on the host.

Code: Select all

03:35:46.091354 ARP, Request who-has 10.0.3.4 tell 10.0.3.1, length 28
03:35:47.090927 ARP, Request who-has 10.0.3.4 tell 10.0.3.1, length 28
03:35:48.091036 ARP, Request who-has 10.0.3.4 tell 10.0.3.1, length 28
...
I also captured traffic with the --nictrace option (see the log file attached), and saw essentially nothing. It's as if the host were down and the guest's host-only interface were never configured. You probably noticed in the screenshots that the NAT interface works just fine; it's the host-only interface that doesn't. In the pcap file you can see that pinging this guest VM from the host didn't succeed until the guest had booted and I had opened a GUI terminal, logged in, and run systemctl restart networking manually.
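For anyone who wants to reproduce the capture: this is roughly how I enabled tracing, assuming the host-only NIC is adapter 2 and the VM is named "xenial" (adjust both to match your configuration):

```shell
# Must be run while the VM is powered off.
# Writes everything adapter 2 sends/receives to a pcap file:
VBoxManage modifyvm "xenial" --nictrace2 on \
    --nictracefile2 /tmp/nictrace2.pcap

# Turn tracing back off when done:
VBoxManage modifyvm "xenial" --nictrace2 off
```

The resulting pcap can then be inspected with tcpdump -qns 0 -r /tmp/nictrace2.pcap or opened in Wireshark.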

This is the host's host-only adapter (the only VirtualBox adapter on this system):

Code: Select all

4: vboxnet0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000
    link/ether 0a:00:27:00:00:00 brd ff:ff:ff:ff:ff:ff
    inet 10.0.3.1/24 brd 10.0.3.255 scope global vboxnet0
       valid_lft forever preferred_lft forever
    inet6 fe80::800:27ff:fe00:0/64 scope link 
       valid_lft forever preferred_lft forever
I would appreciate any help in troubleshooting and/or resolving this problem.

Thanks,
Ivan
Attachments
nictrace2.pcap.zip
Original nictrace output
(2.73 KiB) Downloaded 26 times
nictrace2.pcap.txt.gz
Same file processed with tcpdump -qns 0
(2.08 KiB) Downloaded 27 times
iliv
Posts: 3
Joined: 7. Jan 2014, 08:33

Re: Ubuntu 16.04.1 LTS (Xenial) fails to configure host-only interface, if started in headless mode

Post by iliv »

After much troubleshooting, it turns out that simply switching the guest VM's adapter from Intel PRO/1000 MT Desktop to virtio-net solves the problem.
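In case it helps somebody, the change boils down to one VBoxManage call (again assuming the host-only NIC is adapter 2 and the VM is named "xenial"; the same change can be made in the GUI under Settings > Network > Advanced > Adapter Type):

```shell
# Must be run while the VM is powered off.
# "virtio" selects the paravirtualized virtio-net adapter;
# "82540EM" is the Intel PRO/1000 MT Desktop it replaces.
VBoxManage modifyvm "xenial" --nictype2 virtio
```

Note that the guest may assign the interface a different name after the change, in which case /etc/network/interfaces needs to be updated to match.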

Among other things, I tried adding a hackish Bash one-liner to /etc/rc.local: ifdown enp0s8 && ifup enp0s8. Toggling the enp0s8 interface this way helped on multiple test start-ups, whether done manually or run from /etc/rc.local.
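For completeness, this is roughly what that /etc/rc.local workaround looked like before I switched to virtio-net (enp0s8 is the host-only interface in my guest; yours may be named differently, and exit 0 must stay last):

```shell
#!/bin/sh -e
# /etc/rc.local -- runs at the end of each multiuser runlevel.
# Work around the host-only interface coming up unconfigured
# on headless boots by toggling it once.
ifdown enp0s8 && ifup enp0s8
exit 0
```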

I talked to people in #vbox and #ubuntu-server, but each side was ready to blame the other. systemd may be the culprit here, since the Trusty VM doesn't have this problem, but I have no real evidence; it's just a hunch.

Why the virtio-net interface is treated differently is something I would love to learn myself. nemo in #vbox told me that virtio-net is finicky, but the documentation (https://www.virtualbox.org/manual/ch06.html#ftn.idm2723) actually makes it clear that virtio-net is the preferred first choice:
...

Performance-wise the virtio network adapter is preferable over Intel PRO/1000 emulated adapters, which are preferred over PCNet family of adapters.

...

Here is the short summary of things to check in order to improve network performance:
1. Whenever possible use virtio network adapter, otherwise use one of Intel PRO/1000 adapters;

...
If somebody can explain why the Intel PRO/1000 MT Desktop adapter acts this way in a Xenial guest VM while the virtio-net interface doesn't, that would be great. I solved the problem but not the mystery of what is going on here.