Cannot create docker-machine with VirtualBox driver for Docker installed inside Vagrant

kostyanius
Posts: 5
Joined: 15. Nov 2014, 01:57

Cannot create docker-machine with VirtualBox driver for Docker installed inside Vagrant

Post by kostyanius »

Running this from inside the Vagrant guest fails:

docker-machine --debug --native-ssh create --driver virtualbox node-1

Code:

(node-1) DBG | About to run SSH command:
(node-1) DBG | exit 0
(node-1) DBG | SSH cmd err, output: exit status 255: 
(node-1) DBG | Error getting ssh command 'exit 0' : ssh command error:
(node-1) DBG | command : exit 0
(node-1) DBG | err     : exit status 255
(node-1) DBG | output  : 
(node-1) DBG | Getting to WaitForSSH function...
(node-1) DBG | Using SSH client type: external
(node-1) DBG | Using SSH private key: /home/vagrant/.docker/machine/machines/node-1/id_rsa (-rw-------)
(node-1) DBG | &{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none docker@127.0.0.1 -o IdentitiesOnly=yes -i /home/vagrant/.docker/machine/machines/node-1/id_rsa -p 39137] /usr/bin/ssh <nil>}
(node-1) DBG | About to run SSH command:
(node-1) DBG | exit 0
(node-1) DBG | SSH cmd err, output: exit status 255: 
(node-1) DBG | Error getting ssh command 'exit 0' : ssh command error:
(node-1) DBG | command : exit 0
(node-1) DBG | err     : exit status 255
(node-1) DBG | output  : 
(node-1) DBG | Getting to WaitForSSH function...
(node-1) DBG | Using SSH client type: external
(node-1) DBG | Using SSH private key: /home/vagrant/.docker/machine/machines/node-1/id_rsa (-rw-------)
(node-1) DBG | &{[-F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o LogLevel=quiet -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none docker@127.0.0.1 -o IdentitiesOnly=yes -i /home/vagrant/.docker/machine/machines/node-1/id_rsa -p 39137] /usr/bin/ssh <nil>}
(node-1) DBG | About to run SSH command:
(node-1) DBG | exit 0
(node-1) DBG | SSH cmd err, output: exit status 255: 
(node-1) DBG | Error getting ssh command 'exit 0' : ssh command error:
(node-1) DBG | command : exit 0
(node-1) DBG | err     : exit status 255
(node-1) DBG | output  : 
Error creating machine: Error in driver during machine creation: Too many retries waiting for SSH to be available.  Last error: Maximum number of retries (60) exceeded
notifying bugsnag: [Error creating machine: Error in driver during machine creation: Too many retries waiting for SSH to be available.  Last error: Maximum number of retries (60) exceeded]
[vagrant@localhost ~]$ 
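To rule out docker-machine itself, I can probe SSH by hand with the exact options from the debug output above (port 39137 is taken straight from the log and changes on every create):

Code:

ssh -F /dev/null -o PasswordAuthentication=no -o StrictHostKeyChecking=no \
    -o UserKnownHostsFile=/dev/null -o IdentitiesOnly=yes \
    -i /home/vagrant/.docker/machine/machines/node-1/id_rsa \
    -p 39137 docker@127.0.0.1 'exit 0'
echo "exit status: $?"   # docker-machine kept getting 255 here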
The syslog from the same time window is more telling:

cat /var/log/messages

Code:

Oct 20 17:02:47 localhost dbus[596]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Oct 20 17:02:47 localhost dbus-daemon: dbus[596]: [system] Activating via systemd: service name='org.freedesktop.nm_dispatcher' unit='dbus-org.freedesktop.nm-dispatcher.service'
Oct 20 17:02:47 localhost systemd: Starting Network Manager Script Dispatcher Service...
Oct 20 17:02:47 localhost dhclient[2610]: bound to 192.168.121.177 -- renewal in 1485 seconds.
Oct 20 17:02:47 localhost dbus[596]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Oct 20 17:02:47 localhost dbus-daemon: dbus[596]: [system] Successfully activated service 'org.freedesktop.nm_dispatcher'
Oct 20 17:02:47 localhost nm-dispatcher: req:1 'dhcp4-change' [eth0]: new request (4 scripts)
Oct 20 17:02:47 localhost systemd: Started Network Manager Script Dispatcher Service.
Oct 20 17:02:47 localhost nm-dispatcher: req:1 'dhcp4-change' [eth0]: start running ordered scripts...
Oct 20 17:03:13 localhost kernel: vboxdrv: ffffffffc05f4020 VMMR0.r0
Oct 20 17:03:13 localhost kernel: VBoxNetFlt: attached to 'vboxnet0' / 0a:00:27:00:00:00
Oct 20 17:03:13 localhost kernel: device vboxnet0 entered promiscuous mode
Oct 20 17:03:13 localhost NetworkManager[2592]: <info>  [1508518993.2554] device (vboxnet0): link connected
Oct 20 17:03:13 localhost kernel: vboxdrv: ffffffffc070f020 VBoxDDR0.r0
Oct 20 17:04:13 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:13 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:22 localhost kernel: HPET: Using timer above configured range: 2
Oct 20 17:04:49 localhost kernel: dockerd invoked oom-killer: gfp_mask=0x200da, order=0, oom_score_adj=-500
Oct 20 17:04:49 localhost kernel: dockerd cpuset=/ mems_allowed=0
Oct 20 17:04:49 localhost kernel: CPU: 0 PID: 4399 Comm: dockerd Tainted: G           OE  ------------   3.10.0-693.2.1.el7.x86_64 #1
Oct 20 17:04:49 localhost kernel: Hardware name: Red Hat KVM, BIOS 0.5.1 01/01/2011
Oct 20 17:04:49 localhost kernel: ffff88001e888000 000000006ec15837 ffff880002d73800 ffffffff816a3db1
Oct 20 17:04:49 localhost kernel: ffff880002d73890 ffffffff8169f1a6 ffff880002d73898 ffffffff812b7e6b
Oct 20 17:04:49 localhost kernel: 0000000000000001 ffff880002d738e8 ffffffff00000206 fffeefff00000000
Oct 20 17:04:49 localhost kernel: Call Trace:
Oct 20 17:04:49 localhost kernel: [<ffffffff816a3db1>] dump_stack+0x19/0x1b
Oct 20 17:04:49 localhost kernel: [<ffffffff8169f1a6>] dump_header+0x90/0x229
Oct 20 17:04:49 localhost kernel: [<ffffffff812b7e6b>] ? cred_has_capability+0x6b/0x120
Oct 20 17:04:49 localhost kernel: [<ffffffff81186394>] oom_kill_process+0x254/0x3d0
Oct 20 17:04:49 localhost kernel: [<ffffffff812b803c>] ? selinux_capable+0x1c/0x40
Oct 20 17:04:49 localhost kernel: [<ffffffff81186bd6>] out_of_memory+0x4b6/0x4f0
Oct 20 17:04:49 localhost kernel: [<ffffffff8169fcaa>] __alloc_pages_slowpath+0x5d6/0x724
Oct 20 17:04:49 localhost kernel: [<ffffffff8118cd85>] __alloc_pages_nodemask+0x405/0x420
Oct 20 17:04:49 localhost kernel: [<ffffffff811d4135>] alloc_pages_vma+0xb5/0x200
Oct 20 17:04:49 localhost kernel: [<ffffffff811c453d>] read_swap_cache_async+0xed/0x160
Oct 20 17:04:49 localhost kernel: [<ffffffff811c4658>] swapin_readahead+0xa8/0x110
Oct 20 17:04:49 localhost kernel: [<ffffffff811b235b>] handle_mm_fault+0xadb/0xfa0
Oct 20 17:04:49 localhost kernel: [<ffffffff81184f55>] ? filemap_fault+0x215/0x410
Oct 20 17:04:49 localhost kernel: [<ffffffff816afff4>] __do_page_fault+0x154/0x450
Oct 20 17:04:49 localhost kernel: [<ffffffff816b03d6>] trace_do_page_fault+0x56/0x150
Oct 20 17:04:49 localhost kernel: [<ffffffff816afa6a>] do_async_page_fault+0x1a/0xd0
Oct 20 17:04:49 localhost kernel: [<ffffffff816ac578>] async_page_fault+0x28/0x30
Oct 20 17:04:49 localhost kernel: [<ffffffff813304a0>] ? copy_user_generic_string+0x30/0x40
Oct 20 17:04:49 localhost kernel: [<ffffffff81215e21>] ? poll_select_copy_remaining+0x121/0x150
Oct 20 17:04:49 localhost kernel: [<ffffffff81216fdf>] SyS_pselect6+0x22f/0x240
Oct 20 17:04:49 localhost kernel: [<ffffffff816b5009>] system_call_fastpath+0x16/0x1b
Oct 20 17:04:49 localhost kernel: Mem-Info:
Oct 20 17:04:49 localhost kernel: active_anon:191 inactive_anon:211 isolated_anon:0#012 active_file:157 inactive_file:698 isolated_file:22#012 unevictable:5 dirty:0 writeback:0 unstable:0#012 slab_reclaimable:4926 slab_unreclaimable:7720#012 mapped:97156 shmem:8 pagetables:1889 bounce:0#012 free:1146 free_pcp:71 free_cma:0
Oct 20 17:04:49 localhost kernel: Node 0 DMA free:1964kB min:88kB low:108kB high:132kB active_anon:12kB inactive_anon:12kB active_file:52kB inactive_file:48kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15992kB managed:15908kB mlocked:0kB dirty:0kB writeback:0kB mapped:9424kB shmem:12kB slab_reclaimable:588kB slab_unreclaimable:2148kB kernel_stack:48kB pagetables:212kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:347 all_unreclaimable? yes
Oct 20 17:04:49 localhost kernel: lowmem_reserve[]: 0 471 471 471
Oct 20 17:04:49 localhost kernel: Node 0 DMA32 free:2620kB min:2728kB low:3408kB high:4092kB active_anon:752kB inactive_anon:832kB active_file:576kB inactive_file:2744kB unevictable:20kB isolated(anon):0kB isolated(file):88kB present:507880kB managed:484284kB mlocked:20kB dirty:0kB writeback:0kB mapped:379200kB shmem:20kB slab_reclaimable:19116kB slab_unreclaimable:28732kB kernel_stack:3008kB pagetables:7344kB unstable:0kB bounce:0kB free_pcp:284kB local_pcp:284kB free_cma:0kB writeback_tmp:0kB pages_scanned:7585 all_unreclaimable? yes
Oct 20 17:04:49 localhost kernel: lowmem_reserve[]: 0 0 0 0
Oct 20 17:04:49 localhost kernel: Node 0 DMA: 1*4kB (U) 3*8kB (U) 39*16kB (UM) 13*32kB (UM) 12*64kB (UM) 1*128kB (M) 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 1964kB
Oct 20 17:04:49 localhost kernel: Node 0 DMA32: 1*4kB (U) 67*8kB (M) 82*16kB (UM) 24*32kB (M) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 2620kB
Oct 20 17:04:49 localhost kernel: Node 0 hugepages_total=0 hugepages_free=0 hugepages_surp=0 hugepages_size=2048kB
Oct 20 17:04:49 localhost kernel: 1117 total pagecache pages
Oct 20 17:04:49 localhost kernel: 231 pages in swap cache
Oct 20 17:04:49 localhost kernel: Swap cache stats: add 145900, delete 145669, find 38529/49209
Oct 20 17:04:49 localhost kernel: Free swap  = 1452028kB
Oct 20 17:04:49 localhost kernel: Total swap = 1572860kB
Oct 20 17:04:49 localhost kernel: 130968 pages RAM
Oct 20 17:04:49 localhost kernel: 0 pages HighMem/MovableOnly
Oct 20 17:04:49 localhost kernel: 5920 pages reserved
Oct 20 17:04:49 localhost kernel: [ pid ]   uid  tgid total_vm      rss nr_ptes swapents oom_score_adj name
Oct 20 17:04:49 localhost kernel: [  404]     0   404     8278        1      21       89             0 systemd-journal
Oct 20 17:04:49 localhost kernel: [  427]     0   427    48772        0      30      396             0 lvmetad
Oct 20 17:04:49 localhost kernel: [  562]     0   562    13863        0      27      115         -1000 auditd
Oct 20 17:04:49 localhost kernel: [  589]   999   589   133691        0      56     1059             0 polkitd
Oct 20 17:04:49 localhost kernel: [  592]     0   592    52006        0      38      251             0 rsyslogd
Oct 20 17:04:49 localhost kernel: [  595]     0   595     6051        2      17       74             0 systemd-logind
Oct 20 17:04:49 localhost kernel: [  596]    81   596    24632        0      20      192          -900 dbus-daemon
Oct 20 17:04:49 localhost kernel: [  599]     0   599    48760        0      38      128             0 gssproxy
Oct 20 17:04:49 localhost kernel: [  607]   998   607    28910        0      27      115             0 chronyd
Oct 20 17:04:49 localhost kernel: [  633]     0   633    31566        0      21      169             0 crond
Oct 20 17:04:49 localhost kernel: [  642]     0   642    27511        1      11       31             0 agetty
Oct 20 17:04:49 localhost kernel: [  643]     0   643    27511        1      11       31             0 agetty
Oct 20 17:04:49 localhost kernel: [  907]     0   907   140599        0      93     2716             0 tuned
Oct 20 17:04:49 localhost kernel: [ 1136]     0  1136    22386        0      43      260             0 master
Oct 20 17:04:49 localhost kernel: [ 1140]    89  1140    22412        0      45      261             0 pickup
Oct 20 17:04:49 localhost kernel: [ 1141]    89  1141    22429        0      45      269             0 qmgr
Oct 20 17:04:49 localhost kernel: [ 2592]     0  2592   173842        0      90      683             0 NetworkManager
Oct 20 17:04:49 localhost kernel: [ 2610]     0  2610    28343        0      57     3122             0 dhclient
Oct 20 17:04:49 localhost kernel: [ 3036]     0  3036    11642        1      23      425         -1000 systemd-udevd
Oct 20 17:04:49 localhost kernel: [ 3602]     0  3602    26499        0      55      245         -1000 sshd
Oct 20 17:04:49 localhost kernel: [ 4398]     0  4398   142471        0      70     3034          -500 dockerd
Oct 20 17:04:49 localhost kernel: [ 4402]     0  4402    66923        0      26      794          -500 docker-containe
Oct 20 17:04:49 localhost kernel: [20251]     0 20251    36425        1      72      317             0 sshd
Oct 20 17:04:49 localhost kernel: [20254]  1000 20254    36504        0      71      383             0 sshd
Oct 20 17:04:49 localhost kernel: [20255]  1000 20255    29012        1      15      252             0 bash
Oct 20 17:04:49 localhost kernel: [21793]     0 21793    36425        3      76      316             0 sshd
Oct 20 17:04:49 localhost kernel: [21796]  1000 21796    36504        0      73      381             0 sshd
Oct 20 17:04:49 localhost kernel: [21797]  1000 21797    29012        3      15      274             0 bash
Oct 20 17:04:49 localhost kernel: [21851]     0 21851    30802        0      16       57             0 anacron
Oct 20 17:04:49 localhost kernel: [21954]     0 21954    49411        0      54      175             0 sudo
Oct 20 17:04:49 localhost kernel: [21955]     0 21955    26986        0      11       27             0 tail
Oct 20 17:04:49 localhost kernel: [22019]  1000 22019    52896        1      52      541             0 VBoxXPCOMIPCD
Oct 20 17:04:49 localhost kernel: [22069]  1000 22069     7510        0      18      594             0 docker-machine
Oct 20 17:04:49 localhost kernel: [22073]  1000 22073     7246        0      16      473             0 docker-machine
Oct 20 17:04:49 localhost kernel: [22080]  1000 22080     8335        9      20     1583             0 docker-machine
Oct 20 17:04:49 localhost kernel: [22189]  1000 22189   158038       25      73      742             0 VBoxSVC
Oct 20 17:04:49 localhost kernel: [22501]  1000 22501   322171    96859     325     7591             0 VBoxHeadless
Oct 20 17:04:49 localhost kernel: [22519]  1000 22519    53095        2      54      728             0 VBoxNetDHCP
Oct 20 17:04:49 localhost kernel: [22619]  1000 22619    18091       39      38      160             0 ssh
Oct 20 17:04:49 localhost kernel: Out of memory: Kill process 22501 (VBoxHeadless) score 202 or sacrifice child
Oct 20 17:04:49 localhost kernel: Killed process 22501 (VBoxHeadless) total-vm:1288684kB, anon-rss:0kB, file-rss:387436kB, shmem-rss:0kB
Oct 20 17:04:54 localhost kernel: device vboxnet0 left promiscuous mode
Oct 20 17:04:54 localhost kernel: vboxnetflt: 0 out of 0 packets were not sent (directed to host)
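Worth noting from the log above: the kernel OOM killer kills VBoxHeadless, which is the process actually running the node-1 VM, and that alone would explain why SSH never comes up. A minimal thing to try, assuming memory pressure is the trigger (the --virtualbox-memory flag and its 1024 MB default are listed in "docker-machine create --driver virtualbox --help"):

Code:

# How much RAM does the Vagrant guest itself have left?
free -m

# Retry with a smaller inner VM; --virtualbox-memory sets its RAM in MB
# (the driver defaults to 1024, which may be more than this guest can spare).
docker-machine rm -f node-1
docker-machine --debug --native-ssh create \
    --driver virtualbox \
    --virtualbox-memory 512 \
    node-1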
P.S. I know that VirtualBox does not support nested virtualisation; that is why I use KVM as the host hypervisor for Vagrant, with VirtualBox as the guest hypervisor and docker-machine driver, in order to build a Docker Swarm. Still no luck. I have googled many topics and sites without finding a solution, and I can see that many other people are hitting the same issue. It does not matter whether it is Ubuntu or CentOS; it fails everywhere. Any ideas?
P.P.S. This is definitely not a Docker issue, since the same command works fine without nesting, i.e. just running "docker-machine create -d virtualbox vm1" directly on the host.
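One more thing I could try is the opposite direction: giving the Vagrant guest itself more memory. A sketch, assuming the vagrant-libvirt provider (which matches the KVM host here) and an arbitrary 2048 MB; Vagrant merges multiple configure blocks, so appending one is enough:

Code:

# Append a provider override (Ruby inside a heredoc) and restart the guest.
cat >> Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.provider :libvirt do |libvirt|
    libvirt.memory = 2048   # MB for the Vagrant guest
  end
end
EOF
vagrant reload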
socratis
Site Moderator
Posts: 27330
Joined: 22. Oct 2010, 11:03
Primary OS: Mac OS X other
VBox Version: PUEL
Guest OSses: Win(*>98), Linux*, OSX>10.5
Location: Greece

Re: Cannot create docker-machine with VirtualBox driver for Docker installed inside Vagrant

Post by socratis »

  1. Nested virtualization is indeed not supported. In no way, shape or form.
  2. Docker is a program that relies on VirtualBox but modifies its configuration files in ways unknown to us. It is not supported on these VirtualBox user forums; Docker has its own support channels. If you are having this problem with a standalone version of VirtualBox, then we can continue this discussion.
  3. Vagrant is a program that relies on VirtualBox but modifies its configuration files in ways unknown to us. It is not supported on these VirtualBox user forums; Vagrant has its own support channels. If you are having this problem with a standalone version of VirtualBox, then we can continue this discussion.
kostyanius
Posts: 5
Joined: 15. Nov 2014, 01:57

Re: Cannot create docker-machine with VirtualBox driver for Docker installed inside Vagrant

Post by kostyanius »

Thanks for your reply.
I have already asked about this issue on the Vagrant and Docker forums and got no answer as to where the root cause might be.
That is why I decided to ask here as well, hoping someone can advise on this and help exclude all the possible root causes.
P.S. After the VirtualBox-inside-VirtualBox variant failed, I switched to KVM as the host hypervisor, with CentOS 7.3, and then installed VirtualBox for docker-machine inside the guest.
If the people on Stack Overflow are to be believed, this combination should work. But when I checked, it behaved exactly the same as the VirtualBox-inside-VirtualBox setup. That is very strange, because I had explicitly enabled nested VT-x support in the KVM parameters from the start.
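For completeness, this is how I check whether nested VT-x actually reaches the guest (standard kernel locations; the domain XML hint is specific to libvirt):

Code:

# On the KVM host: the kvm_intel module must allow nesting.
cat /sys/module/kvm_intel/parameters/nested   # expect Y or 1

# The guest's CPU model must pass VMX through as well, e.g.
# <cpu mode='host-passthrough'/> in the libvirt domain XML.

# Inside the CentOS guest: is the vmx flag visible at all?
egrep -c '(vmx|svm)' /proc/cpuinfo            # 0 means no nested VT-x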