Vagrant and Docker: Protocol error mounting directory on Windows 7

I have a custom boot2docker box with this configuration, which attaches the .iso:
config.vm.provider "virtualbox" do |v|
  v.customize ['storageattach', :id, '--storagectl', 'SATA', '--port', 0, '--device', 0, '--type', 'dvddrive', '--medium', File.expand_path("../boot2docker.iso", __FILE__)]
  v.customize ['modifyvm', :id, '--nictype1', 'virtio']
end
config.vm.network "private_network", ip: "192.168.10.10", id: "default-network", nic_type: "virtio"
My directory layout:
mycompany/
  dockerhost/
    Vagrantfile
  Vagrantfile
The Vagrantfile of the docker host contains the following, which replaces the network configuration above:
## This is required with the plugin winnfsd
config.vm.network "private_network", type: "dhcp"
config.vm.synced_folder "../", "/vagrant", type: "nfs"
When I start the dockerhost with vagrant up, it works well and NFS is set up correctly, but when I start a service container with vagrant up myservice, it shows the following error:
==> myservice: Docker host is required. One will be created if necessary...
myservice: Docker host VM is already ready.
==> myservice: Syncing folders to the host VM...
dockerhost: Mounting shared folders...
dockerhost: /var/lib/docker/docker_1472079332_51007 => C:/Users/myuser/Desktop/mycompany
Vagrant was unable to mount VirtualBox shared folders. This is usually because the filesystem "vboxsf" is not available. This filesystem is made available via the VirtualBox Guest Additions and kernel module.
Please verify that these guest additions are properly installed in the guest. This is not a bug in Vagrant and is usually caused by a faulty
Vagrant box. For context, the command attempted was:
set -e
mount -t vboxsf -o uid=`id -u docker`,gid=`getent group docker | cut -d: -f3` b5973a5087 /var/lib/docker/docker_1472079332_51007
mount -t vboxsf -o uid=`id -u docker`,gid=`id -g docker` b5973a5087 /var/lib/docker/docker_1472079332_51007
The error output from the command was:
mount: mounting b5973a5087 on /var/lib/docker/docker_1472079332_51007 failed: Protocol error
It looks like Vagrant mounts another volume when it starts a container.
Any idea how to fix this, or why Vagrant mounts another folder for me?
Thanks

Vagrant and VirtualBox version incompatibility problems
This fixed my problem:
Vagrant 1.8.5 should be compatible with the latest VirtualBox 5.1.2 release; however, the VirtualBox Guest Additions version we currently have is 5.0.20 and the latest boot2docker release is at 5.0.24. Both will most likely have issues with VirtualBox 5.1.2.
So for now avoid VirtualBox 5.1.x and stick with:
Vagrant 1.7.4 - 1.8.4
VirtualBox 5.0.x
Reference:
https://github.com/blinkreaction/boot2docker-vagrant/issues/83
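A quick way to confirm which versions are actually installed before downgrading (standard version commands; nothing here is specific to this setup):
vagrant --version
VBoxManage --version
vagrant vbguest --status    # only if the vagrant-vbguest plugin is installed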

Related

How to use vpnkit with minikube on mac

There are many questions around this topic, but not the specific info I am after.
The host OS is macOS, and we recently had to uninstall Docker Desktop due to their licensing change. So instead we have moved to minikube, and it is all working great with the VirtualBox driver.
But ideally we would like to use the hyperkit driver, as it requires fewer resources than VirtualBox and is (anecdotally) faster. This also works great until we connect to our VPN (using Cisco AnyConnect), at which point all outbound networking from within the minikube VM stops working, e.g.:
k8> minikube ssh "traceroute 8.8.8.8"
traceroute to 8.8.8.8 (8.8.8.8), 30 hops max, 46 byte packets
1 host.minikube.internal (192.168.64.1) 0.154 ms 0.181 ms 0.151 ms
2 * * *
Everything else is fine: inbound networking via ingress is all good, and maven-docker-plugin is happily creating images with the minikube docker daemon. Just nothing outbound.
So I figured I'd try to work with VPNKit, as I have read it is meant to address this issue. But I cannot find much detailed documentation, and so am struggling.
We have tried starting VPNKit with minimal config:
vpnkit --ethernet /tmp/vpkit-ethernet.socket --debug
And then attempted to start minikube, but it fails:
k8> minikube delete
πŸ”₯ Deleting "minikube" in hyperkit ...
πŸ’€ Removed all traces of the "minikube" cluster.
k8> minikube start --driver=hyperkit --hyperkit-vpnkit-sock=/tmp/vpnkit-ethernet.socket
πŸ˜„ minikube v1.25.1 on Darwin 10.15.7
✨ Using the hyperkit driver based on user configuration
πŸ‘ Starting control plane node minikube in cluster minikube
πŸ”₯ Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
πŸ”₯ Deleting "minikube" in hyperkit ...
🀦 StartHost failed, but will try again: creating host: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
πŸ”₯ Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
😿 Failed to start hyperkit VM. Running "minikube delete" may fix it: creating host: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
❌ Exiting due to PR_HYPERKIT_CRASHED: Failed to start host: creating host: create: Error creating machine: Error in driver during machine creation: hyperkit crashed! command line:
hyperkit loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=minikube
πŸ’‘ Suggestion: Hyperkit is broken. Upgrade to the latest hyperkit version and/or Docker for Desktop. Alternatively, you may choose an alternate --driver
🍿 Related issues:
β–ͺ https://github.com/kubernetes/minikube/issues/6079
β–ͺ https://github.com/kubernetes/minikube/issues/5780
And in the vpnkit log we see:
time="2022-02-14T06:07:57Z" level=debug msg="usernet: accepted vmnet connection"
time="2022-02-14T06:07:57Z" level=warning msg="Uwt: Pipe.listen: rejected ethernet connection: EOF"
time="2022-02-14T06:08:07Z" level=debug msg="usernet: accepted vmnet connection"
time="2022-02-14T06:08:07Z" level=warning msg="Uwt: Pipe.listen: rejected ethernet connection: EOF"
So this kind of implies something is not right with how I started vpnkit. I have played with the IP args to ensure it all matches, but it does not help.
My guess is that the --ethernet=path arg is not the right type of socket. I have seen there is also --vsock-path=path, but specifying this does not appear to create the socket file like --ethernet=path does. Do I have to create this some other way?
Or are there other config options I need to mess with? E.g. I thought --gateway-forwards=path could help, but I can find no documentation on the file format or contents.
So, I guess two main questions:
Is what we are trying even possible? Is it the right way to go about it? Or is it much more complicated than simply running the vpnkit command?
If we are on the right track, does anyone have experience with this, and know how to set up the socket for minikube+vpnkit+hyperkit? What args, config, or other setup is required?
And just to note: --hyperkit-vpnkit-sock=auto is not an option for us, as we do not have docker installed, and so the docker socket file does not exist.
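For reference, stripped to its essentials, the pairing being attempted is just two commands pointing at the same socket file (the path below is arbitrary; the only assumption is that both commands reference exactly the same path, using the flags already shown above):
# start vpnkit listening on an ethernet socket
vpnkit --ethernet /tmp/vpnkit-ethernet.socket --debug &
# point minikube's hyperkit driver at that same socket
minikube start --driver=hyperkit --hyperkit-vpnkit-sock=/tmp/vpnkit-ethernet.socket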
And just in case it's a version issue:
k8> minikube version
minikube version: v1.25.1
commit: 3e64b11ed75e56e4898ea85f96b2e4af0301f43d
k8> vpnkit --version
854498c13b1884d4a48d84f3569eb34681af2126
k8> hyperkit -v
hyperkit: 0.20200908
Homepage: https://github.com/docker/hyperkit
License: BSD

Docker Compose binding docker cli error: invalid mount config for type "bind": bind source path does not exist: /usr/local/bin/docker

I've been binding the host Docker socket and CLI so that I can run docker and compose commands from within running containers for over a year without issue, but since updating to Docker version 20.10.7 and Compose version 1.29.2 I can't get my containerised environment to launch without the following error:
invalid mount config for type "bind": bind source path does not exist: /usr/local/bin/docker
Nothing has changed other than I updated Docker Desktop.
The docker binary (a symlink) is still present on the host:
0 lrwxr-xr-x 1 aadams-mbp staff 54 3 Aug 2018 /usr/local/bin/docker -> /Applications/Docker.app/Contents/Resources/bin/docker
The permissions of the symlink's target look like this:
133608 -rwxr-xr-x 1 root admin 68405888 7 Jul 17:59 /Applications/Docker.app/Contents/Resources/bin/docker
This snippet is from my docker-compose.yaml file:
volumes:
  # Bind docker CLI so can run docker commands
  # from inside the container. Double check the
  # location of the source binary on hosts that
  # are not Mac OS. Docker might be in /usr/bin/docker,
  # but on Mac OS it is at /usr/local/bin/docker.
  - type: bind
    source: /var/run/docker.sock
    target: /var/run/docker.sock
  - type: bind
    source: ${DOCKER_BIN_SRC}
    target: /usr/bin/docker
The ${DOCKER_BIN_SRC} is pulled in from a .env file (snippet):
##
# Docker bind
#
DOCKER_BIN_SRC=/usr/local/bin/docker
I am running on Mac OS Mojave version 10.14.6
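One diagnostic worth running first (standard docker-compose behaviour, nothing specific to this setup) is to print the fully-resolved compose file and confirm what the bind source expands to after .env interpolation:
# shows the compose file with ${DOCKER_BIN_SRC} already substituted
docker-compose config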

Connecting to a Remote Docker Daemon

I have installed VirtualBox and installed Ubuntu Server in a VirtualBox VM. My host machine is Windows 10.
I have also installed Docker on my Windows host. My intention is to use the Docker CLI on Windows to connect to the Docker daemon (server) inside the VM.
I have made the changes in the Ubuntu VM and it is listening on port 2375.
tcp 0 0 127.0.0.1:2375 0.0.0.0:* LISTEN 2305/dockerd
Also I have set the environment variable DOCKER_HOST in my host(Windows) to the VM machine IP and port.
set DOCKER_HOST=tcp://192.168.56.107:2375
My Windows machine IP is 192.168.56.1 and the ping is working fine.
Pinging 192.168.56.107 with 32 bytes of data:
Reply from 192.168.56.107: bytes=32 time<1ms TTL=64
Reply from 192.168.56.107: bytes=32 time<1ms TTL=64
But when I try to connect from my Windows machine, it gives the following error:
error during connect: Get http://192.168.56.107:2375/v1.27/info: dial tcp 192.168.56.107:2375: connectex: No connection could be made because the target machine actively refused it.
Please find docker info output:
controller@ubuntuserver:~$ docker info
Containers: 4
Running: 0
Paused: 0
Stopped: 4
Images: 2
Server Version: 18.09.6
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84
runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30
init version: fec3683
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.15.0-50-generic
Operating System: Ubuntu 18.04.2 LTS
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 7.79GiB
Name: ubuntuserver
ID: AWDW:34ET:4J2J:2NWB:UPK7:EQHB:W64E:22AT:W6J4:BMRD:NDO6:CNR2
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Labels:
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
WARNING: API is accessible on http://127.0.0.1:2375 without encryption.
Access to the remote API is equivalent to root access on the host. Refer
to the 'Docker daemon attack surface' section in the documentation for
more information: https://docs.docker.com/engine/security/security/#docker-daemon-attack-surface
WARNING: No swap limit support
cat /lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://0.0.0.0:2375 --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3
# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this option.
TasksMax=infinity
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
[Install]
WantedBy=multi-user.target
Can you please help me to resolve this?
You need to configure the Docker daemon on your Ubuntu server so that it accepts TCP connections.
By default, Docker listens on the unix socket /var/run/docker.sock.
To configure your daemon, you can have a look at the Docker daemon documentation.
Step-by-step configuration (in this example, everything is done on the Ubuntu VM):
Configure the daemon
On Ubuntu, by default you are using systemd. You need to edit the configuration file (usually located at /lib/systemd/system/docker.service):
[Service]
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375
With this configuration, the Docker daemon no longer listens on the unix socket; it only listens for TCP connections (and, since the bind address is 0.0.0.0, it accepts them on any interface).
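An alternative to editing the packaged unit file in place (a plain systemd drop-in, not something from the original answer) keeps the override separate from the package:
sudo systemctl edit docker.service
# then, in the editor that opens, add:
[Service]
ExecStart=
ExecStart=/usr/bin/dockerd --containerd=/run/containerd/containerd.sock -H tcp://0.0.0.0:2375
# the empty ExecStart= line clears the packaged value before setting the new one
Either way, the reload and restart below are still needed.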
Restart the daemon:
$> sudo systemctl daemon-reload
$> sudo systemctl restart docker.service
Configure the client (still on the VM)
After restarting the daemon, your docker client does not work anymore (as the daemon now only listens for TCP connections). Thus, if you run docker image ls, it should not respond. In order for your client to work, you need to tell it which server to connect to:
$> export DOCKER_HOST="tcp://0.0.0.0:2375"
Now, your client should be able to connect to the daemon (i.e. docker image ls should print all the images).
This should work fine on your Ubuntu server. You just need to apply the same client configuration on Windows. If it does not work on Windows, then it means something else is blocking the traffic (probably a firewall).
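To verify reachability from the Windows side (standard docker and PowerShell commands; the IP is the one used in the question):
docker -H tcp://192.168.56.107:2375 version
Test-NetConnection 192.168.56.107 -Port 2375    # PowerShell: checks that the port is open at all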
Hope this helps.
Maybe a firewall (iptables) rule on your server is blocking the connection. Check it with this command:
iptables -L INPUT --line-numbers
If the output shows a rule that rejects the traffic (rule 7 in this example), delete that rule with:
iptables -D INPUT 7
Hope this helps.
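If the VM uses ufw rather than raw iptables rules (common on Ubuntu), the equivalent check and fix would be the following; this is a substitution of mine, not part of the answer above:
sudo ufw status
sudo ufw allow 2375/tcp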

Vagrant - Docker provider on Windows - Rsync fails

I'm trying to set up a dev environment for our next project with Vagrant + Docker (as a provider). I'm working on Windows 8.1 with Cygwin (with its ssh and rsync packages).
Vagrantfile:
Vagrant.configure("2") do |config|
  config.vm.provider "docker" do |d|
    d.build_dir = "."
  end
end
Dockerfile:
FROM ubuntu
RUN apt-get install -y software-properties-common python
RUN add-apt-repository ppa:chris-lea/node.js
RUN echo "deb http://us.archive.ubuntu.com/ubuntu/ precise universe" >> /etc/apt/sources.list
RUN apt-get update
RUN apt-get install -y nodejs
#RUN apt-get install -y nodejs=0.6.12~dfsg1-1ubuntu1
RUN mkdir /var/www
ADD app.js /var/www/app.js
CMD ["/usr/bin/node", "/var/www/app.js"]
vagrant up --provider=docker
Bringing machine 'default' up with 'docker' provider...
==> default: Docker host is required. One will be created if necessary...
default: Vagrant will now create or start a local VM to act as the Docker
default: host. You'll see the output of the `vagrant up` for this VM below.
default:
default: Importing base box 'hashicorp/boot2docker'...
default: Matching MAC address for NAT networking...
default: Checking if box 'hashicorp/boot2docker' is up to date...
default: Setting the name of the VM: docker-host_default_1461921660147_65487
default: Clearing any previously set network interfaces...
default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Forwarding ports...
default: 2375 (guest) => 2375 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
default: Running 'pre-boot' VM customizations...
default: Booting VM...
default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: docker
default: SSH auth method: password
default: Machine booted and ready!
GuestAdditions versions on your host (5.0.16) and guest (4.3.28 r100309) do not match.
The guest's platform ("tinycore") is currently not supported, will try generic Linux method...
Copy iso file C:\Program Files/Oracle/VirtualBox/VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.0.16 - guest version is 4.3.28 r100309
mkdir: can't create directory '/tmp/selfgz98727713': No such file or directory
Cannot create target directory /tmp/selfgz98727713
You should try option --target OtherDirectory
An error occurred during installation of VirtualBox Guest Additions 5.0.16. Some functionality may not work as intended.
In most cases it is OK that the "Window System drivers" installation failed.
==> default: Syncing folders to the host VM...
default: Installing rsync to the VM...
default: The machine you're rsyncing folders to is configured to use
default: password-based authentication. Vagrant can't script rsync to automatically
default: enter this password, so you'll likely be prompted for a password
default: shortly.
default:
default: If you don't want to have to do this, please enable automatic
default: key insertion using `config.ssh.insert_key`.
default: Rsyncing folder: /home/Carles/Environment/ => /var/lib/docker/docker_1461921688_64359
There was an error when attempting to rsync a synced folder.
Please inspect the error message below for more info.
Host path: /home/Carles/Environment/
Guest path: /var/lib/docker/docker_1461921688_64359
Command: rsync --verbose --archive --delete -z --copy-links --chmod=ugo=rwX --no-perms --no-owner --no-group --rsync-path sudo rsync -e ssh -p 2222 -o StrictHostKeyChecking=no -o IdentitiesOnly=true -o UserKnownHostsFile=/dev/null --exclude .vagrant/ /home/Carles/Environment/ docker@127.0.0.1:/var/lib/docker/docker_1461921688_64359
Error: Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password,keyboard-interactive).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.2]
rsync --version
rsync version 3.1.2 protocol version 31
Copyright (C) 1996-2015 by Andrew Tridgell, Wayne Davison, and others.
Web site: http://rsync.samba.org/
Capabilities:
64-bit files, 64-bit inums, 64-bit timestamps, 64-bit long ints,
socketpairs, hardlinks, symlinks, IPv6, batchfiles, inplace,
append, ACLs, no xattrs, iconv, symtimes, prealloc
rsync comes with ABSOLUTELY NO WARRANTY. This is free software, and you
are welcome to redistribute it under certain conditions. See the GNU
General Public Licence for details.
vagrant --version
Vagrant 1.8.1
VBox version
Version 5.0.16 r105871
Has anybody found a Windows configuration to successfully run a Vagrant machine with the Docker provider, without using a host VM proxy?
Thanks!
I banged my head against this one all of Friday, then today found "Docker Toolbox" (https://docs.docker.com/toolbox/toolbox_install_windows/), which makes all of the pain go away. It will even install a lightweight MSYS Git (to get you a bash shell as well) and VirtualBox too, if not already installed.
Note that Docker's own web pages mix up the terminology somewhat. "Docker Toolbox" will install on Windows 7 and beyond. There's a newer "Docker for Windows" (https://docs.docker.com/docker-for-windows/) which is Windows 10 Pro or better ONLY, and will prevent you from running any VirtualBox machines because it uses Hyper-V.
Unfortunately, the "old" "Docker Toolbox" used to be called "Docker for Windows" (in places at least), so it's easy to get mixed messages. Just be aware of the two different solutions (Win 10 Pro+ with Hyper-V versus VirtualBox on Windows 7 and later) and you'll soon work out which one any particular web page is actually talking about.
And yes, this is a strategy for getting Docker on Windows to work; I've ended up abandoning Vagrant.
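Once Docker Toolbox is installed, the usual first steps look something like this (standard docker-machine commands, not taken from the answer itself):
docker-machine create --driver virtualbox default
eval "$(docker-machine env default)"
docker run hello-world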

Cannot install docker on OS X Version 10.9.5

I first tried installing VirtualBox by downloading "VirtualBox 5.0 for OS X hosts (amd64)" from the VirtualBox download page, and then installed boot2docker and docker via brew.
The first apparent issue appeared when creating the boot2docker-vm image:
$ boot2docker init
2015/07/27 21:38:13 Creating VM boot2docker-vm...
2015/07/27 21:38:13 Apply interim patch to VM boot2docker-vm (https://www.virtualbox.org/ticket/12748)
2015/07/27 21:38:13 Failed to modify VM "boot2docker-vm": exit status 1
Launching the VirtualBox Manager application, I can see the boot2docker-vm machine "Running", but looking at the log I see something like this, which appears to indicate that the boot2docker-vm machine failed to boot:
00:00:04.169546 Guest Log: BIOS: Boot : bseqnr=1, bootseq=4231
00:00:04.169711 Guest Log: BIOS: Boot from Floppy 0 failed
00:00:04.170101 Guest Log: BIOS: Boot : bseqnr=2, bootseq=0423
00:00:04.170490 Guest Log: BIOS: CDROM boot failure code : 0002
00:00:04.170800 Guest Log: BIOS: Boot from CD-ROM failed
00:00:04.171190 Guest Log: BIOS: Boot : bseqnr=3, bootseq=0042
00:00:04.171795 Guest Log: int13_harddisk: function 02, unmapped device for ELDL=80
00:00:04.172304 Guest Log: BIOS: Boot from Hard Disk 0 failed
00:00:04.172706 Guest Log: BIOS: Boot : bseqnr=4, bootseq=0004
00:00:04.172991 Guest Log: BIOS: Booting from LAN...
00:00:04.191271 Display::handleDisplayResize(): uScreenId = 0, pvVRAM=0000000000000000 w=720 h=400 bpp=0 cbLine=0x0, flags=0x1
00:00:06.446949 Guest Log: BIOS: Boot from LAN failed
00:00:06.448852 Guest Log: Could not read from the boot medium! System halted.
I uninstalled everything and then tried downloading and installing from the boot2docker download page, which installs VirtualBox, boot2docker, and docker all in one go. But I still see the same problem indicated above (the boot2docker-vm machine fails to boot).
I'm reluctant to make big changes to the OS X version on my laptop, since my hardware is old. But I'll try the installation sequence on a more modern machine and see if it works there.
Has anyone managed to make docker work on OS X Version 10.9.5?
EDIT (adding additional information which comments suggest might be relevant):
My machine has:
2.26GHz Intel Core 2 Duo
4Gb of RAM (1067 MHz DDR3)
NVIDIA GeForce 9400M 256 MB
OS X 10.9.5
I installed everything as the primary User (not root) on my system.
And the versions of everything which I installed are:
VirtualBox 4.3.30 r101610
boot2docker version 1.7.1
docker version 1.7.1
This issue on GitHub might be of help (the latest VirtualBox 4.3.x works fine in the issue described), though I would suggest using docker-machine instead. Below are the steps:
$ docker-machine create --driver virtualbox dev
$ eval "$(docker-machine env dev)"
And then you can use docker commands as usual.
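For example, a minimal smoke test (assuming the dev machine created above is the active one):
docker version
docker run hello-world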
Some of the comments in the GitHub issue suggested by nash_ag, and a related Stack Overflow question, pointed me in the right direction.
This is the sequence of steps I used to get VirtualBox, boot2docker, docker, and docker-machine working in my environment (where $USERNAME is my primary OS X User, who installed VirtualBox), with several wrong turns elided, and most output omitted:
$ rm -rf /Users/$USERNAME/VirtualBox\ VMs/
$ boot2docker delete
(ran VirtualBox Uninstall script from my desktop)
...
$ brew tap caskroom/cask
...
$ brew update
...
$ brew install brew-cask
...
$ brew cask install virtualbox
...
$ VBoxManage -v
5.0.0r101573
$ boot2docker -v
Boot2Docker-cli version: v1.7.1
Git commit: 8fdc6f5
$ VBoxManage list vms
(nothing)
$ boot2docker init -v
...
$ boot2docker up
...
$ eval "$(boot2docker shellinit)"
(writes .pem files)
$ brew install docker-machine
...
$ docker-machine -v
docker-machine version 0.3.1 (HEAD)
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
$ docker-machine create --driver virtualbox dev
...
$ docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM
dev virtualbox Running tcp://192.168.99.100:2376
$ VBoxManage list vms
"boot2docker-vm" {99d5c5c1-e7cc-49bf-93c7-b0cbf626d62c}
"dev" {341fd11e-86f9-46ca-89e6-39ee78458a4b}
$ eval "$(docker-machine env dev)"
$ docker run -d -p 8000:80 nginx
...
$ curl $(docker-machine ip dev):8000
<!DOCTYPE html>
...
At this point things appear to be working well enough for me to use the "standard" docs/instructions for docker and docker-machine, so my original problem is solved.
