I have a task to create some Docker containers within a Vagrant box (centos/7), but I'm having issues with the "vagrant up" execution.
Within the Vagrant instance I'm using docker-compose to spin up the services.
Here is the error I am getting when docker-compose is called:
> ==> default: [Provisioning] Bring up Docker Containers
> ==> default: Couldn't connect to Docker daemon at http+docker://localunixsocket - is it running?
> ==> default:
> ==> default: If it's at a non-standard location, specify the URL with the DOCKER_HOST environment variable.
If I check the Vagrant instance, I can see that Docker is running and that my user is part of the docker group:
c:\temp\vagrant\kpi-engine>vagrant ssh
[vagrant@localhost ~]$ ps aux | grep docker
root 425 0.1 4.6 496652 23152 ? Ssl 09:54 0:00 /usr/bin/dockerd
root 428 0.0 1.0 265176 5360 ? Ssl 09:54 0:00 docker-containerd -l unix:///var/run/docker/libcontainerd/docker-containerd.sock --shim docker-containerd-shim --metrics-interval=0 --start-timeout 2m --state-dir /var/run/docker/libcontainerd/containerd --runtime docker-runc
vagrant 1363 0.0 0.1 112652 972 pts/0 S+ 09:55 0:00 grep --color=auto docker
[vagrant@localhost ~]$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[vagrant@localhost ~]$ grep vagrant /etc/group
vagrant:x:1000:vagrant
docker:x:992:vagrant
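(As a side note on how I checked this: /etc/group only shows the configured membership, while id shows the groups of the running session; a group added to a user only applies to sessions started after the change, so the two can disagree.)
[vagrant@localhost ~]$ id vagrant
[vagrant@localhost ~]$ id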
However, if I run "vagrant provision" after "vagrant up", everything runs through successfully with Docker.
==> default: [Provisioning] Clone repositories
==> default: fatal: destination path 'kpi_data' already exists and is not an empty directory.
==> default: [Provisioning] Bring up Docker Containers
==> default: Creating network "vagrant_default" with the default driver
==> default: Creating volume "vagrant_mongo_db" with default driver
==> default: Creating volume "vagrant_maria_db" with default driver
==> default: Pulling mariadb (docker-registry.ptk02.ipaccess.com/mariadb:latest)...
==> default: latest: Pulling from mariadb
==> default: Digest: sha256:d17cfbf8e7e9b9ed79f2de17125a01f66f350ddf5bcdd8b62da20634cfa0b425
==> default: Status: Downloaded newer image for docker-registry.ptk02.ipaccess.com/mariadb:latest
==> default: Pulling mongo (docker-registry.ptk02.ipaccess.com/mongo:latest)...
==> default: latest: Pulling from mongo
==> default: Digest: sha256:4059a5c7c1f7d44a0ea3c1f8bda0e240f74f8cf16d6cc08e81d0fbc59b475553
==> default: Status: Downloaded newer image for docker-registry.ptk02.ipaccess.com/mongo:latest
==> default: Creating mongo
==> default: Creating mariadb
Docker is installed using the Vagrant Docker provisioner:
config.vm.provision :docker
I ran "vagrant up" with --debug but it did not provide any more insight.
Any thoughts as to what the issue could be?
Related
How to run crictl as a non-root user?
My docker commands work as a non-root user because my user is added to the docker group.
id
uid=1002(kube) gid=100(users) groups=100(users),10(wheel),1001(dockerroot),1002(docker)
I am running the dockerd daemon, which uses containerd and runc as the runtime.
I installed the crictl binary and pointed it at the existing dockershim socket with the config file below.
cat /etc/crictl.yaml
runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
timeout: 2
debug: false
pull-image-on-create: false
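(For reference, crictl can also take the endpoint on the command line instead of the config file; a minimal sketch, assuming the current crictl flags:)
crictl --runtime-endpoint unix:///var/run/dockershim.sock ps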
crictl works fine with sudo, but without sudo it fails like this:
[user@hostname ~]$ crictl ps
FATA[0002] connect: connect endpoint 'unix:///var/run/dockershim.sock', make sure you are running as root and the endpoint has been started: context deadline exceeded
I also tried changing the group of dockershim.sock from 'root' to 'docker', just as it is set on docker.sock, but the result is the same.
srwxr-xr-x 1 root docker 0 Jan 2 23:36 /var/run/dockershim.sock
srw-rw---- 1 root docker 0 Jan 2 23:33 /var/run/docker.sock
sudo usermod -aG docker $USER
or you can follow the Docker post-install steps
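A minimal sketch of those post-install steps, assuming a standard Docker installation where the group is named docker:
sudo groupadd docker             # create the docker group if it does not exist yet
sudo usermod -aG docker $USER    # add the current user to it
newgrp docker                    # or log out and back in so the new group takes effect
docker run hello-world           # verify docker now works without sudo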
Every time I try to run minikube start on Linux (Ubuntu 18.04), I get these Docker validation errors.
This works fine for me:
myuser@mymachine:~$ minikube start --driver=docker
😄 minikube v1.11.0 on Ubuntu 16.04
✨ Using the docker driver based on user configuration
👍 Starting control plane node minikube in cluster minikube
🚜 Pulling base image ...
🔥 Creating docker container (CPUs=2, Memory=2200MB) ...
🌐 Found network options:
▪ NO_PROXY=169.254.169.254
🐳 Preparing Kubernetes v1.18.3 on Docker 19.03.2 ...
▪ env NO_PROXY=169.254.169.254
▪ kubeadm.pod-network-cidr=10.244.0.0/16
🔎 Verifying Kubernetes components...
🌟 Enabled addons: default-storageclass, storage-provisioner
🏄 Done! kubectl is now configured to use "minikube"
Make sure that /var/run/docker.sock has the right permissions to be accessed by your user:
myuser@mymachine:~$ sudo chmod o+rw /var/run/docker.sock
myuser@mymachine:~$ ls -la /var/run/docker.sock
srw-rw-rw- 1 root docker 0 Jul 6 17:42 /var/run/docker.sock
Make sure the docker daemon is running:
myuser@mymachine:~$ ps -Af | grep dockerd
root 12723 1 0 Jul06 ? 00:01:11 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root 18598 17596 0 19:19 ? 00:00:05 /usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
adminra+ 31177 26444 0 19:36 pts/0 00:00:00 grep --color=auto dockerd
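If the daemon itself isn't running, starting it first usually gets minikube past this validation (a sketch, assuming a systemd-based distribution such as Ubuntu 18.04):
sudo systemctl start docker      # start the daemon if it is not running
sudo systemctl enable docker     # optionally have it start on boot
ls -la /var/run/docker.sock      # then re-check the socket ownership and permissions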
I want to set up a database environment with the docker-compose Vagrant plugin.
But when docker-compose runs, I get the following error.
C:\db>vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'centos/7' version '1902.01' is up to date...
==> default: Machine already provisioned. Run `vagrant provision` or use the `--provision`
==> default: flag to force provisioning. Provisioners marked to run always will still run.
==> default: Running provisioner: docker_compose...
default: Checking for Docker Compose installation...
default: Symlinking Docker Compose 1.22.0 in guest machine...
default: Running docker-compose up...
==> default: bash: line 4: 18581 Segmentation fault /usr/local/bin/docker-compose-1.22.0 -f "/vagrant/docker-compose.yml" up -d
The following SSH command responded with a non-zero exit status.
Vagrant assumes that this means the command failed!
/usr/local/bin/docker-compose-1.22.0 -f "/vagrant/docker-compose.yml" up -d
Stdout from the command:
Stderr from the command:
bash: line 4: 18581 Segmentation fault /usr/local/bin/docker-compose-1.22.0 -f "/vagrant/docker-compose.yml" up -d
What is the equivalent of minikube delete in docker-for-desktop on OSX?
As I understand it, minikube creates a VM to host its Kubernetes cluster, but I do not understand how docker-for-desktop manages this on OSX.
Tearing down Kubernetes in Docker for OS X is quite an easy task.
Go to Preferences, open the Reset tab, and click Reset Kubernetes cluster.
All objects that were previously created with kubectl will be deleted.
You can also reset the Docker VM image (Reset disk image) and all settings (Reset to factory defaults), or even uninstall Docker.
In recent Docker Edge versions for Mac (2.1.7) the Preferences design has changed. Now you can reset the Kubernetes cluster and other Docker aspects by switching to the bug pane in the top right of the Preferences window:
Note: You are able to reset the Kubernetes cluster only if it's enabled. If you uncheck the "Enable Kubernetes" checkbox, the "Reset Kubernetes cluster" button becomes inactive.
For convenience, "Reset Kubernetes cluster" is also present on the Kubernetes tab in the main Preferences pane:
To reset the Docker-desktop Kubernetes cluster from the command line, put the following content into a file (dd-reset.sh) and mark it executable (chmod a+x dd-reset.sh):
#!/bin/bash
dr='docker run -it --rm --privileged --pid=host debian nsenter -t 1 -m -u -n -i'
${dr} sh -c 'export PATH=$PATH:/containers/services/docker/rootfs/usr/bin:/containers/services/docker/rootfs/usr/local/bin:/var/lib/kube-binary-cache/ && \
if [ ! -e /var/run/docker.sock ] ; then ln -s /containers/services/docker/rootfs/var/run/docker.sock /var/run/docker.sock ; fi && \
kube-reset.sh'
sleep 3
echo "cluster resetted. restarting docker-desktop..."
osascript -e 'quit app "Docker"'
open --background -a Docker
echo "docker-desktop started. Wait 3-5 mins for kubernetes to start."
Explanation:
This method uses internal scripts from the Docker-desktop VM. To make it work, some preparation of the user environment is required.
I wasn't able to start the Kubernetes cluster using the kube-start.sh script from inside the VM, so I've used macOS commands to restart the Docker application instead.
This method works even if your Kubernetes cluster is not currently enabled in Docker preferences, but Kubernetes must have been enabled at least once for the script to work.
It was tested on Docker Edge for macOS v2.2.2.0 (43066).
There is no guarantee that it will be compatible with earlier or later versions.
This version of Docker uses kubeadm to initialize the Kubernetes cluster. The scripts are located in the folder /containers/services/docker/rootfs/usr/bin:
kube-pull.sh (brings the Kubernetes binaries to the VM)
kube-reset.sh (runs kube-stop.sh, does kubeadm reset, and removes some leftover files)
kube-restart.sh (runs kube-stop.sh and kube-start.sh)
kube-start.sh (runs kube-pull.sh and kubelet.sh)
kube-stop.sh (kills kubelet and kube-apiserver processes, and all k8s containers)
kubeadm-init.sh (initializes Kubernetes cluster)
kubelet.sh (runs kubeadm-init.sh and starts kubelet binary)
Cluster configuration is located in the file /containers/services/docker/lower/etc/kubeadm/kubeadm.yaml
Resources used:
Restart Docker from command line
Use nsenter in a privileged container
It's all handled under the hood in the code. Docker for Mac uses these components: HyperKit, VPNKit and DataKit.
Kubernetes runs in the same HyperKit VM created for Docker, and the kube-apiserver is exposed.
You can connect to the VM with this:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
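When you are done inspecting the VM, detach from the screen session rather than killing it (a general screen tip, nothing Docker-specific):
# press Ctrl-A then d to detach; reattach later with:
screen -r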
Then you can see all the Kubernetes processes in the VM:
linuxkit-025000000001:~# ps -Af | grep kube
1251 root 0:00 /usr/bin/logwrite -n kubelet /usr/bin/kubelet.sh
1288 root 0:51 kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --cluster-dns=10.96.0.10 --cluster-domain=cluster.local --cgroups-per-qos=false --enforce-node-allocatable= --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cadvisor-port=0 --kube-reserved-cgroup=podruntime --system-reserved-cgroup=systemreserved --cgroup-root=kubepods --hostname-override=docker-for-desktop --fail-swap-on=false
3564 root 0:26 kube-scheduler --address=127.0.0.1 --leader-elect=true --kubeconfig=/etc/kubernetes/scheduler.conf
3616 root 1:45 kube-controller-manager --cluster-signing-key-file=/run/config/pki/ca.key --address=127.0.0.1 --root-ca-file=/run/config/pki/ca.crt --service-account-private-key-file=/run/config/pki/sa.key --kubeconfig=/etc/kubernetes/controller-manager.conf --cluster-signing-cert-file=/run/config/pki/ca.crt --leader-elect=true --use-service-account-credentials=true --controllers=*,bootstrapsigner,tokencleaner
3644 root 1:59 kube-apiserver --admission-control=Initializers,NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota --requestheader-username-headers=X-Remote-User --requestheader-group-headers=X-Remote-Group --service-account-key-file=/run/config/pki/sa.pub --secure-port=6443 --insecure-port=8080 --insecure-bind-address=0.0.0.0 --requestheader-client-ca-file=/run/config/pki/front-proxy-ca.crt --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --requestheader-extra-headers-prefix=X-Remote-Extra- --advertise-address=192.168.65.3 --service-cluster-ip-range=10.96.0.0/12 --tls-private-key-file=/run/config/pki/apiserver.key --enable-bootstrap-token-auth=true --requestheader-allowed-names=front-proxy-client --tls-cert-file=/run/config/pki/apiserver.crt --proxy-client-key-file=/run/config/pki/front-proxy-client.key --proxy-client-cert-file=/run/config/pki/front-proxy-client.crt --allow-privileged=true --client-ca-file=/run/config/pki/ca.crt --kubelet-client-certificate=/run/config/pki/apiserver-kubelet-client.crt --kubelet-client-key=/run/config/pki/apiserver-kubelet-client.key --authorization-mode=Node,RBAC --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/run/config/pki/etcd/ca.crt --etcd-certfile=/run/config/pki/apiserver-etcd-client.crt --etcd-keyfile=/run/config/pki/apiserver-etcd-client.key
3966 root 0:01 /kube-dns --domain=cluster.local. --dns-port=10053 --config-dir=/kube-dns-config --v=2
4190 root 0:05 /usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf
4216 65534 0:03 /sidecar --v=2 --logtostderr --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,SRV --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,SRV
4606 root 0:00 /compose-controller --kubeconfig --reconciliation-interval 30s
4905 root 0:01 /api-server --kubeconfig --authentication-kubeconfig --authorization-kubeconfig --etcd-servers=https://127.0.0.1:2379 --etcd-cafile=/etc/docker-compose/etcd/ca.crt --etcd-certfile=/etc/docker-compose/etcd/client.crt --etcd-keyfile=/etc/docker-compose/etcd/client.key --secure-port=9443 --tls-ca-file=/etc/docker-compose/tls/ca.crt --tls-cert-file=/etc/docker-compose/tls/server.crt --tls-private-key-file=/etc/docker-compose/tls/server.key
So if you uncheck the "Enable Kubernetes" box (it's unclear from the docs what command this runs):
You can see that the processes are removed:
linuxkit-025000000001:~# [ 6616.856404] cni0: port 2(veth5f6c8b28) entered disabled state
[ 6616.860520] device veth5f6c8b28 left promiscuous mode
[ 6616.861125] cni0: port 2(veth5f6c8b28) entered disabled state
linuxkit-025000000001:~#
linuxkit-025000000001:~# [ 6626.816763] cni0: port 1(veth87e77142) entered disabled state
[ 6626.822748] device veth87e77142 left promiscuous mode
[ 6626.823329] cni0: port 1(veth87e77142) entered disabled state
linuxkit-025000000001:~# ps -Af | grep kube
linuxkit-025000000001:~#
On Docker Desktop version 3.5.2 (engine version 20.10.7), the reset button has been moved into the Docker preferences.
You can get there by following the below steps:
Click on the Docker icon in the menu bar and choose 'Preferences'.
Go to the Kubernetes tab.
Click on the Reset Kubernetes Cluster button (the red button).
This will delete all pods and reset Kubernetes. You can run docker ps in a terminal to verify that there are no containers running.
Just delete the VM that holds the Kubernetes resources.
$ minikube delete
I am trying to synchronize a folder with the boot2docker Vagrant box (on Windows 8.1):
Vagrantfile
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.hostname = "docker-host"
config.vm.box = "hashicorp/boot2docker"
config.vm.synced_folder "./src", "/sync/src"
end
I tried several ways to synchronize the folder:
If I do not define a type (how to sync), Vagrant uses SMB. So it's as if I had written:
config.vm.synced_folder "./src", "/sync/src", type: "smb"
With this configuration, mounting fails (I enter the credentials of the Windows account I'm logged in with):
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'hashicorp/boot2docker' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Preparing SMB shared folders...
default: You will be asked for the username and password to use for the SMB
default: folders shortly. Please use the proper username/password of your
default: Windows account.
default:
default: Username: My Username
default: Password (will be hidden):
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 2375 (guest) => 2375 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: docker
default: SSH auth method: password
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
GuestAdditions versions on your host (5.0.20) and guest (4.3.28 r100309) do not match.
The guest's platform ("tinycore") is currently not supported, will try generic Linux method...
Copy iso file C:\Program Files/Oracle/VirtualBox/VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.0.20 - guest version is 4.3.28 r100309
mkdir: can't create directory '/tmp/selfgz99220132': No such file or directory
Cannot create target directory /tmp/selfgz99220132
You should try option --target OtherDirectory
An error occurred during installation of VirtualBox Guest Additions 5.0.20. Some functionality may not work as intended.
In most cases it is OK that the "Window System drivers" installation failed.
==> default: Setting hostname...
==> default: Mounting SMB shared folders...
default: C:/my-project/src => /sync/src
Failed to mount folders in Linux guest. This is usually because
the "vboxsf" file system is not available. Please verify that
the guest additions are properly installed in the guest and
can work properly. The command attempted was:
mount -t cifs -o uid=`id -u docker`,gid=`getent group docker | cut -d: -f3`,sec=ntlm,credentials=/etc/smb_creds_d1d75b0a1810a196107486250f8d20f4 //169.254.152.12/d1d75b0a1810a196107486250f8d20f4 /sync/src
mount -t cifs -o uid=`id -u docker`,gid=`id -g docker`,sec=ntlm,credentials=/etc/smb_creds_d1d75b0a1810a196107486250f8d20f4 //169.254.152.12/d1d75b0a1810a196107486250f8d20f4 /sync/src
The error output from the last command was:
mount: mounting //169.254.152.12/d1d75b0a1810a196107486250f8d20f4 on /sync/src failed: Invalid argument
==> default: The previous process exited with exit code 1.
If I use
config.vm.synced_folder "./src", "/sync/src", type: "nfs"
instead, Vagrant still uses SMB (same output as before). If I use
config.vm.synced_folder "./src", "/sync/src", type: "virtualbox"
I get
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'hashicorp/boot2docker' is up to date...
==> default: Clearing any previously set forwarded ports...
The synced folder type 'virtualbox' is reporting as unusable for
your current setup. Please verify you have all the proper
prerequisites for using this shared folder type and try again.
If I use
config.vm.synced_folder "./src", "/sync/src", type: "rsync"
with vagrant-gatling-rsync plugin installed
vagrant plugin install vagrant-gatling-rsync
and run vagrant up in Cygwin (Cmd has no rsync), I get this:
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'hashicorp/boot2docker' is up to date...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
==> default: Forwarding ports...
default: 2375 (guest) => 2375 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Running 'pre-boot' VM customizations...
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: docker
default: SSH auth method: password
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
==> default: Machine booted and ready!
GuestAdditions versions on your host (5.0.20) and guest (4.3.28 r100309) do not match.
The guest's platform ("tinycore") is currently not supported, will try generic Linux method...
Copy iso file C:\Program Files/Oracle/VirtualBox/VBoxGuestAdditions.iso into the box /tmp/VBoxGuestAdditions.iso
Installing Virtualbox Guest Additions 5.0.20 - guest version is 4.3.28 r100309
mkdir: can't create directory '/tmp/selfgz95812741': No such file or directory
Cannot create target directory /tmp/selfgz95812741
You should try option --target OtherDirectory
An error occurred during installation of VirtualBox Guest Additions 5.0.20. Some functionality may not work as intended.
In most cases it is OK that the "Window System drivers" installation failed.
==> default: Setting hostname...
==> default: Installing rsync to the VM...
==> default: The machine you're rsyncing folders to is configured to use
==> default: password-based authentication. Vagrant can't script rsync to automatically
==> default: enter this password, so you'll likely be prompted for a password
==> default: shortly.
==> default:
==> default: If you don't want to have to do this, please enable automatic
==> default: key insertion using `config.ssh.insert_key`.
==> default: Rsyncing folder: /cygdrive/c/my-project/src/ => /sync/src
There was an error when attempting to rsync a synced folder.
Please inspect the error message below for more info.
Host path: /cygdrive/c/my-project/src/
Guest path: /sync/src
Command: rsync --verbose --archive --delete -z --copy-links --chmod=ugo=rwX --no-perms --no-owner --no-group --rsync-path sudo rsync -e ssh -p 2222 -o ControlMaster=auto -o ControlPath=C:/cygwin64/tmp/ssh.640 -o ControlPersist=10m -o StrictHostKeyChecking=no -o IdentitiesOnly=true -o UserKnownHostsFile=/dev/null --exclude .vagrant/ /cygdrive/c/my-project/src/ docker@127.0.0.1:/sync/src
Error: Warning: Permanently added '[127.0.0.1]:2222' (ECDSA) to the list of known hosts.
Permission denied, please try again.
Permission denied, please try again.
Permission denied (publickey,password,keyboard-interactive).
rsync: connection unexpectedly closed (0 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.1]
==> default: The previous process exited with exit code 1.
Besides that, I tried another boot2docker box:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.hostname = "docker-host"
config.vm.box = "dduportal/boot2docker"
config.vm.synced_folder "./src", "/sync/src"
end
which results in:
$ vagrant up
Bringing machine 'default' up with 'virtualbox' provider...
==> default: Checking if box 'dduportal/boot2docker' is up to date...
==> default: Clearing any previously set forwarded ports...
==> default: Clearing any previously set network interfaces...
==> default: Preparing network interfaces based on configuration...
default: Adapter 1: nat
default: Adapter 2: hostonly
==> default: Forwarding ports...
default: 2375 (guest) => 2375 (host) (adapter 1)
default: 2376 (guest) => 2376 (host) (adapter 1)
default: 22 (guest) => 2222 (host) (adapter 1)
==> default: Booting VM...
==> default: Waiting for machine to boot. This may take a few minutes...
default: SSH address: 127.0.0.1:2222
default: SSH username: docker
default: SSH auth method: private key
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Remote connection disconnect. Retrying...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
default: Warning: Authentication failure. Retrying...
==> default: Waiting for cleanup before exiting...
Vagrant exited after cleanup due to external interrupt.
How do I get folder synchronization to work with Vagrant and boot2docker?
My working setup with dduportal/boot2docker on Windows looks like the following:
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure(2) do |config|
config.vm.hostname = "docker-host"
config.vm.box = "dduportal/boot2docker"
config.vm.provision "docker"
config.vm.synced_folder ".", "/vagrant", type: "virtualbox"
end
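To confirm the share actually works after vagrant up, something like this should list the host folder from inside the box (assuming the default machine name):
vagrant ssh -c "ls -la /vagrant"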