How to mount local volumes in docker-machine

I am trying to use docker-machine with docker-compose. The file docker-compose.yml has definitions as follows:
web:
  build: .
  command: ./run_web.sh
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - db:db
    - rabbitmq:rabbit
    - redis:redis
When running docker-compose up -d all goes well until trying to execute the command and an error is produced:
Cannot start container b58e2dfa503b696417c1c3f49e2714086d4e9999bd71915a53502cb6ef43936d: [8] System error: exec: "./run_web.sh": stat ./run_web.sh: no such file or directory
Local volumes are not mounted to the remote machine. What's the recommended strategy for mounting the local volumes that hold the web app's code?

Docker-machine automounts the user's directory... but sometimes that just isn't enough.
I don't know about docker 1.6, but in 1.8 you CAN add an additional mount to docker-machine
Add Virtual Machine Mount Point (part 1)
CLI: (Only works when machine is stopped)
VBoxManage sharedfolder add <machine name/id> --name <mount_name> --hostpath <host_dir> --automount
So an example on Windows would be
/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe sharedfolder add default --name e --hostpath 'e:\' --automount
GUI: (does NOT require the machine be stopped)
Start "Oracle VM VirtualBox Manager"
Right-Click <machine name> (default)
Settings...
Shared Folders
The Folder+ Icon on the Right (Add Share)
Folder Path: <host dir> (e:)
Folder Name: <mount name> (e)
Check on "Auto-mount" and "Make Permanent" (Read only if you want...) (The auto-mount is sort of pointless currently...)
Mounting in boot2docker (part 2)
Manually mount in boot2docker:
There are various ways to log in: use "Show" in "Oracle VM VirtualBox Manager", or ssh/putty into the machine at the IP address given by docker-machine ip default, etc.
sudo mkdir -p <local_dir>
sudo mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
But this is only good until you restart the machine, and then the mount is lost...
Adding an automount to boot2docker:
While logged into the machine
Edit/create (as root) /mnt/sda1/var/lib/boot2docker/bootlocal.sh, sda1 may be different for you...
Add
mkdir -p <local_dir>
mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
With these changes, you should have a new mount point. This is one of the few files I could find that is called on boot and is persistent. Until there is a better solution, this should work.
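For instance, putting parts 1 and 2 together with the e: example above, a complete bootlocal.sh might look like the following sketch (the mount name e and target /e are just the example values; adjust them to your share):
#!/bin/sh
# /mnt/sda1/var/lib/boot2docker/bootlocal.sh
# Recreate the shared-folder mount on every boot of the boot2docker VM.
mkdir -p /e
mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` e /e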
Old method: Less recommended, but left as an alternative
Edit (as root) /mnt/sda1/var/lib/boot2docker/profile, sda1 may be different for you...
Add
add_mount() {
  if ! grep -q "try_mount_share $1 $2" /etc/rc.d/automount-shares ; then
    echo "try_mount_share $1 $2" >> /etc/rc.d/automount-shares
  fi
}
add_mount <local dir> <mount name>
As a last resort, you can take the slightly more tedious alternative of modifying the boot image.
git -c core.autocrlf=false clone https://github.com/boot2docker/boot2docker.git
cd boot2docker
git -c core.autocrlf=false checkout v1.8.1 #or your appropriate version
Edit rootfs/etc/rc.d/automount-shares
Add a try_mount_share <local_dir> <mount_name> line right before the fi at the end. For example:
try_mount_share /e e
Just be sure not to set the <local_dir> to anything the OS needs, like /bin, etc.
docker build -t boot2docker . #This will take about an hour the first time :(
docker run --rm boot2docker > boot2docker.iso
Backup the old boot2docker.iso and copy your new one in its place, in ~/.docker/machine/machines/
This does work, it's just long and complicated
docker version 1.8.1, docker-machine version 0.4.0

I also ran into this issue, and it looks like local volumes are not mounted when using docker-machine. A hacky solution is to:
get the current working directory of the docker-machine instance: docker-machine ssh <name> pwd
use a command-line tool like rsync to copy the folder to the remote system:
rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:<result_of_pwd_from_1>
The default pwd is /root, so the command above would be rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:/root
NB: you would need to supply the password for the remote system. You can quickly create one by sshing into the remote system and creating a password.
change the volume mount point in your docker-compose.yml file from .:/app to /root/<name_of_folder>:/app
run docker-compose up -d
NB: when changes are made locally, don't forget to rerun rsync to push the changes to the remote system.
It's not perfect, but it works. There is an ongoing issue: https://github.com/docker/machine/issues/179
Other projects that attempt to solve this include docker-rsync.
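As a rough sketch, the steps above can be wrapped in a small helper script (the machine name, folder name, and remote user are assumptions; boot2docker VMs typically use the docker user):
#!/bin/bash
# push.sh -- hypothetical helper: push a local folder to the docker-machine VM
MACHINE=dev                                    # assumed machine name
FOLDER=myapp                                   # assumed local folder
REMOTE_DIR=$(docker-machine ssh $MACHINE pwd)  # e.g. /root or /home/docker
rsync -avzh -e ssh --progress "$FOLDER" "docker@$(docker-machine ip $MACHINE):$REMOTE_DIR"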

At the moment I can't really see any way to mount volumes on machines, so the approach for now would be to somehow copy or sync the files you need into the machine.
There are conversations about how to solve this issue on docker-machine's GitHub repo. Someone made a pull request implementing scp in docker-machine, and it has already been merged into master, so it's very likely that the next release will include it.
Since it's not yet released, for now I would recommend that, if you have your code hosted on GitHub, you just clone your repo before you run the app:
web:
  build: .
  command: sh -c "git clone https://github.com/my/repo.git; ./repo/run_web.sh"
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - db:db
    - rabbitmq:rabbit
    - redis:redis
Update: Looking further, I found that the feature is already available in the latest binaries; when you get them, you'll be able to copy your local project by running a command like this:
docker-machine scp -r . dev:/home/docker/project
This being the general form:
docker-machine scp [machine:][path] [machine:][path]
So you can copy files from, to and between machines.
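For example, to copy a project from one machine to another (machine names dev1 and dev2 are hypothetical):
docker-machine scp -r dev1:/home/docker/project dev2:/home/docker/project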
Cheers!

Since October 2017 there is a new docker-machine command that does the trick, but make sure the target directory is empty before executing it, otherwise its contents might get lost:
docker-machine mount <machine-name>:<guest-path> <host-path>
Check the docs for more information: https://docs.docker.com/machine/reference/mount/
PR with the change: https://github.com/docker/machine/pull/4018
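A typical round trip, following the example in the docs (machine name dev and the paths are illustrative; the command uses sshfs under the hood, so that must be installed locally):
mkdir foo
docker-machine mount dev:/home/docker/foo foo/
touch foo/bar                                        # edits show up on the machine
docker-machine mount -u dev:/home/docker/foo foo/    # -u unmounts
rmdir foo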

If you choose the rsync option with docker-machine, you can combine it with the docker-machine ssh <machinename> command like this:
rsync -rvz --rsh='docker-machine ssh <machinename>' --progress <local_directory_to_sync_to> :<host_directory_to_sync_to>
It uses this command format of rsync, leaving HOST blank:
rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST
(http://linuxcommand.org/man_pages/rsync1.html)
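For example, to push a local ./web directory to /home/docker/web on a machine named default (both names are illustrative):
rsync -rvz --rsh='docker-machine ssh default' --progress ./web/ :/home/docker/web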

Finally figured out how to upgrade Windows Docker Toolbox to v1.12.5 and keep my volumes working, by adding a shared folder in Oracle VM VirtualBox Manager and disabling path conversion. If you have Windows 10+, you're better off using the newer Docker for Windows.
First, the upgrade pain:
Uninstall VirtualBox first.
Yep that may break stuff in other tools like Android Studio. Thanks Docker :(
Install new version of Docker Toolbox.
Redis Database Example:
redis:
  image: redis:alpine
  container_name: redis
  ports:
    - "6379"
  volumes:
    - "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
run docker-machine stop default to ensure the VM is halted
In Oracle VM VirtualBox Manager ...
Added a shared folder in the default VM via the GUI or the command line (see the sketch after these steps):
D:\Projects\MyProject\db => /var/db
In docker-compose.yml...
Mapped redis volume as: "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
Set COMPOSE_CONVERT_WINDOWS_PATHS=0 (for Toolbox version >= 1.9.0)
run docker-machine start default to restart the VM.
cd D:\Projects\MyProject\
docker-compose up should work now.
Redis now creates its database in D:\Projects\MyProject\db\redis\dump.rdb
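For reference, a command-line equivalent of the shared-folder step above would be something along these lines (run while the VM is stopped; the VBoxManage path and the share name are assumptions, and you may still need the bootlocal.sh trick from the first answer to mount the share at /var/db inside the VM):
"/c/Program Files/Oracle/VirtualBox/VBoxManage.exe" sharedfolder add default --name "var/db" --hostpath "D:\Projects\MyProject\db" --automount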
Why avoid relative host paths?
I avoided relative host paths for Windows Toolbox as they may introduce invalid '\' chars. It's not as nice as using paths relative to docker-compose.yml but at least my fellow developers can easily do it even if their project folder is elsewhere without having to hack the docker-compose.yml file (bad for SCM).
Original Issue
FYI ... Here is the original error I got when I used nice clean relative paths that used to work just fine for older versions. My volume mapping used to be just "./db/redis:/data:rw"
ERROR: for redis Cannot create container for service redis: Invalid bind mount spec "D:\\Projects\\MyProject\\db\\redis:/data:rw": Invalid volume specification: 'D:\Projects\MyProject\db\redis:/data
This breaks for two reasons:
It can't access D: drive
Volume paths can't include \ characters
docker-compose adds them and then blames you for it!
Use COMPOSE_CONVERT_WINDOWS_PATHS=0 to stop this nonsense.
I recommend documenting your additional VM shared folder mapping in your docker-compose.yml file, as you may need to uninstall VirtualBox again and reset the shared folder; besides, your fellow devs will love you for it.

All the other answers were good for their time, but now (Docker Toolbox v18.09.3) everything works out of the box. You just need to add a shared folder to the VirtualBox VM.
Docker Toolbox automatically adds C:\Users as the shared folder /c/Users in the Linux virtual machine (using the VirtualBox shared folders feature), so if your docker-compose.yml file is located somewhere under this path and you mount the host machine's directories only under this path, all should work out of the box.
For example:
C:\Users\username\my-project\docker-compose.yml:
...
volumes:
  - .:/app
...
The . path will be automatically converted to the absolute path C:\Users\username\my-project and then to /c/Users/username/my-project. And this is exactly how this path is seen from the point of view of the Linux virtual machine (you can check it: docker-machine ssh, then ls /c/Users/username/my-project). So, the final mount will be /c/Users/username/my-project:/app.
All works transparently for you.
But this doesn't work if your host mount path is not under C:\Users path. For example, if you put the same docker-compose.yml under D:\dev\my-project.
This can be fixed easily though.
Stop the virtual machine (docker-machine stop).
Open Virtual Box GUI, open Settings of Virtual Machine named default, open Shared Folders section and add the new shared folder:
Folder Path: D:\dev
Folder Name: d/dev
Press OK twice and close Virtual Box GUI.
Start the virtual machine (docker-machine start).
That's all. All paths of host machine under D:\dev should work now in docker-compose.yml mounts.
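The same GUI steps can also be scripted; a sketch using the VBoxManage CLI from the first answer (the VBoxManage location varies per install):
docker-machine stop
"/c/Program Files/Oracle/VirtualBox/VBoxManage.exe" sharedfolder add default --name "d/dev" --hostpath "D:\dev" --automount
docker-machine start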

It can be done with a combination of three tools:
docker-machine mount, rsync, inotifywait
TL;DR
Script based on all below is here
Let's say you have your docker-compose.yml and run_web.sh in /home/jdcaballerov/web
Mount the directory from the machine (it has the same path as on your host) onto a temporary directory on your host: docker-machine mount machine:/home/jdcaballerov/web /tmp/some_random_dir
Synchronize the mounted directory with the directory on your host: rsync -r /home/jdcaballerov/web /tmp/some_random_dir
Synchronize on every change of files in your directory:
inotifywait -r -m -e close_write --format '%w%f' /home/jdcaballerov/web | while read CHANGED_FILE
do
  rsync -r /home/jdcaballerov/web /tmp/some_random_dir
done
BE AWARE: there are two directories with the same path. One is on your local (host) machine; the second is on the docker machine.
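Putting the three tools together, a minimal watch-and-sync sketch (the machine name machine and both paths are the ones from this example) could look like:
#!/bin/bash
# watch-sync.sh -- mount the machine directory, seed it, then re-sync on change
SRC=/home/jdcaballerov/web
DEST=/tmp/some_random_dir
mkdir -p "$DEST"
docker-machine mount machine:$SRC "$DEST"
rsync -r "$SRC/" "$DEST/"
inotifywait -r -m -e close_write --format '%w%f' "$SRC" | while read CHANGED_FILE
do
  rsync -r "$SRC/" "$DEST/"
done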

I assume the run_web.sh file is in the same directory as your docker-compose.yml file. Then the command should be command: /app/run_web.sh.
That is, unless the Dockerfile (which you have not shared) takes care of putting the run_web.sh file into the Docker image.
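A quick way to tell which case you are in is to inspect the built image with no volumes mounted (the web-test tag is hypothetical, and this assumes the Dockerfile copies the code to /app, as the volume mapping suggests):
docker build -t web-test .
docker run --rm web-test ls -l /app/run_web.sh
If the file is listed, it is baked into the image; if the command errors, the container depends entirely on the .:/app volume being mounted.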

To summarize the posts here, attached is an updated script to create an additional host mount point and automount it when VirtualBox restarts. The working environment, in brief:
- Windows 7
- docker-machine.exe version 0.7.0
- VirtualBox 5.0.22
#!/usr/bin/env bash
: ${NAME:=default}
: ${SHARE:=c/Proj}
: ${MOUNT:=/c/Proj}
: ${VBOXMGR:=C:\Program Files\Oracle\VirtualBox\VBoxManage.exe}
SCRIPT=/mnt/sda1/var/lib/boot2docker/bootlocal.sh
## set -x
docker-machine stop $NAME
"$VBOXMGR" sharedfolder add $NAME --name $SHARE --hostpath 'c:\' --automount 2>/dev/null || :
docker-machine start $NAME
docker-machine env $NAME
docker-machine ssh $NAME "echo 'mkdir -p $MOUNT' | sudo tee $SCRIPT"
docker-machine ssh $NAME "echo 'sudo mount -t vboxsf -o rw,user $SHARE $MOUNT' | sudo tee -a $SCRIPT"
docker-machine ssh $NAME "sudo chmod +x $SCRIPT"
docker-machine ssh $NAME "sudo $SCRIPT"
#docker-machine ssh $NAME "ls $MOUNT"
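Since the variables use the ${VAR:=default} pattern, the script can be re-targeted per run without editing it. A hypothetical invocation (assuming it is saved as add-share.sh):
NAME=default SHARE=c/Proj MOUNT=/c/Proj bash add-share.sh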

I am using docker-machine 0.12.2 with the VirtualBox driver on my local machine. I found that there is a directory /hosthome/<username> from which you have access to local files.

Just thought I'd mention I've been using 18.03.1-ce-win65 (17513) on Windows 10, and I noticed that if you've previously shared a drive and cached the credentials, then once you change your password docker will start mounting the volumes within containers as blank.
It gives no indication that what is actually happening is that it is now failing to access the share with the old, cached credentials.
The solution in this scenario is to reset the credentials either through the UI (Settings -> Shared drives) or to disable and then re-enable drive sharing and enter the new password.
It would be useful if docker-compose gave an error in these situations.

Related

Using rviz in a remote connection "Could not connect to any X display."

I am trying to work with rviz by means a remote connection with ssh. When I execute the command rosrun rviz rviz, this error appears:
QStandardPaths: XDG_RUNTIME_DIR not set, defaulting to '/tmp/runtime-root'
qt.qpa.screen: QXcbConnection: Could not connect to display
Could not connect to any X display.
I already added the -X flag when opening the ssh connection, via ssh myusername@host -X, but nothing changes.
I don't know what else to do, so any help would be welcomed.
I am working from a Mac computer (macOS Catalina), remotely I am working on a workstation with Docker, and my image has Ubuntu 18.04 and ROS Melodic.
Thank you in advance.
EDIT:
I just tried to execute rviz locally on the workstation and appears the same error, so I suppose the ssh connection is not the problem. Could the problem be due to the Docker or the workstation (Nvidia DGX Station)? Could it be due to permission issue?
Thank you.
I don't currently know about docker, but does the following work for you:
user@local $ export ROS_MASTER_URI=http://your_remote's_hostname:11311
user@local $ rosrun rviz rviz
And see https://wiki.ros.org/ROS/NetworkSetup for the details + ip configuration on both machines.
Update: Here are some instructions about running GUI apps in docker on a Mac; they may be useful (in case you haven't seen them already).
I have a docker container with ROS that I use to run rviz and other UI apps (ROS's QT-based apps do not work in KDE Neon).
The docker-compose.yml contains the following:
version: "3.8"
services:
  ros:
    container_name: ros1
    network_mode: host
    # I created my own image, with my own user, etc.
    image: YOUR_IMAGE
    volumes:
      # you can ignore this line if you want (I'll explain below)
      # - /home/ichramm/devel/robots:/home/ichramm/devel/robots
      - /etc/localtime:/etc/localtime:ro
      - /tmp/.X11-unix:/tmp/.X11-unix:ro
      - /home/ichramm/.Xauthority:/home/ichramm/.Xauthority:ro
      - /run/user/1000:/run/user/1000:ro
      - /run/user/1000/bus:/run/user/1000/bus:ro
    command: /entrypoint.sh
    environment:
      USER: ichramm
      DISPLAY: ${DISPLAY}
      XDG_RUNTIME_DIR: /tmp/runtime-${USER}
      DBUS_SESSION_BUS_ADDRESS: unix:path=/run/user/1000/bus
    devices:
      # - "/dev/ttyUSB0:/dev/ttyUSB0"
      # - "/dev/dri/card0:/dev/dri/card0"
      # - "/dev/dri/card1:/dev/dri/card1"
You should try to map the mounted volumes to your system. I understand that you have a Mac, which means this may not work for you.
This works for me, but I don't use ssh, I just use two scripts:
1.
❯ cat docker-run.sh
#!/bin/bash
docker exec -ti -w $(pwd) ros1 ./wrapper.sh $@
❯ cat wrapper.sh
#!/bin/bash
export XDG_RUNTIME_DIR=/tmp/runtime-$USER
source env.sh
$@
In order for this to work you need the following:
Mount the working directory in the container (see commented line above)
Have a file env.sh which sources ROS's setup.bash and the workspace devel/setup.bash.
Of course, the experience with those scripts is limited, which is why I also enter the docker container directly using the following:
❯ cat enter-env.sh
#!/bin/bash
docker exec -ti -w /home/ichramm/devel/robots ros1 /bin/bash
This works only because the container's directory structure matches the host's. (Note that I only mount the development directory anyway.) I also added a user with the same name, UID, and GID as on the host, to prevent issues with file permissions.
If you can't make it work, I suggest you turn to a VM. Just install Ubuntu 20.04 without a UI (you can disable it later with sudo systemctl set-default multi-user.target) and use SSH with X forwarding. I worked with that setup before switching to docker, and I still have the VM in case something happens.
Update: Bear in mind that I am doing some potentially insecure things, like mounting .Xauthority. It works for me because no one else has access to my computer.

How to navigate to docker volumes folders on the host machine [duplicate]

I'm looking for the folder /var/lib/docker on my Mac after installing Docker for Mac.
With docker info I get
Containers: 5
...
Server Version: 1.12.0-rc4
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 339
Dirperm1 Supported: true
...
Name: moby
ID: LUOU:5UHI:JFNI:OQFT:BLKR:YJIC:HHE5:W4LP:YHVP:TT3V:4CB2:6TUS
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
....
But I don't have a directory /var/lib/docker on my host.
I have checked /Users/myuser/Library/Containers/com.docker.docker/ but couldn't find anything there. Any idea where it is located?
As mentioned in the above answers, you will find it in:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
Once you get the tty running you can navigate to /var/lib/docker
As of 2021, Mac users can still get into the VM easily with the documented methods, and hence to the volumes.
There's a way Rocky Chen found to get inside the VM on a Mac. With it you can actually inspect the famous /var/lib/docker/volumes.
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
Let's examine the method:
-it: keep STDIN open even if not attached, and allocate a pseudo-TTY
--privileged: "gives all capabilities to the container. Allows special cases like running docker."
--pid=host: use the host's (i.e., the VM's) PID namespace
debian: the actual image to use
nsenter: a tool to run programs in the namespaces of other processes
-t: the target PID
-m: enter the mount namespace of the target PID
-u: enter the UTS (UNIX Time-Sharing) namespace
-n: enter the network namespace of the target PID
-i: enter the IPC namespace of the target PID
Once run, go to /var/lib/docker/volumes/ and you'll find your volumes.
The next question to address for me is:
How to take those volumes and back them up in the host?
I appreciate ideas in the comments!
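One idea, as a sketch: run a throwaway container that mounts the volume together with a host-visible bind mount, and tar the contents across (the volume name my_volume is hypothetical, and the bind path has to be under a shared location such as /Users on Docker for Mac):
docker run --rm -v my_volume:/volume -v "$PWD":/backup alpine tar czf /backup/my_volume.tar.gz -C /volume .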
UPDATE FOR VSCODE USERS
If you downloaded the official Docker extension, the sun will shine for you.
Just inspect the volumes in Visual Studio Code, right-click the files you want to have locally, and download them. That easy!
2nd UPDATE
As of July 2021, Docker Desktop for Mac is announcing we will be able to access volumes directly from the GUI, but only for Pro and Team accounts.
The other answers here are outdated if you're using Docker for Mac.
Here's how I was able to get into the VM. Run the command:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
This is the default path, but you may need to first do:
cd ~/Library/Containers/com.docker.docker/Data/vms
and then ls to see which directory your VM is in and replace the "0" accordingly.
When you're in, you might just see a blank screen. Hit your "Enter" key.
This page explains that to exit from the VM you need to "Ctrl-a" then "d"
See this answer
When using Docker for Mac Application, it appears that the containers are stored within the VM located at:
~/Library/Containers/com.docker.docker/Data/com.docker.driver.amd64-linux/Docker.qcow2
Just as @Dmitriy said:
screen ~/Library/Containers/com.docker.docker/Data/vms/0/tty
and you can use Ctrl-a + d to detach the screen,
and screen -dr to re-attach it again (if you simply attach the screen again, the terminal text will be garbled).
Reference
Or, if you want to exit, use Ctrl-a + k, then choose y to kill the screen.
Somewhat of a zombie thread, but as I just found it, here is another solution that doesn't need screen and doesn't mess up the shell, etc.
The path listed by a docker volume inspect <vol_name>
returns the path as seen from inside the Docker VM, something like:
"Mountpoint": "/var/lib/docker/volumes/coap_service_db_data/_data"
the _data component being the last component of the path you set up in the volumes: section of the service using a given volume, e.g.:
volumes:
  - db_data:/var/lib/postgresql/data
Obviously your mileage will vary.
To get there on the Mac, the easiest method I have found is to start a small container and mount the root of the host to the /docker directory in the image; this gives you access to the volumes used on the host.
docker run --rm -it -v /:/docker alpine:edge
from this point you can cd to the volume
cd /docker/var/lib/docker/volumes/coap_service_db_data/_data
I think the new version of docker (my version is 20.10.5) uses a socket instead of a TTY to communicate with the virtual machine, so you can use the nc command instead of the screen command.
nc -U ~/Library/Containers/com.docker.docker/Data/debug-shell.sock
Looks like the new version of Docker for Mac has moved this to a UI element, which you can see here. Clicking on the button which says CLI will launch a terminal which you can use to browse the docker file system.
Run:
docker run -it --privileged --pid=host debian nsenter -t 1 -a bash
ls /var/lib/docker
For macOS I use the following steps:
log into the docker virtual machine (on macOS docker can only run inside a virtual machine; in my case I have VirtualBox with a docker VM): docker-machine ssh
once logged in, switch from the docker user to the superuser: sudo -i
now I'm able to check the /var/lib/docker directory
I would say that the file:
/var/run/docker.sock
is actually at:
/Volumes/{DISKNAME}/var/run/docker.sock
If you run this, it should prove it, as long as you're running VirtualBox 5.2.8 or later, the share for /Volumes is set up to be auto-mounted and permanent, AND you generated the default docker-machine while on that version of VirtualBox:
#!/bin/bash
docker run -d --restart unless-stopped -p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock portainer/portainer \
--no-auth
Then, access Portainer at: 192.168.99.100:9000 or localhost:9000
This path comes from the Docker host (not from macOS).
Before "Docker for Mac Application" times, there was a VirtualBox VM "default", and inside this VM the mentioned path does exist. Now, in "Docker for Mac Application" times, there is a Docker.qcow2 image, which is a qemu-based VM.
To jump inside this VM, @mik-jagger's way is OK (but there are a few more ways).
Docker logs are not in /var/lib/docker on macOS.
macOS users can find the docker logs at this path:
/Users/Barrack.Kenya/Library/Containers/com.docker.docker/Data/log/host
job_name: docker
static_configs:
  - targets:
      - docker
    labels:
      job: dockerlogs
      path: (Please put the path)
pipeline_stages:
  - docker: {}

running docker-compose on a docker-machine

I'm trying to deploy two services on a single ec2 instance with docker-machine and docker-compose.
Here's what I'm doing:
docker-machine create --driver amazonec2 --engine-install-url=https://web.archive.org/web/20170623081500/https://get.docker.com mymachine
docker-machine ssh mymachine -- mkdir -p /home/ubuntu/myapp
git clone https://github.com/myapp/service1.git
docker-machine scp -r ./service1 mymachine:/home/ubuntu/myapp/
rm -rf ./service1
git clone https://github.com/myapp/service2.git
docker-machine scp -r ./service2 mymachine:/home/ubuntu/myapp/
rm -rf ./service2
docker-machine env mymachine
//export DOCKER_TLS_VERIFY="1"
//export DOCKER_HOST="something"
//export DOCKER_CERT_PATH="something"
//export DOCKER_MACHINE_NAME="mymachine"
eval $(docker-machine env mymachine)
docker-machine active
//mymachine
docker-compose -f ./docker-compose-prod.yml up -d
I get this error: build path /home/ubuntu/myapp/service1 either does not exist, is not accessible, or is not a valid URL.
relevant parts of docker-compose-prod.yml:
version: '3'
services:
  service1:
    build: /home/ubuntu/myapp/service1
  service2:
    build: /home/ubuntu/myapp/service2
The path is fine when checking through ssh. It seems like docker-compose is still trying to work on my local machine: it's happy when I provide it with a build path that exists locally. Docker itself executes commands on the remote machine.
How do I get docker-compose to run on the remote docker-machine?
I'm new to this, so hopefully I'm missing something trivial.
Thanks for the help!
A docker build (including docker-compose build) involves a "build context". This context is all of the files you select to send from the client to the docker engine, including the Dockerfile, to perform the build. You can remove files from this context with a .dockerignore file.
When you run a docker build /home/ubuntu/myapp/service1 or in your case, include that directory in the compose.yml file, you define /home/ubuntu/myapp/service1 as your build context that you send from the client to the docker engine. That engine may be local or a remote node, which in your case is the ec2 instance. From there, everything runs remotely, including any COPY or ADD commands in your Dockerfile that reference this context.
To run your build remotely, you can either leave the context on your local machine, rather than running your rm, or you can ssh into the ec2 instance and run the docker-compose commands locally on that machine (you may need to install docker-compose there, I'm not sure it's included in the default machine image). My preference would typically be the former since it allows easier development on the files used to build your image, and it allows the remote docker machine to be ephemeral.
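As a sketch of the first option (the repository URLs and compose file name are taken from the question), everything runs from the local checkout and only the build context is shipped to the remote engine:
eval $(docker-machine env mymachine)   # point the local client at the ec2 engine
git clone https://github.com/myapp/service1.git
git clone https://github.com/myapp/service2.git
docker-compose -f ./docker-compose-prod.yml up -d --build
with the compose file using local relative contexts, e.g. build: ./service1 and build: ./service2.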

How to use --volume option with Docker Toolbox on Windows?

How can I share a folder between my Windows files and a docker container, by mounting a volume with a simple --volume command, using Docker Toolbox on Windows?
I'm using "Docker Quickstart Terminal" and when I try this:
winpty docker run -it --rm --volume /C/Users/myuser:/myuser ubuntu
I have this error:
Invalid value "C:\\Users\\myuser\\:\\myuser" for flag --volume: bad mount mode specified : \myuser
See 'docker run --help'.
Following this, I also tried
winpty docker run -it --rm --volume "//C/Users/myuser:/myuser" ubuntu
and got
Invalid value "\\\\C:\\Users\\myuser\\:\\myuser" for flag --volume: \myuser is not an absolute path
See 'docker run --help'.
This is an improvement on the selected answer, because that answer is limited to the c:\Users folder. If you want to create a volume using a directory outside of c:\Users, this is an extension.
In Windows 7, I used Docker Toolbox. It used VirtualBox.
Open VirtualBox
Select the machine (in my case default).
Right-click and select the Settings option
Go to Shared Folders
Include a new machine folder.
For example, in my case I have included:
Name: c/dev
Path: c:\dev
Click OK and close
Open "Docker Quickstart Terminal" and restart the docker machine.
Use this command:
$ docker-machine restart
To verify that it worked, follow these steps:
SSH to the docker machine.
Using this command:
$ docker-machine ssh
Go to the folder that you have shared/mounted.
In my case, I use this command
$ cd /c/dev
Check the owner of the folder. You can use "ls -all" and verify that the owner is "docker".
You will see something like this:
docker#default:/c/dev$ ls -all
total 92
drwxrwxrwx 1 docker staff 4096 Feb 23 14:16 ./
drwxr-xr-x 4 root root 80 Feb 24 09:01 ../
drwxrwxrwx 1 docker staff 4096 Jan 16 09:28 my_folder/
In that case, you will be able to create a volume for that folder.
You can use these commands:
docker create -v /c/dev/:/app/dev --name dev image
docker run -d -it --volumes-from dev image
or
docker run -d -it -v /c/dev/:/app/dev image
Both commands work for me. I hope this will be useful.
This is actually a known issue with the project, and there are two working workarounds:
Creating a data volume:
docker create -v //c/Users/myuser:/myuser --name data hello-world
winpty docker run -it --rm --volumes-from data ubuntu
SSHing directly into the docker host:
docker-machine ssh default
And from there doing a classic:
docker run -it --rm --volume /c/Users/myuser:/myuser ubuntu
If you are looking for the solution that will resolve all the Windows issues and make Docker work on Windows the same way as on Linux, then see below. I tested this and it works in all cases. I'm also showing how I got there (the steps and the thinking process). I've also written an article about using Docker and dealing with docker issues here.
Solution 1: Use VirtualBox (if you think it's not a good idea, see Solution 2 below)
Open VirtualBox (you have it already installed along with the docker tools)
Create virtual machine
(This is optional; you can skip it and forward ports from the VM.) Create a second ethernet card, bridged; this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install Ubuntu LTS which is older than 1 year
Install docker
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu) while still being able to work in Windows
Done
Bonus:
Everything is working the same way as on Linux
Pause/Unpause the dockerized environment whenever you want
Solution 2: Use VirtualBox (this is very similar to Solution 1, but it also shows the thinking process, which might be useful when solving similar issues)
Read that somebody moved the folders to /C/Users/Public and that it worked: https://forums.docker.com/t/sharing-a-volume-on-windows-with-docker-toolbox/4953/2
Try it, and realize that it doesn't make much sense in your case.
Read entire page here https://github.com/docker/toolbox/issues/607 and try all solutions listed on page
Find this page (the one you are reading now) and try all the solutions from other comments
Find information somewhere that setting the COMPOSE_CONVERT_WINDOWS_PATHS=1 environment variable might solve the issue.
Stop looking for the solution for a few months
Go back and check the same links again
Cry deeply
Feel the enlightenment moment
Open VirtualBox (you have it already installed along with the docker tools)
Create virtual machine with second ethernet card - bridged, this way it will receive IP address from your network (it will have IP like docker machine)
Install Ubuntu LTS which is very recent (not older than a few months)
Notice that the automounting is not really working and the integration is broken (like clipboard sharing etc.)
Delete virtual machine
Go out and have a drink
Rent expensive car and go with high speed on highway
Destroy the car and die
Respawn in front of your PC
Install Ubuntu LTS which is older than 1 year
Try to run docker
Notice it’s not installed
Install docker by apt-get install docker
Install suggested docker.io
Try to run docker-compose
Notice it’s not installed
apt-get install docker-compose
Try to run your project with docker-compose
Notice that it’s old version
Check your power level (it should be over 9000)
Search how to install latest version of docker and find the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Uninstall the current docker-compose and docker.io
Install docker using the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu, so you can run any docker command)
Done
As of August 2016, Docker for Windows uses Hyper-V directly instead of VirtualBox, so I think it is a little different. First share the drive in Settings, then use the C: drive-letter format, but with forward slashes. For instance, I created an H:\t\REDIS directory and was able to see it mounted on /data in the container with this command:
docker run -it --rm -v h:/t/REDIS:/data redis sh
The same format, using drive letter and a colon then forward slashes for the path separator worked both from windows command prompt and from git bash.
I found this question googling to find an answer, but I couldn't find anything that worked. Things would seem to work with no errors being thrown, but I just couldn't see the data on the host (or vice-versa). Finally I checked out the settings closely and tried the format they show:
So first, you have to share the whole drive to the docker VM in Settings; I think that gives the 'docker-machine' VM running in Hyper-V access to that drive. Then you have to use the format shown there, which seems to exist only in this one image and in no documentation or questions I could find on the web:
docker run --rm -v c:/Users:/data alpine ls /data
Simply using double leading slashes worked for me on Windows 7:
docker run --rm -v //c/Users:/data alpine ls /data/
Taken from here: https://github.com/moby/moby/issues/12590
Try this:
Open Docker Quickstart Terminal. If it is already open, run $ cd ~ to make sure you are in your Windows user directory.
$ docker run -it -v /$(pwd)/ubuntu:/windows ubuntu
It will work if the error is due to a typo. You will get an empty folder named ubuntu in your user directory, and you will see this folder with the name windows in your ubuntu container.
For those using VirtualBox who prefer a command-line approach:
1) Make sure the docker-machine is not running
Docker Quickstart Terminal:
docker-machine stop
2) Create the sharing Windows <-> docker-machine
Windows command prompt:
(Modify the following to fit your scenario. I feed my Apache httpd container from a directory synced via Dropbox.)
set VBOX=D:\Program Files\Oracle\VirtualBox\VBoxManage.exe
set VM_NAME=default
set NAME=c/htdocs
set HOSTPATH=%DROPBOX%\htdocs
"%VBOX%" sharedfolder add "%VM_NAME%" --name "%NAME%" --hostpath "%HOSTPATH%" --automount
3) Start the docker-machine and mount the volume in a new container
Docker Quickstart Terminal:
(Again, I am starting an Apache httpd container, hence that port exposing.)
docker-machine start
docker run -d --name my-apache-container-0 -p 80:80 -v /c/htdocs:/usr/local/apache2/htdocs my-apache-image:1.0
Sharing folders between VirtualBox, Docker Toolbox on Windows 7, and a Node.js image container,
using...
Docker Quickstart Terminal [QST]
Windows Explorer [WE]
Let's start...
[QST] open Docker Quickstart Terminal
[QST] stop virtual-machine
$ docker-machine stop
[WE] open a windows explorer
[WE] go to the virtualBox installation dir
[WE] open a cmd and execute...
C:\Program Files\Oracle\VirtualBox>VBoxManage sharedfolder add "default" --name "/d/SVN_FOLDERS/X2R2_WP6/nodejs" --hostpath "\\?\d:\SVN_FOLDERS\X2R2_WP6\nodejs" --automount
Check in Oracle VM VirtualBox Manager that the new shared folder has appeared
[QST] start virtual-machine
$ docker-machine start
[QST] run container nodejs
docker stop nodejs
docker rm nodejs
docker run -d -it --rm --name nodejs -v /d/SVN_FOLDERS/X2R2_WP6/nodejs:/usr/src/app -w /usr/src/app node2
[QST] open bash to the container
docker exec -i -t nodejs /bin/bash
[QST] execute dir and you will see the shared files
I solved it!
Add a volume:
docker run -d -v my-named-volume:C:\MyNamedVolume testimage:latest
Mount a host directory:
docker run -d -v C:\Temp\123:C:\My\Shared\Dir testimage:latest

Docker External File Access Not in /Users/ on OSX

So, despite Docker 1.3 now allowing easy access to external storage on OSX through boot2docker for files in /Users/, I still need to access files not in /Users/. I have a settings file in /etc/settings/ that I'd like my container to have access to. Also, the CMD in my container writes logs to /var/log in the container, which I'd rather have it write to /var/log on the host. I've been playing around with VOLUME and passing stuff in with -v at run, but I'm not getting anywhere. Googling hasn't been much help. Can someone who has this working provide help?
As boot2docker now includes VirtualBox Guest Additions, you can share folders on the host computer (OSX) with the guest operating system (the boot2docker-vm). /Users/ is automatically mounted, but you can mount/share custom folders. In your host console (OSX):
$ vboxmanage sharedfolder add "boot2docker-vm" --name settings-share --hostpath /etc/settings --automount
Start boot2docker and ssh into it ($ boot2docker up / $ boot2docker ssh).
Choose where you want to mount the "settings-share" (/etc/settings) in the boot2docker VM:
$ sudo mkdir /settings-share-on-guest
$ sudo mount -t vboxsf settings-share /settings-share-on-guest
Assuming that /settings is the volume declared in the docker container, add -v /settings-share-on-guest:/settings to the docker run command to mount the guest directory /settings-share-on-guest as a data volume.
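For the /var/log part of the question, the same recipe could be repeated with a second share; a sketch (the share and image names are hypothetical):
# on the OSX host:
$ vboxmanage sharedfolder add "boot2docker-vm" --name logs-share --hostpath /var/log --automount
# inside the boot2docker VM:
$ sudo mkdir /logs-share-on-guest
$ sudo mount -t vboxsf logs-share /logs-share-on-guest
$ docker run -d -v /settings-share-on-guest:/settings -v /logs-share-on-guest:/var/log myimage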
Works on Windows, not tested on OSX but should work.
