running docker-compose on a docker-machine - docker

I'm trying to deploy two services on a single ec2 instance with docker-machine and docker-compose.
Here's what I'm doing:
docker-machine create --driver amazonec2 --engine-install-url=https://web.archive.org/web/20170623081500/https://get.docker.com mymachine
docker-machine ssh mymachine -- mkdir -p /home/ubuntu/myapp
git clone https://github.com/myapp/service1.git
docker-machine scp -r ./service1 mymachine:/home/ubuntu/myapp/
rm -rf ./service1
git clone https://github.com/myapp/service2.git
docker-machine scp -r ./service2 mymachine:/home/ubuntu/myapp/
rm -rf ./service2
docker-machine env mymachine
# export DOCKER_TLS_VERIFY="1"
# export DOCKER_HOST="something"
# export DOCKER_CERT_PATH="something"
# export DOCKER_MACHINE_NAME="mymachine"
eval $(docker-machine env mymachine)
docker-machine active
# mymachine
docker-compose -f ./docker-compose-prod.yml up -d
I get this error: build path /home/ubuntu/myapp/service1 either does not exist, is not accessible, or is not a valid URL.
relevant parts of docker-compose-prod.yml:
version: '3'
services:
  service1:
    build: /home/ubuntu/myapp/service1
  service2:
    build: /home/ubuntu/myapp/service2
The path is fine when I check through ssh. It seems like docker-compose is still trying to work on my local machine; it's happy when I provide a build path that exists locally. Docker itself executes commands on the remote machine.
How do I get docker-compose to run on the remote docker-machine?
I'm new to this, so hopefully I'm missing something trivial.
Thanks for the help!

A docker build (including docker-compose build) involves a "build context". This context is all of the files you select to send from the client to the docker engine, including the Dockerfile, to perform the build. You can remove files from this context with a .dockerignore file.
When you run a docker build /home/ubuntu/myapp/service1 or in your case, include that directory in the compose.yml file, you define /home/ubuntu/myapp/service1 as your build context that you send from the client to the docker engine. That engine may be local or a remote node, which in your case is the ec2 instance. From there, everything runs remotely, including any COPY or ADD commands in your Dockerfile that reference this context.
To run your build remotely, you can either leave the context on your local machine, rather than running your rm, or you can ssh into the ec2 instance and run the docker-compose commands locally on that machine (you may need to install docker-compose there, I'm not sure it's included in the default machine image). My preference would typically be the former since it allows easier development on the files used to build your image, and it allows the remote docker machine to be ephemeral.
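A minimal sketch of the first option, assuming docker-compose-prod.yml sits next to the freshly cloned repositories and its build paths are changed to the local clones (paths are illustrative):
git clone https://github.com/myapp/service1.git
git clone https://github.com/myapp/service2.git
# in docker-compose-prod.yml point the services at the local clones, e.g. build: ./service1
eval $(docker-machine env mymachine)
# the ./service1 and ./service2 contexts are sent from the local client to the remote engine
docker-compose -f ./docker-compose-prod.yml up -d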

Related

How to scp files from local machine directly to a docker container on a remote machine (without having to repeatedly copy)?

I'm new to Docker and I want to copy files to/from my local machine directly to a docker container that's on a remote machine without having to scp files from my local to my remote and then using docker cp to copy those files to the container. My container does not have an SSH server installed on it nor do I want to rebuild my image to include it.
I tried following the solution given by the second answer here: How to SSH into Docker?. I ran the following command on my remote machine that hosts Docker:
docker run -d -p 2222:22 -v /var/run/docker.sock:/var/run/docker.sock -e CONTAINER=kind_tu -e AUTH_MECHANISM=noAuth jeroenpeeters/docker-ssh
Where kind_tu is the name of my running container.
On my local machine I then used: ssh -L 2222:localhost:2222 remote_account_name@remote_ip and then scp -P 2222 test_file remote_account_name@remote_ip:/destination/path (I'm also not familiar with port forwarding so I'm not sure if my notation is correct). When doing this, I get the following:
ssh: connect to host remote_ip port 2222: Connection refused
lost connection
Could this be an issue with the firewall since the remote machine is on my school's campus?
In all, I'm not sure if what I'm doing is even remotely correct.
According to your comment replying to David's, here is how to bind-mount the directory for your visualization files into your container:
On the host system create a directory, e.g. mkdir /home/sarah/viz/. Then, mount it to your docker container, using e.g.
docker run -v /home/sarah/viz:/data/viz … kind_tu …
Your viz software inside the kind_tu container should place the files in the directory /data/viz – which then lands in /home/sarah/viz/ on the host system, where you can download them to your local computer with scp or rsync or however you can connect to the remote machine.
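For the download step, a minimal sketch (user, host and local target directory are placeholders):
# copy the generated visualization files from the remote host to the local computer
scp -r remote_account_name@remote_ip:/home/sarah/viz ./viz
# or incrementally, transferring only changed files
rsync -avz remote_account_name@remote_ip:/home/sarah/viz/ ./viz/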
You can also use docker-compose to have a more persistent environment. Write a file docker-compose.yml with the bind-mount and all the other configuration of the kind_tu container:
version: '3'
services:
  kind_tu:
    image: your_viz_software_image:latest
    volumes:
      - /home/sarah/viz:/data/viz:rw
    …
Then, instead of docker run … you can just do docker-compose up -d and everything acts according to the config in the compose-file.

How to run shell script on Host from jenkins docker container?

I know my issue is already discussed in How to run shell script on host from docker container? but I think my issue is a little bit more complicated.
First let me explain my situation. I'm using Jenkins 2.x from a Docker container in a CentOS VM (the host). In Jenkins I created a job which checks out 3 files from SVN (2 shell scripts and 1 .jar file). These files are downloaded into the Jenkins workspace in the Jenkins Docker container and also on the host in a mounted directory like this:
volumes:
  - ${DATA_HOME}/jenkins/data:/var/jenkins_home
One of these scripts is executed by the Jenkins job, and it in turn executes the other script. The second script checks out an SVN directory and does much more.
So I want a new mounted volume so that all results of the executed second script are placed on the host. Connecting to the host over SSH and executing the script there seems fine, but how can I do that?
I hope I explained my issue understandably.
I will answer regarding "Connecting to the host over SSH and executing the script there seems fine, but how can I do that?"
Pass the host machine's IP to your run command.
docker run --name redis --env pass=pass_my --add-host="hostmachine:192.168.1.23" -dit redis
Now,
docker exec -it redis ash
and run this command. This will SSH from the container to the host:
ssh user_name@hostmachine 'ls; bash /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
If you want something without password then set ssh-key in a container or you can also try
sshpass -p $pass ssh user_name@hostmachine 'ls; /home/user_name/Desktop/test.sh; docker run --name db -dit db; docker ps'
Or if you want to run a script that is inside the container, you can also do that: just pass the script to ssh.
sshpass -p $pass ssh user_name@hostmachine < ./ab.sh
Note: $pass is the host's password taken from the ENV, and hostmachine is the host entry we set with --add-host in the run command.
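A minimal sketch of the key-based variant mentioned above, assuming the container has an OpenSSH client with ssh-copy-id available (names follow the example):
# inside the container (docker exec -it redis ash), generate a key without a passphrase
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# copy the public key to the host; this prompts for the password one last time
ssh-copy-id user_name@hostmachine
# from now on the script runs without a password
ssh user_name@hostmachine 'bash /home/user_name/Desktop/test.sh'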
Based on comments on this answer:
We can simply install any SSH plugin (SSH) or (Publish over SSH) and
it will work after providing username/password.
The only thing to watch out for is that host name resolution does not work, so we need to provide an IP address.
As pointed out this is not the best approach, but sometimes in migration from older systems, we need to move one step at a time and this is the easiest step to take.

How to run docker-compose on remote host?

I have a compose file locally. How do I run the bundle of containers on a remote host, like docker-compose up -d with DOCKER_HOST=<some ip>?
After the release of Docker 18.09.0 and the (as of now) upcoming docker-compose v1.23.1 release this will get a whole lot easier. This mentioned Docker release added support for the ssh protocol to the DOCKER_HOST environment variable and the -H argument to docker ... commands respectively. The next docker-compose release will incorporate this feature as well.
First of all, you'll need SSH access to the target machine (which you'll probably need with any approach).
Then, either:
# Re-direct to remote environment.
export DOCKER_HOST="ssh://my-user@remote-host"
# Run your docker-compose commands.
docker-compose pull
docker-compose down
docker-compose up
# All docker-compose commands here will be run on remote-host.
# Switch back to your local environment.
unset DOCKER_HOST
Or, if you prefer, all in one go for one command only:
docker-compose -H "ssh://my-user@remote-host" up
One great thing about this is that all your local environment variables that you might use in your docker-compose.yml file for configuration are available without having to transfer them over to remote-host in some way.
If you don't need to run the containers on your local machine but on a remote machine instead, you can change this in your Docker settings.
On the local machine:
You can control the remote host with the -H parameter:
docker -H tcp://remote:2375 pull ubuntu
To use it with docker-compose, you should add this parameter in /etc/default/docker
On the remote machine
You should configure the daemon to listen on an external address, not only on the Unix socket.
See Bind Docker to another host/port or a Unix socket for more details.
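A minimal sketch of that daemon configuration on current Docker versions, using /etc/docker/daemon.json (exposing 2375 without TLS is insecure, so treat this as an illustration only):
# on the remote machine: keep the local socket and additionally listen on TCP
echo '{ "hosts": ["unix:///var/run/docker.sock", "tcp://0.0.0.0:2375"] }' | sudo tee /etc/docker/daemon.json
# note: on systemd-based distributions the -H flag in docker.service may conflict with "hosts" here
sudo systemctl restart docker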
If you need to run containers on multiple remote hosts, you should configure Docker Swarm.
You can now use docker contexts for this:
docker context create dev --docker "host=ssh://user@remotemachine"
docker-compose --context dev up -d
More info here: https://www.docker.com/blog/how-to-deploy-on-remote-docker-hosts-with-docker-compose/
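As a usage note, the context can also be made the active one, so plain docker commands target the remote host without repeating the flag (same hypothetical remote as above):
docker context use dev       # switch the active context to the remote host
docker ps                    # now lists containers running on remotemachine
docker context use default   # switch back to the local engine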
From the compose documentation
Compose CLI environment variables
DOCKER_HOST
Sets the URL of the docker daemon. As with the Docker client, defaults to unix:///var/run/docker.sock.
so that we can do
export DOCKER_HOST=tcp://192.168.1.2:2375
docker-compose up
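The variable can also be scoped to a single invocation instead of being exported:
DOCKER_HOST=tcp://192.168.1.2:2375 docker-compose up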
Yet another possibility I discovered recently is controlling a remote Docker Unix socket via an SSH tunnel (credits to https://medium.com/@dperny/forwarding-the-docker-socket-over-ssh-e6567cfab160 where I learned about this approach).
Prerequisite
You are able to SSH into the target machine. Passwordless, key-based access is preferred for security and convenience; you can learn how to set this up e.g. here: https://askubuntu.com/questions/46930/how-can-i-set-up-password-less-ssh-login
Besides, some sources mention that forwarding Unix sockets via SSH tunnels is only available starting from OpenSSH v6.7 (run ssh -V to check); I did not try this out on older versions though.
SSH Tunnel
Now, create a new SSH tunnel between a local location and the Docker Unix socket on the remote machine:
ssh -nNT -L $(pwd)/docker.sock:/var/run/docker.sock user@someremote
Alternatively, it is also possible to bind to a local port instead of a file location. Make sure the port is open for connections and not already in use.
ssh -nNT -L localhost:2377:/var/run/docker.sock user@someremote
Re-direct Docker Client
Leave the terminal open and open a second one. In there, make your Docker client talk to the newly created tunnel-socket instead of your local Unix Docker socket.
If you bound to a file location:
export DOCKER_HOST=unix://$(pwd)/docker.sock
If you bound to a local port (example port as used above):
export DOCKER_HOST=localhost:2377
Now, run some Docker commands like docker ps or start a container, pull an image etc. Everything will happen on the remote machine as long as the SSH tunnel is active. In order to run local Docker commands again:
Close the tunnel by hitting Ctrl+C in the first terminal.
If you bound to a file location: Remove the temporary tunnel socket again. Otherwise you will not be able to open the same one again later: rm -f "$(pwd)"/docker.sock
Make your Docker client talk to your local Unix socket again (which is the default if unset): unset DOCKER_HOST
The great thing about this is that you save the hassle of copying docker-compose.yml files and other resources around or setting environment variables on a remote machine (which is difficult).
Non-interactive SSH Tunnel
If you want to use this in a scripting context where an interactive terminal is not possible, there is a way to open and close the SSH tunnel in the background using the SSH ControlMaster and ControlPath options:
# constants
TEMP_DIR="$(mktemp -d -t someprefix_XXXXXX)"
REMOTE_USER=some_user
REMOTE_HOST=some.host
control_socket="${TEMP_DIR}"/control.sock
local_temp_docker_socket="${TEMP_DIR}"/docker.sock
remote_docker_socket="/var/run/docker.sock"
# open the SSH tunnel in the background - this will not fork
# into the background before the tunnel is established and fail otherwise
ssh -f -n -M -N -T \
-o ExitOnForwardFailure=yes \
-S "${control_socket}" \
-L "${local_temp_docker_socket}":"${remote_docker_socket}" \
"${REMOTE_USER}"#"${REMOTE_HOST}"
# re-direct local Docker engine to the remote socket
export DOCKER_HOST="unix://${local_temp_docker_socket}"
# do some business on remote host
docker ps -a
# close the tunnel and clean up
ssh -S "${control_socket}" -O exit "${REMOTE_HOST}"
rm -f "${local_temp_docker_socket}" "${control_socket}"
unset DOCKER_HOST
# do business on localhost again
Given that you are able to log in on the remote machine, another approach to running docker-compose commands on that machine is to use SSH.
Copy your docker-compose.yml file over to the remote host via scp, run the docker-compose commands over SSH, finally clean up by removing the file again.
This could look as follows:
scp ./docker-compose.yml SomeUser@RemoteHost:/tmp/docker-compose.yml
ssh SomeUser@RemoteHost "docker-compose -f /tmp/docker-compose.yml up"
ssh SomeUser@RemoteHost "rm -f /tmp/docker-compose.yml"
You could even make it shorter and omit the sending and removing of the docker-compose.yml file by using the -f - option to docker-compose which will expect the docker-compose.yml file to be piped from stdin. Just pipe its content to the SSH command:
cat docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"
If you use environment variable substitution in your docker-compose.yml file, the above-mentioned command will not replace them with your local values on the remote host and your commands might fail due to the variables being unset. To overcome this, the envsubst utility can be used to replace the variables with your local values in memory before piping the content to the SSH command:
envsubst < docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"
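One caveat: envsubst substitutes every $VARIABLE it finds, including ones that should stay literal for the remote side. The GNU gettext envsubst accepts an explicit list of variables to replace, for example to substitute only a hypothetical $TAG:
envsubst '$TAG' < docker-compose.yml | ssh SomeUser@RemoteHost "docker-compose -f - up"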

docker-compose up not adding code to remote container

I've run through the initial Overview of Docker Compose exactly as written and it works just fine locally with boot2docker. However, if I try to do a docker-compose up on a remote host, it does not add the code to the remote container.
To reproduce:
Run through the initial Overview of Docker Compose exactly as written.
Install Docker Machine and start a Dockerized VM on any cloud provider.
docker-machine create --driver my-favourite-cloud composetest
eval "$(docker-machine env composetest)"
Now that you're working with a remote host, run docker-compose up on the original code.
composetest $ docker-compose up
Redis runs fine but the Flask app does not.
composetest $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
794c90928b97 composetest_web "/bin/sh -c 'python About a minute ago Exited (2) About a minute ago composetest_web_1
2c70bd687dfc redis "/entrypoint.sh redi About a minute ago Up About a minute 6379/tcp composetest_redis_1
What went wrong?
composetest $ docker logs 794c90928b97
python: can't open file 'app.py': [Errno 2] No such file or directory
Can we confirm it?
composetest $ docker-compose run -d web sleep 60
Starting composetest_redis_1...
composetest_web_run_3
composetest $ docker exec -it a4 /bin/bash
root@a4a73c6dd159:/code# ls -a
. ..
Nothing there. Can we fix it?
Comment out volumes in docker-compose.yml
web:
  build: .
  ports:
    - "5000:5000"
  # volumes:
  #   - .:/code
  links:
    - redis
redis:
  image: redis
Then just docker-compose up and it works!
Let's try again on boot2docker.
composetest $ eval "$(boot2docker shellinit)"
composetest $ docker-compose up
Recreating composetest_redis_1...
Recreating composetest_web_1...
Attaching to composetest_redis_1, composetest_web_1
...
The Flask app does work but it has a serious problem. If you change app.py, the Flask dev server doesn't reload and those changes aren't automatically seen. Even if you stop the container and docker-compose up again, the changes still aren't seen. I realize we lose this essential feature because the volume is no longer mounted. But not mounting the volume is the only way I've been able to get docker-compose to work with a remote host. We should be able to get both local and remote hosts to work using the same docker-compose.yml and Dockerfile.
How do I develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml?
Versions:
Docker 1.7.0
Docker Compose 1.3.0
Docker Machine 0.3.0
Compose just does pretty much the same thing as you can do with the regular command line interface under the hood. So your command is roughly equivalent to:
$ docker run --name web -p 5000:5000 -v $(pwd):/code --link redis:redis web
The issue is that the volume is relative to the docker host, not the client. So it will mount the working directory on the remote VM, not the client. In your case, this directory is empty.
If you want to develop interactively with a remote VM, you will have to check out the source and edit the files on the VM.
UPDATE: It seems that you actually want to develop and test locally, then deploy a production version to a remote VM. (Apologies if I still misunderstand). To do this, I suggest you have a separate Compose file for development where you mount the local volume, then rebuild and deploy the image for production. By rebuilding the image, it will pick up the latest version of the code. Mounting a volume in production breaks because you've hidden the code in the image with an empty directory.
It's also worth pointing out that Docker don't advise using Compose in production currently.
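A rough sketch of that dev/prod split, assuming two hypothetical files: docker-compose.yml as the base (build, ports, links, but no host volume) and docker-compose.dev.yml adding the bind mount for local work (layering multiple -f files requires a newer Compose than the 1.3.0 in the question):
# docker-compose.yml     -> web: build, ports, links, no volumes
# docker-compose.dev.yml -> web: volumes: - .:/code
# local development against boot2docker: layer the dev file on top of the base
docker-compose -f docker-compose.yml -f docker-compose.dev.yml up
# remote deployment: rebuild the image so it contains the latest code, then use the base file only
eval "$(docker-machine env composetest)"
docker-compose -f docker-compose.yml build
docker-compose -f docker-compose.yml up -d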
What I'm really looking for is found in this Using Compose in production doc. By Extending services in Compose you're able to develop interactively with a local VM and deploy to a remote VM without having to change the Dockerfile or docker-compose.yml.
I ran into this "can't open file 'app.py'" problem following the Getting Started tutorials; for me it was because I'm running Docker on Windows. I needed to make sure that I'd shared the drive containing my project directory in Docker settings.
Source: see the "Shared Drive" section of Docker Settings in the docs

How to mount local volumes in docker machine

I am trying to use docker-machine with docker-compose. The file docker-compose.yml has definitions as follows:
web:
  build: .
  command: ./run_web.sh
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - db:db
    - rabbitmq:rabbit
    - redis:redis
When running docker-compose up -d all goes well until trying to execute the command and an error is produced:
Cannot start container b58e2dfa503b696417c1c3f49e2714086d4e9999bd71915a53502cb6ef43936d: [8] System error: exec: "./run_web.sh": stat ./run_web.sh: no such file or directory
Local volumes are not mounted to the remote machine. What's the recommended strategy to mount the local volumes containing the web app's code?
Docker-machine automounts the user's directory... But sometimes that just isn't enough.
I don't know about docker 1.6, but in 1.8 you CAN add an additional mount to docker-machine
Add Virtual Machine Mount Point (part 1)
CLI: (Only works when machine is stopped)
VBoxManage sharedfolder add <machine name/id> --name <mount_name> --hostpath <host_dir> --automount
So an example in windows would be
/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe sharedfolder add default --name e --hostpath 'e:\' --automount
GUI: (does NOT require the machine be stopped)
Start "Oracle VM VirtualBox Manager"
Right-Click <machine name> (default)
Settings...
Shared Folders
The Folder+ Icon on the Right (Add Share)
Folder Path: <host dir> (e:)
Folder Name: <mount name> (e)
Check on "Auto-mount" and "Make Permanent" (Read only if you want...) (The auto-mount is sort of pointless currently...)
Mounting in boot2docker (part 2)
Manually mount in boot2docker:
There are various ways to log in: use "Show" in "Oracle VM VirtualBox Manager", or ssh/putty into the machine at the IP address given by docker-machine ip default, etc...
sudo mkdir -p <local_dir>
sudo mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
But this is only good until you restart the machine, and then the mount is lost...
Adding an automount to boot2docker:
While logged into the machine
Edit/create (as root) /mnt/sda1/var/lib/boot2docker/bootlocal.sh, sda1 may be different for you...
Add
mkdir -p <local_dir>
mount -t vboxsf -o defaults,uid=`id -u docker`,gid=`id -g docker` <mount_name> <local_dir>
With these changes, you should have a new mount point. This is one of the few files I could find that is called on boot and is persistent. Until there is a better solution, this should work.
Old method: Less recommended, but left as an alternative
Edit (as root) /mnt/sda1/var/lib/boot2docker/profile, sda1 may be different for you...
Add
add_mount() {
  if ! grep -q "try_mount_share $1 $2" /etc/rc.d/automount-shares ; then
    echo "try_mount_share $1 $2" >> /etc/rc.d/automount-shares
  fi
}
add_mount <local dir> <mount name>
As a last resort, you can take the slightly more tedious alternative, and you can just modify the boot image.
git -c core.autocrlf=false clone https://github.com/boot2docker/boot2docker.git
cd boot2docker
git -c core.autocrlf=false checkout v1.8.1 #or your appropriate version
Edit rootfs/etc/rc.d/automount-shares
Add a try_mount_share <local_dir> <mount_name> line right before the fi at the end. For example
try_mount_share /e e
Just be sure not to set the <local_dir> to anything the OS needs, like /bin, etc...
docker build -t boot2docker . #This will take about an hour the first time :(
docker run --rm boot2docker > boot2docker.iso
Backup the old boot2docker.iso and copy your new one in its place, in ~/.docker/machine/machines/
This does work, it's just long and complicated
docker version 1.8.1, docker-machine version 0.4.0
Also ran into this issue and it looks like local volumes are not mounted when using docker-machine. A hack solution is to
get the current working directory of the docker-machine instance: docker-machine ssh <name> pwd
use a command line tool like rsync to copy folder to remote system
rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:<result_of_pwd_from_1>.
The default pwd is /root so the command above would be rsync -avzhe ssh --progress <name_of_folder> username@remote_ip:/root
NB: you would need to supply the password for the remote system. You can quickly create one by ssh-ing into the remote system and creating a password.
change the volume mount point in your docker-compose.yml file from .:/app to /root/<name_of_folder>:/app
run docker-compose up -d
NB when changes are made locally, don't forget to rerun rsync to push the changes to the remote system.
It's not perfect but it works. There is an ongoing issue at https://github.com/docker/machine/issues/179
Other projects that attempt to solve this include docker-rsync.
At the moment I can't really see any way to mount volumes on machines, so for now the approach would be to somehow copy or sync the files you need into the machine.
There are conversations on how to solve this issue on the docker-machine's github repo. Someone made a pull request implementing scp on docker-machine and it's already merged on master, so it's very likely that the next release will include it.
Since it's not yet released, for now I would recommend that if you have your code hosted on GitHub, you just clone your repo before you run the app:
web:
  build: .
  command: git clone https://github.com/my/repo.git; ./repo/run_web.sh
  volumes:
    - .:/app
  ports:
    - "8000:8000"
  links:
    - db:db
    - rabbitmq:rabbit
    - redis:redis
Update: Looking further I found that the feature is already available in the latest binaries, when you get them you'll be able to copy your local project running a command like this:
docker-machine scp -r . dev:/home/docker/project
Being this the general form:
docker-machine scp [machine:][path] [machine:][path]
So you can copy files from, to and between machines.
Cheers!
Since October 2017 there is a new command for docker-machine that does the trick, but make sure there is nothing in the directory before executing it, otherwise it might get lost:
docker-machine mount <machine-name>:<guest-path> <host-path>
Check the docs for more information: https://docs.docker.com/machine/reference/mount/
PR with the change: https://github.com/docker/machine/pull/4018
If you choose the rsync option with docker-machine, you can combine it with the docker-machine ssh <machinename> command like this:
rsync -rvz --rsh='docker-machine ssh <machinename>' --progress <local_directory_to_sync_to> :<host_directory_to_sync_to>
It uses this command format of rsync, leaving HOST blank:
rsync [OPTION]... SRC [SRC]... [USER@]HOST:DEST
(http://linuxcommand.org/man_pages/rsync1.html)
Finally figured out how to upgrade Windows Docker Toolbox to v1.12.5 and keep my volumes working by adding a shared folder in Oracle VM VirtualBox manager and disabling path conversion. If you have Windows 10+ then you're best to use the newer Docker for Windows.
First, the upgrade pain:
Uninstall VirtualBox first.
Yep that may break stuff in other tools like Android Studio. Thanks Docker :(
Install new version of Docker Toolbox.
Redis Database Example:
redis:
  image: redis:alpine
  container_name: redis
  ports:
    - "6379"
  volumes:
    - "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
run docker-machine stop default - ensure the VM is halted
In Oracle VM VirtualBox Manager ...
Added a shared folder in the default VM via the GUI or the command line
D:\Projects\MyProject\db => /var/db
In docker-compose.yml...
Mapped redis volume as: "/var/db/redis:/data:rw"
In Docker Quickstart Terminal ....
Set COMPOSE_CONVERT_WINDOWS_PATHS=0 (for Toolbox version >= 1.9.0)
run docker-machine start default to restart the VM.
cd D:\Projects\MyProject\
docker-compose up should work now.
Redis now creates its database in D:\Projects\MyProject\db\redis\dump.rdb
Why avoid relative host paths?
I avoided relative host paths for Windows Toolbox as they may introduce invalid '\' chars. It's not as nice as using paths relative to docker-compose.yml but at least my fellow developers can easily do it even if their project folder is elsewhere without having to hack the docker-compose.yml file (bad for SCM).
Original Issue
FYI ... Here is the original error I got when I used nice clean relative paths that used to work just fine for older versions. My volume mapping used to be just "./db/redis:/data:rw"
ERROR: for redis Cannot create container for service redis: Invalid bind mount spec "D:\\Projects\\MyProject\\db\\redis:/data:rw": Invalid volume specification: 'D:\Projects\MyProject\db\redis:/data
This breaks for two reasons ..
It can't access D: drive
Volume paths can't include \ characters
docker-compose adds them and then blames you for it !!
Use COMPOSE_CONVERT_WINDOWS_PATHS=0 to stop this nonsense.
I recommend documenting your additional VM shared folder mapping in your docker-compose.yml file as you may need to uninstall VirtualBox again and reset the shared folder and anyway your fellow devs will love you for it.
All other answers were good for the time but now (Docker Toolbox v18.09.3) all works out of the box. You just need to add a shared folder into VirtualBox VM.
Docker Toolbox automatically adds C:\Users as shared folder /c/Users under virtual linux machine (using Virtual Box shared folders feature), so if your docker-compose.yml file is located somewhere under this path and you mount host machine's directories only under this path - all should work out of the box.
For example:
C:\Users\username\my-project\docker-compose.yml:
...
volumes:
  - .:/app
...
The . path will be automatically converted to absolute path C:\Users\username\my-project and then to /c/Users/username/my-project. And this is exactly how this path is seen from the point of view of linux virtual machine (you can check it: docker-machine ssh and then ls /c/Users/username/my-project). So, the final mount will be /c/Users/username/my-project:/app.
All works transparently for you.
But this doesn't work if your host mount path is not under C:\Users path. For example, if you put the same docker-compose.yml under D:\dev\my-project.
This can be fixed easily though.
Stop the virtual machine (docker-machine stop).
Open Virtual Box GUI, open Settings of Virtual Machine named default, open Shared Folders section and add the new shared folder:
Folder Path: D:\dev
Folder Name: d/dev
Press OK twice and close Virtual Box GUI.
Start the virtual machine (docker-machine start).
That's all. All paths of host machine under D:\dev should work now in docker-compose.yml mounts.
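The same shared folder can also be added from the command line while the machine is stopped, mirroring the VBoxManage example from the earlier answer (paths follow the D:\dev example above):
docker-machine stop default
/c/Program\ Files/Oracle/VirtualBox/VBoxManage.exe sharedfolder add default --name d/dev --hostpath 'D:\dev' --automount
docker-machine start default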
It can be done with a combination of three tools:
docker-machine mount, rsync, inotifywait
TL;DR
Script based on all below is here
Let's say you have your docker-compose.yml and run_web.sh in /home/jdcaballerov/web
Mount the directory on the machine that has the same path as on your host: docker-machine mount machine:/home/jdcaballerov/web /tmp/some_random_dir
Synchronize the mounted directory with the directory on your host: rsync -r /home/jdcaballerov/web /tmp/some_random_dir
Synchronize on every change of files in your directory:
inotifywait -r -m -e close_write --format '%w%f' /home/jdcaballerov/web | while read CHANGED_FILE
do
  rsync /home/jdcaballerov/web /tmp/some_random_dir
done
BE AWARE - there are two directories which have the same path: one is on your local (host) machine, the other is on the docker machine.
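Putting the three tools together, a rough sketch of the kind of script the TL;DR refers to (machine name, paths and rsync options follow the steps above and are illustrative):
#!/usr/bin/env bash
SRC=/home/jdcaballerov/web      # directory on the local (host) machine
DEST=/tmp/some_random_dir       # local mount point for the docker machine's directory
mkdir -p "$DEST"
docker-machine mount machine:/home/jdcaballerov/web "$DEST"   # expose the remote directory locally
rsync -r "$SRC" "$DEST"                                       # initial sync
# re-sync whenever a file inside SRC is written
inotifywait -r -m -e close_write --format '%w%f' "$SRC" | while read -r CHANGED_FILE
do
  rsync -r "$SRC" "$DEST"
done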
I assume the run_web.sh file is in the same directory as your docker-compose.yml file. Then the command should be command: /app/run_web.sh.
Unless the Dockerfile (that you are not disclosing) takes care of putting the run_web.sh file into the Docker image.
To summarize the posts here, attached is an updated script that creates an additional host mount point and automounts it when VirtualBox restarts. The working environment, in brief, is as below:
- Windows 7
- docker-machine.exe version 0.7.0
- VirtualBox 5.0.22
#!/usr/bin/env bash
: ${NAME:=default}
: ${SHARE:=c/Proj}
: ${MOUNT:=/c/Proj}
: ${VBOXMGR:=C:\Program Files\Oracle\VirtualBox\VBoxManage.exe}
SCRIPT=/mnt/sda1/var/lib/boot2docker/bootlocal.sh
## set -x
docker-machine stop $NAME
"$VBOXMGR" sharedfolder add $NAME --name c/Proj --hostpath 'c:\' --automount 2>/dev/null || :
docker-machine start $NAME
docker-machine env $NAME
docker-machine ssh $NAME 'echo "mkdir -p $MOUNT" | sudo tee $SCRIPT'
docker-machine ssh $NAME 'echo "sudo mount -t vboxsf -o rw,user $SHARE $MOUNT" | sudo tee -a $SCRIPT'
docker-machine ssh $NAME 'sudo chmod +x /mnt/sda1/var/lib/boot2docker/bootlocal.sh'
docker-machine ssh $NAME 'sudo /mnt/sda1/var/lib/boot2docker/bootlocal.sh'
#docker-machine ssh $NAME 'ls $MOUNT'
I am using docker-machine 0.12.2 with the virtualbox driver on my local machine. I found that there is a directory /hosthome/$(user name) from where you have access to local files.
Just thought I'd mention I've been using 18.03.1-ce-win65 (17513) on Windows 10, and I noticed that if you've previously shared a drive and cached the credentials, once you change your password Docker will start mounting the volumes within containers as blank.
It gives no indication that what is actually happening is that it is now failing to access the share with the old cached credentials.
The solution in this scenario is to reset the credentials either through the UI (Settings -> Shared Drives) or to disable and then re-enable drive sharing and enter the new password.
It would be useful if docker-compose gave an error in these situations.
