docker-machine and Vagrant (with regard to mounting volumes) - docker

Say I have a VirtualBox virtual machine provisioned through Vagrant. I then provision it with docker-machine - so far so good: I can docker-machine ssh into the box and list it fine with docker-machine ls.
In the past, when not yet using docker-machine, my usual workflow would involve SSHing into the VirtualBox VM, installing Docker and spinning up my containers.
As far as I understand, this is no longer needed, as I can control Docker containers within the VM through docker-machine (and docker itself) from outside the VM (essentially from my Windows dev machine).
Question: how can I mount directories from inside the VM into the container when I am running the docker command from outside the VM?
Example to further clarify:
1) Old approach: SSH into the VirtualBox VM and run
docker run -i -t --net=try-net \
--name XXXX \
-v ${PWD}/xxxx/yyyy.py:/zzzzz/xxxx/yyyy.py \
-d me/image
2) docker-machine approach: I point the docker-machine env at the box. Now how do I reference a folder in the VM from outside the box? Is this even possible?
From my Windows host, in a Linux-like shell:
docker run -v /c/x/y/z:/home --name postgres3 -d postgres:9.5
gets me:
c:\Program Files\Docker\Docker\Resources\bin\docker.exe: Error response from daemon: Invalid bind mount spec "c:\x\y\z\;C:\Program Files (x86)\Git\home": invalid mode: \Program Files (x86)\Git\home.

If you spin up containers using a docker-toolbox install, the VMs are pre-configured to share the /Users folder from the host into the VM, which can then be used by containers.
Since you're doing this manually with your own Vagrant install, you'll need to share the folders yourself. This question should walk you through the steps to share a folder from the parent OS into the VM, which can then be used by Docker containers you spin up with docker-machine.
Edit: with the parent OS folder synced to the VM, any containers you run inside the VM can just mount volumes from there. docker-machine isn't really a factor; it just points the docker CLI at the selected Docker host. The docker CLI call would look like:
docker run -v /path/on/vm:/path/in/container image
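As a minimal sketch of the full chain, assuming a VirtualBox VM named default and a host folder C:\x\y\z shared under the name xyz (all names are illustrative; the VM may need to be powered off when adding the share, or add --transient):
# On the Windows host: share the folder into the VM (or use a Vagrant synced_folder)
VBoxManage sharedfolder add default --name xyz --hostpath "C:\x\y\z" --automount
# Inside the VM (docker-machine ssh default): mount the share if it isn't auto-mounted
sudo mkdir -p /mnt/xyz
sudo mount -t vboxsf xyz /mnt/xyz
# Back on the host, with the docker env pointed at the VM, reference the VM-side path
docker run -v /mnt/xyz:/home --name postgres3 -d postgres:9.5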

Related

Docker bind mount permissions - unexpected mounting as root:root

I have a directory /home/foo/mydir owned by foo:foo (uid=1040) that I bind mount in the alpine docker image as such:
docker run -it --rm -v /home/foo/mydir:/tmp/mydir --user 1040 alpine
but when I check the directory in the container, it is owned by root:root. Am I crazy? I thought Docker passed through file ownership when mounting in a container? Is there any way to retain the permissions (i.e. have mydir owned by foo:foo in the container) without chown'ing it in the container?
I have two Ubuntu Jammy machines and this issue happened on one machine but not the other. I finally found the cause and the solution.
Apparently the issue is caused by Docker Desktop. On the first machine I had only installed the Docker Engine. The second machine had Docker Desktop installed, which runs a virtual machine, and your containers run inside that virtual machine. In that case you can't just mount a host directory straight into a container, because it first has to be mounted into the virtual machine.
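A quick way to check which backend your CLI is talking to (a hedged diagnostic; exact output varies by install):
docker context ls
docker info --format '{{.OperatingSystem}}'
# "Docker Desktop" here means the Desktop VM is in play; a plain distro name means the native engine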
So the solution was simply to remove Docker completely and then install only the Docker Engine (https://docs.docker.com/engine/install/).
Based on my support enquiry here:
https://forums.docker.com/t/bind-mount-permissions-unexpected-mounting-as-root-root/129328?u=swpppp
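With only the engine installed, the original command should behave as expected; a quick check (using uid 1040 from the question):
docker run --rm -v /home/foo/mydir:/tmp/mydir --user 1040 alpine ls -lnd /tmp/mydir
# expect the directory to be owned by 1040 rather than 0:0 (root)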

How to Attach Network Directory to Docker Container - Windows 7 Host

I'm testing Docker running on my Windows 7 PC. I can mount directories under C:\Users to containers without issue with e.g.
docker run --rm -it -v //c/Users/someuser/:/data/ alpine ash
but when I try to attach a networked location like //server1/data with e.g.
docker run --rm -it -v //server1/data/:/data/ alpine ash
the /data directory in the container appears empty. How do I pass a directory not under C:\Users\ to my Docker containers?
Because my PC was running Windows 7, I'd installed Docker Toolbox, which uses VirtualBox instead of Hyper-V. My understanding is that this means Docker is running inside a VM on my system, so that VM needs to have access to any data I intend to pass to Docker.
To attach network directories (or anything local outside C:\Users) I needed to add them as shared folders in VirtualBox.
VM (default in my case) => Settings => Shared Folders => +
After navigating the file explorer and adding //server1/data to the list of folders shared with VM 'default', I was able to pass it to the container as a volume using the second command outlined in my original question.
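The same share can also be added from the command line; this sketch assumes the Toolbox VM is named default and calls the share data (illustrative names; VirtualBox may require the UNC path to be mapped to a drive letter first):
VBoxManage sharedfolder add default --name data --hostpath "\\server1\data" --automount
# inside the VM, mount it manually if it doesn't auto-mount:
docker-machine ssh default "sudo mkdir -p /data && sudo mount -t vboxsf data /data"
docker run --rm -it -v /data:/data/ alpine ash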

How can I access a shell on the VM Linux host when using the Docker Windows Beta

I have set up Docker for Windows (Hyper-V beta) on my laptop.
My intention is to experiment with some setups for containers I intend to install on my real server later. I am fairly new to Docker (but know the basics), so I wanted to experiment with volumes and volume images a bit.
However, all anonymous volumes end up on the virtual Linux host. I would like to access the filesystem of that host directly, not from within a container.
I cannot access it easily from within a container due to (well-founded) security constraints. Neither can I find a way to access it from the Windows prompt.
(Using Docker for Windows version 1.12.0-beta21)
I know that it is possible to mount volumes using the C share made by Docker for Windows, but that raises the complexity for me. My intent is to follow Docker tutorials unmodified and inspect the results in the host filesystem, preferably through a (bash) shell in the host VM or with Windows file access into the virtual machine.
Later on I would also like to copy content into the VM's volumes, although that could be solved using a volume against the C drive.
After some research of my own, I have arrived at the following technique for creating a privileged container that works as if it were the Linux root host. This is the best I have been able to pinpoint so far.
docker run --rm -it \
  --net=host --ipc=host --uts=host --pid=host \
  --security-opt=seccomp=unconfined --privileged \
  -v /:/host \
  alpine /bin/sh
docker-machine will allow you to SSH to the default machine by typing:
docker-machine ssh
You'll be logged into the VM that is running docker.
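Once inside the privileged container above, the VM's filesystem sits under /host. As a hedged illustration (exact paths can vary by version), Docker's volume data usually lives under the VM's /var/lib/docker:
# list the VM-side volume storage from inside the privileged container
ls /host/var/lib/docker/volumes
# or act as the VM itself:
chroot /host /bin/sh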

How is docker able to mount a volume from docker client into a docker container running on docker host?

I am using docker toolbox on Mac. The setup looks like:
docker host - Boot2Docker VirtualBox VM running on Mac
docker client - Mac
I am using the following command - docker run -it -v $PWD/dir_on_docker_client:/dir_inside_container ubuntu:14.04 /bin/bash - to run a container with a volume mount. I wonder: how is Docker able to mount a volume from the docker client (in this case, the Mac) into a docker container running on the docker host (in this case, the VM running on the Mac)?
The toolbox VM includes a shared directory from the client: /c/Users (C:\Users) on Windows and /Users on Mac.
Directories in these folders, on the client, can be added as volumes in a container.
Note though that if you add, for example, /tmp as a volume, it will be the toolbox VM's /tmp.
The main problem is that VirtualBox shares only your home folder with the docker machine, so at the moment you can only share content inside that directory. It's inconvenient, but the only way I have found to work around this is with the bootlocal.sh file: you can write this file inside your docker machine to mount additional directories after boot.
https://github.com/boot2docker/boot2docker/blob/master/doc/FAQ.md#local-customisation-with-persistent-partition
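A hedged sketch of such a file (the share name extra and the mount point are illustrative, and assume the folder was already shared with the VM in VirtualBox):
#!/bin/sh
# /var/lib/boot2docker/bootlocal.sh - executed at the end of each VM boot
mkdir -p /mnt/extra
mount -t vboxsf -o uid=1000,gid=50 extra /mnt/extra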
Yesterday at DockerCon they announced a public beta of Docker for Mac. I think you can replace docker-machine with this tool; it provides the best experience with Docker on macOS, and it resolves this problem:
https://www.docker.com/products/docker

docker run python from container

I took over a project which requires the use of Docker to set up the development environment. The project wiki is primarily written for use with CoreOS, and one of the setup steps involved running a Python script.
I'm using boot2docker and realised that there's no Python pre-installed in the Tiny Core Linux image. However, the image that I've pulled from the project repository comes with Python 2.7.
How do I use the Python interpreter from the container without having to type docker exec every time?
Also, how do I access the project code in the boot2docker vm (not docker) instance locally so that I can do development on an IDE?
How do I use the Python interpreter from the container without having to type docker exec every time?
What about opening a shell in that container instead?
docker exec -it <your container id> /bin/bash -l
and then from there use python.
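If you don't want a shell at all, a small hedged convenience is to wrap the exec in an alias (the container id/name is whatever yours is):
alias cpython='docker exec -it <your container id> python'
cpython -c 'print("hello from the container")'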
Also, how do I access the project code in the boot2docker vm (not docker) instance locally so that I can do development on an IDE?
I'm not using boot2docker myself, but judging from this note, it can be done provided the files on your host live under /Users (OS X) or C:\Users (Windows):
Note: If you are using Boot2Docker, your Docker daemon only has limited access to your OSX/Windows filesystem. Boot2Docker tries to auto-share your /Users (OSX) or C:\Users (Windows) directory - and so you can mount files or directories using docker run -v /Users/<path>:/<container path> ... (OSX) or docker run -v /c/Users/<path>:/<container path> ... (Windows). All other paths come from the Boot2Docker virtual machine's filesystem.
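A concrete (illustrative) example for the project-code question, assuming the code lives under your home directory on the Mac and the project image is called project-image:
docker run -it -v /Users/me/project:/code project-image /bin/bash
Edits made in your IDE on the Mac then show up live at /code inside the container.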
