I am a beginner with Docker and I am using a Windows machine, but I have a problem mounting files using volumes. The documentation says the following about mounting files on OS X and Windows:
Official Docker docs:
Note: If you are using Docker Machine on Mac or Windows, your Docker daemon only has limited access to your OS X/Windows filesystem. Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory - and so you can mount files or directories using docker run -v /Users/<path>:/<container path> ... (OS X) or docker run -v /c/Users/<path>:/<container path> ... (Windows). All other paths come from your virtual machine’s filesystem.
I have a small nginx Dockerfile:
FROM centos:6.6
MAINTAINER afym
ENV WEBPORT 80
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install nginx; yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
VOLUME /usr/share/nginx/html
EXPOSE $WEBPORT
CMD [ "/usr/sbin/nginx" ]
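For completeness, the ng1 image used in the commands below was presumably built from this Dockerfile with something like the following (the tag name ng1 is taken from the run commands; the build context path is an assumption):
# Build the image from the directory containing the Dockerfile, tagging it ng1
docker build -t ng1 .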
Creating a simple container
docker run -d --name simple -p 8082:80 ng1
8875448c01a4787f1ffe4c4c5c492efb039e452eff957391ac52a08915e18d66
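To check that this container is reachable, you can ask Docker Machine for the VM's IP address and request the published port (a sketch; assumes the Toolbox machine is named default):
# Find the VM's IP and hit the mapped port 8082
docker-machine ip default
curl http://$(docker-machine ip default):8082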
Creating a container with a volume
My windows host directory
Creating the docker container with -v option
docker run -d --name simple2 -v /c/Users/src:/usr/share/nginx/html -p 8082:80 ng1
invalid value "C:\\Users\\src;C:\\Program Files\\Git\\usr\\share\\nginx\\html"
for flag -v: bad mount mode specified
: \Program Files\Git\usr\share\nginx\html
See 'C:\Program Files\Docker Toolbox\docker.exe run --help'.
Inspecting the ng1 image
docker inspect ng1
What am I doing wrong when creating a Docker container with a volume?
Thanks.
Try to run it with an additional / for the volume, like:
docker run -d --name simple2 -v /c/Users/src://usr/share/nginx/html -p 8082:80 ng1
Or even with an extra / for the host path as well:
docker run -d --name simple2 -v //c/Users/src://usr/share/nginx/html -p 8082:80 ng1
Due to this issue:
This is something that the MSYS environment does to map POSIX paths to Windows paths before passing them to executables.
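If you are running the command from Git Bash, another commonly suggested workaround is to disable that path conversion for a single invocation (an assumption; this environment variable is honored by the MSYS2 runtime shipped with Git for Windows):
# Prevent MSYS from rewriting /c/Users/... into a Windows path
MSYS_NO_PATHCONV=1 docker run -d --name simple2 -v /c/Users/src:/usr/share/nginx/html -p 8082:80 ng1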
As the OP said:
Official Docker docs:
Note: If you are using Docker Machine on Mac or Windows, your Docker
daemon only has limited access to your OS X/Windows filesystem. Docker
Machine tries to auto-share your /Users (OS X) or C:\Users (Windows)
directory - and so you can mount files or directories using
docker run -v /Users/<path>:/<container path> ... (OS X)
or
docker run -v /c/Users/<path>:/<container path> ... (Windows)
But if you want access to other directories, you need to add a new shared folder to the VirtualBox settings (Settings > Shared Folders > Add share).
Add a new share there (this is only possible when the VM is stopped beforehand with docker-machine stop):
path C:\Projects
name c/Projects
autoMount yes
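Equivalently, the share can be added from the command line with VBoxManage (a sketch; assumes the VM is named default and is stopped):
# Add a shared folder named c/Projects backed by C:\Projects, auto-mounted at boot
VBoxManage sharedfolder add default --name "c/Projects" --hostpath "C:\Projects" --automount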
Or edit the VirtualBox configuration file directly:
C:\Users\<username>\.docker\machine\machines\default\default\default.vbox
Add the following line inside <SharedFolders>:
<SharedFolder name="c/Projects" hostPath="\\?\c:\Projects" writable="true" autoMount="true"/>
Restart the machine:
docker-machine stop
docker-machine start
Now it is also possible to mount directories under C:\Projects:
docker run -v //c/Projects/myApp://myApp <myImage>
For anyone using Docker ~> 1.12 who faces this issue: I spent 30 minutes trying to figure it out until I realized you have to explicitly share a drive first via the Docker settings, see:
https://docs.docker.com/docker-for-windows/#/shared-drives
If you're simply looking to access a local drive, the MINGW32 Docker Toolbox terminal puts the root of each drive in /<drive-letter>, so drive C:\ will be at /c/
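For example, a folder on the D: drive would be mounted like this (hypothetical paths and image, shown only to illustrate the /<drive-letter> mapping):
# D:\projects\site on the host becomes /d/projects/site in the Toolbox terminal
docker run --rm -v /d/projects/site:/usr/share/nginx/html -p 8082:80 nginx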
Related
I have Docker Desktop installed on Windows with WSL 2 support. Everything works as expected. When I run my containers with a volume mount, docker run -it --rm -v W:\projects:/projects busybox, I can access all my Windows files inside this folder.
Sadly, the performance isn't that great with Windows shares inside Docker, so I tried to mount a path from my WSL machine.
I was under the impression that Docker runs inside WSL, so I expected the two commands to produce the same output:
docker run -it --rm -v /home/:/myHome busybox ls -l /myHome
wsl docker run -it --rm -v /home/:/myHome busybox ls -l /myHome
but the output using docker is just total 0, whereas the output using wsl docker is my home directory.
Can someone explain to me where this /home directory is (physically, in WSL, on my computer) when I run docker from Windows? And is it possible for plain docker to behave like wsl docker, without symlinks or path modifications, so I can mount my Linux directory inside the container?
If WSL 2 is installed, you can access its file system from Windows at the following path:
\\wsl$
/home won't just work, since it is not physically present in Windows' file system.
You can, however, use /home or any other Linux-based directory if you log in to your WSL distro. Please note that the following command won't mount the volume if you run it from Windows; it should be run only from your WSL distro:
docker run --name mycontainer -v /home:/myhome busybox
To access the /home directory of an Ubuntu-16.04 distro from Windows:
\\wsl$\Ubuntu-16.04\home
You can replace Ubuntu-16.04 with your own distro name and version.
To mount any directory that lives under WSL, ensure that you have turned on the option "Enable integration with my default WSL distro":
https://docs.docker.com/docker-for-windows/wsl/
To mount a WSL directory from Windows as a volume, provide the host path in the following format:
docker run --name mycontainer -v \\wsl$\Ubuntu-16.04\home:/myHome busybox
Basically, the behavior of docker run -v depends on the environment it is executed from, i.e. Windows or WSL.
Docker's named volumes are stored at the following path if you have enabled WSL 2 for Docker but don't want to use your distro's file system:
\\wsl$\docker-desktop-data\version-pack-data\community\docker\volumes\
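If you want to find where a specific named volume lives, docker volume inspect reports its Mountpoint inside Docker's data directory, which corresponds to that location (myvolume is a placeholder name):
# Show metadata, including the Mountpoint, for a named volume
docker volume inspect myvolume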
I'm trying to mount a directory in /tmp to a directory in a container, namely /test. To do this, I've run:
docker run --rm -it -v /tmp/tmpl42ydir5/:/test alpine:latest ls /test
I expect to see a few files when I do this, but instead I see nothing at all.
I tried moving the folder into my home directory and running again:
docker run --rm -it -v /home/theolodus/tmpl42ydir5/:/test alpine:latest ls /test
at which point I see the expected output. This makes me think I have misconfigured something and/or the permissions have bitten me. Have I missed a step in installing Docker? I did it via sudo snap install docker, and then configured Docker to let me run as non-root by adding myself to the docker group. Running as root doesn't help...
Host machine is Ubuntu 20.04, docker version is 19.03.11
When running docker as a snap, all files that Docker uses, such as Dockerfiles, need to be in $HOME.
Ref: https://snapcraft.io/docker
The /tmp filesystem simply isn't accessible to the docker engine when it's running within the snap isolation. You can install docker directly on Ubuntu from the upstream Docker repositories for a more traditional behavior.
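A minimal sketch of that switch, assuming you want to remove the snap and use Docker's convenience install script (which configures the upstream repositories):
# Remove the snap-confined engine, then install from Docker's upstream packages
sudo snap remove docker
curl -fsSL https://get.docker.com -o get-docker.sh
sudo sh get-docker.sh
sudo usermod -aG docker $USER   # log out and back in so the group change takes effect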
I am running the latest macOS (Sierra) with Docker and Kitematic installed. I am also using Virtualbox for emulation.
I want to use the uwsgi-nginx-flask image, but I have no idea how I can make the Python files and the nginx directory inside my container accessible from outside the virtual machine.
I haven't found anything about that on the website either.
Folders can be mapped and mounted between the host and containers by using the -v flag at runtime.
$ docker run -it -v /host/directory:/container/directory imagename:tag
You can alternatively use docker cp to copy files into and out of the container. For example:
$ docker cp /path/to/file ContainerName:/path/inside/container
or
$ docker cp ContainerName:/path/inside/container/file .
You can also mount a host directory into the Docker container, which will then be shared between the host and the container:
docker run --name container_image -d -v ~/host_dir:/container_dir docker_image
I am starting with Docker on Windows and I am trying to use volumes to manage data in containers.
My host environment is:
Windows 8.1
Docker Toolbox 1.8.
VirtualBox 5.0.6
I've created an nginx image using the following Dockerfile.
Dockerfile
FROM centos:6.6
MAINTAINER afym
ENV WEBPORT 80
RUN yum -y update; yum clean all
RUN yum -y install epel-release; yum clean all
RUN yum -y install nginx; yum clean all
RUN echo "daemon off;" >> /etc/nginx/nginx.conf
VOLUME /usr/share/nginx/html
EXPOSE $WEBPORT
CMD [ "/usr/sbin/nginx" ]
I've created an nginx container using the following command.
docker run -d --name nge -v //c/Users/src:/usr/share/nginx/html -p 8082:80 ng1
b738fef9cc4d135416a8cca4caf869acf944319b7c3c61129b11f37f9d891598
Then I go to my browser and I can see the web page:
However, when I make a change to my index.html file, it doesn't refresh in the browser.
Editing my file
On my browser (ctrl + f5)
I went to the VirtualBox machine to check whether my shared folders configuration is OK.
Then I inspected my container with the following command.
docker inspect ng1
Docker inspect
What is happening with the volumes? Why can't I see my changes?
After a couple of days I found the solution.
First of all, Docker on Windows (and even on Mac) uses a boot2docker instance running in VirtualBox.
Diagrams
On Mac
On Windows
Next, the official Docker documentation says:
docker volume
Docker Machine tries to auto-share your /Users (OS X) or C:\Users (Windows) directory
However, after finding a solution I decided to change the default c/Users to another path, just to keep things organized. With this in mind I did the following steps:
Define your own workspace directory. In my case it is /e/arquitectura (optional; if you want, you can use the default path, which is /c/Users).
Verify the configuration on the virtual machine (in the default machine go to Configuration > Shared Folders).
Connect to the default machine and mount the directory using the alias name:
sudo mount -t vboxsf alias-name-virtualbox some-path-in-boot2docker
# In my case (boot2docker instance)
$ cd
$ mkdir arquitectura
$ sudo mount -t vboxsf arquitectura /arquitectura
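Note that a mount made this way does not survive a restart of the boot2docker VM. A common approach (an assumption based on boot2docker's customization mechanism) is to repeat it in /var/lib/boot2docker/bootlocal.sh, which boot2docker runs at boot:
# Inside the boot2docker VM: re-create the mount point and mount on every boot
sudo sh -c 'printf "mkdir -p /arquitectura\nmount -t vboxsf arquitectura /arquitectura\n" >> /var/lib/boot2docker/bootlocal.sh'
sudo chmod +x /var/lib/boot2docker/bootlocal.sh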
Finally, create a new container, or restart an existing one if you haven't changed the c/Users path:
# In my case (docker client)
$ docker run -d --name nge -v //arquitectura/src:/usr/share/nginx/html -p 8081:80 ng1
Now it works.
I am new to Docker, and per the Dockerfile documentation, specifying a host volume mapping there is not allowed, for portability reasons. That is fine, but is there a way to map a host volume (I am on a Mac, so say my home dir /Users/bsr to /data of an Ubuntu image) to a Linux container? The docker volume documentation only talks about docker run, and I am not sure how to add a volume after creating the container.
http://docs.docker.com/userguide/dockervolumes/
On Linux you can simply mount a directory of your host system into a Docker container by passing
-v /path/to/host/directory:/path/to/container/directory
to the docker run command.
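For example (hypothetical paths and image, just to show the shape of the option):
# Mount the host's /home/user/www at /usr/share/nginx/html inside the container
docker run -d -v /home/user/www:/usr/share/nginx/html -p 8080:80 nginx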
You can also see it here in the documentation: https://docs.docker.com/userguide/dockervolumes/#mount-a-host-directory-as-a-data-volume
If you are using boot2docker, things are more complicated. The problem is that boot2docker runs a small Linux VM to start Docker, so if you mount the volume as described above, you will actually mount a directory of that small Linux VM.
A workaround for this, using a Samba share, is described in the README on the boot2docker GitHub page:
https://github.com/boot2docker/boot2docker#folder-sharing
The following worked, with the help of #sciutand:
git clone https://github.com/boot2docker/boot2docker.git
cd boot2docker/
docker build -t my-boot2docker-img .
docker run --rm my-boot2docker-img > boot2docker.iso
boot2docker stop
mv ~/.boot2docker/boot2docker.iso ~/.boot2docker/boot2docker.iso.backup
mv boot2docker.iso ~/.boot2docker/boot2docker.iso
VBoxManage sharedfolder add boot2docker-vm -name /Users -hostpath /Users
boot2docker up
docker run -d -P --name web ubuntu
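With the /Users shared folder in place, the mapping the question asks about (/Users/bsr on the Mac to /data in an Ubuntu container) should then work as a plain bind mount; a sketch:
# Bind-mount the Mac home directory into the container at /data
docker run -d -P --name web -v /Users/bsr:/data ubuntu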