On *nix systems, it is possible to bind-mount the Docker socket from the host machine into a container by doing something like this:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
Is there an equivalent way to do this when running docker on a windows host?
I tried various combinations like:
docker run -v tcp://127.0.0.1:2376:/var/run/docker.sock ...
docker run -v "tcp://127.0.0.1:2376":/var/run/docker.sock ...
docker run -v localhost:2376:/var/run/docker.sock ...
None of these worked.
For Docker for Windows, the following seems to work:
-v //var/run/docker.sock:/var/run/docker.sock
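A quick way to check that the socket is reachable from inside a container is to run the Docker CLI image against it (a minimal sketch; adjust the image tag to taste):
docker run --rm -v //var/run/docker.sock:/var/run/docker.sock docker docker ps
If the mount works, this lists the containers running on the host.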
As the Docker documentation states:
If you are using Docker Machine on Mac or Windows, your Engine daemon
has only limited access to your OS X or Windows filesystem. Docker
Machine tries to auto-share your /Users (OS X) or C:\Users (Windows)
directory. So, you can mount files or directories on OS X using:
docker run -v /Users/<path>:/<container path> ...
On Windows, mount directories using:
docker run -v /c/Users/<path>:/<container path> ...
All other paths come from your virtual machine’s filesystem, so if you
want to make some other host folder available for sharing, you need to
do additional work. In the case of VirtualBox you need to make the
host folder available as a shared folder in VirtualBox. Then, you can
mount it using the Docker -v flag.
With all that being said, you can still use:
docker run -v /var/run/docker.sock:/var/run/docker.sock ...
The first /var/run/docker.sock refers to the same path in your boot2docker virtual machine.
For example, when I run my own Jenkins image using the following command on a Windows machine:
$ docker run -dP -v /var/run/docker.sock:/var/run/docker.sock alidehghanig/jenkins
I can still talk to the Docker daemon on the host machine using the typical docker commands. For example, when I run docker ps in the Jenkins container, I can see running containers on the host machine:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
65311731f446 jen... "/bi.." 10... Up 10.. 0.0.0.0:.. jenkins
Just to top off the answers provided earlier:
When using docker-compose, one must set COMPOSE_CONVERT_WINDOWS_PATHS=1 by either:
1) creating a .env file at the same location as the project's docker-compose.yml file, or
2) setting COMPOSE_CONVERT_WINDOWS_PATHS=1 in the CLI
before running the docker-compose up command.
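For example, either of these works (a minimal sketch; COMPOSE_CONVERT_WINDOWS_PATHS is a documented Compose setting):
# contents of .env, next to docker-compose.yml
COMPOSE_CONVERT_WINDOWS_PATHS=1
or, in a cmd shell before starting:
set COMPOSE_CONVERT_WINDOWS_PATHS=1
docker-compose up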
This never worked for me on Windows 10, even with a Linux container:
-v /var/run/docker.sock:/var/run/docker.sock
But this did:
-v /usr/local/bin/docker:/usr/bin/docker
Solution taken from this issue I opened: https://github.com/docker/for-win/issues/4642
Some containers (e.g. Portainer) work fine with -v /var/run/docker.sock:/var/run/docker.sock
The Jenkins container required --user root on the docker run command to successfully access the Docker UNIX socket (using Docker Desktop on Windows).
By default, a unix domain socket (or IPC socket) is created at
/var/run/docker.sock, requiring either root permission, or docker
group membership.
Source: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
--group-add docker had no effect using Docker Desktop on Windows.
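Putting it together, a sketch of such a run command (the image tag follows the other answers here; adjust ports and volumes to your setup):
docker run -u root -p 8080:8080 -v //var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:lts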
To bind to a Windows container you need to use named pipes:
-v \\.\pipe\docker_engine:\\.\pipe\docker_engine
What worked for me on Windows 10 was:
-v "\\.\pipe\docker_engine:\\.\pipe\docker_engine"
Keep in mind that I was trying to access Portainer, which I highly recommend; it's a great app. For that I use this command:
docker run -d -p 9000:9000 -v "\\.\pipe\docker_engine:\\.\pipe\docker_engine" portainer/portainer
And then just go to:
http://localhost:9000/
I never made it work myself, but I know it works for Windows containers on Docker for Windows Server 2016 using this technique:
https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option
At our shop we actually have VSTS agents in Windows containers that use the host Docker like that:
# listen using the default unix socket, and on 2 specific IP addresses on this host.
$ sudo dockerd -H unix:///var/run/docker.sock -H tcp://192.168.59.106 -H tcp://10.10.10.2
# then you can execute remote docker commands (from container to host for example)
$ docker -H tcp://0.0.0.0:2375 ps
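Inside a container (or on any remote client) you can then either pass -H on every call or export DOCKER_HOST once; the IP below is just the example address from the dockerd line above:
export DOCKER_HOST=tcp://192.168.59.106:2375
docker ps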
This is what actually made it work for me:
docker run -p 8080:8080 -p 50000:50000 -v D:\docker-data\jenkins:/var/jenkins_home -v /usr/local/bin/docker:/usr/bin/docker -v /var/run/docker.sock:/var/run/docker.sock -u root jenkins/jenkins:lts
This works well:
docker run -it -v //var/run/docker.sock:/var/run/docker.sock -v /usr/local/bin/docker:/usr/bin/docker ubuntu
By default docker uses /var/lib/docker/volumes/ for any started container.
Is there any way to launch a new container and have it consume all the required disk on a different specified path on the host?
Basically, have the root volume be different.
For a specific container only, the simplest way, I think, would be to use Docker volumes: create a Docker volume and then attach it to the container. The process running in the container then writes to that volume, so it uses the disk you would like to use.
More information is on the following webpage:
https://docs.docker.com/storage/volumes/
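One way to back a volume with a specific host path is the local driver's bind options; a sketch, assuming /mnt/bigdisk/data already exists on the host:
docker volume create --driver local --opt type=none --opt o=bind --opt device=/mnt/bigdisk/data myvolume
docker run -it --rm -v myvolume:/app/data ubuntu bash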
You can define the volume path:
docker run -it --rm -v $(pwd):/MyVolume ubuntu bash
This command will use the current folder where you execute the command from.
In the container you'll find your file under /MyVolume.
jens@DESKTOP:~$ docker run -it --rm -v $PWD:/MyVolume ubuntu bash
root@71969d68099e:/# cd /MyVolume/
root@71969d68099e:/MyVolume# ls
But you can define any path:
docker run -it --rm -v /home/someuser/somevolumepath:/MyVolume ubuntu bash
Almost the same is available in Docker Compose:
ports:
  - "80:8080"
  - "443:443"
volumes:
  - $HOME/userhome/https_cert:/etc/nginx/certs
Jens
I'm trying to use the Docker client from inside WSL, connecting to the Docker engine on Windows. I've exposed the Docker engine on Windows on port 2375, and after setting the DOCKER_HOST environment variable in WSL, I can verify this works by running docker ps.
The problem comes when I attempt to mount directories into Docker containers from WSL. For example:
I create a directory and file inside my home folder on WSL (mkdir ~/dockertest && touch ~/dockertest/example.txt)
ls ~/dockertest shows my file has been created
I now start a docker container, mounting my docker test folder (docker run -it --rm -v ~/dockertest:/data alpine ls /data)
I would expect to see 'example.txt' in the docker container, but this does not seem to be happening.
Any ideas what I might be missing?
There are great instructions for Docker setup in WSL at https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly#ensure-volume-mounts-work - solved most of my problems. The biggest trick for me was with bind-mounted directories; you have to use a path in WSL that the Docker daemon will be able to translate, for example /c/Users/rfay/myproject.
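For example, with a path in that translated form (reusing the hypothetical project path above):
docker run -it --rm -v /c/Users/rfay/myproject:/data alpine ls /data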
I don't think you need to change the mount point as the link suggests. Rather, if you use pwd and sed in combination, you should get the effect you need.
docker run -it -v $(pwd | sed 's/^\/mnt//'):/var/folder -w "/var/folder" alpine
pwd returns the working folder in the format '/mnt/c/code/folder'. Pipe this to sed and replace '/mnt' with empty string will leave you with a path such as '/c/code/folder' which is correct for docker for windows.
Anyone stumbling here over this issue, follow this: Docker Desktop WSL2 Backend, and make sure you are running version 2 of WSL. Check in PowerShell:
> wsl -l -v
  NAME                   STATE           VERSION
* docker-desktop         Running         2
  Ubuntu                 Running         2
  docker-desktop-data    Running         2
If your Ubuntu VERSION doesn't say 2, you need to update it according to the guide above.
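The upgrade itself should be a single PowerShell command (assuming the distro is named Ubuntu, as in the listing above):
> wsl --set-version Ubuntu 2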
After that, you will be able to mount your Linux directory to the Docker Container directly like:
docker run -v ~/my-project:/sources <my-image>
Specific to WSL 1
Ran into the same issue. One thing to remember is that the docker run command does not execute a container then and there in your command shell. It sends the run arguments to the Docker daemon, which does not interpret WSL paths correctly. Hence, you need to pass a Windows-formatted path in quotes, with backslashes escaped.
Your Windows path is
\\wsl$\Ubuntu\home\username\dockertest
The docker command after escaping will probably look like this:
docker run -it --rm -v "\\\\wsl\$\\Ubuntu\\home\\username\\dockertest":/data alpine ls /data
Try:
docker run -v /:{path} <image>
Hope this helps.
I would like my centos7 container to log messages to /var/log/messages
[root@gen-r-vrt-057-009 ~]# docker exec -it rsyslog_base_centos7 "/bin/bash"
[root@gen-r-vrt-057-009 /]# logger "lior"
[root@gen-r-vrt-057-009 /]# cat /var/log/messages
[root@gen-r-vrt-057-009 /]#
I installed rsyslog and tried running the container in several ways:
docker run -dit --name rsyslog_base_centos7 --network host --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
docker run -dit --name rsyslog_base_centos7 --log-driver=syslog --network host --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
docker run -dit --name rsyslog_base_centos7 --log-driver=syslog --network host -v /dev/log:/dev/log --privileged rsyslog/rsyslog_base_centos7:latest /usr/sbin/init
But nothing seems to do the trick.
Container OS and Docker version:
[root@gen-r-vrt-057-009 /]# cat /etc/redhat-release
CentOS Linux release 7.4.1708 (Core)
[root@gen-r-vrt-057-009 /]# exit
[root@gen-r-vrt-057-009 ~]# docker -v
Docker version 17.03.2-ce, build f5ec1e2
Any ideas?
Thanks
If I understand you correctly, you want to run rsyslog inside the container but want to make rsyslog log data from the host machine. By default, this is not possible due to isolation.
It is an interesting use case, and we should probably add an issue for it to the tracker at https://github.com/rsyslog/rsyslog-docker.
You can probably achieve your goal by mounting /dev/log into the container, but depending on the host OS that requires some extra work there as well.
The rsyslog/rsyslog_base_centos7 image is designed with the intent to provide a base container that you can use to make applications inside the container use rsyslog logging.
Please also have a look at this Twitter conversation: https://twitter.com/rgerhards/status/978183898776686592 - doc updates will be upcoming once we have the actual procedure.
Note: This answer was completely rewritten as I originally totally missed the point.
Smart people from rsyslog put the following Docker image together:
https://hub.docker.com/r/rsyslog/rsyslog_base_centos7
It allows for your use case:
c) want to run a client machine where rsyslog processes log messages
(the default CentOS 7 config does NOT work inside a container, but
this container has a corrected config!)
Here is a URL to a patch you can apply to the CentOS 7 Docker config to make it work:
https://gist.github.com/oleksandriegorov/2718a7e35b8d17ada934b651d627ab97
Of course, restart rsyslogd to apply changes.
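Since the container runs /usr/sbin/init, restarting the daemon from the host should be possible through systemd; a sketch, assuming the container name from the question:
docker exec rsyslog_base_centos7 systemctl restart rsyslog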
I've installed the TensorFlow Docker container on an Ubuntu machine. The TensorFlow Docker setup instructions specify:
docker run -it b.gcr.io/tensorflow/tensorflow
This puts me into the Docker container terminal, and I can run Python and execute the Hello World example. I can also manually run ./run_jupyter.sh to start the Jupyter notebook. However, I can't reach the notebook from the host.
How do I start the jupyter notebook such that I can use the notebook from the host machine? Ideally I would like to use docker to launch the container and start jupyter in a single command.
For a Linux host, Robert Graves' answer will work, but for Mac OS X or Windows there is more to be done because Docker runs in a virtual machine.
So to begin, launch the Docker shell (or any shell if you are using Linux) and run the following command to launch a new TensorFlow container:
docker run -p 8888:8888 -p 6006:6006 b.gcr.io/tensorflow/tensorflow ./run_jupyter.sh
Then for Mac OS X and Windows you need to do the following only once:
Open VirtualBox
Click on the docker vm (mine was automatically named "default")
Open the settings by clicking settings
In the network settings open the port forwarding dialog
Click the + symbol to add another port and connect a port from your Mac to the VM by filling in the dialog: host port 8810, guest port 8888 in my case. In this example I chose port 8810 because I run other notebooks using port 8888.
Then open a browser and connect to http://localhost:8810 (or whichever port you set in the host port section); a VBoxManage one-liner that adds the same rule is sketched after this list.
Make your fancy pants machine learning app!
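For reference, the port-forwarding rule above can also be added from the command line with VBoxManage (a sketch, assuming the VM is named "default" and uses NAT adapter 1):
VBoxManage controlvm "default" natpf1 "jupyter,tcp,127.0.0.1,8810,,8888"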
My simple yet efficient workflow:
TL;DR version:
Open Docker Quickstart Terminal. If it is already open, run $ cd to get back to your home directory.
Run this once: $ docker run -it -p 8888:8888 -p 6006:6006 -v /$(pwd)/tensorflow:/notebooks --name tf b.gcr.io/tensorflow/tensorflow
To start every time: $ docker start -i tf
If you are not on Windows, you should probably change /$(pwd) to $(pwd)
You will get an empty folder named tensorflow in your home directory for use as persistent storage of project files such as IPython notebooks and datasets.
Explanation:
cd for making sure you are in your home directory.
params:
-it stands for interactive, so you can interact with the container in the terminal environment.
-v host_folder:container_folder enables sharing a folder between the host and the container. The host folder should be inside your home directory. /$(pwd) translates to //c/Users/YOUR_USER_DIR in Windows 10. This folder is seen as the notebooks directory in the container, which is used by IPython/Jupyter Notebook.
--name tf assigns the name tf to the container.
-p 8888:8888 -p 6006:6006 maps container ports to host ports, the first pair for the Jupyter notebook, the second one for TensorBoard.
-i (in docker start -i) stands for interactive
After further reading of docker documentation I have a solution that works for me:
docker run -p 8888:8888 -p 6006:6006 b.gcr.io/tensorflow/tensorflow ./run_jupyter.sh
The -p 8888:8888 and -p 6006:6006 expose the container ports to the host on the same port number. If you just use -p 8888, a random port on the host will be assigned.
The ./run_jupyter.sh tells docker what to execute within the container.
With this command, I can use a browser on the host machine to connect to http://localhost:8888/ and access the jupyter notebook.
UPDATE:
After wrestling with Docker on Windows I switched back to an Ubuntu machine with Docker. My notebook was being erased between Docker sessions, which makes sense after reading more Docker documentation. Here is an updated command which also mounts a host directory within the container and starts Jupyter pointing to that mounted directory. Now my notebook is saved on the host and will be available the next time I start up TensorFlow.
docker run -p 8888:8888 -p 6006:6006 -v /home/rob/notebook:/notebook b.gcr.io/tensorflow/tensorflow sh -c "jupyter notebook /notebook"
Jupyter now has a ready-to-run Docker image for TensorFlow:
docker run -d -v $(pwd):/home/jovyan/work -p 8888:8888 jupyter/tensorflow-notebook
These steps worked for me as a total Docker noob using a Windows machine.
Versions: Windows 8.1, docker 1.10.3, tensorflow r0.7
Run Docker Quickstart Terminal
After it has loaded, note the IP address. If you can't find it, use docker-machine ip and make a note. Let's call it 'ip address'. It will look something like this: 192.168.99.104 (I made up this IP address)
Paste this command on the docker terminal:
docker run -p 8888:8888 -p 6006:6006 b.gcr.io/tensorflow/tensorflow
If you are running this for the first time, it will download and install the image on this lightweight VM. Then it should say 'The Jupyter notebook is running at ....' -> This is a good sign!
Open your browser at: <your ip address (see above)>:8888. Eg. 192.168.99.104:8888/
Hopefully you can see your ipython files.
To get this to run under Hyper-V, perform the following steps:
1) Create a Docker virtual machine using https://blogs.msdn.microsoft.com/scicoria/2014/10/09/getting-docker-running-on-hyper-v-8-1-2012-r2/ ; this will get you a working Docker host. You can connect to it via the console or via SSH. I'd give it at least 8 GB of memory, since I'm sure this will use a lot of memory.
2) run "ifconfig" to determine the IP address of the Docker VM
3) On the docker shell prompt type:
docker run -p 8888:8888 -p 6006:6006 -it b.gcr.io/tensorflow/tensorflow
4) Connect to the Jupyter Workbench using http://[ifconfig address]:8888/
To tidy things up a little bit, I want to give some additional explanations, because I also suffered a lot setting up Docker with TensorFlow. For this I refer to this video, which is unfortunately not self-explanatory in all cases.
I assume you have already installed Docker. The really interesting general part of the video starts at minute 0:44, where he finally starts Docker. Until then he only downloads the TensorFlow repo into the folder that he then mounts into the container. You can of course put anything else into the container and access it later in the Docker VM.
First he runs the long docker command docker run -dit -v /c/Users/Jay/:/media/disk -p 8000 -p 8888 -p 6006 b.gcr.io/tensorflow/tensorflow. The "run" command starts containers. In this case it starts the container "b.gcr.io/tensorflow/tensorflow", whose address is provided within the tensorflow docker installation tutorial. The container will be downloaded by docker if not already locally available.
Then he gives two additional kinds of arguments: He mounts a folder of the host system at the given path in the container. DO NOT forget to give the partition in the beginning (e.g. "/c/").
Additionally he declares ports that will be available later from the host machine with the -p params.
From all this command you get back the [CONTAINER_ID] of this container execution!
You can always see the currently running containers by running “docker ps” in the docker console. Your container created above should appear in this list with the same id.
Next step: With your container running, you now want to execute something in it. In our case jupyter notebook or tensorflow or whatever: To do this you make docker execute the bash on the newly created container: docker exec -ti [CONTAINER_ID] bash. This command now starts a bash shell in your container. You see this because the "$" now changed to root@[CONTAINER_ID]:. From here there is no way back. If you want to get back to the docker terminal, you have to start another fresh docker console, like he is doing in minute 1:10. Now with a bash shell running in the container you can do whatever you want and execute Jupyter or tensorflow or whatever. The folder of the host system, which you gave in the run command, should be available now under "/media/disk".
Last step: accessing the VM output. It still did not want to work out for me and I could not access my notebook. You still have to find the correct IP and port to access the launched notebook, tensorboard session or whatever. First find out the main IP by using docker-machine ls. In this list you get the URL. (If it is your only machine, it is called default.) You can leave out the port given here. Then from docker ps you get the list of forwarded ports. When 0.0.0.0:32776->6006/tcp is written in the list, you can access it from the host machine by using the port given in the first place (awkward). So in my case the executed tensorboard in the container said "launched on port 6006". Then from my host machine I needed to enter http://192.168.99.100:32776/ to access it.
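A shortcut for that port lookup is docker port, which prints the host mapping for a single container port, so you don't have to scan the whole docker ps listing:
docker port [CONTAINER_ID] 6006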
-> And that’s it! It ran for me like this!
From a Windows command prompt, this gives you the terminal prompt:
FOR /f "tokens=*" %i IN ('docker-machine env --shell cmd vdocker') DO %i
docker run -it tensorflow/tensorflow:r0.9-devel
or
FOR /f "tokens=*" %i IN ('docker-machine env --shell cmd vdocker') DO %i
docker run -it b.gcr.io/tensorflow/tensorflow:latest-devel
You should have a docker machine named 'vdocker', or change vdocker to 'default'.
For some reason I ran into one additional problem that I needed to overcome beyond the examples provided, using the --ip flag:
nvidia-docker run --rm \
-p 8888:8888 -p 6006:6006 \
-v `pwd`:/root \
-it tensorflow/tensorflow:latest-devel-gpu-py3 sh -c "jupyter notebook --ip 0.0.0.0 ."
And then I can access it via http://localhost:8888 from my machine. In some ways this makes sense; within the container you bind to 0.0.0.0, which represents all available addresses. But whether I need to do this seems to vary (e.g. I've started notebooks using jupyter/scipy-notebook without having to do this).
In any case, the above command works for me, might be of use to others.
As an alternative to the official TensorFlow image, you can also use the ML Workspace Docker image. The ML Workspace is an open-source web IDE that combines Jupyter, VS Code, TensorFlow, and many other tools & libraries into one convenient Docker image. Deploying a single workspace instance is as simple as:
docker run -p 8080:8080 mltooling/ml-workspace:latest
All tools are accessible from the same port and integrated into the Jupyter UI. You can find the documentation here.
How can I share a folder between my Windows files and a Docker container, by mounting a volume with a simple --volume command, using Docker Toolbox on Windows?
I'm using "Docker Quickstart Terminal" and when I try this:
winpty docker run -it --rm --volume /C/Users/myuser:/myuser ubuntu
I have this error:
Invalid value "C:\\Users\\myuser\\:\\myuser" for flag --volume: bad mount mode specified : \myuser
See 'docker run --help'.
Following this, I also tried
winpty docker run -it --rm --volume "//C/Users/myuser:/myuser" ubuntu
and got
Invalid value "\\\\C:\\Users\\myuser\\:\\myuser" for flag --volume: \myuser is not an absolute path
See 'docker run --help'.
This is an improvement on the selected answer, because that answer is limited to the c:\Users folder. If you want to create a volume using a directory outside of c:\Users, this is an extension.
On Windows 7, I used Docker Toolbox. It uses VirtualBox.
Open virtual box
Select the machine (in my case default).
Right-click and select the Settings option
Go to Shared Folders
Include a new machine folder.
For example, in my case I have included:
Name: c/dev
Path: c:\dev
Click OK and close
Open "Docker Quickstart Terminal" and restart the docker machine.
Use this command:
$ docker-machine restart
To verify that it worked, follow these steps:
SSH to the docker machine.
Using this command:
$ docker-machine ssh
Go to the folder that you have shared/mounted.
In my case, I use this command
$ cd /c/dev
Check the owner of the folder. You could use "ls -all" and verify that the owner is "docker".
You will see something like this:
docker@default:/c/dev$ ls -all
total 92
drwxrwxrwx 1 docker staff 4096 Feb 23 14:16 ./
drwxr-xr-x 4 root root 80 Feb 24 09:01 ../
drwxrwxrwx 1 docker staff 4096 Jan 16 09:28 my_folder/
In that case, you will be able to create a volume for that folder.
You can use these commands:
docker create -v /c/dev/:/app/dev --name dev image
docker run -d -it --volumes-from dev image
or
docker run -d -it -v /c/dev/:/app/dev image
Both commands work for me. I hope this will be useful.
This is actually a known issue with the project, and there are 2 working workarounds:
Creating a data volume:
docker create -v //c/Users/myuser:/myuser --name data hello-world
winpty docker run -it --rm --volumes-from data ubuntu
SSHing directly into the docker host:
docker-machine ssh default
And from there doing a classic:
docker run -it --rm --volume /c/Users/myuser:/myuser ubuntu
If you are looking for the solution that will resolve all the Windows issues and make it work on the Windows OS in the same way as on Linux, then see below. I tested this and it works in all cases. I'm also showing how I got there (the steps and thinking process). I've also written an article about using Docker and dealing with Docker issues here.
Solution 1: Use VirtualBox (if you think it's not a good idea, see Solution 2 below)
Open VirtualBox (you have it already installed along with the docker tools)
Create virtual machine
(This is optional; you can skip it and forward ports from the VM.) Create a second ethernet card, bridged; this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install an Ubuntu LTS which is older than 1 year
Install docker
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu) but can still work in Windows
Done
Bonus:
Everything is working the same way as on Linux
Pause/Unpause the dockerized environment whenever you want
Solution 2: Use VirtualBox (this is very similar to Solution 1, but it also shows the thinking process, which might be useful when solving similar issues)
Read that somebody moved the folders to /C/Users/Public and that works https://forums.docker.com/t/sharing-a-volume-on-windows-with-docker-toolbox/4953/2
Try it, realize that it doesn't make much sense in your case.
Read the entire page at https://github.com/docker/toolbox/issues/607 and try all the solutions listed there
Find this page (the one you are reading now) and try all the solutions from other comments
Find information somewhere that setting the COMPOSE_CONVERT_WINDOWS_PATHS=1 environment variable might solve the issue.
Stop looking for the solution for a few months
Go back and check the same links again
Cry deeply
Feel the enlightenment moment
Open VirtualBox (you have it already installed along with the docker tools)
Create a virtual machine with a second ethernet card, bridged; this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install an Ubuntu LTS which is very recent (not older than a few months)
Notice that the automounting is not really working and the integration is broken (like clipboard sharing etc.)
Delete virtual machine
Go out and have a drink
Rent expensive car and go with high speed on highway
Destroy the car and die
Respawn in front of your PC
Install an Ubuntu LTS which is older than 1 year
Try to run docker
Notice it’s not installed
Install docker by apt-get install docker
Install suggested docker.io
Try to run docker-compose
Notice it’s not installed
apt-get install docker-compose
Try to run your project with docker-compose
Notice that it's an old version
Check your power level (it should be over 9000)
Search how to install latest version of docker and find the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Uninstall the current docker-compose and docker.io
Install docker using the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu, so you can run any docker command)
Done
As of August 2016, Docker for Windows uses Hyper-V directly instead of VirtualBox, so things are a little different. First share the drive in Settings, then use the C: drive letter format, but with forward slashes. For instance I created an H:\t\REDIS directory and was able to see it mounted on /data in the container with this command:
docker run -it --rm -v h:/t/REDIS:/data redis sh
The same format, using a drive letter and a colon then forward slashes for the path separator, worked both from the Windows command prompt and from Git Bash.
I found this question while googling for an answer, but I couldn't find anything that worked. Things would seem to work with no errors being thrown, but I just couldn't see the data on the host (or vice-versa). Finally I checked out the Docker settings closely and tried the format shown there:
So first, you have to share the whole drive with the Docker VM in Settings; I think that gives the 'docker-machine' VM running in Hyper-V access to that drive. Then you have to use the format shown in that Settings screen, which I could not find in any documentation or questions on the web:
docker run --rm -v c:/Users:/data alpine ls /data
Simply using double leading slashes worked for me on Windows 7:
docker run --rm -v //c/Users:/data alpine ls /data/
Taken from here: https://github.com/moby/moby/issues/12590
Try this:
Open Docker Quickstart Terminal. If it is already open, run $ cd ~ to make sure you are in your Windows user directory.
$ docker run -it -v /$(pwd)/ubuntu:/windows ubuntu
It will work if the error was due to a typo. You will get an empty folder named ubuntu in your user directory. You will see this folder with the name windows in your Ubuntu container.
For those using VirtualBox who prefer a command-line approach
1) Make sure the docker-machine is not running
Docker Quickstart Terminal:
docker-machine stop
2) Create the sharing Windows <-> docker-machine
Windows command prompt:
(Modify the following to fit your scenario. I feed my Apache httpd container from a directory synced via Dropbox.)
set VBOX=D:\Program Files\Oracle\VirtualBox\VBoxManage.exe
set VM_NAME=default
set NAME=c/htdocs
set HOSTPATH=%DROPBOX%\htdocs
"%VBOX%" sharedfolder add "%VM_NAME%" --name "%NAME%" --hostpath "%HOSTPATH%" --automount
3) Start the docker-machine and mount the volume in a new container
Docker Quickstart Terminal:
(Again, I am starting an Apache httpd container, hence the port mapping.)
docker-machine start
docker run -d --name my-apache-container-0 -p 80:80 -v /c/htdocs:/usr/local/apache2/htdocs my-apache-image:1.0
Sharing folders between Windows 7 and a Node.js image container, using VirtualBox and Docker Toolbox.
Using...
Docker Quickstart Terminal [QST]
Windows Explorer [WE]
Let's start...
[QST] open Docker Quickstart Terminal
[QST] stop virtual-machine
$ docker-machine stop
[WE] open a windows explorer
[WE] go to the virtualBox installation dir
[WE] open a cmd and execute...
C:\Program Files\Oracle\VirtualBox>VBoxManage sharedfolder add "default" --name
"/d/SVN_FOLDERS/X2R2_WP6/nodejs" --hostpath "\\?\d:\SVN_FOLDERS\X2R2_WP6\nodejs" --automount
Check in the Oracle VirtualBox VM that the new shared folder has appeared
[QST] start virtual-machine
$ docker-machine start
[QST] run container nodejs
docker stop nodejs
docker rm nodejs
docker run -d -it --rm --name nodejs -v /d/SVN_FOLDERS/X2R2_WP6/nodejs:/usr/src/app -w /usr/src/app node2
[QST] open bash to the container
docker exec -i -t nodejs /bin/bash
[QST] execute dir and you will see the shared files
I solved it!
Add a volume:
docker run -d -v my-named-volume:C:\MyNamedVolume testimage:latest
Mount a host directory:
docker run -d -v C:\Temp\123:C:\My\Shared\Dir testimage:latest
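If you prefer to create and verify the named volume explicitly before running, something like this should work:
docker volume create my-named-volume
docker volume inspect my-named-volume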