Docker cannot access host files using -v option - docker

Not 100% sure this is the right place but let's try.
On my Windows laptop I'm using the Docker Quickstart Terminal (Docker Toolbox) to get access to a Linux environment with Google App Engine, Python, MySQL...
That seems to work: when I type docker run -i -t appengine /bin/bash I get access to this environment.
Now I'd like access to some of my local (host) files so that I can edit them with my Windows editors but run them inside the Docker container.
I've seen the -v option but cannot make it work.
What I do:
docker run -v /d/workspace:/home/root/workspace:rw -i -t appengine /bin/bash
But workspace stays empty in the Docker container...
Any help appreciated.
(I read this before posting: https://github.com/rocker-org/rocker/wiki/Sharing-files-with-host-machine#windows)

You have to enable Shared Drives; you can follow this blog.
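Note that Shared Drives is a Docker for Windows setting; with Docker Toolbox (as in the question) the equivalent is a VirtualBox shared folder, since only C:\Users is shared with the VM by default. A minimal sketch, assuming the default VirtualBox install path and a docker-machine named default:
docker-machine stop
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" sharedfolder add default --name "d/workspace" --hostpath "D:\workspace" --automount
docker-machine start
docker run -v //d/workspace:/home/root/workspace:rw -i -t appengine /bin/bash
After the restart, boot2docker should automount the share at /d/workspace inside the VM, which makes the -v mount work.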

Related

Unable to find Jupyter notebook via Docker

I am very new to Docker and Jupyter notebooks. I pulled the image from Docker Hub, and it was able to direct me to the relevant Jupyter notebook. The problem is, whatever plots I make in the notebook, I am not able to find the files in the system. A file with the name settings.cmnd should be created on my system. I am using Windows 10 Home. I am using the following command:
docker run -it -v "//c/Users/AB/project":"//c/program files/Docker Toolbox" -p 8888:8888/tcp CONTAINER NAME
It runs fine, as I am able to access the Jupyter notebook, but the file is still missing on my system.
Here, the folder in which I want to save the file is project.
Kindly help.
I did not find an image called electronioncollider/pythiatutorial, so I'm assuming you meant electronioncollider/pythia-eic-tutorial.
The default working directory for that image is /code, so the command on Windows should look like:
docker run --rm -v //c/Users/AB/project://code -p 8888:8888 electronioncollider/pythia-eic-tutorial:latest
The working directory can be changed with -w, so the following should work as well:
docker run --rm -w //whatever -v //c/Users/AB/project://whatever -p 8888:8888 electronioncollider/pythia-eic-tutorial:latest
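If you are unsure of an image's default working directory, one way to check it is with docker inspect; for example:
docker inspect --format '{{.Config.WorkingDir}}' electronioncollider/pythia-eic-tutorial:latest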
Edit:
The electronioncollider/pythia-eic-tutorial:latest image has only one variant, built for linux/amd64. This means it's meant to run on 64-bit Linux on a computer with an Intel or AMD processor.
You're not running it on Windows, but on a Linux VM that runs on your Windows host. Docker can access C:\Users\AB\project because it's mounted inside the VM as /c/Users/AB/project (although most likely it's actually C:\Users that's mounted as /c/Users). Therein lies the problem: Windows and Linux permission models are incompatible, so the Windows directory is mounted with fixed permissions that allow all Linux users access. Docker then mounts that directory inside the container with the same permissions. Unfortunately, Jupyter wants some of the files it creates to have a very specific set of permissions (for security reasons). Since the permissions are fixed to a specific value, Jupyter cannot change them and breaks.
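You can see those fixed permissions for yourself; a quick check, assuming the default machine name:
docker-machine ssh default ls -ld /c/Users/AB/project
The listing will typically report a fixed mode such as drwxrwxrwx with owner docker, no matter what the Windows ACLs say.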
There are two possible solutions:
Get inside whatever VM Docker is running in, change to a directory not mounted from Windows, and run the container from there using the command from the tutorial/README:
docker run --rm -u `id -u $USER` -v $PWD:$PWD -w $PWD -p 8888:8888 electronioncollider/pythia-eic-tutorial:latest
and the files will appear in the directory that the command is run from.
Use the modified image I created:
docker run --rm -v //c/Users/AB/project://code -p 8888:8888 forinil/pythia-eic-tutorial:latest
You can find the image on Docker Hub here. The source code is available on GitHub here.
Edit:
Due to changes in my version of the image the proper command for it would be:
docker run -it --rm -v //c/Users/AB/project://code --entrypoint rivet forinil/pythia-eic-tutorial
I released a new version, so if you run docker pull forinil/pythia-eic-tutorial:latest, you'll be able to use both the command above and the following:
docker run -it --rm -v //c/Users/AB/project://code forinil/pythia-eic-tutorial rivet
That being said, I did not receive any permission errors while testing either the old or the new versions of the image.
I hope you understand that, due to how Docker Toolbox works, you won't be able to use aliases the way the tutorial says you would on Linux.
For one thing, you'll only have access to files inside the directory C:\Users\AB\project; for another, file paths inside the container will differ from those outside it, e.g. the file C:\Users\AB\project\notebooks\pythiaRivet.ipynb will be available inside the container as /code/notebooks/pythiaRivet.ipynb.
Note on asking questions:
You've been banned from asking questions because your questions are low quality. Please read the guidelines before asking any more.

Connecting Spyder to Remote Jupyter Notebook in a Docker Container

I have been trying to connect Spyder to a docker container running on a remote server and failing time and again. Here is a quick diagram of what I am trying to achieve:
Currently I am launching the docker container on the remote machine through ssh with
docker run --runtime=nvidia -it --rm --shm-size=2g -v /home/timo/storage:/storage -v /etc/passwd:/etc/passwd -v /etc/group:/etc/group --ulimit memlock=-1 -p 8888:8888 --ipc=host ufoym/deepo:all-jupyter
so I am forwarding on port 8888. Then inside the docker container I am running
jupyter notebook --no-browser --ip=0.0.0.0 --port=8888 --allow-root --notebook-dir='/storage'
OK, now for the Spyder part - As per the instructions here, I go to ~/.local/share/jupyter/runtime, where I find the following files:
kernel-ada17ae4-e8c3-4e17-9f8f-1c029c56b4f0.json nbserver-11-open.html nbserver-21-open.html notebook_cookie_secret
kernel-e81bc397-05b5-4710-89b6-2aa2adab5f9c.json nbserver-11.json nbserver-21.json
Not knowing which one to take, I copy them all to my local machine.
I now go to Consoles->Connect to an Existing Kernel, which gives me the "Connect to an Existing Kernel" window which I fill out as so (of course using my actual remote IP address):
(Here I have chosen the first of the JSON files for the connection info.) I hit enter and Spyder goes dark and crashes.
This happens regardless of which connection info file I choose. So, my questions are:
1: Am I doing all of this correctly? I have found lots of instructions for connecting to remote servers, but none so far for specifically connecting to a Jupyter notebook in a Docker container on a remote server.
2: If yes, then what else can I do to troubleshoot the issues I am encountering?
I should also note that I have no problems connecting to the Jupyter Notebook through the browser on my local machine. It's just that I would prefer to be working with Spyder as my IDE.
Many thanks in advance!
This isn't a solution so much as a workaround, but sshfs might be of help.
Use sshfs to mount the remote machine's home directory on a local directory, then your local copy of Spyder can edit the file as if it were a local file.
sshfs remotehost.com:/home/user/ ./remote-host/
It typically takes about half a second to upload the changes to an AWS host when I hit save in Spyder, which is an acceptable delay for me. When it's time to run the code, ssh into the remote machine and run it from an IPython shell. It's not elegant, but it does work.
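When you are done, the mount can be detached (assuming a Linux client with FUSE; on macOS use umount instead):
fusermount -u ./remote-host/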
I'm not expecting this to be the best answer, but maybe you can use it as a stopgap solution.
I had the same problem as you. I got it working, though maybe a bit clumsily, as I am totally new to Docker. Here are my steps, with notes on where we differ; hope this helps:
Launch the Docker container on the remote machine:
docker run --gpus all --rm -ti --net=host -v /my_storage/data:/home/data -v /my_storage/JSON:/root/.local/share/jupyter/runtime repo/tensorflow:20.03-tf2-py3
I use a second volume mount in order to get the kernel.json file to my local computer. I couldn't manage to access it directly from Docker via ssh, as it sits in the /root/ folder of the container with root-only access. If you know how to read it from there directly, I'll be happy to learn. My workaround is:
On the remote machine, create a JSON/ directory and map it to the Jupyter runtime directory ("jupyter --runtime-dir") in the container. Once the kernel is created, access the kernel-xxx.json file through this volume mount, copy it to the local machine, and chmod it.
Launch ipython kernel in container:
ipython kernel
You are launching a Jupyter notebook; I suspect this is the reason for your problem. I am not sure whether Spyder works with notebook servers, but it does work with IPython kernels, and probably works best with spyder-kernels.
Copy the kernel.json file from /remote_machine/JSON to the local machine and chmod it for access.
Launch Spyder, using the local kernel.json and your SSH settings. This part is the same as yours.
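A minimal sketch of the copy step, assuming the volume mount from step 1 and placeholder host names (the actual kernel file name will differ on your machine):
scp user@remote-machine:/my_storage/JSON/kernel-xxx.json .
chmod 644 kernel-xxx.json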
Not enough reputation to add a comment, but chiming in on @asim's solution: I was able to get my locally installed Spyder to connect to a kernel running in a container on a remote machine. There is a bit of manual work, but I am okay with this since I can get much more done with Spyder than with other IDEs.
docker run --rm -it --net=host -v /project_directory_remote_machine:/container_project_directory image_id bash
From the container:
python -m spyder_kernels.console --matplotlib='inline' --ip=127.0.0.1 -f=/container_project_directory/connection_file.json
From the remote machine, chmod connection_file.json so it can be read, then open it and copy/paste its content to a file on the local machine :) Use the JSON file to connect to the remote kernel following the steps in the sources below:
https://medium.com/@halmubarak/connecting-spyder-ide-to-a-remote-ipython-kernel-25a322f2b2be
https://mazzine.medium.com/how-to-connect-your-spyder-ide-to-an-external-ipython-kernel-with-ssh-putty-tunnel-e1c679e44154

Docker Bind Mounts in WSL do not show files

I'm trying to use the Docker client from inside WSL, connecting to the Docker engine on Windows. I've exposed the Docker engine on Windows on port 2375, and after setting the DOCKER_HOST environment variable in WSL, I can verify this works by running docker ps.
The problem comes when I attempt to mount directories into Docker containers from WSL. For example:
I create a directory and file inside my home folder on WSL (mkdir ~/dockertest && touch ~/dockertest/example.txt)
ls ~/dockertest shows my file has been created
I now start a docker container, mounting my docker test folder (docker run -it --rm -v ~/dockertest:/data alpine ls /data)
I would expect to see 'example.txt' in the docker container, but this does not seem to be happening.
Any ideas what I might be missing?
There are great instructions for Docker setup in WSL at https://nickjanetakis.com/blog/setting-up-docker-for-windows-and-wsl-to-work-flawlessly#ensure-volume-mounts-work - solved most of my problems. The biggest trick for me was with bind-mounted directories; you have to use a path in WSL that the Docker daemon will be able to translate, for example /c/Users/rfay/myproject.
I don't think you need to change the mount point as the link suggests. Rather, if you use pwd and sed in combination you should get the effect you need.
docker run -it -v $(pwd | sed 's/^\/mnt//'):/var/folder -w "/var/folder" alpine
pwd returns the working folder in the format '/mnt/c/code/folder'. Piping this to sed and replacing '/mnt' with an empty string leaves you with a path such as '/c/code/folder', which is correct for Docker for Windows.
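You can see the substitution on its own from a WSL shell:
$ pwd
/mnt/c/code/folder
$ pwd | sed 's/^\/mnt//'
/c/code/folder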
Anyone stumbling here over this issue, follow this: Docker Desktop WSL 2 Backend, and make sure you are running version 2 of WSL, checked in PowerShell:
> wsl -l -v
NAME STATE VERSION
* docker-desktop Running 2
Ubuntu Running 2
docker-desktop-data Running 2
If your Ubuntu VERSION doesn't say 2, you need to update it according to the guide above.
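The upgrade itself is a single PowerShell command (assuming your distribution is named Ubuntu, as in the listing above):
> wsl --set-version Ubuntu 2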
After that, you will be able to mount your Linux directory to the Docker Container directly like:
docker run -v ~/my-project:/sources <my-image>
Specific to WSL 1
I ran into the same issue. One thing to remember is that the docker run command does not execute the container then and there in your shell; it sends the run arguments to the Docker daemon, which does not interpret WSL paths correctly. Hence, you need to pass a Windows-formatted path, in quotes and with backslashes escaped.
Your Windows path is:
\\wsl$\Ubuntu\home\username\dockertest
The Docker command after escaping will probably look like:
docker run -it --rm -v "\\\\wsl\$\\Ubuntu\\home\\username\\dockertest":/data alpine ls /data
Try:
docker run -v /:{path} <image>
Hope this helps.

Docker on Cloud hosting

Recently I created a Linode account and installed Docker on Ubuntu 14.04 LTS.
I installed an image and ran a container; everything is working properly at the moment.
I wanted to scp from my local machine to a Linode directory, and I did so successfully like this:
scp file.txt root@ip:/path/to/directory
My problem started when I realized the Docker container has its own root@hostname:/path/to/directory within the Linode root@ip:/, and that I did not know how to scp from my local machine directly to the container path, simply because I do not know the syntax and I'm not very experienced with the process.
I looked around and asked Linode for support, but there was very little they could assist me with.
I decided to test some of my theories. Instead of scp directly to the Docker container, I would scp to the Linode (scp file.txt root@ip:/home) and from there do a docker cp file.txt <container-name>:/path/to/directory. After I hit enter, I get no response, neither error nor success.
I'm a beginner at all of this, so what am I missing? What am I not understanding?
Your docker cp approach is right, and in fact it doesn't return any response. You can check whether the file was indeed copied using docker exec <containerid> bash.
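For example, a quick check using the container name and path from your question:
docker exec -it <container-name> ls -l /path/to/directory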
There is another, more complicated way that is not recommended: if you install OpenSSH in your container and publish another port, let's say -p 2222:22, you can scp directly to the container.
Of course, you can also do it the Docker way: declare a volume linking your host directory to your container directory with -v /path/to/directory:/path/to/directory. Then your scp to the host will work.
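Putting that together, a minimal sketch (the paths and image name are placeholders):
docker run -d -v /path/to/directory:/path/to/directory <image>
scp file.txt root@ip:/path/to/directory
Anything scp copies into the host directory then shows up at the same path inside the container.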
Regards

How to use --volume option with Docker Toolbox on Windows?

How can I share a folder between my Windows files and a Docker container, by mounting a volume with a simple --volume command, using Docker Toolbox on Windows?
I'm using "Docker Quickstart Terminal" and when I try this:
winpty docker run -it --rm --volume /C/Users/myuser:/myuser ubuntu
I have this error:
Invalid value "C:\\Users\\myuser\\:\\myuser" for flag --volume: bad mount mode specified : \myuser
See 'docker run --help'.
Following this, I also tried
winpty docker run -it --rm --volume "//C/Users/myuser:/myuser" ubuntu
and got
Invalid value "\\\\C:\\Users\\myuser\\:\\myuser" for flag --volume: \myuser is not an absolute path
See 'docker run --help'.
This is an improvement on the selected answer, which is limited to the c:\Users folder. If you want to create a volume using a directory outside of c:\Users, this is the extension.
On Windows 7, I used Docker Toolbox, which uses VirtualBox.
Open VirtualBox
Select the machine (in my case, default).
Right-click and select the Settings option
Go to Shared Folders
Add a new machine folder.
For example, in my case I have included:
Name: c/dev
Path: c:\dev
Click OK and close
Open "Docker Quickstart Terminal" and restart the docker machine.
Use this command:
$ docker-machine restart
To verify that it worked, follow these steps:
SSH to the docker machine.
Using this command:
$ docker-machine ssh
Go to the folder that you have shared/mounted.
In my case, I use this command
$ cd /c/dev
Check the owner of the folder. You can use "ls -all" and verify that the owner is "docker".
You will see something like this:
docker#default:/c/dev$ ls -all
total 92
drwxrwxrwx 1 docker staff 4096 Feb 23 14:16 ./
drwxr-xr-x 4 root root 80 Feb 24 09:01 ../
drwxrwxrwx 1 docker staff 4096 Jan 16 09:28 my_folder/
In that case, you will be able to create a volume for that folder.
You can use these commands:
docker create -v /c/dev/:/app/dev --name dev image
docker run -d -it --volumes-from dev image
or
docker run -d -it -v /c/dev/:/app/dev image
Both commands work for me. I hope this will be useful.
This is actually a known issue with the project, and there are two working workarounds:
Creating a data volume:
docker create -v //c/Users/myuser:/myuser --name data hello-world
winpty docker run -it --rm --volumes-from data ubuntu
SSHing directly in the docker host:
docker-machine ssh default
And from there doing a classic:
docker run -it --rm --volume /c/Users/myuser:/myuser ubuntu
If you are looking for a solution that will resolve all the Windows issues and make Docker work on Windows the same way as on Linux, then see below. I tested this and it works in all cases. I'm also showing how I got there (the steps and the thinking process). I've also written an article about using Docker and dealing with Docker issues here.
Solution 1: Use VirtualBox (if you think it's not a good idea, see Solution 2 below)
Open VirtualBox (you have it already installed along with the docker tools)
Create a virtual machine
(This is optional; you can skip it and forward ports from the VM.) Create a second Ethernet card, bridged; this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install an Ubuntu LTS release that is more than a year old
Install Docker
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu) but can still work in Windows
Done
Bonus:
Everything is working the same way as on Linux
Pause/Unpause the dockerized environment whenever you want
Solution 2: Use VirtualBox (this is very similar to Solution 1, but it also shows the thinking process, which might be useful when solving similar issues)
Read that somebody moved the folders to /C/Users/Public and that it works: https://forums.docker.com/t/sharing-a-volume-on-windows-with-docker-toolbox/4953/2
Try it; realize that it doesn't make much sense in your case
Read the entire page at https://github.com/docker/toolbox/issues/607 and try all the solutions listed there
Find this page (the one you are reading now) and try all the solutions from the other comments
Find, somewhere, information that setting the COMPOSE_CONVERT_WINDOWS_PATHS=1 environment variable might solve the issue
Stop looking for the solution for a few months
Go back and check the same links again
Cry deeply
Feel the enlightenment moment
Open VirtualBox (you have it already installed along with the docker tools)
Create a virtual machine with a second Ethernet card, bridged; this way it will receive an IP address from your network (it will have an IP like the docker machine)
Install a very recent Ubuntu LTS release (not older than a few months)
Notice that the automounting is not really working and the integration is broken (clipboard sharing etc.)
Delete virtual machine
Go out and have a drink
Rent expensive car and go with high speed on highway
Destroy the car and die
Respawn in front of your PC
Install an Ubuntu LTS release that is more than a year old
Try to run docker
Notice it's not installed
Install Docker with apt-get install docker
Install the suggested docker.io
Try to run docker-compose
Notice it's not installed
apt-get install docker-compose
Try to run your project with docker-compose
Notice that it's an old version
Check your power level (it should be over 9000)
Search how to install latest version of docker and find the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Uninstall the current docker-compose and docker.io
Install docker using the official guide https://docs.docker.com/install/linux/docker-ce/ubuntu/
Add shared directories to the virtual machine and automount your project directories (this way you have access to the project directory from Ubuntu, so you can run any docker command)
Done
As of August 2016, Docker for Windows uses Hyper-V directly instead of VirtualBox, so things are a little different. First share the drive in Settings, then use the drive-letter format, but with forward slashes. For instance, I created an H:\t\REDIS directory and was able to see it mounted on /data in the container with this command:
docker run -it --rm -v h:/t/REDIS:/data redis sh
The same format, using a drive letter and a colon, then forward slashes as the path separator, worked both from the Windows command prompt and from Git Bash.
I found this question while googling for an answer, but I couldn't find anything that worked. Things would seem to work with no errors thrown, but I just couldn't see the data on the host (or vice versa). Finally, I checked the settings closely and tried the format they show:
So first, you have to share the whole drive with the Docker VM in Settings; I think that gives the 'docker-machine' VM running in Hyper-V access to that drive. Then you have to use the format shown there, which seems to exist only in that one image and in no documentation or questions I could find on the web:
docker run --rm -v c:/Users:/data alpine ls /data
Simply using double leading slashes worked for me on Windows 7:
docker run --rm -v //c/Users:/data alpine ls /data/
Taken from here: https://github.com/moby/moby/issues/12590
Try this:
Open Docker Quickstart Terminal. If it is already open, run $ cd ~ to make sure you are in your Windows user directory.
$ docker run -it -v /$(pwd)/ubuntu:/windows ubuntu
It will work if the error was due to a typo. You will get an empty folder named ubuntu in your user directory, and you will see this folder under the name windows in your Ubuntu container.
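To confirm the mount is live, create a file from inside the container (the file name here is arbitrary):
$ touch /windows/hello.txt
hello.txt should then appear in the ubuntu folder of your Windows user directory.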
For those using VirtualBox who prefer a command-line approach:
1) Make sure the docker-machine is not running
Docker Quickstart Terminal:
docker-machine stop
2) Create the share between Windows and the docker-machine
Windows command prompt:
(Modify the following to fit your scenario. I feed my Apache httpd container from a directory synced via Dropbox.)
set VBOX=D:\Program Files\Oracle\VirtualBox\VBoxManage.exe
set VM_NAME=default
set NAME=c/htdocs
set HOSTPATH=%DROPBOX%\htdocs
"%VBOX%" sharedfolder add "%VM_NAME%" --name "%NAME%" --hostpath "%HOSTPATH%" --automount
3) Start the docker-machine and mount the volume in a new container
Docker Quickstart Terminal:
(Again, I am starting an Apache httpd container, hence the port mapping.)
docker-machine start
docker run -d --name my-apache-container-0 -p 80:80 -v /c/htdocs:/usr/local/apache2/htdocs my-apache-image:1.0
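Before starting the container, you can also confirm the share is visible inside the VM (assuming the machine is named default):
docker-machine ssh default ls /c/htdocs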
Sharing folders between VirtualBox, Docker Toolbox on Windows 7, and a Node.js image container
Using...
Docker Quickstart Terminal [QST]
Windows Explorer [WE]
Let's start...
[QST] open Docker Quickstart Terminal
[QST] stop virtual-machine
$ docker-machine stop
[WE] open a Windows Explorer window
[WE] go to the VirtualBox installation dir
[WE] open a cmd window and execute...
C:\Program Files\Oracle\VirtualBox>VBoxManage sharedfolder add "default" --name "/d/SVN_FOLDERS/X2R2_WP6/nodejs" --hostpath "\\?\d:\SVN_FOLDERS\X2R2_WP6\nodejs" --automount
Check in the Oracle VirtualBox machine that the new shared folder has appeared
[QST] start virtual-machine
$ docker-machine start
[QST] run container nodejs
docker stop nodejs
docker rm nodejs
docker run -d -it --rm --name nodejs -v /d/SVN_FOLDERS/X2R2_WP6/nodejs:/usr/src/app -w /usr/src/app node2
[QST] open bash to the container
docker exec -i -t nodejs /bin/bash
[QST] execute dir and you will see the shared files
I solved it!
Add a volume:
docker run -d -v my-named-volume:C:\MyNamedVolume testimage:latest
Mount a host directory:
docker run -d -v C:\Temp\123:C:\My\Shared\Dir testimage:latest
