I am running the TensorFlow GPU docker image found here: https://www.tensorflow.org/install/install_linux#InstallingDocker
I am running this on Ubuntu.
I am new to docker containers and I was hoping someone could help me figure out how to make my Jupyter Notebook see the hard drives I have mounted on the host machine.
In your docker run command you need to include the -v (or --volume) flag to bind-mount a host directory into the container.
For example:
docker run -it -v /home/media/user:/tmp/media -p 8888:8888 tensorflow/tensorflow:latest-gpu bash
Then you will find those files in /tmp/media of the docker container.
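For a quick one-off check that the mount works (a sketch, reusing the example host path above):
docker run --rm -v /home/media/user:/tmp/media tensorflow/tensorflow:latest-gpu ls /tmp/media
If the listing matches the host directory's contents, the bind mount is set up correctly.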
I'm new to using docker and my objective is to bind mount a docker container to a file path on my host machine (the path shown in the command below) so I can:
Run a Jupyter Notebook instance without losing the data every time I end my terminal session
Link my Jupyter Notebook to the same path where my training data resides
I have tried looking at many threads on the topic, to little avail. I am on Linux Mint and run the command shown below:
sudo docker run -it --rm --gpus all -v "$(pwd):/media/hossamantarkorin/Laptop Data II/1- Educational/ML Training/Incident Detection/I75_I95 RITIS":"/tf" -p 8888:8888 tensorflow/tensorflow:2.3.0rc1-gpu-jupyter
What am I doing wrong here?
Thanks,
Hossam
This usually happens when docker is not running.
Try sudo service docker start before entering your command.
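On a systemd-based install (an assumption; most recent Mint and Ubuntu releases are), the equivalent check and start would be:
sudo systemctl status docker
sudo systemctl start docker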
I just wanted to provide an update on this. The easiest way to work on your local directory is to:
Do a change directory to where you want to work
Run your docker while bind mounting to your pwd:
sudo docker run -it --rm --gpus all -v "$(pwd):/tf" -p 8888:8888 tensorflow/tensorflow:2.3.0rc1-gpu-jupyter
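If you would rather mount the full path with spaces directly instead of changing directory first, quoting the whole -v argument should also work (a sketch using the path from the question):
sudo docker run -it --rm --gpus all -v "/media/hossamantarkorin/Laptop Data II/1- Educational/ML Training/Incident Detection/I75_I95 RITIS:/tf" -p 8888:8888 tensorflow/tensorflow:2.3.0rc1-gpu-jupyter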
I want to use the official Tensorflow docker images to create and edit a Jupyter notebook stored on the host.
I'm a little confused with what switches I need to provide. To run a Tensorflow script on the host the docs suggest:
docker run -it --rm -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./script.py
...and to run the Jupyter service:
docker run -it -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
When I try merging the switches to run Jupyter + mount the host volume:
docker run -it --rm -v $PWD:/tmp -w /tmp -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
...it's still accessing notebooks stored in the container, not the host.
Notebooks are stored inside the container's /tf folder, so mounting your host directory there will do the trick:
docker run -it --rm -v $PWD:/tf -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
The first command you mentioned is used to run a TensorFlow program developed on the host machine, not a notebook.
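To double-check the mount while the Jupyter container is running, you can list /tf from the host (a sketch; the ancestor filter assumes you started the container from that exact image tag):
docker exec $(docker ps -q -f ancestor=tensorflow/tensorflow:nightly-py3-jupyter) ls /tf
The listing should match $PWD on the host.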
I am on CentOS, with docker installed by yum (version screenshots omitted). Using docker commands inside the container gives an error.
My docker run command:
docker run -it -d -u root --name jenkins3 -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker docker.io/jenkins/jenkins
But I get an error when I exec docker info in the jenkins container:
/usr/bin/docker: 2: .: Can't open /etc/sysconfig/docker
Exposing the host's docker socket to your jenkins container will work with
-v /var/run/docker.sock:/var/run/docker.sock
but you will need to have the docker executable installed in your jenkins image via a Dockerfile.
It is likely the example you are looking at is already using a docker image. A quick google search brings up https://jpetazzo.github.io/2015/09/03/do-not-use-docker-in-docker-for-ci/ whose example uses the official docker image (which already has the executable installed):
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-ti docker
Also note from that same post your exact issue with mounting the binary:
Former versions of this post advised to bind-mount the docker binary from the host to the container. This is not reliable anymore, because the Docker Engine is no longer distributed as (almost) static libraries.
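As for getting the executable into your image, here is a minimal Dockerfile sketch (assuming the Debian-based jenkins/jenkins image, where the docker.io package is available):
FROM jenkins/jenkins
USER root
# Installs the distro's docker package; heavier than just the CLI,
# but avoids bind-mounting the host binary.
RUN apt-get update && apt-get install -y docker.io && rm -rf /var/lib/apt/lists/*
USER jenkins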
I run Jupyter Notebook with Docker and am trying to mount a local directory onto the intended Docker volume, but I am unable to see my files in the Jupyter notebook. The Docker command is:
sudo nvidia-docker create -v ~/tf/src -it -p 8888:8888
-e PASSWORD=password
--name container_name gcr.io/tensorflow/tensorflow:latest-gpu
and the GUI of the Jupyter Notebook looks as in the screenshot (omitted),
but ~/tf/src does not show up in the Jupyter GUI.
What is needed for the files to show up in Jupyter? Am I initializing the container incorrectly for this?
The way you mount your volume is, I think, incorrect: -v ~/tf/src should be
-v /host/directory:/container/directory
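For example, with the paths from the question (assuming /notebooks is the notebook directory of this gcr.io TensorFlow image, as the solution further down also suggests):
sudo nvidia-docker create -v ~/tf/src:/notebooks -it -p 8888:8888 -e PASSWORD=password --name container_name gcr.io/tensorflow/tensorflow:latest-gpu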
Ferdi D's answer targets only files inside the interpreter, not files inside the Jupyter GUI, which makes things a little confusing. I address the title, "Show volume files in docker jupyter notebook", more generally by showing the files inside the Jupyter notebook.
Files inside the interpreters
The -v flag gets you the files in the interpreter or the notebook, but not necessarily in the Jupyter GUI, for which you run
$ docker run --rm -it -p 6780:8888 -v "$PWD":/home/jovyan/ jupyter/r-notebook
because the mount point, and hence its path, depends on the distribution. Here, you ask for your current directory to be mounted at Jupyter's path /home/jovyan.
Files inside Jupyter GUIs
To get the files in Jupyter GUI:
OS X
If you mount to a path other than /home/jovyan (the home directory in current Jupyter images), the files will not appear in the Jupyter GUI, so use
$ docker run --rm -it -p 6780:8888 -v "$PWD":/home/jovyan/ jupyter/r-notebook
Some other distros
$ docker run --rm -it -p 6780:8888 -v "$PWD":/tmp jupyter/r-notebook
More generally
To check whether /home/jovyan/ or /tmp is the right mount point, you can run getwd() in R to see your working directory.
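Without starting R at all, you can also read the image's default working directory from its metadata (a quick docker inspect check):
docker inspect -f '{{.Config.WorkingDir}}' jupyter/r-notebook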
Further threads
There is also a Reddit discussion covering the topic more generally.
Posting this as an answer since the location seems to have changed and the accepted answer doesn't spell out in full how to get your local directory to show up in TensorFlow's Jupyter. Type this on one line, with an appropriate <localdir> and <dockerdir>:
docker run --runtime=nvidia -it
--name tensorflow
-p 8888:8888
-v ~/<localdir>:/tf/<dockerdir>
tensorflow/tensorflow:nightly-jupyter
Karl L thinks the solution is the following. It was moved here from the question so everyone can judge it and the question stays easier to read.
Solution
sudo nvidia-docker create -v /Users/user/tf/src:/notebooks
-it -p 8888:8888 -e PASSWORD=password
--name container_name gcr.io/tensorflow/tensorflow:latest-gpu
As @fendi-d pointed out, I was mounting my volume incorrectly.
It was also pointed out that my mount directory was incorrect, and I found the correct one in the TensorFlow Dockerfile:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles/dockerfiles/gpu.Dockerfile
which configures the Jupyter notebook and then copies the sample notebooks to /notebooks:
# Set up our notebook config.
COPY jupyter_notebook_config.py /root/.jupyter/
# Copy sample notebooks.
COPY notebooks /notebooks
After I ran it with the correct mounting path, it showed my files located in /Users/user/tf/src.
I am running an interactive Python docker container on Ubuntu 14.04 using docker 17.03.1. I want to share files between the local host and the docker container so that files I create in the container are visible in the local directory and vice-versa. However, when I run the following command, I see an empty working directory in the container with no files.
docker run -e USER=$USER -e USERID=$UID -v /home/watts/python:/home/watts/python -w=/home/watts/python -p 8888:8888 --rm -it watts/python jupyter notebook --no-browser --notebook-dir=/home/watts/python --allow-root
I just ran this command:
docker run -v `pwd`:/home/watts/python -it kaggle/python /bin/bash
After that, I created a few files on both the host and the container. All files are visible on both sides.
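A quick way to verify the two-way visibility (a sketch reusing the paths from the commands above):
touch /home/watts/python/from_host.txt
docker run --rm -v /home/watts/python:/home/watts/python kaggle/python ls /home/watts/python
A file created on the host should appear in the container listing, and vice-versa.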
Hopefully this will help.