I run Jupyter Notebook with Docker and am trying to mount a local directory onto the intended Docker volume, but I am unable to see my files in the Jupyter notebook. The Docker command is
sudo nvidia-docker create -v ~/tf/src -it -p 8888:8888 \
    -e PASSWORD=password \
    --name container_name gcr.io/tensorflow/tensorflow:latest-gpu
and the Jupyter Notebook GUI looks as shown (screenshot omitted), but ~/tf/src does not show up in it.
What is needed for the files to show up in Jupyter? Am I initializing the container incorrectly?
The way you mount your volume is incorrect, I think: -v ~/tf/src should be
-v /host/directory:/container/directory
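For example, adapting the command from the question (a hedged sketch: it assumes the TensorFlow image serves notebooks from /notebooks, as its Dockerfile suggests):
# mount the host's ~/tf/src onto the container's notebook directory
sudo nvidia-docker create -v ~/tf/src:/notebooks -it -p 8888:8888 \
    -e PASSWORD=password \
    --name container_name gcr.io/tensorflow/tensorflow:latest-gpu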
Ferdi D's answer targets only the files inside the interpreter, not precisely the files inside the Jupyter GUI, which makes things a little confusing. I address the title, Show volume files in docker jupyter notebook, more generally by showing the files inside the Jupyter notebook.
Files inside the interpreter
The -v flag gets you the files in the interpreter or the notebook, but not necessarily in the Jupyter GUI,
for which you run
$ docker run --rm -it -p 6780:8888 -v "$PWD":/home/jovyan/ jupyter/r-notebook
because the mount point depends on the distribution and hence on its path. Here, you ask for your current directory to be mounted at Jupyter's path /home/jovyan.
Files inside the Jupyter GUI
To get the files in the Jupyter GUI:
OS X
If you mount to anything other than /home/jovyan in the current Jupyter version, the files will not appear in the Jupyter GUI, so use
$ docker run --rm -it -p 6780:8888 -v "$PWD":/home/jovyan/ jupyter/r-notebook
Some other distros
$ docker run --rm -it -p 6780:8888 -v "$PWD":/tmp jupyter/r-notebook
More generally
To check whether /home/jovyan/ or /tmp applies, you can run getwd() in R to see your working directory.
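Alternatively, you can check from the host where the volume actually landed (a sketch; use the container ID that docker ps reports):
$ docker ps
# show the bind mounts configured for the running notebook container
$ docker inspect -f '{{ json .Mounts }}' <container_id>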
Further threads
There is a Reddit discussion that covers the topic more generally.
Posting this as an answer since the location seems to have changed, and the accepted answer doesn't spell out in full how to get your local directory to show up in the TensorFlow Jupyter image (substitute an appropriate <localdir> and <dockerdir>):
docker run --runtime=nvidia -it \
    --name tensorflow \
    -p 8888:8888 \
    -v ~/<localdir>:/tf/<dockerdir> \
    tensorflow/tensorflow:nightly-jupyter
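Once the container is up, you can confirm the mount from the host (a hedged check; tensorflow is the container name given above):
# the mounted files should be listed here
docker exec tensorflow ls /tf/<dockerdir>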
Karl L thinks the solution is the following. It was moved here from the question so that everyone can judge it and the question stays easier to read.
Solution
sudo nvidia-docker create -v /Users/user/tf/src:/notebooks \
    -it -p 8888:8888 -e PASSWORD=password \
    --name container_name gcr.io/tensorflow/tensorflow:latest-gpu
As @fendi-d pointed out, I was mounting my volume incorrectly.
I was also pointing at the wrong mount directory inside the container; I found the correct one in the TensorFlow Dockerfile:
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/dockerfiles/dockerfiles/gpu.Dockerfile
which configures the Jupyter notebook and then copies the sample notebooks to "/notebooks":
# Set up our notebook config.
COPY jupyter_notebook_config.py /root/.jupyter/
# Copy sample notebooks.
COPY notebooks /notebooks
After I ran it with the correct mounting path, Jupyter showed my files located in "/Users/user/tf/src".
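Since docker create only creates the container, you then have to bring it up before the files appear (a sketch; the name matches the command above):
# start the created container, then browse to http://localhost:8888
docker start container_name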
Related
I have followed the steps in the official CUDA on WSL tutorial (https://docs.nvidia.com/cuda/wsl-user-guide/index.html#ch05-sub02-jupyter) to set up a Jupyter notebook. However, I can't figure out how to change the initial working directory. I tried mounting a local directory with the -v switch as well as appending --notebook-dir to the launch command, but neither of these solutions worked. The Jupyter notebook always starts under "/tf" no matter what I do. Ideally, I would like this to be the same working directory as the one I have on Windows (C:\Users\MyUser).
The only thing I haven't tried is changing the WORKDIR in the Docker image "tensorflow/tensorflow:latest-gpu-py3-jupyter" supplied by hub.docker.com, as I am not even sure it is possible to edit it (line 57).
Here is a sample command I have tried running:
docker run -it --gpus all -p 8888:8888 -v /c/Users/MyUser/MyFolder:/home/MyFolder/ tensorflow/tensorflow:latest-gpu-py3-jupyter jupyter notebook --allow-root --ip=0.0.0.0 --NotebookApp.allow_origin='https://colab.research.google.com' --notebook-dir=/c/Users/MyUser/
What is the easiest way to achieve this?
I was able to solve this problem by mounting the directory I want to work in under the local directory given in the startup message "Serving notebooks from local directory: /tf". In my case it's /tf, but yours could be different. In addition, I changed the first '/' to '//'. Also, the image name should be the last argument (per https://stackoverflow.com/a/34503625). So in your case, the command looks like:
docker run -it --gpus all -p 8888:8888 -v //c/Users/MyUser/MyFolder:/tf/home/MyFolder tensorflow/tensorflow:latest-gpu-py3-jupyter
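If you additionally want Jupyter to open in that folder instead of /tf, point --notebook-dir at the container-side path rather than the Windows path (an untested sketch built from your own command):
docker run -it --gpus all -p 8888:8888 -v //c/Users/MyUser/MyFolder:/tf/home/MyFolder tensorflow/tensorflow:latest-gpu-py3-jupyter jupyter notebook --allow-root --ip=0.0.0.0 --notebook-dir=/tf/home/MyFolder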
I have set up a Jupyter notebook on the correct port in Docker. Every time, I need to upload data into the notebook to do analysis. Is there any way I can set my Jupyter file location to a particular folder, keeping in mind that I'm using Docker?
You need to mount your folder as a volume in the Docker container. For example, if you use the jupyter/all-spark-notebook image, you can run:
docker run -it --rm -p 8888:8888 -p 4040:4040 -v your-path:/home/jovyan/workspace jupyter/all-spark-notebook
Replace your-path with the path that contains your notebooks.
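For instance, to expose a notebooks folder in your current directory (an illustrative path; substitute your own):
# ./notebooks on the host shows up as workspace/ in the Jupyter GUI
docker run -it --rm -p 8888:8888 -p 4040:4040 -v "$PWD"/notebooks:/home/jovyan/workspace jupyter/all-spark-notebook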
I want to use the official Tensorflow docker images to create and edit a Jupyter notebook stored on the host.
I'm a little confused with what switches I need to provide. To run a Tensorflow script on the host the docs suggest:
docker run -it --rm -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./script.py
..and to run the Jupyter service:
docker run -it -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
When I try merging the switches to run Jupyter + mount the host volume:
docker run -it --rm -v $PWD:/tmp -w /tmp -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
...it's still accessing notebooks stored in the container, not on the host.
Notebooks are stored inside the container in the /tf folder, so mounting your files there will do the trick:
docker run -it --rm -v $PWD:/tf -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
The first command you mentioned is used to run a TensorFlow program developed on the host machine, not a notebook.
I am running the TensorFlow GPU docker image found here: https://www.tensorflow.org/install/install_linux#InstallingDocker
I am running this on Ubuntu.
I am new to Docker containers and I was hoping someone could help me figure out how to make my Jupyter Notebook see the hard drives I have mounted on the host machine.
In the docker run command you need to include the -v (or --volume) parameter.
For example:
docker run -it -v /home/media/user:/tmp/media -p 8888:8888 tensorflow/tensorflow:latest-gpu bash
Then you will find those files in /tmp/media of the docker container.
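Note that the trailing bash overrides the image's default command, so Jupyter does not start automatically; from the container's shell you can launch it yourself (a sketch with the usual flags):
# inside the container shell
jupyter notebook --ip=0.0.0.0 --allow-root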
Suppose I am running CentOS. I installed Docker, then ran an image.
Suppose I use this image:
https://github.com/jupyter/docker-stacks/tree/master/pyspark-notebook
Then I run
docker run -it --rm -p 8888:8888 jupyter/pyspark-notebook
Now, I can open the browser at localhost:8888, create a new Jupyter notebook, type code and run it, etc.
However, how can I access the files I created and, for example, commit them to GitHub? Furthermore, if I already have some code on GitHub, how can I pull that code and access it from Docker?
You need to mount the volume:
docker run -it --rm -p 8888:8888 -v /opt/pyspark-notebook:/home/jovyan jupyter/pyspark-notebook
You could have just executed !pwd in a new notebook to find which folder it stores the work in, and then mounted that as a volume. When you run it as above, the files will be available on your host in /opt/pyspark-notebook.
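For example (a sketch; in the jupyter/* images this typically prints /home/jovyan, which is why that path is used as the mount target above):
# in a notebook cell: print Jupyter's working directory
!pwd
# /home/jovyan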