How to run TensorBoard and Jupyter concurrently with Docker?

I'm starting to learn how to use TensorFlow for machine learning, and I have found Docker to be pretty convenient for deploying TensorFlow to my machine. However, the examples I could find did not work with my target setup, which is:
Under Ubuntu 16.04, use nvidia-docker to host the Jupyter and TensorBoard services together (either as two containers or as one container running both services), with files created from Jupyter visible to the host OS.
Ubuntu 16.04
Docker
nvidia-docker
Jupyter
TensorBoard
Jupyter container
nvidia-docker run \
--name jupyter \
-d \
-v $(pwd)/notebooks:/root/notebooks \
-v $(pwd)/logs:/root/logs \
-e "PASSWORD=*****" \
-p 8888:8888 \
tensorflow/tensorflow:latest-gpu
TensorBoard container
nvidia-docker run \
--name tensorboard \
-d \
-v $(pwd)/logs:/root/logs \
-p 6006:6006 \
tensorflow/tensorflow:latest-gpu \
tensorboard --logdir /root/logs
I tried mounting the logs folder to both containers so that TensorBoard could access the results from Jupyter, but the mount does not seem to work: when I create a new file in the Jupyter container's notebooks folder, nothing appears in the host folder $(pwd)/notebooks.
I also followed the instructions in Nvidia Docker, Jupyter Notebook and Tensorflow GPU
nvidia-docker run -d -e PASSWORD='winrar' -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu-py3
Only Jupyter worked; TensorBoard could not be reached on port 6006.

I was facing the same problem today.
Short answer: I'm going to assume you are using the same container for both Jupyter Notebook and TensorBoard. So, as you wrote, you can deploy the container with:
nvidia-docker run -d --name tensor -e PASSWORD='winrar' \
-p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu-py3
Now you can access both ports 8888 and 6006, but first you need to start TensorBoard:
docker exec -it tensor bash
tensorboard --logdir /root/logs
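Note that the exec above keeps your shell attached to TensorBoard. A small variation (my sketch, using the same container name) runs it detached instead, so it keeps serving in the background:
docker exec -d tensor tensorboard --logdir /root/logs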
About the other option, running Jupyter and TensorBoard in different containers: if you have problems mounting the same directory in different containers (in the past there was a bug about that), note that since Docker 1.9 you can create named volumes that are independent of any particular container. This may be a solution:
Create two volumes to store logs and notebooks.
Deploy both images with these volumes.
docker volume create --name notebooks
docker volume create --name logs
nvidia-docker run \
--name jupyter \
-d \
-v notebooks:/root/notebooks \
-v logs:/root/logs \
-e "PASSWORD=*****" \
-p 8888:8888 \
tensorflow/tensorflow:latest-gpu
nvidia-docker run \
--name tensorboard \
-d \
-v logs:/root/logs \
-p 6006:6006 \
tensorflow/tensorflow:latest-gpu \
tensorboard --logdir /root/logs
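If you would rather manage the pair as one unit, here is a minimal docker-compose sketch of the same two-container setup. This is my addition, not part of the original answer; it assumes compose file format 2.3, which supports the runtime key used to select the nvidia runtime:
version: "2.3"
services:
  jupyter:
    image: tensorflow/tensorflow:latest-gpu
    runtime: nvidia            # equivalent of launching via nvidia-docker
    environment:
      - PASSWORD=*****
    ports:
      - "8888:8888"
    volumes:
      - notebooks:/root/notebooks
      - logs:/root/logs
  tensorboard:
    image: tensorflow/tensorflow:latest-gpu
    runtime: nvidia
    command: tensorboard --logdir /root/logs
    ports:
      - "6006:6006"
    volumes:
      - logs:/root/logs
volumes:
  notebooks:
  logs:
Both services then share the named logs volume, exactly as in the manual commands above.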

As an alternative, you can also use the ML Workspace Docker image. The ML Workspace is a web IDE that combines Jupyter, TensorBoard, VS Code, and many other tools & libraries into one convenient Docker image. Deploying a single workspace instance is as simple as:
docker run -p 8080:8080 mltooling/ml-workspace:latest
All tools are accessible from the same port. Information on how to access TensorBoard is in the project's documentation.
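If you also want your work to survive container restarts, a sketch along the lines of the image's documented usage (assuming /workspace is still its data directory) mounts a host folder there:
docker run -d -p 8080:8080 -v "$(pwd):/workspace" mltooling/ml-workspace:latest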

Related

How to import a file from localhost into Docker?

I want to open a folder from my host machine in the Jupyter notebook application (like in this video: https://www.youtube.com/watch?v=W3bk2pojLoU). I tried several variations of docker run -it --rm --name tf -v /Users/superuser/mywork:/notebooks -p 8888:8888 -p 6006:6006 tensorflow/tensorflow:latest-py3-jupyter, but it doesn't work, and I don't see what is wrong.
Thanks for every answer (Y)
I am going to speculate, but I think what you mean by 'it doesn't work' is that you do not see the mywork folder from the host in the file list within the Jupyter web UI. If that is the case, what you want to try is mounting the volume to the /tf folder, i.e.
docker run -it --rm --name tf \
-v /Users/superuser/mywork:/tf/notebooks \
-p 8888:8888 \
-p 6006:6006 \
tensorflow/tensorflow:latest-py3-jupyter
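If the folder still does not show up, a quick sanity check (my suggestion, assuming the container is running under the name tf from the command above) is to list the mount target from a second terminal:
docker exec tf ls /tf/notebooks
If your host files appear in that listing, the mount works and the issue is only where the notebook UI is looking.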

How to run TensorBoard in Docker container without root privileges?

I am running tensorflow-gpu in a Docker container.
At the moment I am only able to run and access TensorBoard when I access the running Docker container using root privileges. I would like to accomplish this without using root privileges. How can this be accomplished?
Here is some information on what I am doing and what has worked:
I am running tensorflow-gpu using the Docker containers provided by TensorFlow, with the following command:
$ docker run \
-u $(id -u username):$(id -g username) \
-it --rm --runtime=nvidia \
-v $(realpath ~/data/workspace/notebooks):/tf/notebooks \
-v $(realpath ~/data/workspace/):/tf/workspace \
-v $(realpath ~/data/images/):/tf/images \
-p 8888:8888 -p 6007-6015:6007-6015 tensorflow/tensorflow:2.0.0a0-gpu-py3-jupyter
In the command line for starting the container, I added additional ports for TensorBoard.
I managed to run TensorBoard by doing the following:
The container is running (using the commands above for startup)
→ every attempt to run and access TensorBoard from within the running Jupyter notebook fails
From the Docker host PC, I run the following commands:
$ docker ps to get the container name
$ sudo docker exec -it <container name> bash
→ I tried this with and without sudo; without sudo, the command below will not work
tf-docker /tf > tensorboard --logdir <log directory> --port 6007
Now I am able to access TensorBoard on localhost:6007
I am new to Docker and TensorFlow, and a newcomer to Linux (Ubuntu).
I would like to accomplish what I described above without using root privileges.
Is there a way to do that?
What would be the best/correct way?
What is your best-practice advice?
Edit 2019-06-24:
I do not know why it did not work out in the first place; perhaps I used the wrong port. This is what I have accomplished so far.
I start the container using the following command line, where I changed the port for TensorBoard to 6006:
$ docker run \
-u $(id -u username):$(id -g username) \
-it --rm --runtime=nvidia \
-v $(realpath ~/data/workspace/notebooks):/tf/notebooks \
-v $(realpath ~/data/workspace/):/tf/workspace \
-v $(realpath ~/data/images/):/tf/images \
-p 8888:8888 -p 6006:6006 tensorflow/tensorflow:2.0.0a0-gpu-py3-jupyter
Then, from the command line, I start a bash shell inside the Docker container without using root privileges: $ docker exec -it <container name> bash
After that, I start TensorBoard and open the link from the output in a web browser: tf-docker /tf > tensorboard --logdir <log directory> --port 6006
Instead of the previous command, I can also start TensorBoard from the Jupyter notebook:
%reload_ext tensorboard.notebook
%tensorboard --logdir=<log directory> --port=6006
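As a side note of mine, not from the original post: in later TensorBoard releases (1.14 and up) the tensorboard.notebook extension was folded into the main package, so the equivalent magics are:
%load_ext tensorboard
%tensorboard --logdir=<log directory> --port=6006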
Edit 2019-10-09:
Since using the TensorFlow 2.0.0 release with TensorBoard 2.0.0, I have to start TensorBoard as follows:
$ tensorboard --logdir=<log directory> --host 0.0.0.0 --port 6006
Without explicitly adding the host option it does not work.
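For context (my note): TensorBoard 2.0 changed the default host from 0.0.0.0 to localhost, which is why the flag is now needed inside a container whose ports are mapped. The --bind_all flag is an equivalent shorthand in those versions:
$ tensorboard --logdir=<log directory> --bind_all --port 6006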
Here are the steps I followed to visualize the results with TensorBoard:
When creating the container, map an external port for TensorBoard:
nvidia-docker run -d --name tkra_tensorb --ipc=host -it \
-p 8513:8090 -p 3014:6006 -v /data:/data tkra_tb
Inside the container, run TensorBoard:
tensorboard --logdir /data/tkra/MyDatasets/resnet101/checkpoints/ \
--host 0.0.0.0 --port 6006
Open TensorBoard in my browser: <server_address>:3014

How to save and edit a Jupyter notebook in a host directory using official Tensorflow docker container?

I want to use the official Tensorflow docker images to create and edit a Jupyter notebook stored on the host.
I'm a little confused about which switches I need to provide. To run a TensorFlow script on the host, the docs suggest:
docker run -it --rm -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./script.py
...and to run the Jupyter service:
docker run -it -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
When I try merging the switches to run Jupyter + mount the host volume:
docker run -it --rm -v $PWD:/tmp -w /tmp -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
...it's still accessing notebooks stored in the container, not on the host.
Notebooks are stored inside the container's /tf folder, so mounting your host directory there will do the trick:
docker run -it --rm -v $PWD:/tf -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
The first command you mentioned is used to run a TensorFlow program developed on the host machine, not a notebook.
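If you want to keep the container's bundled tutorial notebooks visible as well, a variation of my own, mirroring the /tf/notebooks mount from the earlier answer, is to mount into a subfolder instead of replacing /tf entirely:
docker run -it --rm -v $PWD:/tf/notebooks -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter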

How can I save zeppelin notebook from a docker?

I am using a Docker container for Spark/Zeppelin. The Docker image was found here:
https://github.com/Gmousse/docker-zeppelin-python3
I can start the image and work with it using this command:
docker run -it -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
To be able to communicate with the host, I have mounted some paths from the host with the volume flag, like this:
docker run -it -v /cephfs:/cephfs -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
It works fine. Now, to mount the Zeppelin working directory, I added this:
docker run -it -v /cephfs:/cephfs -v my_path_on_host:/zeppelin -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
And this does not run: the command actually looks for a zeppelin.sh file in /zeppelin and fails.
Any idea how I can mount a local volume and still be able to save Zeppelin notebooks on the host?
Thank you for your time, in advance...
It is very handy to store notebooks on the local file system, especially under version control.
You only need to mount the notebook folder, but you tried to mount the whole Zeppelin folder, so on startup the container could not find the Zeppelin files.
Correct mount examples:
docker run \
-p 8080:8080 \
-v /home/user/zeppelin_notebooks:/zeppelin/notebook \
apache/zeppelin:0.8.0
docker run \
-p 8080:8080 \
--mount type=bind,source="$(pwd)"/zeppelin_notebooks,target=/zeppelin/notebook \
--rm --name zeppelin apache/zeppelin:0.8.0
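To check that the bind mount works (assuming the container is running under the name zeppelin from the second command), create a note in the web UI and list the mounted folder from another terminal:
docker exec zeppelin ls /zeppelin/notebook
The same note files should also appear in $(pwd)/zeppelin_notebooks on the host.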
For my Apache Zeppelin Docker container hosted on Windows 10, the working directory is /opt/zeppelin and the default path for notebooks is /opt/zeppelin/notebook, so I mount my Windows paths as below. All notebooks are then saved in C:/Zeppelin/notebook:
docker run -p 8080:8080 -v C:/Zeppelin/Data/:/opt/zeppelin/Data/ -v C:/Zeppelin/notebook:/opt/zeppelin/notebook --name zeppelin apache/zeppelin:0.10.0

Inception and Work directory of Docker

I am using Docker to run TensorFlow and retrain the Inception module. I use the following command:
docker run -it \
--publish 6006:6006 \
--volume ${HOME}/tf_files:/tf_files \
--workdir /tf_files \
tensorflow/tensorflow:1.1.0 bash
Then I use:
python retrain.py \
--bottleneck_dir=bottlenecks \
--how_many_training_steps=500 \
--model_dir=inception \
--summaries_dir=training_summaries/basic \
--output_graph=retrained_graph.pb \
--output_labels=retrained_labels.txt \
--image_dir=flower_photos
When I run these commands, the flower_photos directory has to be inside the Docker container. However, I want this directory to be in my home directory on the host instead (/user/documents/flower_photos). What should I do?
You can use a volume to map a host folder to a container folder:
docker run -it \
...
-v /user/documents/flower_photos:/path/to/inception/flower_photos
That way, the inception module would find an existing folder with your host content.
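Putting it together with the command from the question (my sketch: since the working directory is /tf_files and retrain.py is called with --image_dir=flower_photos, the mount target would be /tf_files/flower_photos):
docker run -it \
--publish 6006:6006 \
--volume ${HOME}/tf_files:/tf_files \
--volume /user/documents/flower_photos:/tf_files/flower_photos \
--workdir /tf_files \
tensorflow/tensorflow:1.1.0 bash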
