How to run TensorBoard in Docker container without root privileges? - docker

I am running tensorflow-gpu in a Docker container.
At the moment I am only able to run and access TensorBoard when I access the running Docker container using root privileges. I would like to accomplish this without using root privileges. How can this be accomplished?
Here is some information on what I am doing and what has worked so far:
I am running tensorflow-gpu using the Docker containers provided by TensorFlow, started with the following command.
$ docker run \
-u $(id -u username):$(id -g username) \
-it --rm --runtime=nvidia \
-v $(realpath ~/data/workspace/notebooks):/tf/notebooks \
-v $(realpath ~/data/workspace/):/tf/workspace \
-v $(realpath ~/data/images/):/tf/images \
-p 8888:8888 -p 6007-6015:6007-6015 tensorflow/tensorflow:2.0.0a0-gpu-py3-jupyter
In the command line for starting the container I added additional ports for TensorBoard.
I managed to run TensorBoard by doing the following.
The container is running (started with the command above)
→ every attempt to run and access TensorBoard from within the running Jupyter notebook fails
From the docker host PC I run the following commands:
$ docker ps (to get the container name)
$ sudo docker exec -it <container name> bash
→ I tried this with and without sudo; without sudo, the command below will not work
tf-docker /tf > tensorboard --logdir <log directory> --port 6007
Now I am able to access the TensorBoard on localhost:6007
I am new to Docker and TensorFlow, and a newcomer to Linux (Ubuntu).
I would like to accomplish what I described above without using root privileges.
Is there a way to do that?
What would be the best/correct way?
What is your best practice advice?
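One possibility for the sudo part (a hedged note, assuming a standard Docker installation on Ubuntu where the daemon's socket belongs to a group named docker): docker commands usually need root only because the user is not in that group, so adding the user to it lets docker exec run without sudo. Note that docker group membership is effectively root-equivalent on the host.
# run once on the host, then log out and back in so the group change takes effect
sudo usermod -aG docker $USER
# afterwards docker exec should work without sudo
docker exec -it <container name> bash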
Edit 2019-06-24:
I do not know why it did not work out in the first place; perhaps I used the wrong port. This is what I have accomplished so far.
I start the container using the following command line, where I changed the port for TensorBoard to 6006:
$ docker run \
-u $(id -u username):$(id -g username) \
-it --rm --runtime=nvidia \
-v $(realpath ~/data/workspace/notebooks):/tf/notebooks \
-v $(realpath ~/data/workspace/):/tf/workspace \
-v $(realpath ~/data/images/):/tf/images \
-p 8888:8888 -p 6006:6006 tensorflow/tensorflow:2.0.0a0-gpu-py3-jupyter
Then from the command line I start a bash shell inside the Docker container without using root privileges: $ docker exec -it <container name> bash
After that, I start TensorBoard and open the link from the output in a web browser: tf-docker /tf > tensorboard --logdir <log directory> --port 6006
Instead of the previous command, I could also start TensorBoard from within the Jupyter notebook:
%reload_ext tensorboard.notebook
%tensorboard --logdir=<log directory> --port=6006
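Note that in later TensorBoard releases the notebook extension was renamed, so a hedged equivalent of the two magics above (assuming TensorBoard 1.14 or newer) would be:
%load_ext tensorboard
%tensorboard --logdir=<log directory> --port=6006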
Edit 2019-10-09:
Since using the TensorFlow 2.0.0 release with TensorBoard 2.0.0, I have to start TensorBoard as follows:
$ tensorboard --logdir=<log directory> --host 0.0.0.0 --port 6006
Without explicitly adding the host option it does not work.
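For reference, recent TensorBoard versions also offer a shorthand for binding to all network interfaces; a hedged equivalent of the command above (assuming TensorBoard 2.0 or newer, where the --bind_all flag is available) is:
$ tensorboard --logdir=<log directory> --bind_all --port 6006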

These are the steps I followed to visualise the results with TensorBoard:
When creating the container, open/map an external port for TensorBoard:
nvidia-docker run -d --name tkra_tensorb --ipc=host -it \
-p 8513:8090 -p 3014:6006 -v /data:/data tkra_tb
Inside the container, run TensorBoard:
tensorboard --logdir /data/tkra/MyDatasets/resnet101/checkpoints/ \
--host 0.0.0.0 --port 6006
Open TensorBoard in the browser: <server_address>:3014
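A hedged alternative, if you would rather not open a shell inside the container at all: pass the TensorBoard command directly as the container's command when creating it. This assumes tensorboard is on the PATH of the tkra_tb image (which the exec step above shows); tkra_tensorb_tb is a hypothetical container name, and only the TensorBoard port is mapped here.
nvidia-docker run -d --name tkra_tensorb_tb --ipc=host \
-p 3014:6006 -v /data:/data tkra_tb \
tensorboard --logdir /data/tkra/MyDatasets/resnet101/checkpoints/ \
--host 0.0.0.0 --port 6006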

Related

run docker in jenkins container (docker in docker)

I'm trying to run Docker inside a Jenkins container. I used this command to create the Jenkins container:
docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock -v $(which docker):/usr/bin/docker jenkins/jenkins:latest
then this command to access the Jenkins container's bash:
docker exec -u 0 -it <container-id> bash
Whenever I run docker I get this error:
docker: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.32' not found (required by docker) docker: /lib/x86_64-linux-gnu/libc.so.6: version `GLIBC_2.34' not found (required by docker)
What is causing this problem, and how can I solve it?
Mounting the host's docker binary into the container is not reliable anymore, because the Docker Engine is no longer distributed as an (almost) static binary.
So instead, run docker run -p 8080:8080 -p 50000:50000 -d -v jenkins_home:/var/jenkins_home -v /var/run/docker.sock:/var/run/docker.sock jenkins/jenkins:latest
then this command to access the Jenkins container's bash as the root user: docker exec -u 0 -it <container-id> bash
Once inside the Jenkins container, simply run this command to install docker inside of the Jenkins container: curl https://get.docker.com/ > dockerinstall && chmod 777 dockerinstall && ./dockerinstall
This command downloads the Docker quick installation script and runs it, which installs Docker inside the container.
Exit the Jenkins container's interactive shell and run the following command on the host to change permissions on docker.sock so the Jenkins user can reach the Docker daemon: sudo chmod 666 /var/run/docker.sock
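A hedged alternative, if you would rather not re-run the install script every time the container is recreated: bake the Docker CLI into a custom image built on top of the Jenkins image. The Dockerfile below is only a sketch; it uses the same convenience script as above and assumes the Debian-based official jenkins/jenkins image.
FROM jenkins/jenkins:latest
USER root
# install the Docker CLI via the convenience script, then clean up
RUN apt-get update && apt-get install -y curl \
 && curl -fsSL https://get.docker.com/ -o /tmp/get-docker.sh \
 && sh /tmp/get-docker.sh \
 && rm /tmp/get-docker.sh
USER jenkins
Build it with docker build -t jenkins-docker . (jenkins-docker is a hypothetical tag) and use it in place of jenkins/jenkins:latest in the docker run command above.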
Solved by downgrading my server OS to Ubuntu 18.

How to import a file from localhost into Docker?

I want to open a folder from my host machine in the Jupyter notebook application (like in this video: https://www.youtube.com/watch?v=W3bk2pojLoU). I tried several variations of docker run -it --rm --name tf -v /Users/superuser/mywork:/notebooks -p 8888:8888 -p 6006:6006 tensorflow/tensorflow:latest-py3-jupyter, but it doesn't work. Something must be wrong, but I can't figure out what it is.
Thanks for every answer (Y)
I am going to speculate, but I think what you mean by 'it doesn't work' is that you do not see the host's mywork folder in the file list within the Jupyter web UI. If that is the case, what you want to try is mounting the volume into the /tf folder, i.e.
docker run -it --rm --name tf \
-v /Users/superuser/mywork:/tf/notebooks \
-p 8888:8888 \
-p 6006:6006 \
tensorflow/tensorflow:latest-py3-jupyter
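If the folder still looks empty in Jupyter, it may help to confirm the mount from the host side (tf is the container name from the command above):
docker exec tf ls /tf/notebooks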

How to save and edit a Jupyter notebook in a host directory using official Tensorflow docker container?

I want to use the official Tensorflow docker images to create and edit a Jupyter notebook stored on the host.
I'm a little confused with what switches I need to provide. To run a Tensorflow script on the host the docs suggest:
docker run -it --rm -v $PWD:/tmp -w /tmp tensorflow/tensorflow python ./script.py
..and to run the Jupyter service:
docker run -it -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
When I try merging the switches to run Jupyter + mount the host volume:
docker run -it --rm -v $PWD:/tmp -w /tmp -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
...it's still accessing notebooks stored in the container, not the host.
Notebooks are stored inside the container's /tf folder, so mounting your host directory there will do the trick:
docker run -it --rm -v $PWD:/tf -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter
The first command you mentioned is used to run a TensorFlow program developed on the host machine, not a notebook.
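A hedged variant, in case you want to keep the example notebooks that ship inside the image's /tf folder visible as well: mount the host directory as a subfolder of /tf instead of replacing it.
docker run -it --rm -v $PWD:/tf/notebooks -p 8888:8888 tensorflow/tensorflow:nightly-py3-jupyter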

How to run Tensorboard and jupyter concurrently with docker?

I'm starting to learn how to use TensorFlow for machine learning, and I find Docker pretty convenient for deploying TensorFlow to my machine. However, the examples I could find did not work for my target setup, which is:
Under Ubuntu 16.04, using nvidia-docker to host the Jupyter and TensorBoard services together (could be two containers, or one container with two services), with files created from Jupyter visible to the host OS.
Ubuntu 16.04
Docker
nvidia-docker
Jupyter
TensorBoard
Jupyter container
nvidia-docker run \
--name jupyter \
-d \
-v $(pwd)/notebooks:/root/notebooks \
-v $(pwd)/logs:/root/logs \
-e "PASSWORD=*****" \
-p 8888:8888 \
tensorflow/tensorflow:latest-gpu
Tensorboard container
nvidia-docker run \
--name tensorboard \
-d \
-v $(pwd)/logs:/root/logs \
-p 6006:6006 \
tensorflow/tensorflow:latest-gpu \
tensorboard --logdir /root/logs
I tried to mount the logs folder to both containers and let TensorBoard access the results from Jupyter, but the mount does not seem to work: when I create a new file in the Jupyter container's notebooks folder, nothing appears in the host folder $(pwd)/notebooks.
I also followed the instructions in Nvidia Docker, Jupyter Notebook and Tensorflow GPU
nvidia-docker run -d -e PASSWORD='winrar' -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu-py3
Only Jupyter worked; TensorBoard could not be reached on port 6006.
I was facing the same problem today.
Short answer: I'm going to assume you are using the same container for both Jupyter Notebook and tensorboard. So, as you wrote, you can deploy the container with:
nvidia-docker run -d --name tensor -e PASSWORD='winrar'\
-p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu-py3
Now you can access both ports 8888 and 6006, but first you need to start TensorBoard:
docker exec -it tensor bash
tensorboard --logdir /root/logs
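If you prefer not to keep an interactive shell open, a hedged one-liner (same container name and log directory as above) starts TensorBoard detached instead:
docker exec -d tensor tensorboard --logdir /root/logs --host 0.0.0.0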
About the other option: running Jupyter and TensorBoard in different containers. If you have problems mounting the same directories in different containers (in the past there was a bug about that), note that since Docker 1.9 you can create independent volumes not linked to particular containers. This may be a solution.
Create two volumes to store logs and notebooks.
Deploy both images with these volumes.
docker volume create --name notebooks
docker volume create --name logs
nvidia-docker run \
--name jupyter \
-d \
-v notebooks:/root/notebooks \
-v logs:/root/logs \
-e "PASSWORD=*****" \
-p 8888:8888 \
tensorflow/tensorflow:latest-gpu
nvidia-docker run \
--name tensorboard \
-d \
-v logs:/root/logs \
-p 6006:6006 \
tensorflow/tensorflow:latest-gpu \
tensorboard --logdir /root/logs
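If you go this route, the same two-container setup can also be written declaratively; below is a hedged docker-compose sketch, assuming Compose file format 2.3 (where the runtime key for nvidia is available) and letting Compose create and manage the two named volumes itself:
# docker-compose.yml (sketch)
version: "2.3"
services:
  jupyter:
    image: tensorflow/tensorflow:latest-gpu
    runtime: nvidia
    environment:
      - PASSWORD=*****
    ports:
      - "8888:8888"
    volumes:
      - notebooks:/root/notebooks
      - logs:/root/logs
  tensorboard:
    image: tensorflow/tensorflow:latest-gpu
    runtime: nvidia
    command: tensorboard --logdir /root/logs
    ports:
      - "6006:6006"
    volumes:
      - logs:/root/logs
volumes:
  notebooks:
  logs: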
As an alternative, you can also use the ML Workspace Docker image. The ML Workspace is a web IDE that combines Jupyter, TensorBoard, VS Code, and many other tools & libraries into one convenient Docker image. Deploying a single workspace instance is as simple as:
docker run -p 8080:8080 mltooling/ml-workspace:latest
All tools are accessible from the same port. You can find information on how to access TensorBoard here.

Virtualbox inside Docker

I'm trying to get VirtualBox to run inside of Docker. I'm using this: https://registry.hub.docker.com/u/jess/virtualbox/dockerfile/.
When I run the command:
sudo docker run -d \
-v /tmp/.X11-unix:/tmp/.X11-unix \
-e DISPLAY=unix$DISPLAY \
--privileged \
--name virtualbox \
jess/virtualbox
This creates the virtualbox container. When I run sudo docker start container_id, it echoes back the container_id but the container does not show up among the running containers. I check with sudo docker ps and it is not there; however, it is there with sudo docker ps -a.
What am I doing wrong? I get no errors either.
EDIT: I'm running Docker in Ubuntu 15.04 (Not inside VirtualBox)
You have to let Docker connect to your local X server. There are different ways to do this. One straightforward way is running xhost +local:docker before starting your container (i.e. before docker run).
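One hedged follow-up on that approach: the xhost grant stays in effect until it is revoked, so you may want to undo it once the container is stopped.
xhost +local:docker   # on the host, before docker run
xhost -local:docker   # revoke the access again when done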
