How to import a file from localhost into Docker?

I want to open a folder from my host machine in the Jupyter notebook application (like in this video: https://www.youtube.com/watch?v=W3bk2pojLoU). I tried several variations of docker run -it --rm --name tf -v /Users/superuser/mywork:/notebooks -p 8888:8888 -p 6006:6006 tensorflow/tensorflow:latest-py3-jupyter, but it doesn't work. Something must be wrong, but I can't figure out what.
Thanks for every answer (Y)

I am going to speculate, but I think what you mean by 'it doesn't work' is that you do not see the mywork folder from the host in the file list of the Jupyter web UI. If that is the case, what you want to try is mounting the volume under the /tf folder, i.e.:
docker run -it --rm --name tf \
-v /Users/superuser/mywork:/tf/notebooks \
-p 8888:8888 \
-p 6006:6006 \
tensorflow/tensorflow:latest-py3-jupyter
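If the folder still shows up empty in the notebook UI, a quick sanity check (using the container name tf from the command above) is to list the mount target from the host:
docker exec tf ls /tf/notebooks
If your files appear here but not in Jupyter, the problem is the mount target rather than the mount itself.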

Related

Cannot open folder in a docker container

I am really new to working with Docker. Now I want to open a particular folder in the Docker container so that I can save the Jupyter Notebook files I create. I am doing this on Windows 10.
If I try to do it this way:
docker run -it -p 8888:8888 -v C:/Users/Larry/AI/bootcamp:/home/jovyan/bootcamp --rm --name jupyter jupyter/tensorflow-notebook
I get an error:
C:\Program Files\Docker Toolbox\docker.exe: Error response from daemon: invalid mode: /home/jovyan/bootcamp.
If I do it this way:
docker run -it -p 8888:8888 -v /User/Larry/AI/bootcamp:/home/jovyan/bootcamp --rm --name jupyter jupyter/tensorflow-notebook
The container is created and I can create a new Jupyter file, but it is not saved. Does anyone see what is wrong?
This could be related to this issue: the : in C: is confusing the argument parser.
The workaround might simply be to rewrite the volume mount with the --mount syntax, as mentioned in the GitHub issue:
docker run --mount type=bind,source=/path/with:colon,destination=/mnt
Update
docker run -it -p 8888:8888 --mount type=bind,source=C:/Users/Larry/AI/bootcamp,destination=/home/jovyan/bootcamp --rm --name jupyter jupyter/tensorflow-notebook
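As a side note, since the error mentions Docker Toolbox: with Docker Toolbox the drive letter is usually written without a colon, so a variant worth trying (untested sketch) is:
docker run -it -p 8888:8888 -v //c/Users/Larry/AI/bootcamp:/home/jovyan/bootcamp --rm --name jupyter jupyter/tensorflow-notebook
Keep in mind that Docker Toolbox only shares C:\Users with its VM by default; paths outside it have to be added as VirtualBox shared folders first.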

How to run TensorBoard in Docker container without root privileges?

I am running tensorflow-gpu in a Docker container.
At the moment I am only able to run and access TensorBoard when I access the running Docker container using root privileges. I would like to accomplish this without using root privileges. How can this be accomplished?
Here is some information on what I am doing and what has worked so far:
I am running tensorflow-gpu using the Docker containers provided by TensorFlow, started with the following command.
$ docker run \
-u $(id -u username):$(id -g username) \
-it --rm --runtime=nvidia \
-v $(realpath ~/data/workspace/notebooks):/tf/notebooks \
-v $(realpath ~/data/workspace/):/tf/workspace \
-v $(realpath ~/data/images/):/tf/images \
-p 8888:8888 -p 6007-6015:6007-6015 tensorflow/tensorflow:2.0.0a0-gpu-py3-jupyter
In the command line for starting the container I added additional ports for TensorBoard.
I managed to run TensorBoard by doing the following:
The container is running (using the commands above for startup)
→ every attempt to run and access TensorBoard from within the running Jupyter notebook fails
From the docker host PC I run the following commands:
$ docker ps (to get the container name)
$ sudo docker exec -it <container name> bash
→ I tried this with and without sudo; without it, the command below will not work
tf-docker /tf > tensorboard --logdir <log directory> --port 6007
Now I am able to access TensorBoard at localhost:6007
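One way to avoid the sudo in that exec step might be to pass the same user mapping to docker exec that the container was started with; docker exec accepts the same -u/--user flag as docker run (a sketch, not verified here):
docker exec -u $(id -u username):$(id -g username) -it <container name> bash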
I am new to Docker and TensorFlow, and a newcomer to Linux (Ubuntu).
I would like to accomplish what I described above without the usage of root privileges.
Is there a way to do it without?
What would be the best/correct way?
What is your best practice advice?
Edit 2019-06-24:
I do not know why it did not work out in the first place; perhaps I used the wrong port. This is what I have accomplished so far.
I start the container using the following command line, where I changed the port for TensorBoard to 6006:
$ docker run \
-u $(id -u username):$(id -g username) \
-it --rm --runtime=nvidia \
-v $(realpath ~/data/workspace/notebooks):/tf/notebooks \
-v $(realpath ~/data/workspace/):/tf/workspace \
-v $(realpath ~/data/images/):/tf/images \
-p 8888:8888 -p 6006:6006 tensorflow/tensorflow:2.0.0a0-gpu-py3-jupyter
Then from the command line I start a bash shell inside the Docker container without using root privileges: $ docker exec -it <container name> bash
After that, I start TensorBoard and open the link from its output in a web browser: tf-docker /tf > tensorboard --logdir <log directory> --port 6006
Instead of the previous command, I could also start TensorBoard from the Jupyter notebook:
%reload_ext tensorboard.notebook
%tensorboard --logdir=<log directory> --port=6006
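Alternatively, TensorBoard could be started directly as the container's command, so no separate shell (root or otherwise) is needed at all; a minimal sketch under the same assumptions as the run command above:
docker run \
-u $(id -u username):$(id -g username) \
-d --runtime=nvidia \
-v $(realpath ~/data/workspace/):/tf/workspace \
-p 6006:6006 \
tensorflow/tensorflow:2.0.0a0-gpu-py3-jupyter \
tensorboard --logdir /tf/workspace/<log directory> --host 0.0.0.0 --port 6006
Everything after the image name replaces the image's default Jupyter startup command.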
Edit 2019-10-09:
Since using the TensorFlow 2.0.0 release with TensorBoard 2.0.0, I have to start TensorBoard as follows:
$ tensorboard --logdir=<log directory> --host 0.0.0.0 --port 6006
Without explicitly adding the host option it does not work.
The steps I followed to visualise the results with TensorBoard:
When creating the container, open/map an external port for TensorBoard:
nvidia-docker run -d --name tkra_tensorb --ipc=host -it -p 8513:8090 -p 3014:6006 -v /data:/data tkra_tb
Inside the container, run TensorBoard:
tensorboard --logdir /data/tkra/MyDatasets/resnet101/checkpoints/ --host 0.0.0.0 --port 6006
Open TensorBoard in the browser: <server_address>:3014

How can I save zeppelin notebook from a docker?

I am using a Docker container for Spark/Zeppelin. The Docker image was found here:
https://github.com/Gmousse/docker-zeppelin-python3
I can start the image and work with it using this command:
docker run -it -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
To be able to communicate with the host, I have mounted some host paths with the volume flag, like this:
docker run -it -v /cephfs:/cephfs -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
It works fine. Now, to mount the Zeppelin working directory, I added this:
docker run -it -v /cephfs:/cephfs -v my_path_on_host:/zeppelin -p 8080:8080 -p 8081:8081 gmousse/docker-zeppelin-python3
And this does not run.
With this command it actually looks for a zeppelin.sh file in /zeppelin and fails.
Any idea how I can mount a local volume and save Zeppelin notebooks on the host?
Thank you for your time, in advance...
It is very handy to store notebooks on the local file system, especially under version control.
So you need to mount only the notebook folder; you tried to mount the whole Zeppelin folder, which is why the container could not find the Zeppelin files on start.
Correct mount examples:
docker run \
-p 8080:8080 \
-v /home/user/zeppelin_notebooks:/zeppelin/notebook \
apache/zeppelin:0.8.0
docker run \
-p 8080:8080 \
--mount type=bind,source="$(pwd)"/zeppelin_notebooks,target=/zeppelin/notebook \
--rm --name zeppelin apache/zeppelin:0.8.0
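Since the notebooks now live on the host, they can go straight under version control; a minimal sketch, assuming the host folder from the first example above:
cd /home/user/zeppelin_notebooks
git init
git add .
git commit -m "initial snapshot of Zeppelin notebooks"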
For my Apache Zeppelin Docker container hosted on Windows 10, the working directory is /opt/zeppelin and the default path for notebooks is /opt/zeppelin/notebook, so I mount my Windows paths as below. As a result, all notebooks are saved in "C:/Zeppelin/notebook":
docker run -p 8080:8080 -v C:/Zeppelin/Data/:/opt/zeppelin/Data/ -v C:/Zeppelin/notebook:/opt/zeppelin/notebook --name zeppelin apache/zeppelin:0.10.0

Using LOAD CSV to import a local file to Neo4j in a Docker container

So I've been trying to import an external CSV file into my graph database.
My Neo4j instance is stored in a Docker container.
I placed the file in NEO_HOME/import, as implied.
I called the LOAD CSV command with "file:///mycsv.csv" as an argument, and got the following in return:
Couldn't load the external resource at: file:/var/lib/neo4j/import/mycsv.csv
Since I'm running the Docker container in a Windows environment, I don't see where the /var directory should be; even when browsing the container itself via the Docker Quickstart Terminal, I still cannot find /var/lib...
Changing the .conf file to point at a different import directory didn't help either.
Has anybody run into this before?
You have to explicitly mount your import folder when invoking docker:
docker run -e NEO4J_AUTH=none -p 7474:7474 -p 7687:7687 -v $PWD/plugins:/plugins -v $PWD/import:/var/lib/neo4j/import neo4j:3.1.3-enterprise
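Since the question mentions the Docker Quickstart Terminal (i.e. Docker Toolbox on Windows), the host-side path would likely need the //c/... form rather than $PWD; an untested sketch:
docker run -e NEO4J_AUTH=none -p 7474:7474 -p 7687:7687 -v //c/Users/<your user>/neo4j/import:/var/lib/neo4j/import neo4j:3.1.3-enterprise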
When you run this command:
docker run \
--name testneo4j \
-p7474:7474 -p7687:7687 \
-d \
-v $HOME/neo4j/data:/data \
-v $HOME/neo4j/logs:/logs \
-v $HOME/neo4j/import:/var/lib/neo4j/import \
-v $HOME/neo4j/plugins:/plugins \
--env NEO4J_AUTH=neo4j/test \
neo4j:latest
The physical directory on Windows will probably be located in C:\Users\<your user>\neo4j, with data, import, logs, and plugins subfolders inside it (screenshot: https://i.stack.imgur.com/VuW46.png).
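With the import folder mounted this way, one way to verify that the file is visible (assuming the container name testneo4j and the neo4j/test credentials from the command above) is to run the query through cypher-shell:
docker exec testneo4j cypher-shell -u neo4j -p test "LOAD CSV FROM 'file:///mycsv.csv' AS row RETURN count(row);"
If this returns a row count instead of the "Couldn't load the external resource" error, the mount is working.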

How to run Tensorboard and jupyter concurrently with docker?

I'm starting to learn how to use TensorFlow for machine learning, and I found that Docker is pretty convenient for deploying TensorFlow to my machine. However, the examples I could find did not work for my target setup, which is:
Under Ubuntu 16.04, use nvidia-docker to host the Jupyter and TensorBoard services together (either two containers, or one container with two services), and files created from Jupyter should be visible to the host OS.
Ubuntu 16.04
Docker
nvidia-docker
Jupyter
Tensorboard
Jupyter container
nvidia-docker run \
--name jupyter \
-d \
-v $(pwd)/notebooks:/root/notebooks \
-v $(pwd)/logs:/root/logs \
-e "PASSWORD=*****" \
-p 8888:8888 \
tensorflow/tensorflow:latest-gpu
Tensorboard container
nvidia-docker run \
--name tensorboard \
-d \
-v $(pwd)/logs:/root/logs \
-p 6006:6006 \
tensorflow/tensorflow:latest-gpu \
tensorboard --logdir /root/logs
I tried to mount the logs folder to both containers so that TensorBoard could access the results from Jupyter, but the mount does not seem to work. When I create a new file in the Jupyter container's notebooks folder, nothing appears in the host folder $(pwd)/notebooks.
I also followed the instructions in Nvidia Docker, Jupyter Notebook and Tensorflow GPU
nvidia-docker run -d -e PASSWORD='winrar' -p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu-py3
Only Jupyter worked; TensorBoard could not be reached on port 6006.
I was facing the same problem today.
Short answer: I'm going to assume you are using the same container for both Jupyter Notebook and TensorBoard. So, as you wrote, you can deploy the container with:
nvidia-docker run -d --name tensor -e PASSWORD='winrar'\
-p 8888:8888 -p 6006:6006 gcr.io/tensorflow/tensorflow:latest-gpu-py3
Now you can access both ports 8888 and 6006, but first you need to start TensorBoard:
docker exec -it tensor bash
tensorboard --logdir /root/logs
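To avoid keeping that interactive shell open, the same thing should also work detached, since docker exec accepts a -d flag:
docker exec -d tensor tensorboard --logdir /root/logs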
About the other option, running Jupyter and TensorBoard in different containers: if you have problems mounting the same directories in different containers (in the past there was a bug about that), note that since Docker 1.9 you can create independent volumes that are not tied to a particular container. This may be a solution, as shown below.
Create two volumes to store logs and notebooks.
Deploy both images with these volumes.
docker volume create --name notebooks
docker volume create --name logs
nvidia-docker run \
--name jupyter \
-d \
-v notebooks:/root/notebooks \
-v logs:/root/logs \
-e "PASSWORD=*****" \
-p 8888:8888 \
tensorflow/tensorflow:latest-gpu
nvidia-docker run \
--name tensorboard \
-d \
-v logs:/root/logs \
-p 6006:6006 \
tensorflow/tensorflow:latest-gpu \
tensorboard --logdir /root/logs
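To find out where Docker actually keeps a named volume on the host, docker volume inspect prints its mount point:
docker volume inspect logs
The Mountpoint field in the output (typically under /var/lib/docker/volumes/) is the host path backing the volume.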
As an alternative, you can also use the ML Workspace Docker image. The ML Workspace is a web IDE that combines Jupyter, TensorBoard, VS Code, and many other tools & libraries into one convenient Docker image. Deploying a single workspace instance is as simple as:
docker run -p 8080:8080 mltooling/ml-workspace:latest
All tools are accessible from the same port. You can find information on how to access TensorBoard here.
