Docker: error response from daemon: invalid mode: /tf

I'm new to using Docker, and my objective is to bind mount a directory on my host machine (shown in the command below) into a Docker container so I can:
Run a Jupyter Notebook instance without losing the data every time I end my terminal session
Link my Jupyter Notebook to the same path where my training data resides
I have tried looking at many threads on the topic, to little avail. I am using Linux Mint and run the command shown below:
sudo docker run -it --rm --gpus all -v "$(pwd):/media/hossamantarkorin/Laptop Data II/1- Educational/ML Training/Incident Detection/I75_I95 RITIS":"/tf" -p 8888:8888 tensorflow/tensorflow:2.3.0rc1-gpu-jupyter
What am I doing wrong here?
Thanks,
Hossam

This usually happens when the Docker daemon is not running.
Try sudo service docker start before entering your command.

I just wanted to provide an update on this. The easiest way to work in your local directory is to:
Change directory to where you want to work
Run docker while bind mounting your pwd:
sudo docker run -it --rm --gpus all -v "$(pwd):/tf" -p 8888:8888 tensorflow/tensorflow:2.3.0rc1-gpu-jupyter
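For context on the original error: docker splits the -v argument on ':' into host-path:container-path[:mode]. Because the failing command prefixed the host path with "$(pwd):", the spec ended up with three fields, so Docker tried to parse "/tf" as a mount mode. A docker-free sketch of that splitting, with hypothetical stand-in paths:

```shell
# The -v spec from the question had the form host:host2:/tf, i.e. three
# colon-separated fields; docker reads the third field as a mount mode.
spec='/home/user:/media/data/My Project:/tf'   # hypothetical stand-in paths
IFS=':' read -r host container mode <<< "$spec"
echo "mode field: $mode"   # prints: mode field: /tf  -> "invalid mode: /tf"
```

Dropping the stray "$(pwd):" prefix (or cd-ing into the directory and using "$(pwd):/tf" as above) leaves exactly two fields, which is what docker expects.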

Related

Docker: error while creating mount source path. How can i fix it?

Thanks all. I don't know why, but now it's working.
I'm learning to use Docker. I'm trying to mount a host directory in a Docker container: >docker run -it -v /Users/Kell/Desktop/data:/home/data 77
And this is the error: docker: Error response from daemon: error while creating mount source path '/Users/Kell/Desktop/data': mkdir /Users: file exists.
I am using Windows and Docker 20.10.12; 77 is the image ID.
I tried on another disk and tried many ways, but it's still not working. Can you help me?
If you are learning Docker from scratch, it is recommended to use --mount instead of -v: Mount > v
The syntax of --mount and -v differs, so here you'll find both: How to mount
Path style on Windows depends on the console you are using; some paths work in one console and not in another.
Windows-Style: docker run --rm -ti -v C:\Users\user\work:/work alpine
Pseudo-Linux-Style in Windows: docker run --rm -ti -v /c/Users/user/work:/work alpine as well as //c/
Inside WSL: docker run --rm -ti -v /mnt/c/Users/user/work:/work alpine
See: Path conversion in Windows
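As a docker-free illustration of the pseudo-Linux style, the rewrite that MSYS/Git Bash consoles apply to a Windows path can be sketched in plain shell (pure string manipulation, shown only for illustration):

```shell
# Convert C:\Users\user\work -> /c/Users/user/work:
# lowercase the drive letter, drop the ':', flip backslashes to slashes.
win='C:\Users\user\work'
drive=$(printf '%s' "${win:0:1}" | tr '[:upper:]' '[:lower:]')
rest=$(printf '%s' "${win:2}" | tr '\\' '/')
echo "/${drive}${rest}"   # prints: /c/Users/user/work
```

The /mnt/c/... form used inside WSL follows the same idea, just with the /mnt prefix added.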

Issue in saving file from docker container to host

I am using the command below to save files generated in a Docker container to the host machine, but my files are not being saved after I exit the container. I have tried different ways, but none work.
docker run --rm -it -v "$(pwd)/sever-data/src:/data" test bash
Thank you
I tried the following command to save container data to the host directory, and it works perfectly fine (the image name, test from the question above, goes at the end):
sudo docker run --rm -it -v "$(pwd):/sever-data/src" -w /sever-data/src test

Trying to run "comitted" Docker image, get "cannot mount volume over existing file, file exists"

I am developing a Docker image. I started with a base image and was working inside it interactively, using bash. I installed a bunch of stuff, and the install (which included compiling a lot of code) took over 20 minutes, so to save my work, I used:
$ docker commit 0f08ac958391 myproject:wip
Now when I try to run the image:
$ docker run --rm -it myproject:wip
docker: Error response from daemon: cannot mount volume over existing file, file exists /var/lib/docker/overlay2/95aa9a9ea7cc0b1ba302adbd287e4d7059ee4fbe64183404df3bc65df332ee63/merged/run/host-services/ssh-auth.sock.
What is going on? How do I fix this?
Note about related/duplicate questions: while there are other questions about this error message, none of the answers directly explain why the error happens in this situation or what to do about it. In fact, most of the questions have no answers at all.
When I ran the base image, I included a mount for the SSH agent socket:
$ docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:dev /bin/bash
This bind mounts a file from the host (actually the Docker daemon VM) to a file in the Docker container. When I committed the running container, the resulting image contained the file /run/host-services/ssh-auth.sock. The image also contained an empty volume reference to /run/host-services/ssh-auth.sock. This means that when I ran
$ docker run --rm -it myproject:wip
It was equivalent to running
$ docker run -v /run/host-services/ssh-auth.sock --rm -it myproject:wip
Unfortunately, what that command does is create an anonymous volume and mount it at /run/host-services/ssh-auth.sock in the container. This works whether or not the container already has a directory there; what makes it fail is the target name being taken by a file, because Docker will not mount a volume over a file.
The solution is to explicitly provide a mapping from a host file to the target volume. Any host file will do, but in my case it is best to use the original. So this works:
docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:wip
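To confirm this diagnosis on a committed image, `docker inspect --format '{{json .Config.Volumes}}' myproject:wip` prints the volume definitions baked into the image. A docker-free sketch that checks a canned sample of that output (the JSON below is an assumption matching the error above, not captured output):

```shell
# Sample of what docker inspect would print for the committed image: the
# stray anonymous-volume entry left behind by the original bind mount.
volumes='{"/run/host-services/ssh-auth.sock":{}}'
echo "$volumes" | grep -o '/run/host-services/ssh-auth.sock'
```

If the entry is present, either supply the explicit host mapping as above or rebuild the image without committing the mounted socket.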

How do I transfer a volume's data to another volume on my local host in docker?

I did
docker run -v /jenkins_home:/var/jenkins_home jenkins/jenkins:alpine
on Windows (with docker installed as a linux container).
However, after configuring Jenkins in that container, I now want to transfer the data in the /jenkins_home volume to a C:\jenkins_home folder on my local Windows host machine (or another machine).
Any way I can get the data from the /jenkins_home to c:/jenkins_home?
I know I should have used
docker run -v c:/jenkins_home:/var/jenkins_home jenkins/jenkins:alpine
at the start, but mistakes were made, and I was wondering how to fix that as suggested above.
Tried running
docker run -it -p 8080:8080 -p 50000:50000 --volumes-from jenkins_old -v c:/jenkins_home:/var/jenkins_home --name jenkins_new jenkins/jenkins:alpine
but it doesn't transfer the data over to the new c:\jenkins_home folder.
I can't get the data to transfer from the /jenkins_home volume to the c:\jenkins_home folder.
I don't know where /jenkins_home maps to on Windows, but you could try this:
docker run -it --rm -v /jenkins_home:/from -v c:\jenkins_home:/to alpine cp -a /from/. /to
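One detail worth noting about the copy step: plain `cp -r /from /to` would create `/to/from`, leaving the data one level deeper than intended, whereas `cp -a /from/. /to` copies the directory's contents directly into the target volume. A docker-free sketch, with temp directories standing in for the two volumes:

```shell
from=$(mktemp -d); to=$(mktemp -d)   # stand-ins for the two volumes
echo "config" > "$from/jenkins.xml"  # pretend Jenkins wrote some state
cp -a "$from/." "$to/"               # copy contents, not the dir itself
ls "$to"                             # prints: jenkins.xml
```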

How to access Docker (with Spark) file systems

Suppose I am running CentOS. I installed Docker, then ran an image.
Suppose I use this image:
https://github.com/jupyter/docker-stacks/tree/master/pyspark-notebook
Then I run
docker run -it --rm -p 8888:8888 jupyter/pyspark-notebook
Now, I can open the browser at localhost:8888, create a new Jupyter notebook, type code, run it, etc.
However, how can I access the files I created and, for example, commit them to GitHub? Furthermore, if I already have some code on GitHub, how can I pull that code and access it from Docker?
Thank you very much,
You need to mount a volume:
docker run -it --rm -p 8888:8888 -v /opt/pyspark-notebook:/home/jovyan jupyter/pyspark-notebook
You could have simply executed !pwd in a new notebook to find which folder the work is stored in, and then mounted that as a volume. When you run it as above, the files are available on your host in /opt/pyspark-notebook.
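As a docker-free sketch of why this works: a bind mount makes the container path and the host path the same directory, so a file saved from the notebook appears directly on the host, where git and other tools can reach it (a temp directory stands in for /opt/pyspark-notebook below; the filename is illustrative):

```shell
host=$(mktemp -d)                # stands in for /opt/pyspark-notebook on the host
container="$host"                # the bind mount makes /home/jovyan an alias for it
echo 'print(1)' > "$container/Untitled.ipynb"   # "saved from the notebook"
ls "$host"                       # prints: Untitled.ipynb
```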
