Unable to docker run - docker

I am trying to set up an image of osrm-backend in Docker. I am unable to run Docker using the commands below (as mentioned in the wiki):
docker run -t -v ${pwd}/data osrm/osrm-backend:v5.18.0 osrm-extract -p /opt/car.lua /data/denmark-latest.osm.pbf
docker run -t -v ${pwd}:/data osrm/osrm-backend:v5.18.0 osrm-contract /data/denmark-latest.osrm
docker run -t -i -p 5000:5000 -v ${pwd}/data osrm/osrm-backend:v5.18.0 osrm-routed /data/denmark-latest.osrm
I have already fetched the corresponding map using both wget and Invoke-WebRequest. Every time I run the first of the commands above, it gives the error...
[error] Input file /data/denmark-latest.osm.pbf not found!
I have tried placing the downloaded maps in the corresponding location as well. Can anyone tell me what I am doing wrong here?
I am using PowerShell on Windows 10

For me the problem was that Docker was not able to access the C drive, even though sharing was turned on in the Docker settings. After wasting a lot of time, I turned sharing of the C drive off and then back on. After that, when I mounted a folder into Docker, it was able to see the files.
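For reference, the first command with the host:container mapping written out explicitly, following the ${pwd}:/data pattern already used in the second command (this assumes the .pbf file sits in the current directory):
docker run -t -v "${PWD}:/data" osrm/osrm-backend:v5.18.0 osrm-extract -p /opt/car.lua /data/denmark-latest.osm.pbf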

Related

racadm - ERROR: Specified file <filename> does not exist

I'm trying to run racadm both in Windows PowerShell using the official utility and on my Mac using this Docker container. In both instances, I can pull the RAC details, so I know my login and password are valid, but when I try to perform an sslkeyupload, I get the following error:
ERROR: Specified file file.pem does not exist.
The permissions on the file, at least on my Mac, are wide open (chmod 777), and it is in the same directory I'm trying to run the script in:
docker run stackbot/racadm -r 10.10.1.4 -u root -p calvin sslkeyupload -t 1 -f ./file.pem
Anyone see anything obvious I may be doing wrong?
You're running the command inside a Docker container. It has no visibility into your local filesystem unless you explicitly expose your directory inside the container using the -v command-line option:
docker run -v $PWD:$PWD -w $PWD ...
The -v option creates a bind mount, and the -w option sets the working directory.
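Applied to the original command, that would look something like this (the racadm arguments are the ones from the question, and the mount assumes file.pem sits in the directory you run it from):
docker run -v $PWD:$PWD -w $PWD stackbot/racadm -r 10.10.1.4 -u root -p calvin sslkeyupload -t 1 -f ./file.pem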

Understanding a Docker .sh file

The .sh file I am working with is:
docker run -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder tr_xl_container nohup python /Path/file.py -p/Path/ |& tee $HOME/Path/log.txt
I am confused about the -v and everything after that, specifically the -v $HOME/Folder:/Folder tr_xl_container section and -p/Path/. If someone could help break down what those parts mean, or point me to a reference that does, that would be very much appreciated. I checked the Docker documentation and Linux command-line documentation and did not come up with anything too helpful.
A docker run command is split into three parts:
docker options
the image to run
a command for the container
In your case -d --rm -it --gpus '"device=0,1,2,3"' --ipc=host -v $HOME/Folder:/Folder are docker options.
tr_xl_container is the image name.
nohup python /Path/file.py -p/Path/ is the command sent to the container.
The last part, |& tee $HOME/Path/log.txt isn't run in the container, but takes the output from the docker run command and saves it in $HOME/Path/log.txt.
As for -v $HOME/Folder:/Folder, it's a volume mapping or, more precisely, a bind mount. It creates a directory in the container at the path /Folder that is linked to the directory $HOME/Folder on the host machine. That makes files in the host directory visible inside the container, and if the container does anything with files in the /Folder directory, those changes will be visible in the host directory.
The command after the image name is for the container and it's up to the container what to do with it. From looking at it, it looks like it runs a Python program stored in /Path/file.py in the image. But to be sure, you'll need to know what the image does.
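As a rough illustration of that three-part split, here is a generic sketch with placeholder names (my-image, /app/train.py and the /data paths are made up, not taken from your script):
#          [docker options]                [image]    [command for the container]
docker run -it --rm -v "$HOME/data:/data"  my-image   python /app/train.py --input /data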

Running google automl's custom model locally using docker fails to find path /tmp/mounted_model/0001

I have been working on an ML model to classify some images. After saving the model, I exported it to my machine using gsutil and set up the environment for the container as specified in Google's documentation. I suspect there is something wrong with the following command:
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCR_PATH}
I do not really understand what :/tmp/mounted_model/0001 stands for. Is it something created automatically? I even tried creating a folder at that location and putting the .pb model inside, but the error keeps occurring.
Please let me know if there is any solution for this issue. Thanks!
I had the same issue and the problem was in how I exported YOUR_MODEL_PATH.
That environment variable must be the path of the folder that contains the saved_model.pb file.
So, you have to download the saved_model.pb file into your machine.
gsutil cp -r gs://your_model_path/tf_saved_model-object-detection_20211124040443-2021-11-25T13:34:28.736944Z/ local_model_path
export YOUR_MODEL_PATH=local_model_path
Remember: local_model_path is a directory that contains the saved_model.pb file.
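As a quick sanity check before running the container (a hypothetical layout; note that docker -v needs an absolute host path for a bind mount, hence the $(pwd)):
export YOUR_MODEL_PATH=$(pwd)/local_model_path   # directory holding saved_model.pb
ls ${YOUR_MODEL_PATH}                            # should list saved_model.pb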
Then run the command that you run before:
sudo docker run --rm --name ${CONTAINER_NAME} -p ${PORT}:8501 -v ${YOUR_MODEL_PATH}:/tmp/mounted_model/0001 -t ${CPU_DOCKER_GCR_PATH}

How to access Docker (with Spark) file systems

Suppose I am running CentOS. I installed Docker, then ran an image.
Suppose I use this image:
https://github.com/jupyter/docker-stacks/tree/master/pyspark-notebook
Then I run
docker run -it --rm -p 8888:8888 jupyter/pyspark-notebook
Now, I can open the browser at localhost:8888, create a new Jupyter notebook, type code, run it, etc.
However, how can I access the files I created and, for example, commit them to GitHub? And if I already have some code on GitHub, how can I pull it and access it from Docker?
Thank you very much,
You need to mount a volume:
docker run -it --rm -p 8888:8888 -v /opt/pyspark-notebook:/home/jovyan jupyter/pyspark-notebook
You could have simply executed !pwd in a new notebook to find which folder it stores the work in, and then mounted that as a volume. When you run it as above, the files will be available on your host in /opt/pyspark-notebook.
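From there, git works on the host side against the mounted folder; a minimal sketch, assuming the mount above and a hypothetical notebook name and repository URL:
cd /opt/pyspark-notebook
git init
git add mynotebook.ipynb
git commit -m "Add notebook"
# and to bring existing code into the container, clone it into the mounted folder:
git clone https://github.com/your-user/your-repo.git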

Docker - how can I copy a file from an image to a host?

My question is related to this question on copying files from containers to hosts; I have a Dockerfile that fetches dependencies, compiles a build artifact from source, and runs an executable. I also want to copy the build artifact out (in my case it's a .zip produced by sbt dist in ../target/), but I think this question also applies to jars, binaries, etc.
docker cp works on containers, not images; do I need to start a container just to get a file out of it? In a script, I tried running /bin/bash in interactive mode in the background, copying the file out, and then killing the container, but this seems kludgey. Is there a better way?
On the other hand, I would like to avoid unpacking a .tar file after running docker save $IMAGENAME just to get one file out (but that seems like the simplest, if slowest, option right now).
I would use docker volumes, e.g.:
docker run -v hostdir:out $IMAGENAME /bin/cp/../blah.zip /out
but I'm running boot2docker on OS X and I don't know how to write directly to my Mac host filesystem (read-write volumes are mounted inside my boot2docker VM), which means I can't easily share a script that extracts blah.zip from an image with others. Thoughts?
To copy a file from an image, create a temporary container, copy the file from it and then delete it:
id=$(docker create image-name)
docker cp $id:path - > local-tar-file
docker rm -v $id
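Note that docker cp with - writes a tar stream to stdout, which is why the result above is a tar file; under the same placeholder names, the stream can also be extracted directly instead of being saved:
id=$(docker create image-name)
docker cp $id:path - | tar -xf -
docker rm -v $id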
Unfortunately there doesn't seem to be a way to copy files directly from Docker images. You need to create a container first and then copy the file from the container.
However, if your image contains a cat command (and it will in many cases), you can do it with a single command:
docker run --rm --entrypoint cat yourimage /path/to/file > path/to/destination
If your image doesn't contain cat, simply create a container and use the docker cp command as suggested in Igor's answer.
docker cp $(docker create --name tc registry.example.com/ansible-base:latest):/home/ansible/.ssh/id_rsa ./hacked_ssh_key && docker rm tc
I wanted to supply a one-line solution based on pure Docker functionality (no bash needed).
Edit: the container does not even have to be run in this solution.
Edit 2: thanks to @Jonathan Dumaine for --rm so the container is removed afterwards. I had never tried it, because it sounded illogical to copy something from a container that the previous command had already removed, but I tried it and it works.
Edit 3: per the comments, we found out --rm does not work as expected here; it does not remove the container because the container never runs, so I added cleanup of the created container afterwards (--name tc = temporary container).
Edit 4: this error appeared; it seems like a bug in Docker, because t is in a-z and this did not happen a few months earlier.
Error response from daemon: Invalid container name (t), only [a-zA-Z0-9][a-zA-Z0-9_.-] are allowed
A much faster option is to copy the file from the container to a mounted volume:
docker run -v $PWD:/opt/mount --rm --entrypoint cp image:version /data/libraries.tgz /opt/mount/libraries.tgz
real 0m0.446s
vs.
docker run --rm --entrypoint cat image:version /data/libraries.tgz > libraries.tgz
real 0m9.014s
Parent comment already showed how to use cat. You could also use tar in a similar fashion:
docker run yourimage tar -c -C /my/directory subfolder | tar x
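If the destination directory on the host matters, the extracting side can take a -C as well; a small variation under the same placeholder names:
docker run --rm yourimage tar -c -C /my/directory subfolder | tar -xf - -C /path/on/host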
Another (short) answer to this problem:
docker run -v $PWD:/opt/mount --rm -ti image:version bash -c "cp /source/file /opt/mount/"
Update - as noted by @Elytscha Smith, this only works if your image has bash built in.
Not a direct answer to the question details, but in general, once you've pulled an image, the image is stored on your system, and so are all its files. Depending on the storage driver of the local Docker installation, these files can usually be found in /var/lib/docker/overlay2 (requires root access). overlay2 should be the most common storage driver nowadays, but the path may differ.
The layers associated with an image can be found using $ docker image inspect IMAGE_NAME:TAG; look for the GraphDriver attribute.
At least in my local environment, the following also works to quickly see all layers associated with an image:
docker image inspect IMAGE_NAME:TAG | jq ".[0].GraphDriver.Data"
In one of these diff directories, the wanted file can be found.
So in theory, there's no need to create a temporary container. Of course, this solution is pretty inconvenient.
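A hypothetical way to locate the file once the storage location is known (requires root; the file name is a placeholder, and the overlay2 path may differ as noted above):
sudo find /var/lib/docker/overlay2 -path '*/diff/*' -name 'wanted-file'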
First, pull the docker image using docker pull:
docker pull <IMG>:<TAG>
Then, create a container using the docker create command and store the container id in a variable:
img_id=$(docker create <IMG>:<TAG>)
Now, run the docker cp command to copy folders and files from the docker container to the host:
docker cp $img_id:/path/in/container /path/in/host
Once the files/folders are copied, delete the container using docker rm:
docker rm -v $img_id
You essentially had the best solution already. Have the container copy out the files for you, and then remove itself when it's complete.
This will copy the files from /inside/container/ to your machine at /path/to/hostdir/.
docker run --rm -v /path/to/hostdir:/mnt/out "$IMAGENAME" /bin/cp -r /inside/container/ /mnt/out/
Update - here's a better version without the tar file:
$id = & docker create image-name
docker cp ${id}:path .
docker rm -v $id
Old answer
PowerShell variant of Igor Bukanov's answer:
$id = & docker create image-name
docker cp ${id}:path - > local-file.tar
docker rm -v $id
I am using boot2docker on macOS. I can assure you that scripts based on docker cp are portable: every command is relayed into the boot2docker VM, but the binary stream is relayed back to the docker command-line client running on your Mac, so write operations from the docker client are executed inside the server and written back to the executing client instance!
I am sharing a backup script for docker volumes with every docker container I provide, and my backup scripts are tested both on Linux and on macOS with boot2docker. The backups can easily be exchanged between platforms. Basically, I am executing the following command inside my script:
docker run --name=bckp_for_volume --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar cf /backup/JenkinsBackup-2015-07-09-14-26-15.tar /jenkins
This runs a new busybox container and mounts the volume of my Jenkins container, which is named jenkins_jenkins_1. The whole volume is written to the file backups/JenkinsBackup-2015-07-09-14-26-15.tar.
I have already moved archives between a Linux container and my Mac container without any adjustments to the backup or restore script. If this is what you want, you can find the whole script and a tutorial here: blacklabelops/jenkins
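For completeness, the restore direction can be sketched the same way (same assumptions as above: the container name, backup directory and archive name are the ones from my example):
docker run --rm --volumes-from jenkins_jenkins_1 -v /Users/github/jenkins/backups:/backup busybox tar xf /backup/JenkinsBackup-2015-07-09-14-26-15.tar -C /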
You could bind a local path on the host to a path in the container, and then cp the desired file(s) to that path at the end of your script.
$ docker run -d \
-it \
--name devtest \
--mount type=bind,source="$(pwd)"/target,target=/app \
nginx:latest
Then there is no need to copy afterwards.
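For the original use case, a rough sketch might be (the image name and the /inside/container/blah.zip artifact path are placeholders; adjust them to wherever your build drops the .zip):
docker run --rm --mount type=bind,source="$(pwd)"/target,target=/app "$IMAGENAME" cp /inside/container/blah.zip /app/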
