Recreating docker bind-mount

If you bind-mount a non-existent file (on the host), Docker will happily create a directory in its place and share it with the container. Upon "fixing" the mistake (i.e., stopping the container, replacing the directory with the file, and starting the container), you'll get the following error:
docker: Error response from daemon: oci runtime error: container_linux.go:262: starting container process caused "process_linux.go:339: container init caused "rootfs_linux.go:57: mounting "[snip]" to [snip]" at "[snip]" caused "not a directory"""
: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.
Here are some steps to reproduce from a console:
# ensure test file does not exist
$ rm -f /test.txt
# run hello-world container with test.txt bind mount
$ docker run --name testctr -v /test.txt:/test.txt hello-world
Hello from Docker!
This message shows that your installation appears to be working correctly.
<snip>
# let's see what's in /
$ ls -l /
<snip>
dr-xr-xr-x 13 root root 0 Jul 17 01:08 sys
drwxr-xr-x 2 root root 4096 Jul 22 20:54 test.txt
drwxrwxrwt 1 root root 4096 Jul 17 09:01 tmp
<snip>
# let's correct the mistake and run the container again
$ rm -rf /test.txt
$ touch /test.txt
$ docker start testctr
Error response from daemon: OCI runtime create failed: container_linux.go:348: starting
container process caused "process_linux.go:402: container init caused \"rootfs_linux.go:58:
mounting \\\"/test.txt\\\" to rootfs \\\"/var/lib/docker/overlay2/
26fd6981e919e5915c31098fc74551314c4f05ce6c5e175a8be6191e54b7f807/merged\\\" at \\\"/var/lib/
docker/overlay2/26fd6981e919e5915c31098fc74551314c4f05ce6c5e175a8be6191e54b7f807/merged/
test.txt\\\" caused \\\"not a directory\\\"\"": unknown: Are you trying to mount a directory
onto a file (or vice-versa)? Check if the specified host path exists and is the expected
type
Error: failed to start containers: testctr
Note that even though we get this error when starting the existing container, creating a new one actually works.
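For example, once /test.txt is a real file, a brand-new container mounts it without complaint (same flags as the run above, minus the container name):
$ docker run --rm -v /test.txt:/test.txt hello-world
Hello from Docker!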
So my question is: how do I fix this? I can see two different possibilities:
1. Recreate the container:
   1. somehow export the command that created the container (docker run ...) into a variable (?)
   2. delete the old container
   3. run the command generated in step 1
2. Somehow tweak the existing container to fix the mount:
   - this may be impossible to do via docker, since bind mounts apparently are not managed by docker
PS: This question is also supposed to fix this one.

Two options for generating docker run ... for existing containers:
assaflavie/runlike - I went with this since the other seemed to have some issues with labels (but this one doesn't support bulk inspection)
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike <container>
nexdrew/rekcod
$ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock nexdrew/rekcod <container>
The final script would look like (untested):
# get names of running containers (names persist recreation)
running_containers=$(docker ps --format "{{.Names}}")
# stop running containers
docker stop $running_containers
# generate recreate script
containers=$(docker ps --all --format "{{.Names}}")
echo '#!/bin/sh' > ./recreate.sh
while read -r cname; do
docker run --rm -v /var/run/docker.sock:/var/run/docker.sock assaflavie/runlike "$cname" >> ./recreate.sh
done <<< "$containers"
chmod +x ./recreate.sh
# ... do some action now (maybe also manually check recreate script) ...
# recreate containers
docker rm --force $containers
./recreate.sh
# restart containers that were previously running
docker start $running_containers
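For reference, the generated recreate.sh simply contains one docker run line per container; for the testctr example from the question it would look roughly like this (illustrative, not actual runlike output):
#!/bin/sh
docker run --name=testctr --volume=/test.txt:/test.txt hello-world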
This seems to tackle my needs, but a few people noted that these tools might miss a docker feature or might even contain bugs (I noticed this already with rekcod, for example), so use with caution.

Related

How to mount an Arvados FUSE to a Docker container as volume

I would like to add a file system view of Arvados Keep to a Docker container. I created this file system view with arv-mount and it is based on File System in Userspace (FUSE).
Approach 1
$ docker run --rm -it -v /home/test/arv:/opt ubuntu:hirsute bash
docker: Error response from daemon: error while creating mount source path '/home/test/arv': mkdir /home/test/arv: file exists.
Approach 2
I also tried bind mounts
$ docker run --rm -it --mount type=bind,source=/home/test/arv,target=/opt,bind-propagation=rshared ubuntu:hirsute bash
docker: Error response from daemon: invalid mount config for type "bind": stat /home/test/arv: permission denied.
I tried both approaches as a non-root user (I configured Docker for non-root users) and as the root user.
I made it work by doing the following (commands sketched after this list):
Adding user_allow_other to /etc/fuse.conf
Creating FUSE mount with arv-mount --allow-other /home/test/arv
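In command form, the fix boils down to this (a sketch; appending to /etc/fuse.conf assumes you have root access):
# let users other than the mount owner (here: the Docker daemon) access the FUSE mount
$ echo 'user_allow_other' | sudo tee -a /etc/fuse.conf
# recreate the mount with --allow-other, then the bind mount succeeds
$ arv-mount --allow-other /home/test/arv
$ docker run --rm -it -v /home/test/arv:/opt ubuntu:hirsute bash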

Trying to run "committed" Docker image, get "cannot mount volume over existing file, file exists"

I am developing a Docker image. I started with a base image and was working inside it interactively, using bash. I installed a bunch of stuff, and the install (which included compiling a lot of code) took over 20 minutes, so to save my work, I used:
$ docker commit 0f08ac958391 myproject:wip
Now when I try to run the image:
$ docker run --rm -it myproject:wip
docker: Error response from daemon: cannot mount volume over existing file, file exists /var/lib/docker/overlay2/95aa9a9ea7cc0b1ba302adbd287e4d7059ee4fbe64183404df3bc65df332ee63/merged/run/host-services/ssh-auth.sock.
What is going on? How do I fix this?
Note about related/duplicate questions: while there are other questions about this error message, none of the answers directly explain why the error happens in this situation or what to do about it. In fact, most of the questions have no answers at all.
When I ran the base image, I included a mount for the SSH agent socket:
$ docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:dev /bin/bash
This bind-mounts a file from the host (actually the Docker daemon VM) to a file in the Docker container. When I committed the running container, the resulting image contained the file /run/host-services/ssh-auth.sock. The image also contained an empty volume reference to /run/host-services/ssh-auth.sock. This means that when I ran
$ docker run --rm -it myproject:wip
it was equivalent to running
$ docker run -v /run/host-services/ssh-auth.sock --rm -it myproject:wip
Unfortunately, what that command does is create an anonymous volume and mount it at /run/host-services/ssh-auth.sock in the container. This works whether or not the container already has such a directory; what makes it fail is the target name being taken by a file. Docker will not mount a volume over a file.
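You can confirm that the committed image carries the volume reference (a quick check; the format string prints the image's Volumes config):
$ docker inspect --format '{{json .Config.Volumes}}' myproject:wip
{"/run/host-services/ssh-auth.sock":{}}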
The solution is to explicitly provide a mapping from a host file to the target volume. Any host file will do, but in my case it is best to use the original. So this works:
docker run --rm -it -v /run/host-services/ssh-auth.sock:/run/host-services/ssh-auth.sock myproject:wip

Docker compose reusing volumes

I'm trying to create a new Docker image that no longer uses volumes from a running container that does use volumes. The volumes were created using a docker-compose file, not a Dockerfile. The problem is, when I launch a new container via a new docker-compose.yml file, it still has the volumes mapped. I still need to keep these volumes and the original containers/images that use them. Also, if possible, I would like to continue to use the same docker image, just adding a new version, or :latest. Here are the steps I used:
New version of an existing image:
docker commit <container id> existingImage:new-version
Create a new image from current running container:
docker commit <container id> newimage
Create new docker-compose.yml with no volumes defined and run docker-compose with a different project name
docker-compose -p <new project name> up -d
Running without docker-compose, just use docker run:
docker run -d -p 8093:80 <img>:<version>
Any time I run any combination of these, the volumes are still mapped from the original image. So my question is, how do I create a container from an image that once had mapped volumes but whose volumes I no longer want to use?
Edit:
Additional things I've tried:
Stop container, remove container, restart docker, run docker compose again. No luck.
Edit 2:
Decided to start over on the image. Using a base image, I launched a container with an updated docker-compose file that uses the now unrelated image. Running docker-compose -f <file> up -d STILL has these same volumes mapped, even though the image does not (and never had) any volumes mapped, and the current docker-compose.yml file does not map volumes. It looks like docker-compose caches which volumes are mapped for projects.
After searching for caching options in docker-compose, I came across this article: How to get docker-compose to always re-create containers from fresh images?, which seems to solve the problem of caching images, but not of containers caching volumes.
According to another SO post what I am trying to do is not possible. For future reference, one cannot attach volumes to an image, and then later decide to remove them. A new image must be created without the volumes instead. Reference:
How to remove configured volumes in docker images
To remove volumes along with the containers used by docker-compose, use docker-compose down -v.
To start containers with docker-compose while leaving your existing volumes intact, but without using those volumes, you should change your project name. You can use docker-compose -p new_project_name up -d for that.
Edit: here's an example showing how docker-compose does not reuse named volumes between different projects, but it does reuse and persist the volume unless you do a down -v:
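The docker-compose.vol-named.yml itself isn't shown in the transcript; a minimal file along these lines would reproduce it (a reconstruction, not the original file):
version: '2'
services:
  test:
    image: busybox
    command: tail -f /dev/null
    volumes:
      - data:/data
volumes:
  data: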
$ docker-compose -p proj1 -f docker-compose.vol-named.yml up -d
Creating network "proj1_default" with the default driver
Creating volume "proj1_data" with default driver
Creating proj1_test_1 ...
Creating proj1_test_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
71f2eb516f71 busybox "tail -f /dev/null" 5 seconds ago Up 2 seconds proj1_test_1
$ docker exec -it 71f /bin/sh
/ # ls /data
/ # echo "Hello proj1" >/data/data.txt
/ # exit
The volume is now populated; let's stop and start a new container to show it persists:
$ docker-compose -p proj1 -f docker-compose.vol-named.yml down
Stopping proj1_test_1 ... done
Removing proj1_test_1 ... done
Removing network proj1_default
$ docker-compose -p proj1 -f docker-compose.vol-named.yml up -d
Creating network "proj1_default" with the default driver
Creating proj1_test_1 ...
Creating proj1_test_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
311900fd3d27 busybox "tail -f /dev/null" 5 seconds ago Up 3 seconds proj1_test_1
$ docker exec -it 311 /bin/sh
/ # cat /data/data.txt
Hello proj1
/ # exit
There's the expected persistent volume; let's run a different project at the same time to show the volume is independent:
$ docker-compose -p proj2 -f docker-compose.vol-named.yml up -d
Creating network "proj2_default" with the default driver
Creating volume "proj2_data" with default driver
Creating proj2_test_1 ...
Creating proj2_test_1 ... done
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d39e6fc51436 busybox "tail -f /dev/null" 4 seconds ago Up 2 seconds proj2_test_1
311900fd3d27 busybox "tail -f /dev/null" 33 seconds ago Up 32 seconds proj1_test_1
$ docker exec -it d39 /bin/sh
/ # ls -al /data
total 8
drwxr-xr-x 2 root root 4096 Nov 6 19:56 .
drwxr-xr-x 1 root root 4096 Nov 6 19:56 ..
/ # exit
The volume is completely empty in the new project. Let's clean up.
$ docker-compose -p proj2 -f docker-compose.vol-named.yml down -v
Stopping proj2_test_1 ...
Stopping proj2_test_1 ... done
Removing proj2_test_1 ... done
Removing network proj2_default
Removing volume proj2_data
$ docker volume ls
DRIVER VOLUME NAME
local proj1_data
Note the volume is there in proj1 from before.
$ docker-compose -p proj1 -f docker-compose.vol-named.yml down -v
Stopping proj1_test_1 ... done
Removing proj1_test_1 ... done
Removing network proj1_default
Removing volume proj1_data
$ docker volume ls
DRIVER VOLUME NAME
But doing a down -v deletes the volume.

Passing local CQL commands file to Cassandra Docker container

Is it possible to pass a local file for CQL commands to a Cassandra Docker container?
Using docker exec fails as it cannot find the local file:
me@meanwhileinhell:~$ ls -al
-rw-r--r-- 1 me me 1672 Sep 28 11:02 createTables.cql
me@meanwhileinhell:~$ docker exec -i cassandra_1 cqlsh -f createTables.cql
Can't open 'createTables.cql': [Errno 2] No such file or directory: 'createTables.cql'
I would really like not to have to open a bash session and run a script that way.
The container needs to be able to access the script first before you can execute it (i.e. the script file needs to be inside the container). If this is just a quick one-off run of the script, the easiest thing to do is probably to just use the docker cp command to copy the script from your host to the container:
$ docker cp createTables.cql container_name:/path/in/container
You should then be able to use docker exec to run the script at whatever path you copied it to inside the container. If this is something that's a work in progress and you might be changing and re-running the script while you're working on it, you might be better off mounting a directory with your scripts from your host inside the container. For that you'll probably want the -v option of docker run.
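For example, a sketch following the two suggestions above (/path/in/container and /scripts are placeholder paths, not paths the image defines):
# one-off: run the copied script in place
$ docker exec -i container_name cqlsh -f /path/in/container/createTables.cql
# work in progress: mount the script directory when creating the container
$ docker run -d --name container_name -v "$PWD":/scripts cassandra
$ docker exec -i container_name cqlsh -f /scripts/createTables.cql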
Hope that helps!
If you want the Docker container to see files on the host system, the only way is to map a volume. You can map the current directory to /tmp and run the command again:
$ docker exec -i cassandra_1 cqlsh -f /tmp/createTables.cql

Docker run -v mounting a file as a directory

I'm trying to mount a directory containing a credentials file onto a container with the -v flag, but instead of mounting that file as a file, Docker mounts it as a directory:
Run script:
$ mkdir creds/
$ cp key.json creds/key.json
$ ls -l creds/
total 4
-rw-r--r-- 1 root root 2340 Oct 12 22:59 key.json
$ docker run -v /<pathToWorkingDirectory>/creds:/creds <myContainer> <myScript>
When I look at the debug spew of the docker run command, I see that it creates the /creds/ directory, but for some reason creates key.json as a subdirectory under that, rather than copying the file.
I've seen some other posts saying that if you tell docker run to mount a file it can't find, it will create a directory on the container with the filename, but since I didn't specify the filename and it knew to name the new directory 'key.json' it seems like it was able to find the file, but created it as a directory anyway? Has anyone run into this before?
In case it's relevant, the script is being run in Docker-in-Docker in another container as part of GitLab's CI process.
You are running Docker-in-Docker. This means that when you specify a -v volume, Docker will look for this directory on the host, since the shared socket enabling Docker-in-Docker actually means your run command starts a container alongside the runner container.
I explain this in more detail in this SO answer:
https://stackoverflow.com/a/46441426/2078207
Also note the comment below that answer for a direction towards a solution.
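As a quick illustration of that path resolution (all paths hypothetical):
# inside the GitLab runner container the directory and file exist
$ ls /builds/project/creds
key.json
# but the daemon resolves the -v source on the host, where the path does not exist,
# so it creates an empty directory there and the container sees an empty /creds
$ docker run --rm -v /builds/project/creds:/creds busybox ls -a /creds
.
..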
