Run command on multiple Docker containers at once

I want to run a command on all of my containers at once. Is that possible?
For instance, I have a directory that I need to delete on all of my containers. I have over 20 containers running at once, so going to each one would be a pain (yes, I know that in a proper Docker world I should not be doing this, but I am still in the development stage and just need to see the result).

You can add a volume to all containers using the -v flag, or define it in the Dockerfile:
docker run -d -P --name web -v /host/webapp:/webapp training/webapp python app.py
If you bind-mount the same host directory (here /host/webapp) into every container like this, then removing that directory's contents on the Docker host removes them in each container at once.
But you do need to define that shared folder on each container.
https://docs.docker.com/engine/userguide/containers/dockervolumes/
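If the goal is simply to run one command in every running container, you can also loop over docker ps with docker exec (a minimal sketch, not part of the answer above; /webapp is a placeholder for the directory you want to delete):
# Run the same command in each running container.
for c in $(docker ps -q); do
  docker exec "$c" rm -rf /webapp
done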

Related

How to copy files from one docker service to another, inside of docker bash

I am trying to copy a file from one docker-compose service to another while in the service's bash environment, but I cannot seem to figure out how to do it.
Can anybody provide me with an idea?
Here is the command I am attempting to run:
docker cp ../db_backups/latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/
The error is simply:
bash: docker: command not found
There's no way to do that by default. There are a few things you could do to enable that behavior.
The easiest solution is just to run docker cp on the host (docker cp from the first container to the host, then docker cp from the host to the second container).
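For example (the source container name and its backup path here are hypothetical; pgadmin_1 is from the question):
# On the host: copy out of the first container, then into the second.
docker cp app_1:/db_backups/latest.sqlc ./latest.sqlc
docker cp ./latest.sqlc pgadmin_1:/var/lib/pgadmin/storage/mine/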
If it all has to be done inside the container, the next easiest solution is probably to use a shared volume:
docker run -v shared:/shared --name containerA ...
docker run -v shared:/shared --name containerB ...
Then in containerA you can cp ../db_backups/latest.sqlc /shared, and in containerB you can cp /shared/latest.sqlc /var/lib/pgadmin/storage/mine.
This is a nice solution because it doesn't require installing anything inside the container.
Alternatively, you could:
Install the docker CLI inside each container, and mount the Docker socket inside each container. This would let you run your docker cp command, but it gives anything inside the container complete control of your host (because access to docker == root access).
Run sshd in the target container, set up the necessary keys, and then use scp to copy things from the first container to the second container.

How to mount volume inside child docker created by parent docker sharing docker.sock

I am trying to create a wrapper container to build and run a set of containers using a docker-compose file I cannot modify. The docker-compose file mounts several volumes, but when starting docker-compose from inside the wrapper container, the volumes are still mounted from the host, since the docker.sock is volume-mounted to be the host's docker.sock.
I would like to not have to use full docker-in-docker due to all the problems associated with it outlined in jpetazzo's article.
I would also like to avoid --volumes-from, since I cannot edit the docker-compose file mentioned previously.
Is there a way to get this snippet to correctly use the parent docker's file instead of going to the host filesystem and mounting it from there?
FROM docker:latest
RUN mkdir -p /tmp/parent/ && echo "This is from the parent docker" > /tmp/parent/parent.txt
CMD docker run -v /tmp/parent/parent.txt:/root/parent.txt --rm ubuntu:18.04 bash -c "cat /root/parent.txt"
when run with a command akin to this:
docker build -t parent . && docker run --rm -v /var/run/docker.sock:/var/run/docker.sock parent
Make your paths the same on the host and inside of the docker image, e.g.
docker run -v /var/run/docker.sock:/var/run/docker.sock \
-v /home/user:/home/user -w /home/user/project parent_image ...
By mounting /home/user at the same location inside the image, a command like docker-compose up with relative bind mounts will send container path names to the Docker socket that match the paths on the host.
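Applied to the snippet in the question, that means the file must also exist on the host at the path the inner docker run references (a sketch under that assumption):
# On the host: create the file where the inner docker run -v will look for it.
mkdir -p /tmp/parent && echo "This is from the parent docker" > /tmp/parent/parent.txt
# Mount docker.sock plus the data directory at an identical path inside the wrapper.
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v /tmp/parent:/tmp/parent \
  parent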

How to allow docker sibling container to bind subdirectory from existing volume

I want to know how I can allow a "child" (sibling) docker container to access some subdirectory of an already mounted volume. As an explanation, this is a simple setup:
I have the following Dockerfile, which just installs Docker in a Docker container:
FROM ubuntu
RUN apt-get update && apt-get install -y curl
RUN curl -fsSL https://get.docker.com/ | sh
I have the following data directory on my host machine
/home/user/data/
    data1.txt
    subdir/
        data2.txt
Build the parent image:
[host]$> docker build -t parent .
Then run the parent container:
[host]$> docker run --rm --name parent -it -v /home/user/data/:/data/ -v /var/run/docker.sock:/var/run/docker.sock parent
Now I have a running container, and am "inside" the new container. Since I have the docker socket bound to the parent, I am able to run docker commands to create "child" containers, which are actually sibling containers. The data volume has been successfully mapped:
[parent]$> ls /data/
subdir data1.txt
Now I want to create a sibling container that can only see the subdir directory:
[parent]$> docker run --rm --name child -it -v /data/subdir/:/data/ ubuntu
This creates a sibling container, and I am successfully "inside" it; however, the new data directory is empty. My assumption is that this is because the source path I pass, /data/subdir/, is resolved by the host, where that directory doesn't exist, rather than inside the parent container.
[child]$> ls /data/
<nothing>
What can I do to allow this mapping to work, so that the child can create files in the subdirectory, and that the parent container can see and access these files? The child is not allowed to see data1.txt (or anything else above the subdirectory).
"Sibling" container is the correct term, there is no direct relationship between what you have labeled the "parent" and "child" containers, even though you ran the docker command in one of the containers.
The container with the docker socket mounted still controls the dockerd running on the host, so any paths sent to dockerd via the API will be in the host's scope.
There are docker commands where using the container's filesystem does change things: those where the docker utility itself accesses the local file system. docker build, docker cp, docker import, and docker export are examples where docker interacts with the local file system.
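For example, running docker cp inside the "parent" container does read the parent's own filesystem, because the CLI itself reads the local path and streams it through the API (a sketch, assuming the child container exists):
# Copies from the parent container's /data/subdir, not from the host.
docker cp /data/subdir/. child:/data/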
Mount
Use -v /home/user/data/subdir:/data (the host's path) for the second container:
docker run --name parent_volume \
-it --rm -v /home/user/data:/data ubuntu
docker run --name child_volume \
-it --rm -v /home/user/data/subdir:/data ubuntu
The processes you run need to be careful about what writes to data mounted into multiple containers, so the data doesn't get clobbered.
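So from inside the "parent" container, start the sibling with the path as the host sees it (a quick sketch using the paths from the question):
# /data/subdir in the parent corresponds to /home/user/data/subdir on the host,
# and it is the host path that dockerd resolves.
docker run --rm --name child -it -v /home/user/data/subdir:/data ubuntu
# Files the child writes to /data now appear in the parent under /data/subdir/.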

How to run postgres commands in a docker container?

I don't want to install postgres locally, but since I have it in my docker container, I'd like to be able to run its commands and utilities, like pg_dump myschema > schema.sql.
How can I run commands inside of running containers?
docker exec -it <container> <cmd>
e.g.
docker exec -it your-container /bin/bash
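For the pg_dump case specifically, you can run it through docker exec and let the shell redirect the output on the host (a sketch; the container name and database are placeholders, and -t is omitted so a TTY doesn't mangle the redirected output):
docker exec my_postgres pg_dump -U postgres myschema > schema.sql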
There are different options:
1. You can copy files into the container using the docker cp command. Copy the required files in, then go inside the container and run the command.
2. Modify the Dockerfile used to build the image. It's actually really simple to create a Dockerfile. Using the EXPOSE instruction you can expose a port, and then docker run --publish (i.e. the -p option) publishes the container's port(s) to the host. After that you can access postgres from outside and run scripts by creating a connection.
For the first option you need to go inside the container: first list the running containers with docker ps, then use docker exec -it container_name /bin/bash.
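A sketch of the second option's publish step (image name and password are placeholders):
# Publish the container's port 5432 on the host; any client that can
# reach localhost:5432 can then connect and run scripts against it.
docker run -d --name my_postgres -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres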

Does Docker update contents of volume when mounted if changes are made in Dockerfile?

I have Jenkins running in a Docker container. The home directory is in a host volume, in order to ensure that the build history is preserved when updates to the container are actioned.
I have updated the image so that it creates an additional file in the home directory. When the new image is pulled and a container is run from it, I cannot see the changed file.
ENV JENKINS_HOME=/var/jenkins_home
RUN mkdir -p ${JENKINS_HOME}/.m2
COPY settings.xml ${JENKINS_HOME}/.m2/settings.xml
RUN chown -R jenkins:jenkins ${JENKINS_HOME}/.m2
VOLUME ["/var/jenkins_home"]
I am running the container like this:
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins
I had previously run Jenkins, so the home directory already exists on the host. When I pull the new image and run it, I see that the file .m2/settings.xml is not created. Why is this?
Basically when you run:
docker run -v /host-src-dir:/container-dest-dir my_image
You will overlay your /container-dest-dir with what is in /host-src-dir
From the docs:
$ docker run -d -P --name web -v /src/webapp:/webapp training/webapp python app.py
This command mounts the host directory, /src/webapp, into the container at /webapp. If the path /webapp already exists inside the container's image, the /src/webapp mount overlays but does not remove the pre-existing content. Once the mount is removed, the content is accessible again. This is consistent with the expected behavior of the mount command.
This SO question is also relevant: docker mounting volumes on host.
It seems you want it the other way around (i.e. the container is source and the host is destination).
Here is a workaround:
1. Create the volume in your Dockerfile.
2. Run the container without -v, i.e.: docker run --name=my_container my_image
3. Run docker inspect --format='{{json .Mounts}}' my_container
This will give you output similar to:
[{"Name":"5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73","Source":"/var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data","Destination":"/var/jenkins_home","Driver":"local","Mode":"","RW":true,"Propagation":""}]
This means the directory as it exists in the container was mounted into the host directory /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data.
Unfortunately, I do not know a way to make it mount on a specific host directory instead.
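One possible follow-on (my suggestion, not part of the answer above): copy that volume's contents into the host directory you actually want, then go back to the bind mount:
# Copy the populated volume data (path taken from the docker inspect output above)
# into the desired host directory, then re-run with the original bind mount.
sudo cp -a /var/lib/docker/volumes/5e2d41896b9b1b0d7bc0b4ad6dfe3f926c73/_data/. /host/directory/
docker run -v /host/directory:/var/jenkins_home -p 80:8080 jenkins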
