I have multiple containers which are running. I changed the docker log driver and want all my running containers to start using the new configuration. In docker's documentation, they said all containers must be recreated. I keep my applications in multiple folders, so I can't recreate them with a single command.
I have to go to each of the following folders one by one:
/home/docker/folder1
/home/docker/folder2
/home/docker/folder3
Then run the following command:
/usr/local/bin/docker-compose down
/usr/local/bin/docker-compose up -d
Is there any workaround?
I tried:
# /usr/local/bin/docker-compose down
ERROR:
Can't find a suitable configuration file in this directory or any
parent. Are you in the right directory?
Supported filenames: docker-compose.yml, docker-compose.yaml
You can list all running containers and pass them to docker container restart. For example:
docker container restart $(docker container ls --quiet)
If you only want to do this for containers created by Compose, you can filter by label:
docker container restart \
$(docker container ls --quiet --filter label=com.docker.compose.project)
Alternatively, you can search for all compose files and run more advanced commands against them. This may be required if a plain restart does not suffice.
find /home/docker \( -name docker-compose.yml -o -name docker-compose.yaml \) -type f \
| while read -r f; do
docker compose --file "$f" up --force-recreate -d
done
Or with xargs
find /home/docker \( -name docker-compose.yml -o -name docker-compose.yaml \) -type f \
| xargs --no-run-if-empty -I{} docker compose --file {} up --force-recreate -d
If you have jq installed, you can get the paths to the compose files more reliably by using docker compose ls with JSON-formatted output:
docker compose ls --format json \
| jq -rc 'map(.ConfigFiles) | .[]' \
| xargs -r -I{} docker compose -f {} up --force-recreate -d
If you want to take care of the project name as well, you need to take it from that output too:
docker compose ls --format json \
| jq -c '.[]' | while read -r config; do
docker compose \
-p "$(echo "$config" | jq -r '.Name')" \
-f "$(echo "$config" | jq -r '.ConfigFiles')" \
up --force-recreate -d
done
This is still not 100% solid, since you may have used more than one config file. In that case, you can replace the commas in ConfigFiles with colons and set the environment variable COMPOSE_FILE:
docker compose ls --format json \
| jq -c '.[]' | while read -r config; do
COMPOSE_FILE="$(echo "$config" | jq -r '.ConfigFiles' | tr -s , :)" \
docker compose \
-p "$(echo "$config" | jq -r '.Name')" \
up --force-recreate -d
done
There is still one problem you can't overcome that way: if you have used specific profiles, you have no way of knowing which ones. So hopefully docker container restart is enough, as it avoids most of the issues involved in finding compose files and invoking Compose.
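If you do happen to know which profiles a given project uses, a minimal sketch (the profile names and the path below are purely illustrative assumptions):
# hypothetical profiles "web" and "db" for one of the projects
COMPOSE_PROFILES=web,db \
docker compose -f /home/docker/folder1/docker-compose.yml up --force-recreate -d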
I'm trying to create a docker container that will let me run firefox, so I can eventually use a jupyter notebook. Right now, although I have successfully installed firefox, I cannot get a window to open.
Following instructions from running-gui-apps-within-docker, I created an image (i.e. "sample") with Firefox and then tried to run it using
$ docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --net=host sample
When I did so, I got the following error:
root@machine:~# firefox
No protocol specified
Unable to init server: Could not connect: Connection refused
Error: cannot open display: :1
Using man docker run to understand the flags, I was not able to find the --net flag, though I did see a --network flag. However, replacing --net with --network didn't change anything. How do I specify a protocol that will let me create an image from whose containers I will be able to run firefox?
PS - For what it's worth, when I check the value of DISPLAY, I get the predictable:
~# echo $DISPLAY
:1
I have been running firefox inside docker for quite some time, so this is possible. With regard to the security aspects, I think the following are the relevant parts:
Building
The build needs to match up uid/gid values with the user that is running the container. I do this with UID and GID build args:
Dockerfile
...
FROM fedora:35 as runtime
ENV DISPLAY=:0
# uid and gid in container needs to match host owner of
# /tmp/.docker.xauth, so they must be passed as build arguments.
ARG UID
ARG GID
RUN \
groupadd -g ${GID} firefox && \
useradd --create-home --uid ${UID} --gid ${GID} --comment="Firefox User" firefox && \
true
...
ENTRYPOINT [ "/entrypoint.sh" ]
Makefile
build:
docker pull $$(awk '/^FROM/{print $$2}' Dockerfile | sort -u)
docker build \
-t $(USER)/firefox:latest \
-t $(USER)/firefox:`date +%Y-%m-%d_%H-%M` \
--build-arg UID=`id -u` \
--build-arg GID=`id -g` \
.
entrypoint.sh
#!/bin/sh
# Assumes you have run
# pactl load-module module-native-protocol-tcp auth-ip-acl=127.0.0.1 auth-anonymous=1
# on the host system.
PULSE_SERVER=tcp:127.0.0.1:4713
export PULSE_SERVER
if [ "$1" = /bin/bash ]
then
exec "$#"
fi
exec /usr/local/bin/su-exec firefox:firefox \
/usr/bin/xterm \
-geometry 160x15 \
/usr/bin/firefox --no-remote "$@"
So I am running firefox as a dedicated non-root user, and I wrap it in xterm so that the container does not die if firefox accidentally exits or if you want to restart it. It is a bit annoying having all these extra xterm windows, but I have not found any other way to prevent accidental loss of the .mozilla directory content. Mapping it out to a volume would prevent running multiple independent docker instances, which I definitely want, and from a privacy point of view I also prefer not to drag along a long history. Whenever I do want to save something, I make a copy of the .mozilla directory on the host computer and restore it later in a new container.
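A minimal sketch of that save/restore step (the container names firefox-old and firefox-new are hypothetical; /home/firefox comes from the Dockerfile above):
# copy the profile out of the old container onto the host
docker cp firefox-old:/home/firefox/.mozilla ./mozilla-backup
# seed a fresh container that has not created a profile yet, then restore
# ownership, since docker cp copies files in as root
docker cp ./mozilla-backup firefox-new:/home/firefox/.mozilla
docker exec --user root firefox-new chown -R firefox:firefox /home/firefox/.mozilla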
Running
run.sh
#!/bin/bash
export XSOCK=/tmp/.X11-unix
export XAUTH=/tmp/.docker.xauth
touch ${XAUTH}
xauth nlist ${DISPLAY} | sed -e 's/^..../ffff/' | uniq | xauth -f ${XAUTH} nmerge -
DISPLAY2=$(echo $DISPLAY | sed s/localhost//)
if [ $DISPLAY2 != $DISPLAY ]
then
export DISPLAY=$DISPLAY2
xauth nlist ${DISPLAY} | sed -e 's/^..../ffff/' | uniq | xauth -f ${XAUTH} nmerge -
fi
ARGS=$(echo $@ | sed 's/[^a-zA-Z0-9_.-]//g')
docker run -ti --rm \
--user root \
--name firefox-"$ARGS" \
--network=host \
--memory "16g" --shm-size "1g" \
--mount "type=bind,target=/home/firefox/Downloads,src=$HOME/firefox_downloads" \
-v ${XSOCK}:${XSOCK} \
-v ${XAUTH}:${XAUTH} \
-e XAUTHORITY=${XAUTH} \
-e DISPLAY=${DISPLAY} \
${USER}/firefox "$@"
With this you can for instance run ./run.sh https://stackoverflow.com/ and get a container named firefox-httpsstackoverflow.com. If you then want to log into your bank completely isolated from all other firefox instances (protected by operating system process boundaries, not just some internal browser separation) you run ./run.sh https://yourbank.example.com/.
Try running xhost + on your docker host to allow connections to the X server.
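For example, a slightly narrower variant than a bare xhost + (the image name sample is taken from the question):
# on the host: allow local clients only, instead of disabling access control entirely
xhost +local:
docker run -it --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --net=host sample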
How do I move volumes from docker-for-mac into colima?
This will copy all the volumes from docker-for-mac and move them to colima.
Note: there will be a lot of volumes you may not want to copy over, since they are temporary ones. You can skip them by adding a | grep "YOUR FILTER" to the for loop, either before or after the awk.
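For example, a minimal sketch of that filter (the "myapp" pattern is an assumption); it lists only the matching volume names, skipping the header row:
DOCKER_CONTEXT=desktop-linux docker volume ls | awk 'NR>1 {print $2}' | grep "myapp"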
The following code makes 2 assumptions:
you have docker-for-mac installed and running
you have colima running
That is all you need. Now copy and paste this into your terminal; no need to touch anything.
(
# set -x # uncomment to debug
set -e
# ssh doesn't like file descriptor piping, we need to write the configuration into someplace real
tmpconfig=$(mktemp);
# Need to have permissions to copy the volumes, and need to remove the ControlPath and add ForwardAgent
(limactl show-ssh --format config colima | grep -v "^ ControlPath\|^ User"; echo " ForwardAgent=yes") > $tmpconfig;
# Setup root account
ssh -F $tmpconfig $USER@lima-colima "sudo mkdir -p /root/.ssh/; sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys"
# Loop over each volume inside docker-for-mac
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk 'NR>1 {print $2}'); do
echo $volume_name;
# Make the volume backup
DOCKER_CONTEXT=desktop-linux docker run -d --rm --mount source=$volume_name,target=/volume --name copy-instance busybox sleep infinity;
DOCKER_CONTEXT=desktop-linux docker exec copy-instance sh -c "tar czf /$volume_name.tar /volume";
DOCKER_CONTEXT=desktop-linux docker cp copy-instance:/$volume_name.tar /tmp/$volume_name.tar;
DOCKER_CONTEXT=desktop-linux docker kill copy-instance;
# Restore the backup inside colima
DOCKER_CONTEXT=colima docker volume create $volume_name;
ssh -F $tmpconfig root#lima-colima "rm -rf /var/lib/docker/volumes/$volume_name; mkdir -p /var/lib/docker/volumes/$volume_name/_data";
scp -r -F $tmpconfig /tmp/$volume_name.tar root#lima-colima:/tmp/$volume_name.tar;
ssh -F $tmpconfig root#lima-colima "tar -xf /tmp/$volume_name.tar --strip-components=1 --directory /var/lib/docker/volumes/$volume_name/_data";
done
)
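Once the loop finishes, you can sanity-check that the volumes arrived on the colima side, for example:
# list the volumes as seen by the colima context
DOCKER_CONTEXT=colima docker volume ls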
I want to copy some file from a docker image without actually running a container from that image. Is it possible to do so? If yes, what would be steps required ?
There's no docker cp for images, because images are immutable objects, but you can create a non-running container instead:
docker create --name cont1 some-image
docker cp cont1:/some/dir/file.tmp file.tmp
docker rm cont1
The full (now rejected) proposal for docker cp on images is here:
https://github.com/moby/moby/issues/16079
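If the image defines neither CMD nor ENTRYPOINT, docker create needs some command argument; since the container is never started, a dummy one is enough (a sketch, /no-op is just a placeholder):
# the dummy command is never executed, it only satisfies docker create
docker create --name cont1 some-image /no-op
docker cp cont1:/some/dir/file.tmp file.tmp
docker rm cont1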
Here's a one-liner to copy a file from a Docker Image, through a container:
Image defines CMD
docker run --rm \
-v $(pwd):/binary my-image /bin/sh -c "cp /kube-bench /binary"
Image defines ENTRYPOINT
docker run --rm --entrypoint "/bin/sh" \
-v $(pwd):/binary kube-bench-image -c "cp /kube-bench /binary"
Both commands mount the current directory into the container in order to copy the file /kube-bench out to it.
docker image save alpine -o alpine.tar
Untar it: it will contain a hash-named directory, and inside there is a layer.tar which contains the image files.
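A minimal sketch of that, assuming the classic docker save layout with per-layer layer.tar files (newer OCI-style archives name the layer blobs differently):
docker image save alpine -o alpine.tar
mkdir alpine-extracted
tar -xf alpine.tar -C alpine-extracted
# each hash-named directory holds a layer.tar with that layer's files
for layer in alpine-extracted/*/layer.tar; do
tar -tf "$layer" | head
done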
The Alfonso Tienda response does not work with images that have no CMD or ENTRYPOINT.
The jhernandez response is a good starting point for a solution, but it needs to extract all files.
What about:
docker image save $IMAGE |
tar tf - |
grep layer.tar |
while read layer; do
docker image save $IMAGE |
tar -xf - $layer -O |
tar -xf - $FILENAME ; done
So, we can just set the IMAGE and FILENAME to extract, like:
$ IMAGE=alpine
$ FILENAME=etc/passwd
$ docker image save $IMAGE | tar tf - | grep layer.tar | while read layer; do docker image save $IMAGE | tar -xf - $layer -O | tar -xf - $FILENAME ; done
And we can save some storage space.
I saw some blog posts where people talk about JMeter and Docker. I understand that Docker is helpful for setting up a container with all the dependencies. But they all run/create the containers on the same host, so all the containers will share the host's resources. It is like running multiple instances of JMeter on the same host, which will not help generate more load.
When a host has 12GB RAM, I think 1 instance of JMeter with 10GB heap can generate more load than running 10 containers with 1 jmeter instance in each container.
What is the point of running docker here?
I made an automatic solution that can be easily integrated with Jenkins.
The Dockerfile extends the java:8 image and adds the JMeter build. I will call this Docker image jmeter-base:
FROM java:8
RUN mkdir /jmeter \
&& cd /jmeter/ \
&& wget https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-3.3.tgz \
&& tar -xvzf apache-jmeter-3.3.tgz \
&& rm apache-jmeter-3.3.tgz
ENV JMETER_HOME /jmeter/apache-jmeter-3.3/
# Add Jmeter to the Path
ENV PATH $JMETER_HOME/bin:$PATH
If you want to use a master-slave solution, this is the jmeter master Dockerfile:
FROM jmeter-base
WORKDIR $JMETER_HOME
# Ports to be exposed from the container for JMeter Master
RUN mkdir scripts
EXPOSE 60000
And this is the jmeter slave Dockerfile:
FROM jmeter-base
# Ports to be exposed from the container for JMeter Slaves/Server
EXPOSE 1099 50000
# Application to run on starting the container
ENTRYPOINT $JMETER_HOME/bin/jmeter-server \
-Dserver.rmi.localport=50000 \
-Dserver_port=1099
Now, with both images, you need to know all the slave IPs in order to execute the tests. This script does all the work:
#!/bin/bash
COUNT=${1-1}
docker build -t jmeter-base jmeter-base
docker-compose build && docker-compose up -d && docker-compose scale master=1 slave=$COUNT
SLAVE_IP=$(docker inspect -f '{{.Name}} {{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' $(docker ps -aq) | grep slave | awk -F' ' '{print $2}' | tr '\n' ',' | sed 's/.$//')
WDIR=`docker exec -it master /bin/pwd | tr -d '\r'`
mkdir -p results
for filename in scripts/*.jmx; do
NAME=$(basename $filename)
NAME="${NAME%.*}"
eval "docker cp $filename master:$WDIR/scripts/"
eval "docker exec -it master /bin/bash -c 'mkdir $NAME && cd $NAME && ../bin/jmeter -n -t ../$filename -R$SLAVE_IP'"
eval "docker cp master:$WDIR/$NAME results/"
done
docker-compose stop && docker-compose rm -f
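Hypothetical usage (the script name is an assumption, and a docker-compose.yml defining the master and slave services is assumed in the same directory); the first argument is the number of slaves, defaulting to 1:
./run-jmeter-tests.sh 5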
I came to understand from this post by a friend of mine that we should not run multiple docker containers on the same host to generate more load.
http://www.testautomationguru.com/jmeter-distributed-load-testing-using-docker/
Instead, the point of using Docker here is to quickly set up the JMeter environment.
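For instance, a minimal sketch of that, reusing the jmeter-base image above purely as a reproducible JMeter environment (the test-plan path and file names are assumptions):
# run a single non-distributed test; results land in the mounted directory
docker run --rm -v "$(pwd)":/tests jmeter-base \
jmeter -n -t /tests/plan.jmx -l /tests/results.jtl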
I'm new to Docker and am excited about using the --volumes-from feature but there's something I'm not understanding.
If I want to use --volumes-from with two data-only containers, each of which exports volumes named /srv, how do I prevent the volume paths from colliding? I can map volume names when creating a bind mount using [host-dir]:[container-dir]; how do I do that with --volumes-from?
So what I want would look something like this:
docker run --name=DATA1 --volume=/srv busybox true
docker run --name=DATA2 --volume=/srv busybox true
docker run -t -i --rm --volumes-from DATA1:/srv1 --volumes-from DATA2:/srv2 ubuntu bash
It can be done, but it is not supported at the moment by the Docker command-line interface.
How-to
Find the volumes directories:
docker inspect DATA1 | grep "vfs/dir"
# output something like:
# "/srv": "/var/lib/docker/vfs/dir/<long vol id>"
So, you can automate this, and mount these directories at mount points of your choice:
# load directories in variables:
SRV1=$(docker inspect DATA1 | grep "vfs/dir" | awk '/"(.*)"/ { gsub(/"/,"",$2); print $2 }')
SRV2=$(docker inspect DATA2 | grep "vfs/dir" | awk '/"(.*)"/ { gsub(/"/,"",$2); print $2 }')
Now, mount these volumes by their real directories instead of using --volumes-from:
docker run -t -i -v $SRV1:/srv1 -v $SRV2:/srv2 ubuntu bash
IMO, the functionality is identical, because this is the same thing that is done when using --volumes-from.
For completeness...
#create data containers
docker run --name=d1 -v /svr1 busybox sh -c 'touch /svr1/some_data'
docker run --name=d2 -v /svr2 busybox sh -c 'touch /svr2/some_data'
# all together...
docker run --rm --volumes-from=d1 --volumes-from=d2 busybox sh -c 'find -name some_data'
# prints:
# ./svr2/some_data
# ./svr1/some_data
# cleanup...
docker rm -f d1 d2
The "--volumes-from=container" just map over the filesystem, like mount --bind
If you want to change the path, Jiri's answer is (currently) the only way.
But if you are in a limited environment, you might want to use Docker's built-in inspect formatting capabilities:
# create data containers
docker run --name=DATA1 --volume=/srv busybox sh -c 'touch /srv/some_data-1'
docker run --name=DATA2 --volume=/srv busybox sh -c 'touch /srv/some_data-2'
# run with volumes and show the data
docker run \
-v $(docker inspect -f '{{ index .Volumes "/srv" }}' DATA1):/srv1 \
-v $(docker inspect -f '{{ index .Volumes "/srv" }}' DATA2):/srv2 \
--rm busybox sh -c 'find -name some_data-*'
# prints:
# ./srv2/some_data-2
# ./srv1/some_data-1
# ditch data containers...
docker rm -f DATA1 DATA2
This probably even works with the old bash version that comes with boot2docker.