Why are exited docker containers not getting removed? - docker

File name: dockerHandler.sh
#!/bin/bash
set -e
to=$1
shift
cont=$(docker run -d "$@")
code=$(timeout "$to" docker wait "$cont" || true)
docker kill $cont &> /dev/null
docker rm $cont
echo -n 'status: '
if [ -z "$code" ]; then
    echo timeout
else
    echo exited: $code
fi
echo output:
# pipe to sed simply for pretty nice indentation
docker logs $cont | sed 's/^/\t/'
docker rm $cont &> /dev/null
But whenever I check the container status after running the docker image, it gives a list of exited docker containers.
command: docker ps -as
Hence, to delete those exited containers, I manually run the command below:
docker rm $(docker ps -a -f status=exited -q)

You should add the flag --rm to your docker command:
From Docker man:
➜ ~ docker run --help | grep rm
--rm Automatically remove the container when it exits

I removed these lines:
docker kill $cont &> /dev/null
docker rm $cont
docker logs $cont | sed 's/^/\t/'
and used gtimeout instead of timeout on Mac; it works fine.
To install gtimeout on Mac:
Installing CoreUtils
brew install coreutils
In line 8 of DockerTimeout.sh change timeout to gtimeout
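Putting these changes together, a minimal sketch of the revised handler (assuming gtimeout from coreutils on macOS; plain timeout works on Linux):
#!/bin/bash
set -e
to=$1
shift
# --rm makes the daemon remove the container automatically once it exits,
# so no explicit docker kill / docker rm / docker logs cleanup is needed
cont=$(docker run -d --rm "$@")
code=$(gtimeout "$to" docker wait "$cont" || true)
echo -n 'status: '
if [ -z "$code" ]; then
    echo timeout
    # note: on a timeout the container keeps running unless it is killed here
else
    echo "exited: $code"
fi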

Related

How to migrate volume data from docker-for-mac to colima

How do I move volumes from docker-for-mac into colima?
The script below copies all the volumes from docker-for-mac and moves them to colima.
Note: there will be a lot of volumes you may not want to copy over since they are temporary ones; you can skip them by simply adding a | grep "YOUR FILTER" to the for loop, either before or after the awk (an example follows the script below).
The following code makes two assumptions:
you have docker-for-mac installed and running
you have colima running
That is all you need; now copy and paste this into your terminal. No need to touch anything.
(
# set -x # uncomment to debug
set -e
# ssh doesn't like file descriptor piping, we need to write the configuration into someplace real
tmpconfig=$(mktemp);
# Need to have permissions to copy the volumes, and need to remove the ControlPath and add ForwardAgent
(limactl show-ssh --format config colima | grep -v "^ ControlPath\| ^User"; echo " ForwardAgent=yes") > $tmpconfig;
# Setup root account
ssh -F $tmpconfig $USER@lima-colima "sudo mkdir -p /root/.ssh/; sudo cp ~/.ssh/authorized_keys /root/.ssh/authorized_keys"
# Loop over each volume inside docker-for-mac
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk '{print $2}'); do
    echo $volume_name;
    # Make the volume backup
    DOCKER_CONTEXT=desktop-linux docker run -d --rm --mount source=$volume_name,target=/volume --name copy-instance busybox sleep infinity;
    DOCKER_CONTEXT=desktop-linux docker exec copy-instance sh -c "tar czf /$volume_name.tar /volume";
    DOCKER_CONTEXT=desktop-linux docker cp copy-instance:/$volume_name.tar /tmp/$volume_name.tar;
    DOCKER_CONTEXT=desktop-linux docker kill copy-instance;
    # Restore the backup inside colima
    DOCKER_CONTEXT=colima docker volume create $volume_name;
    ssh -F $tmpconfig root@lima-colima "rm -rf /var/lib/docker/volumes/$volume_name; mkdir -p /var/lib/docker/volumes/$volume_name/_data";
    scp -r -F $tmpconfig /tmp/$volume_name.tar root@lima-colima:/tmp/$volume_name.tar;
    ssh -F $tmpconfig root@lima-colima "tar -xf /tmp/$volume_name.tar --strip-components=1 --directory /var/lib/docker/volumes/$volume_name/_data";
done
)
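As noted above, temporary volumes can be skipped by filtering the loop. For example, only the for line would change (the "buildkit" pattern is just an illustration; substitute your own filter):
for volume_name in $(DOCKER_CONTEXT=desktop-linux docker volume ls | awk '{print $2}' | grep -v "buildkit"); do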

Docker container exited, not running

I have a Java application that runs in one docker container and connects to a MySQL database in another docker container. The problem is that the javaserver container exits instead of running, while mysql8server runs fine.
I start it by running the shell script ./run.sh:
#!/bin/bash
RECONNECT_BRIDGE=$(docker network ls | grep -c rconnect_bridge)
echo "RECONNECT_BRIDGE COUNT = $RECONNECT_BRIDGE"
if [ $RECONNECT_BRIDGE -ne 0 ]; then
    docker network rm rconnect_bridge
    echo "Removing previous reconnect bridge"
fi
docker network create rconnect_bridge
echo "reconnect_bridge has been successfully created"
MYSQL_CONTAINER=$(docker container ls -a | grep -c mysql8server)
echo "MYSQL_CONTAINER COUNT $MYSQL_CONTAINER"
if [ $MYSQL_CONTAINER -ne 0 ]; then
    docker container stop mysql8server
    docker container rm mysql8server
    echo "Previous mysql8server stopped and removed"
fi
#check mysql directory
if [ ! -d "/u01/data/mysql" ]; then
    mkdir -p /u01/data/mysql
    chmod u+xrw /u01/data/mysql
    echo "/u01/data/mysql folder has been created"
fi
#create mysql container
docker container run -d --name mysql8server --network rconnect_bridge -v /u01/data/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=root mysql:8.0.25
echo "waiting for the mysql server to be launched"
sleep 30
echo "launching mysql8server"
#Build the javaserver image
docker build -t javaserver:1.0 .
JAVA_CONTAINER=$(docker container ls -a | grep -c javaserver)
if [ $JAVA_CONTAINER -ne 0 ]; then
    docker container stop javaserver
    docker container rm javaserver
fi
docker container run -it -d --name javaserver --network rconnect_bridge javaserver:1.0 /bin/bash
echo "java server launched successfully"
Dockerfile
FROM ubuntu:21.04
ENV JAVA_HOME=/u01/data/jdk-11
ENV PATH=$PATH:${JAVA_HOME}/bin
RUN mkdir -p /u01/data
WORKDIR /u01/data
ADD https://download.java.net/openjdk/jdk11/ri/openjdk-11+28_linux-x64_bin.tar.gz .
RUN gunzip openjdk-11+28_linux-x64_bin.tar.gz
RUN tar -xvf openjdk-11+28_linux-x64_bin.tar
RUN rm -f openjdk-11+28_linux-x64_bin.tar
ADD https://archive.apache.org/dist/tomcat/tomcat-9/v9.0.45/bin/apache-tomcat-9.0.45.tar.gz .
RUN gunzip apache-tomcat-9.0.45.tar.gz
RUN tar -xvf apache-tomcat-9.0.45.tar
RUN rm -f apache-tomcat-9.0.45.tar
COPY target/rconnect.war /u01/data/apache-tomcat-9.0.45/webapps/
RUN echo "copying the war file to the destination"
COPY src/main/db/db-schema.sql /u01/data/
COPY startup.sh .
RUN chmod u+x /u01/data/startup.sh
ENTRYPOINT ["/u01/data/startup.sh"]
CMD ["tail","-f","/dev/null"]
The startup shell script (startup.sh):
#!/bin/bash
set -e
mysql -uroot -proot -hmysql8server < /u01/data/db-schema.sql
echo "creating the db schema"
/u01/data/apache-tomcat-9.0.45/bin/startup.sh &
exec "$#"

Makefile fails executing sudo docker kill

I am trying to execute the following Makefile, but when I run the kill target it fails, because "$(sudo docker ps -q)" is not detected or that part is not executed.
I have the following Makefile:
.PHONY: kill all services farr-api farr-ingest farr-from-on-premise farr-real-time-processing
farr-api:
	cd apis/1api && sudo docker-compose up -d
farr-ingest:
	cd apis/2ingest && sudo docker-compose up -d
farr-from-on-premise:
	cd apis/3onpremise && sudo docker-compose up -d
farr-real-time-processing:
	cd apis/4realtimeprocessing && sudo docker-compose up -d
services:
	cd services && sudo docker-compose up -d
all: services farr-from-on-premise farr-real-time-processing farr-ingest farr-api
kill:
	sudo docker kill $(sudo docker ps -q)
When I run make kill, it throws the following error:
sudo docker kill
"docker kill" requires at least 1 argument(s).
See 'docker kill --help'.
Usage: docker kill [OPTIONS] CONTAINER [CONTAINER...]
Kill one or more running containers
make: *** [kill] Error 1
It looks like "$" does not detect by Makefile.
But if I run manually sudo docker kill $(sudo docker ps -q) it works fine.
The target should look like this:
kill:
	for c in $$(sudo docker ps -q); do sudo docker kill $$c; done
UPDATE:
It turns out that docker kill accepts multiple containers as arguments, so just escaping the dollar sign is enough to kill all containers:
kill:
	sudo docker kill $$(sudo docker ps -q)
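For context, Make expands $(...) itself before the shell ever sees the recipe, so $(sudo docker ps -q) is treated as an (undefined) Make variable reference and expands to nothing, which is why the error showed docker kill being run with no arguments; $$ passes a literal $ through to the shell. A tiny illustration (the target names are made up):
show-wrong:
	echo "containers: $(sudo docker ps -q)"  # Make expands this to an empty string before the shell runs
show-right:
	echo "containers: $$(sudo docker ps -q)" # the shell receives the command substitution and runs docker ps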

execute a command within docker swarm service

Initialize swarm mode:
root@ip-172-31-44-207:/home/ubuntu# docker swarm init --advertise-addr 172.31.44.207
Swarm initialized: current node (4mj61oxcc8ulbwd7zedxnz6ce) is now a manager.
To add a worker to this swarm, run the following command:
Join the second node:
docker swarm join \
--token SWMTKN-1-4xvddif3wf8tpzcg23tem3zlncth8460srbm7qtyx5qk3ton55-6g05kuek1jhs170d8fub83vs5 \
172.31.44.207:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
# start 2 services
docker service create continuumio/miniconda3
docker service create --name redis redis:3.0.6
root@ip-172-31-44-207:/home/ubuntu# docker service ls
ID NAME REPLICAS IMAGE COMMAND
2yc1xjmita67 miniconda3 0/1 continuumio/miniconda3
c3ptcf2q9zv2 redis 1/1 redis:3.0.6
As shown above, redis has its replica while miniconda does not seem to be replicated.
I usually log in to the miniconda container to type these commands:
/opt/conda/bin/conda install jupyter -y --quiet && mkdir /opt/notebooks && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser
The problem is that docker exec -it XXX bash command does not work with swarm mode.
You can execute commands by filtering on the container name, so you don't need to pass the entire swarm container hash, just the service name. Like this:
docker exec $(docker ps -q -f name=servicename) ls
Here is a one-liner for accessing the corresponding instance of the service on localhost:
docker exec -ti stack_myservice.1.$(docker service ps -f 'name=stack_myservice.1' stack_myservice -q --no-trunc | head -n1) /bin/bash
It was tested in PowerShell, but bash should be the same. The one-liner accesses the first instance; to reach another one, replace '1' in both places with the number of the instance you want, as in the example below.
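For instance, the second instance of the same service would be reached with:
docker exec -ti stack_myservice.2.$(docker service ps -f 'name=stack_myservice.2' stack_myservice -q --no-trunc | head -n1) /bin/bash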
A more complex example for the distributed case:
#! /bin/bash
set -e
exec_task=$1
exec_instance=$2
strindex() {
    x="${1%%$2*}"
    [[ "$x" = "$1" ]] && echo -1 || echo "${#x}"
}
parse_node() {
    read title
    id_start=0
    name_start=`strindex "$title" NAME`
    image_start=`strindex "$title" IMAGE`
    node_start=`strindex "$title" NODE`
    dstate_start=`strindex "$title" DESIRED`
    id_length=name_start
    name_length=`expr $image_start - $name_start`
    node_length=`expr $dstate_start - $node_start`
    read line
    id=${line:$id_start:$id_length}
    name=${line:$name_start:$name_length}
    name=$(echo $name)
    node=${line:$node_start:$node_length}
    echo $name.$id
    echo $node
}
if true; then
    read fn
    docker_fullname=$fn
    read nn
    docker_node=$nn
fi < <( docker service ps -f name=$exec_task.$exec_instance --no-trunc -f desired-state=running $exec_task | parse_node )
echo "Executing in $docker_node $docker_fullname"
eval `docker-machine env $docker_node`
docker exec -ti $docker_fullname /bin/bash
This script could be used later as:
swarm_bash stack_task 1
It just executes bash on the required node.
EDIT 2017-10-06:
Nowadays you can create the overlay network with the --attachable flag to enable any container to join the network. This is a great feature, as it allows a lot of flexibility.
E.g.
$ docker network create --attachable --driver overlay my-network
$ docker service create --network my-network --name web --publish 80:80 nginx
$ docker run --network=my-network -ti alpine sh
(in alpine container) $ wget -qO- web
<!DOCTYPE html>
<html>
<head>
....
You are right, you cannot run docker exec on a docker swarm mode service. But you can still find out which node is running the container and then run exec directly on the container. E.g.
docker service ps miniconda3 # find out, which node is running the container
eval `docker-machine env <node name here>`
docker ps # find out the container id of miniconda
docker exec -it <container id here> sh
In your case you first have to find out why the service cannot get the miniconda container up. Maybe running docker service ps miniconda3 shows some helpful error messages?
Using the Docker API
Right now, Docker does not provide an API like docker service exec or docker stack exec for this. But regarding this, there already exist two issues dealing with this functionality:
github.com - moby/moby - Docker service exec
github.com - docker/swarmkit - Support for executing into a task
(Regarding the first issue, for me it was not directly clear that this issue deals with exactly this kind of functionality. But Exec for Swarm was closed and marked as a duplicate of the Docker service exec issue.)
Using Docker daemon over HTTP
As mentioned by BMitch on run docker exec from swarm manager, you could also configure the Docker daemon to use HTTP and then connect to every node without the need for ssh. But you should protect this using TLS authentication, which is already integrated into Docker. Afterwards you would be able to execute docker exec like this:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
-H=$HOST:2376 exec $containerId $cmd
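On the daemon side, each node would need dockerd listening on TCP with TLS verification enabled; a rough sketch of that invocation (the certificate paths here are placeholders, not taken from the answer):
dockerd \
  --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376 \
  -H unix:///var/run/docker.sock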
Using skopos-plugin-swarm-exec
There is a GitHub project which claims to solve the problem and provide the desired functionality by binding the docker daemon socket:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
datagridsys/skopos-plugin-swarm-exec \
task-exec <taskID> <command> [<arguments>...]
As far as I can see, this works by creating another container on the same node as the container on which docker exec should be executed. On that node the helper container mounts the docker daemon socket so it can run docker exec locally.
For more information have a look at: skopos-plugin-swarm-exec
Using docker swarm helpers
There is also another project called docker swarm helpers which seems to be more or less a wrapper around ssh and docker exec.
Reference:
https://github.com/docker/swarmkit/issues/1895#issuecomment-302147604
https://github.com/docker/swarmkit/issues/1895#issuecomment-358925313
You can jump onto a Swarm node and list the running docker containers using:
docker container ls
That will give you the container name in a format similar to: containername.1.q5k89uctyx27zmntkcfooh68f
You can then use the regular exec option to run commands on it:
docker container exec -it containername.1.q5k89uctyx27zmntkcfooh68f bash
I created a small script for our docker swarm cluster.
This script takes 3 params: the first is the service you want to connect to, the second is the task you want to run (this can be /bin/bash or any other process you want to run), and the third is optional and fills the -c option for bash or sh.
-n is optional to force it to connect to a node.
It retrieves the node that runs the service and runs the command.
#! /bin/bash
set -e
task=${1}
service=$2
bash=$3
serviceID=$(sudo docker service ps -f name=$service -f desired-state=running $service -q --no-trunc |head -n1)
node=$(sudo docker service ps -f name=$service -f desired-state=running $service --format="{{.Node}}"| head -n1 )
sudo docker -H $node exec -it $service".1."$serviceID $bash -c "$task"
Note: this requires the docker nodes to accept tcp connections by exposing docker on port 2375 on the worker nodes.
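A hypothetical usage example, assuming the script is saved as swarm_exec.sh and following the parameter order used in the script body (task first, then service, then shell):
./swarm_exec.sh "ls -la /var/log" my_stack_myservice /bin/bash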
For those who have multiple replicas and just want to run a command within any of them, here is another shortcut:
docker exec -it $(docker ps -q -f name=SERVICE_NAME | head -1) bash
I wrote a script to exec a command in docker swarm by service name. For example, it can be used in cron. You can also use bash pipelines, and it passes all params to the docker exec command. But it works only on the same node where the service started. I hope it helps someone.
#!/bin/bash
# swarm-exec.sh
set -e
for ((i=1;i<=$#;i++)); do
    val=${!i}
    if [ ${val:0:1} != "-" ]; then
        service_id=$(docker ps -q -f "name=$val");
        if [[ $service_id == "" ]]; then
            echo "Container $val not found!";
            exit 1;
        fi
        docker exec ${@:1:$i-1} $service_id ${@:$i+1:$#};
        exit 0;
    fi
done
echo "Usage: $0 [OPTIONS] SERVICE_NAME COMMAND [ARG...]";
exit 1;
Example of using:
./swarm-exec.sh app_postgres pg_dump -Z 9 -F p -U postgres app > /backups/app.sql.gz
echo ls | ./swarm-exec.sh -i app /bin/bash
./swarm-exec.sh -it some_app /bin/bash
The simplest command I found to docker exec into a swarm node (with a swarm manager at $SWARM_MANAGER_HOST) running the service $SERVICE_NAME (for example mystack_myservice) is the following:
SERVICE_JSON=$(ssh $SWARM_MANAGER_HOST "docker service ps $SERVICE_NAME --no-trunc --format '{{ json . }}' -f desired-state=running")
ssh -t $(echo $SERVICE_JSON | jq -r '.Node') "docker exec -it $(echo $SERVICE_JSON | jq -r '.Name').$(echo $SERVICE_JSON | jq -r '.ID') bash"
This assumes that you have ssh access to $SWARM_MANAGER_HOST as well as to the swarm node currently running the service task.
It also assumes that you have jq installed (apt install jq); if you can't or don't want to install it and you have python installed, you can create the following alias (based on this answer):
alias jq="python3 -c 'import sys, json; print(json.load(sys.stdin)[sys.argv[2].partition(\".\")[-1]])'"
See addendum 2...
Example of a one-liner for entering the database my_db on node master:
DB_NODE_ID=master && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
In case you want to configure, say max_connections:
DB_NODE_ID=master && $(docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
This approach allows you to enter all database nodes (e.g. slaves) just by setting the DB_NODE_ID variable accordingly.
Example for slave s2:
DB_NODE_ID=s2 && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
or
DB_NODE_ID=s2 && $(docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
Put this into your KiTTY or PuTTY configuration for master / s2 under Data/Command and you are set.
As an addendum:
The old, non-swarm-mode version reads simply:
docker exec -it master mysql my_db
resp.
DB_ID=master && $(docker exec -it $DB_ID mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $DB_ID mysql tmp
Addendum 2:
As it turned out in practice, the term docker ps -q -f name=$DB_NODE_ID may return wrong values under certain conditions.
The following approach works correctly:
docker ps -a | grep "_$DB_NODE_ID." | awk '{print $1}'
You may substitute the examples above accordingly.
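For instance, the first one-liner from above, rewritten with this lookup:
DB_NODE_ID=master && docker exec -it $(docker ps -a | grep "_$DB_NODE_ID." | awk '{print $1}') mysql my_db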
Addendum 3:
Well, these terms look awful and they certainly are painful to type, so you may want to ease your work. On Linux, everybody knows how to do this. On Windows, you may want to use AHK.
This is the AHK term I use:
:*:ii::DB_NODE_ID=$(docker ps -a | grep "_." | awk '{{}print $1{}}') && docker exec -it $id ash{Left 49}
So when I type ii -- which is as simple as it can get -- I get the desired term with the cursor in place and just have to fill in the container name.
I edited the script Brian van Rooijen added above. Because my reputation is too low, I cannot add it there.
#! /bin/bash
set -e
service=${1}
shift
task="$*"
echo $task
serviceID=$(docker service ps -f name=$service -f desired-state=running $service -q --no-trunc |head -n1)
node=$(docker service ps -f name=$service -f desired-state=running $service --format="{{.Node}}"| head -n1 )
serviceName=$(docker service ps -f name=$service -f desired-state=running $service --format="{{.Name}}"| head -n1 )
docker -H $node exec -it $serviceName"."$serviceID $task
I had the issue that the container didn't exist with the hard-coded .1. in the execution.
Take a look at my solution: https://github.com/binbrayer/swarmServiceExec.
This approach is based on Docker Machines. I also created a prototype of the script to call containers asynchronously and, as a result, simultaneously.

Error in script to clear cache in a docker container

I need to clear the cache manually in my nginx docker container and want to do it with a script. I have made a script that finds the PID:
docker-pid
#!/bin/sh
exec docker inspect --format '{{ .State.Pid }}' "$@"
And another final script
clear_cache.sh
#!/bin/sh
PID=/usr/bin/docker-pid proxy_nginx_1
nsenter -m -p -u -n -i -t $PID
rm -rf /etc/nginx/cache/*
exit
I get this error:
./clear_cache.sh: line 2: proxy_nginx_1: command not found
If I launch docker-pid from the shell, it works. Why?
In bash you have to use $(<COMMAND>) if you want to save the output of a command to a variable. So
clear_cache.sh
#!/bin/sh
PID=$(/usr/bin/docker-pid proxy_nginx_1)
nsenter -m -p -u -n -i -t $PID
rm -rf /etc/nginx/cache/*
exit
or
#!/bin/sh
PID=$(docker inspect --format '{{ .State.Pid }}' "$@")
nsenter -m -p -u -n -i -t $PID
rm -rf /etc/nginx/cache/*
exit
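One caveat worth adding (an assumption on my part, not from the original answer): nsenter without a command just opens an interactive shell in the target namespaces, so the rm line above only runs on the host after that shell exits. A non-interactive sketch, keeping the same proxy_nginx_1 container and cache path, would pass the command to nsenter directly:
#!/bin/sh
PID=$(docker inspect --format '{{ .State.Pid }}' proxy_nginx_1)
# run the removal inside the container's namespaces; sh -c makes the
# /etc/nginx/cache/* glob expand inside the container, not on the host
nsenter -m -p -u -n -i -t "$PID" sh -c 'rm -rf /etc/nginx/cache/*'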
