docker - how do you disable auto-restart on a container?

I can enable auto-restart with --restart=always, but after I stop the container, how do I turn off that attribute?
I normally run a webserver and typically map port 80:
docker run -d --restart=always -p 80:80 -i -t myuser/myproj /bin/bash
But there are times when I want to run a newer version of my image, but I want to keep the old container around. The problem is that if there are multiple containers with --restart=always, only one of them (random?) starts because they're all contending for port 80 on the host.

You can use the --restart=unless-stopped option, as @Shibashis mentioned, or update the restart policy (this requires Docker 1.11 or newer).
See the documentation for docker update and Docker restart policies.
docker update --restart=no my-container
This updates the restart policy for an existing container (my-container).
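To double-check that the update took effect, you can inspect the container's current policy (a quick check using the same example container as above):
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' my-container
This prints the container's current restart policy, for example no.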

Use the following to disable auto-restart on ALL containers known to the daemon.
docker update --restart=no $(docker ps -a -q)
Use the following to disable restart on a SINGLE container.
docker update --restart=no the-container-you-want-to-disable-restart
Rationale:
Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker restarts. This is often very useful when Docker is running a key service.
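For reference, the same policies can also be set when starting a container; a quick sketch (the image name myimage is just a placeholder):
docker run -d --restart=no myimage             # default: never restart automatically
docker run -d --restart=on-failure:5 myimage   # restart on non-zero exit, at most 5 times
docker run -d --restart=always myimage         # always restart, including after a daemon restart
docker run -d --restart=unless-stopped myimage # like always, unless the container was manually stopped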
Notes
If you are using docker-compose this might be useful to know.
restart: no is the default restart policy, and it does not restart a container under any circumstance. When always is specified, the container always restarts. The on-failure policy restarts a container if the exit code indicates an on-failure error.
restart: "no"
restart: always
restart: on-failure
restart: unless-stopped
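In a compose file, the policy sits under the service definition; a minimal sketch (the service name is arbitrary, the image follows the question's example):
version: "3"
services:
  web:
    image: myuser/myproj
    ports:
      - "80:80"
    restart: unless-stopped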

You can start your container with --restart=unless-stopped.
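For example, reusing the run command from the question (a sketch; adjust the image and port mapping to your setup):
docker run -d --restart=unless-stopped -p 80:80 myuser/myproj
With unless-stopped the daemon restarts the container on failures and daemon restarts, but not once you have stopped it manually with docker stop.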

If you have a swarm restarting the containers, the swarm will restart any containers you stop or rm, irrespective of the restart option. That's a feature, not a bug.
Make sure you are not running a service you forgot about:
docker service ls
Then, you can stop the service
docker service rm <service id discovered with previous command>

Not a response to this question but to How to prevent docker from starting a container automatically on system startup?, which has been marked as a duplicate of this question.
If your container is started with restart=on-failure and has a faulty command that exits with a non-zero exit code when you stop the container with docker stop, it shows some odd behaviour: after docker stop the container is stopped, but after restarting the Docker daemon (or the system) it is started again automatically. To fix this, either fix the container's command or use no or unless-stopped as the restart policy.
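A minimal way to reproduce the behaviour described above and then fix it (a sketch; the container name flaky and the busybox image are arbitrary choices):
docker run -d --name flaky --restart=on-failure busybox sleep 1000
docker stop flaky      # the process gets killed and records a non-zero exit code (e.g. 137)
# after a restart of the Docker daemon or the host, flaky may come back up on its own
docker update --restart=unless-stopped flaky   # or --restart=no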

docker update --restart=no <container-name or container-id>
(The valid restart policies are no, on-failure, always and unless-stopped.)

To change the restart policy of all docker containers...
Identify the docker containers that will start on boot
This shell script will identify any docker containers that have a restart policy other than "no".
As the root user
CONTAINERS=$(for f in /var/lib/docker/containers/*/hostconfig.json ;
do
container=`echo $f | rev | cut -d '/' -f 2| rev`
jq \
--arg container "$container" \
--arg file "$f" '{"RestartPolicy":.RestartPolicy.Name, "file":$file, "container":$container} | select(.RestartPolicy != "no")' "$f" | \
jq .container -r | tr '\n' ' '
done)
or as NON-root...
CONTAINERS=$(for f in $(sudo sh -c "ls /var/lib/docker/containers/*/hostconfig.json");
do
container=`echo $f | rev | cut -d '/' -f 2| rev`
sudo jq \
--arg container "$container" \
--arg file "$f" '{"RestartPolicy":.RestartPolicy.Name, "file":$file, "container":$container} | select(.RestartPolicy != "no")' "$f" | \
jq .container -r | tr '\n' ' '
done)
(optionally) View the list of selected containers
echo $CONTAINERS
Set all of the containers to "no" at once.
docker update --restart=no $CONTAINERS
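To verify the result, you can print every container's name together with its current restart policy:
docker inspect -f '{{ .Name }} -> {{ .HostConfig.RestartPolicy.Name }}' $(docker ps -a -q)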

Update only actively running containers
docker update --restart=no $(docker ps -q)

Related

Checking external Processes from inside a docker container [duplicate]

how to identify if any applications inside a docker container are running as root

We use a lot of 3rd-party images [e.g. gitlab, jenkins, centos7, ...] which we run inside our docker containers. I would like to know how to check if any of the applications running in a container is running as the root user. Is it the same as checking on a normal server, ps -elf | grep root, but inside the container?
Running Containers
To get all processes and their UIDs inside your running containers on a host, you can do the following:
for c in $(docker ps -q); do docker inspect $c -f "{{ .Name }}:"; docker top $c | awk '{print $1, $2, $8}'; echo "--------------"; done
This will print something like
/webserver-dockerized_nginx_1:
UID PID CMD
root 13437 nginx:
systemd+ 13522 nginx:
systemd+ 13526 nginx:
systemd+ 13527 nginx:
systemd+ 13528 nginx:
--------------
for all containers you have running.
Images
To get the configured users for all images on a host you can do
docker image inspect $(docker image ls -q) -f "{{ .RepoTags }}: {{ .ContainerConfig.User }} {{ .Config.User }}"
This will output something like
[nginx:mainline-alpine]:
[memcached:alpine]: memcache memcache
[redis:5-alpine]:
As Marvin mentioned: If there is no user in the output, no USER was defined in the Dockerfile, thus the container will run as root (Reference: Docker Documentation)
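If one of your own images shows up without a user, the usual fix is to add a USER instruction to its Dockerfile; a minimal sketch (base image, uid and user name are arbitrary):
FROM alpine:3
RUN adduser -D -u 10001 appuser   # create an unprivileged user (BusyBox adduser syntax)
USER appuser                      # the container's main process now runs as appuser instead of root
CMD ["id"]                        # prints something like uid=10001(appuser) when the container runs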
You can attach the terminal to your running container and once you're inside you can run the ps command:
Attaching to the container
$ docker exec -it <container_id> /bin/bash
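Once inside, the check itself could look like this (a sketch; ps options differ between images, and some minimal images do not ship ps at all):
ps -eo user,pid,cmd | grep '^root'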
You can read more about docker exec in the official docs site: https://docs.docker.com/engine/reference/commandline/exec/
Hope it helps!
You could use the docker top command in association with the process id.
Combining docker ps and docker top can do the job.
You could do something like this:
docker ps | perl -ne '@cols = split /\s{2,}/, $_; printf "%15s\n", $cols[0]' > tmp.txt && tail -n $(($(wc -l < tmp.txt)-1)) tmp.txt | xargs -L1 docker top | perl -ne '@cols = split /\s{2,}/, $_; printf "%15s %65s\n", $cols[0], $cols[7]' && rm tmp.txt
That's not a perfect answer (it could be prettified), and also note that it only works for running containers. It'd be safer to check this from an image point of view, before you run the container.
Then, every time you get an image, just check this way:
docker image inspect <image id> | grep -i user
I might be wrong, but I think no user means root. Otherwise, you will have to analyse the output there.
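If you prefer to avoid grepping, the same check can be done with a Go template; it prints an empty string when no USER is set, which means the image defaults to root:
docker image inspect --format '{{ .Config.User }}' <image id>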

How do I ensure only a single docker image is running?

I'm building an application that I run in docker. I only want a single version of my application running at a given time, so I'm trying to stop the old iterations of containers as I start the new one.
mvn package docker:build
docker ps -q --filter ancestor="paul/my-app" | xargs docker stop
cd ./target/docker
docker build .
docker run -d paul/my-app
This creates the image as I expect and runs it like I want. If I run my script twice, however, I sometimes get two images running at the same time. Trying to diagnose this weirdness I ran this:
docker ps -a | awk '{ print $1,$2 }'
Now I see something I don't understand. The output of docker ps -a is
CONTAINER ID
aeb4c0486ef5 paul/my-app
b32be5e53df0 6d965c3e84f1
which means that I can't reliably stop containers by image name.
Can someone explain to me why the ID is a hash instead of paul/my-app? How can I reliably ensure only one version of my image exists/is running at any given time?
Thanks to user larsks for the --name argument. I've gotten my application acting as a singleton as I develop it.
I've split this into two discrete scripts.
run-docker.sh
#!/usr/bin/env bash
set -e
mvn package docker:build
./stop-docker.sh
cd ./target/docker
docker build .
docker run -d --name paul-my-app --restart unless-stopped paul/my-app
docker logs --follow paul-my-app
And it's counterpart
stop-docker.sh
#!/usr/bin/env bash
set -e
docker stop paul-my-app || true
docker image prune -f
docker container prune -f
docker volume prune -f
docker network prune -f
docker system prune -f

Is it possible to start a stopped container from another container

There are two containers A and B. Once container A starts, one process will be executed, then the container will stop. Container B is just a web application (say expressjs). Is it possible to kickstart A from container B?
It is possible to grant a container access to docker so that it can spawn other containers on your host. You do this by exposing the docker socket inside the container, e.g:
docker run -v /var/run/docker.sock:/var/run/docker.sock --name containerB myimage ...
Now, if you have the docker client available inside the container, you will be able to control the docker daemon on your host and use that to spawn your "container A".
Before trying this approach, you should be aware of the security considerations: access to docker is the same as having root access on the host, which means if your web application has a remote compromise you have just handed the keys to your host to the attackers. This is described more fully in this article.
It is possible by mounting the docker socket.
Container A
It will print the time to the stdout (and its logs) and exit.
docker run --name contA ubuntu date
Container B
The trick is to mount the host's docker socket, then install the docker client in the container. It will then interact with the daemon just as if you were using docker from the host. Once docker is installed, it simply restarts container A every 5 seconds.
docker run --name contB -v /var/run/docker.sock:/var/run/docker.sock ubuntu bash -c "
apt-get update && apt-get install -y curl &&
curl -sSL https://get.docker.com/ | sh &&
watch --interval 5 docker restart contA"
You can see that contA is being called by looking at its logs
docker logs contA
That said, Docker is really meant for long running services. There's some talk over at the Docker github issues about specifying short lived "job" services for things like maintenance, cron jobs, etc, but nothing has been decided, much less coded. So it's best to build your system so that containers are up and stay up.
docker-compose.yml (credits to larsks)
# ...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# ...
Dockerfile (credits to Aaron V)
# ...
ENV DOCKERVERSION=19.03.12
RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
&& tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 -C /usr/local/bin docker/docker \
&& rm docker-${DOCKERVERSION}.tgz
# ...
Node.js index.js (credits to Arpan Abhishek, Maulik Parmar and anishsane)
// ...
const { exec } = require("child_process");
// ...
exec('docker container ls -a --format "table {{.ID}}\t{{.Names}}" | grep <PART_OF_YOUR_CONTAINER_NAME> | cut -d" " -f1 | cut -f1 | xargs -I{} docker container restart -t 0 {}', (error, stdout, stderr) => {
if (error) {
console.log(`error: ${error.message}`);
return;
}
if (stderr) {
console.log(`stderr: ${stderr}`);
return;
}
console.log(`stdout: ${stdout}`);
});
// ...
Please make sure that your application is at least behind password protection. Exposing docker.sock in any way is a security risk.
Here you can find other Docker client versions: https://download.docker.com/linux/static/stable/x86_64/
Please replace <PART_OF_YOUR_CONTAINER_NAME> with a part of your container name.

execute a command within docker swarm service

Initialize swarm mode:
root@ip-172-31-44-207:/home/ubuntu# docker swarm init --advertise-addr 172.31.44.207
Swarm initialized: current node (4mj61oxcc8ulbwd7zedxnz6ce) is now a manager.
To add a worker to this swarm, run the following command:
Join the second node:
docker swarm join \
--token SWMTKN-1-4xvddif3wf8tpzcg23tem3zlncth8460srbm7qtyx5qk3ton55-6g05kuek1jhs170d8fub83vs5 \
172.31.44.207:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
# start 2 services
docker service create continuumio/miniconda3
docker service create --name redis redis:3.0.6
root@ip-172-31-44-207:/home/ubuntu# docker service ls
ID NAME REPLICAS IMAGE COMMAND
2yc1xjmita67 miniconda3 0/1 continuumio/miniconda3
c3ptcf2q9zv2 redis 1/1 redis:3.0.6
As shown above, redis has its replica while miniconda does not seem to be replicated.
I do usually log-in to miniconda container to type these commands:
/opt/conda/bin/conda install jupyter -y --quiet && mkdir /opt/notebooks && /opt/conda/bin/jupyter notebook --notebook-dir=/opt/notebooks --ip='*' --port=8888 --no-browser
The problem is that docker exec -it XXX bash command does not work with swarm mode.
You can execute commands by filtering container name without needing to pass the entire swarm container hash, just by the service name. Like this:
docker exec $(docker ps -q -f name=servicename) ls
There is a one-liner for accessing the corresponding instance of the service on localhost:
docker exec -ti stack_myservice.1.$(docker service ps -f 'name=stack_myservice.1' stack_myservice -q --no-trunc | head -n1) /bin/bash
It is tested on PowerShell, but bash should be the same. The one-liner accesses the first instance; replace '1' (in both places) with the number of the instance you want to access to get another one.
A more complex example for the distributed case:
#! /bin/bash
set -e
exec_task=$1
exec_instance=$2
strindex() {
x="${1%%$2*}"
[[ "$x" = "$1" ]] && echo -1 || echo "${#x}"
}
parse_node() {
read title
id_start=0
name_start=`strindex "$title" NAME`
image_start=`strindex "$title" IMAGE`
node_start=`strindex "$title" NODE`
dstate_start=`strindex "$title" DESIRED`
id_length=name_start
name_length=`expr $image_start - $name_start`
node_length=`expr $dstate_start - $node_start`
read line
id=${line:$id_start:$id_length}
name=${line:$name_start:$name_length}
name=$(echo $name)
node=${line:$node_start:$node_length}
echo $name.$id
echo $node
}
if true; then
read fn
docker_fullname=$fn
read nn
docker_node=$nn
fi < <( docker service ps -f name=$exec_task.$exec_instance --no-trunc -f desired-state=running $exec_task | parse_node )
echo "Executing in $docker_node $docker_fullname"
eval `docker-machine env $docker_node`
docker exec -ti $docker_fullname /bin/bash
This script could be used later as:
swarm_bash stack_task 1
It just execute bash on required node.
EDIT 2017-10-06:
Nowadays you can create the overlay network with the --attachable flag to enable any container to join the network. This is a great feature as it allows a lot of flexibility.
E.g.
$ docker network create --attachable --driver overlay my-network
$ docker service create --network my-network --name web --publish 80:80 nginx
$ docker run --network=my-network -ti alpine sh
(in alpine container) $ wget -qO- web
<!DOCTYPE html>
<html>
<head>
....
You are right, you cannot run docker exec on docker swarm mode service. But you can still find out, which node is running the container and then run exec directly on the container. E.g.
docker service ps miniconda3 # find out, which node is running the container
eval `docker-machine env <node name here>`
docker ps # find out the container id of miniconda
docker exec -it <container id here> sh
In your case you first have to find out why the service cannot get the miniconda container up. Maybe running docker service ps miniconda3 shows some helpful error messages?
Using the Docker API
Right now, Docker does not provide an API like docker service exec or docker stack exec for this. But there already exist two issues dealing with this functionality:
github.com - moby/moby - Docker service exec
github.com - docker/swarmkit - Support for executing into a task
(Regarding the first issue, for me, it was not directly clear that this issue deals with exactly this kind of functionality. But Exec for Swarm was closed and marked as duplicate of the Docker service exec issue.)
Using Docker daemon over HTTP
As mentioned by BMitch on run docker exec from swarm manager, you could also configure the Docker daemon to use HTTP and then connect to every node without the need for ssh. But you should protect this using TLS authentication, which is already integrated into Docker. Afterwards you would be able to execute docker exec like this:
docker --tlsverify --tlscacert=ca.pem --tlscert=cert.pem --tlskey=key.pem \
-H=$HOST:2376 exec $containerId $cmd
Using skopos-plugin-swarm-exec
There exists a github project which claims to solve the problem and provide the desired functionality binding the docker daemon:
docker run -v /var/run/docker.sock:/var/run/docker.sock \
datagridsys/skopos-plugin-swarm-exec \
task-exec <taskID> <command> [<arguments>...]
As far as I can see, this works by creating another container on the same node as the container on which docker exec should be executed. On that node, this container mounts the docker daemon socket to be able to execute docker exec there locally.
For more information have a look at: skopos-plugin-swarm-exec
Using docker swarm helpers
There is also another project called docker swarm helpers which seems to be more or less a wrapper around ssh and docker exec.
Reference:
https://github.com/docker/swarmkit/issues/1895#issuecomment-302147604
https://github.com/docker/swarmkit/issues/1895#issuecomment-358925313
You can jump in a Swarm node and list the docker containers running using:
docker container ls
That will give you the container name in a format similar to: containername.1.q5k89uctyx27zmntkcfooh68f
You can then use the regular exec option to run commands on it:
docker container exec -it containername.1.q5k89uctyx27zmntkcfooh68f bash
I created a small script for our docker swarm cluster.
This script takes 3 params: the first is the task you want to run (a command string), the second is the service you want to connect to, and the third is optional and names the shell (bash or sh) that receives the task via its -c option.
-n is optional to force it to connect to a node.
It retrieves the node that runs the service and runs the command.
#! /bin/bash
set -e
task=${1}
service=$2
bash=$3
serviceID=$(sudo docker service ps -f name=$service -f desired-state=running $service -q --no-trunc |head -n1)
node=$(sudo docker service ps -f name=$service -f desired-state=running $service --format="{{.Node}}"| head -n1 )
sudo docker -H $node exec -it $service".1."$serviceID $bash -c "$task"
note: this requires the docker nodes to accept tcp connections by exposing docker on port 2375 on the worker nodes
For those who have multiple replicas and just want to run a command within any of them, here is another shortcut:
docker exec -it $(docker ps -q -f name=SERVICE_NAME | head -1) bash
I wrote a script to exec a command in docker swarm by service name. For example, it can be used in cron. You can also use bash pipelines; it passes all params to the docker exec command. But it works only on the same node where the service is started. I hope it helps someone.
#!/bin/bash
# swarm-exec.sh
set -e
for ((i=1;i<=$#;i++)); do
val=${!i}
if [ ${val:0:1} != "-" ]; then
service_id=$(docker ps -q -f "name=$val");
if [[ $service_id == "" ]]; then
echo "Container $val not found!";
exit 1;
fi
docker exec ${@:1:$i-1} $service_id ${@:$i+1:$#};
exit 0;
fi
done
echo "Usage: $0 [OPTIONS] SERVICE_NAME COMMAND [ARG...]";
exit 1;
Example of using:
./swarm-exec.sh app_postgres pg_dump -Z 9 -F p -U postgres app > /backups/app.sql.gz
echo ls | ./swarm-exec.sh -i app /bin/bash
./swarm-exec.sh -it some_app /bin/bash
The simplest command I found to docker exec into a swarm node (with a swarm manager at $SWARM_MANAGER_HOST) running the service $SERVICE_NAME (for example mystack_myservice) is the following:
SERVICE_JSON=$(ssh $SWARM_MANAGER_HOST "docker service ps $SERVICE_NAME --no-trunc --format '{{ json . }}' -f desired-state=running")
ssh -t $(echo $SERVICE_JSON | jq -r '.Node') "docker exec -it $(echo $SERVICE_JSON | jq -r '.Name').$(echo $SERVICE_JSON | jq -r '.ID') bash"
This assumes that you have ssh access to $SWARM_MANAGER_HOST as well as to the swarm node currently running the service task.
This also assumes that you have jq installed (apt install jq); if you can't or don't want to install it and you have python installed, you can create the following alias (based on this answer):
alias jq="python3 -c 'import sys, json; print(json.load(sys.stdin)[sys.argv[2].partition(\".\")[-1]])'"
See addendum 2...
Example of a oneliner for entering the database my_db on node master:
DB_NODE_ID=master && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
In case you want to configure, say max_connections:
DB_NODE_ID=master && $(docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
This approach allows you to enter all database nodes (e.g. slaves) just by setting the DB_NODE_ID variable accordingly.
Example for slave s2:
DB_NODE_ID=s2 && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
or
DB_NODE_ID=s2 && $(docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $(docker ps -q -f name=$DB_NODE_ID) mysql my_db
Put this into your KiTTY or PuTTY configuration for master / s2 under Data/Command and you are set.
As an addendum:
The old, non swarm mode version reads simply
docker exec -it master mysql my_db
resp.
DB_ID=master && $(docker exec -it $DB_ID mysql -e "SET GLOBAL max_connections = 1000") && docker exec -it $DB_ID mysql tmp
Addendum 2:
As it turned out by example, the expression docker ps -q -f name=$DB_NODE_ID may return wrong values under certain conditions.
The following approach works correctly:
docker ps -a | grep "_$DB_NODE_ID." | awk '{print $1}'
You may substitute the examples above accordingly.
Addendum 3:
Well, these expressions look awful and they certainly are painful to type, so you may want to ease your work. On Linux, everybody knows how to do this. On Windows, you may want to use AHK.
This is the AHK hotstring I use:
:*:ii::DB_NODE_ID=$(docker ps -a | grep "_." | awk '{{}print $1{}}') && docker exec -it $id ash{Left 49}
So when I type ii -- which is as simple as it can get -- I get the desired term with the cursor in place and just have to fill in the container name.
I edited the script Brian van Rooijen added above. Because my reputation is too low, I cannot add it there.
#! /bin/bash
set -e
service=${1}
shift
task="$*"
echo $task
serviceID=$(docker service ps -f name=$service -f desired-state=running $service -q --no-trunc |head -n1)
node=$(docker service ps -f name=$service -f desired-state=running $service --format="{{.Node}}"| head -n1 )
serviceName=$(docker service ps -f name=$service -f desired-state=running $service --format="{{.Name}}"| head -n1 )
docker -H $node exec -it $serviceName"."$serviceID $task
I had the issue that the container didn't exist with the hard-coded .1. in the execution.
Take a look at my solution: https://github.com/binbrayer/swarmServiceExec.
This approach is based on Docker Machines. I also created the prototype of the script to call containers asynchronously and as a result simultaneously.
