There are two containers, A and B. Once container A starts, one process will be executed, then the container will stop. Container B is just a web application (say expressjs). Is it possible to kickstart A from container B?
It is possible to grant a container access to docker so that it can spawn other containers on your host. You do this by exposing the docker socket inside the container, e.g.:
docker run -v /var/run/docker.sock:/var/run/docker.sock --name containerB myimage ...
Now, if you have the docker client available inside the container, you will be able to control the docker daemon on your host and use that to spawn your "container A".
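For example, once the docker CLI is available inside container B, starting container A is just an ordinary docker command run against the mounted socket (the image name job-image below is a placeholder for whatever container A is built from):

# run from inside containerB; this talks to the host's daemon through the mounted socket,
# runs the one-shot process, and removes the container when it exits
docker run --rm --name containerA job-image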
Before trying this approach, you should be aware of the security considerations: access to docker is the same as having root access on the host, which means that if your web application is remotely compromised, you have handed the attackers the keys to your host. This is described more fully in this article.
It is possible by mounting the docker socket.
Container A
It will print the time to stdout (and hence to its logs) and exit.
docker run --name contA ubuntu date
Container B
The trick is to mount the host's docker socket and then install the docker client in the container. The container will then interact with the daemon just as if you were using docker from the host. Once docker is installed, it simply restarts container A every 5 seconds.
docker run --name contB -v /var/run/docker.sock:/var/run/docker.sock ubuntu bash -c "
apt-get update && apt-get install -y curl &&
curl -sSL https://get.docker.com/ | sh &&
watch --interval 5 docker restart contA"
You can see that contA is being run by looking at its logs:
docker logs contA
That said, Docker is really meant for long-running services. There's some talk in the Docker GitHub issues about specifying short-lived "job" services for things like maintenance, cron jobs, etc., but nothing has been decided, much less coded. So it's best to build your system so that containers come up and stay up.
docker-compose.yml (credits to larsks)
# ...
volumes:
- /var/run/docker.sock:/var/run/docker.sock
# ...
Dockerfile (credits to Aaron V)
# ...
ENV DOCKERVERSION=19.03.12
RUN curl -fsSLO https://download.docker.com/linux/static/stable/x86_64/docker-${DOCKERVERSION}.tgz \
&& tar xzvf docker-${DOCKERVERSION}.tgz --strip 1 -C /usr/local/bin docker/docker \
&& rm docker-${DOCKERVERSION}.tgz
# ...
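With the socket mounted (as in the compose snippet above or with -v on the command line) and the client baked into the image, container B can be built and started along these lines (the image name and port are placeholders):

docker build -t container-b .
docker run -d --name container-b \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -p 3000:3000 \
  container-b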
Node.js index.js (credits to Arpan Abhishek, Maulik Parmar and anishsane)
// ...
const { exec } = require("child_process");
// ...
exec('docker container ls -a --format "table {{.ID}}\t{{.Names}}" | grep <PART_OF_YOUR_CONTAINER_NAME> | cut -d" " -f1 | cut -f1 | xargs -I{} docker container restart -t 0 {}', (error, stdout, stderr) => {
  if (error) {
    console.log(`error: ${error.message}`);
    return;
  }
  if (stderr) {
    console.log(`stderr: ${stderr}`);
    return;
  }
  console.log(`stdout: ${stdout}`);
});
// ...
Please make sure that your application is at least behind password protection. Exposing docker.sock in any way is a security risk.
Here you can find other Docker client versions: https://download.docker.com/linux/static/stable/x86_64/
Please replace <PART_OF_YOUR_CONTAINER_NAME> with a part of your container name.
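If you want to sanity-check the filter before wiring it into the Node.js app, you can run the same pipeline directly in a shell (here assuming the container name contains contA):

# should print the ID(s) of the matching container(s)
docker container ls -a --format "table {{.ID}}\t{{.Names}}" | grep contA | cut -d" " -f1 | cut -f1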
I have a Docker-in-Docker setup and am trying to mount folders.
Let's say I have folders that I wish to share with their parent. On the host, I created a file called foo in /tmp/dind. The host starts container 1, which starts container 2. This is the result I want to have:
Host        | Container 1 | Container 2
/tmp/dind   | /tmp/dind2  | /tmp/dind3
     <---------->    <---------->
Instead, I get
Host        | Container 1 | Container 2
/tmp/dind   | /tmp/dind2  | /tmp/dind3
     <---------->
     <-------------------------->
Code here:
docker run --rm -it \
-v /tmp/dind:/tmp/dind2 \
-v /var/run/docker.sock:/var/run/docker.sock docker sh -c \
"docker run --rm -it \
-v /tmp/dind2:/tmp/dind3 \
-v /var/run/docker.sock:/var/run/docker.sock \
docker ls /tmp/dind3"
This outputs nothing, while the next command, where I changed the mounted volume, gives foo as the result:
docker run --rm -it \
-v /tmp/dind:/tmp/dind2 \
-v /var/run/docker.sock:/var/run/docker.sock docker sh -c \
"docker run --rm -it \
-v /tmp/dind:/tmp/dind3 \
-v /var/run/docker.sock:/var/run/docker.sock \
docker ls /tmp/dind3"
The question is: what do I need to do in order to use Container 1's path and not the host's? Or do I misunderstand something about docker here?
For all that you say “Docker-in-Docker” and “dind”, this setup isn’t actually Docker-in-Docker: your container1 is giving instructions to the host’s Docker daemon that affect container2.
Host               Container1
           /----->
 (Docker)  |
           \----->  Container2
(NB: this is generally the recommended path for CI-type setups. “Docker-in-Docker” generally means container1 is running its own, separate, Docker daemon, which tends to not be recommended.)
Since container1 is giving instructions to the host’s Docker, and the host’s Docker is launching container2, any docker run -v paths are always the host’s paths. Unless you know that some specific directory has already been mounted into your container, it’s hard to share files with “sub-containers”.
One way to get around this is to assert that there is a shared path of some sort:
docker run \
  -v $PWD/exchange:/exchange \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -e EXCHANGE_PATH=$PWD/exchange \
  --name container1 \
  ...

# from within container1: the shared directory is visible at /exchange,
# while $EXCHANGE_PATH holds the corresponding host path for the inner -v
mkdir /exchange/container2
echo hello world > /exchange/container2/file.txt
docker run \
  -v $EXCHANGE_PATH/container2:/data \
  --name container2 \
  ...
When I’ve done this in the past (for a test setup that wanted to launch helper containers) I’ve used a painstaking docker create, docker cp, docker start, docker cp, docker rm sequence. That’s extremely manual, but it has the advantage that the “local” side of a docker cp is always the current filesystem context, even if you’re talking to the host’s Docker daemon from within a container.
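A minimal sketch of that sequence, assuming a hypothetical one-shot image called helper-image that reads /in/input.txt and writes /out/result.txt:

# create the container without starting it
cid=$(docker create helper-image)
# the "local" side of docker cp is this container's filesystem, even though
# the daemon we are talking to is the host's
docker cp ./input.txt "$cid":/in/input.txt
docker start "$cid"
docker wait "$cid"          # block until the job finishes
docker cp "$cid":/out/result.txt ./result.txt
docker rm "$cid"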
It does not matter that container 2 binds the host path, because the changes to files in container 1 directly affect the host path anyway. So they all work on the same files.
So your setup is correct and will behave the same as if the mounts were chained in the way you described.
Update
If you want to make sure that the process does not modify the host files, you could do the following:
Build a custom docker image that copies all data from folder a to folder b, and execute the script on folder b. Then mount the files with ./:/a. This way you keep the flexibility of choosing which files you bind into the container, without letting the container modify the host files.
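A rough sketch of that idea, using a plain docker run instead of a custom image (the image name and script path are made up):

# mount the host files read-only at /a, copy them to /b inside the container,
# and run the script against the copy so the host files are never modified
docker run --rm \
  -v "$PWD":/a:ro \
  my-script-image \
  sh -c "cp -r /a /b && /usr/local/bin/run-script.sh /b"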
I hope this answers your question :)
I can enable auto-restart with --restart=always, but after I stop the container, how do I turn off that attribute?
I normally run a webserver and typically map port 80:
docker run -d --restart=always -p 80:80 -i -t myuser/myproj /bin/bash
But there are times when I want to run a newer version of my image, but I want to keep the old container around. The problem is that if there are multiple containers with --restart=always, only one of them (random?) starts because they're all contending for port 80 on the host.
You can use the --restart=unless-stopped option, as @Shibashis mentioned, or update the restart policy (this requires docker 1.11 or newer).
See the documentation for docker update and Docker restart policies.
docker update --restart=no my-container
That updates the restart policy for an existing container (my-container).
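To confirm the change, you can read the policy back with docker inspect:

# prints the container's current restart policy, e.g. "no" or "always"
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' my-container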
Use the command below to disable auto-restart on ALL (daemon) containers.
docker update --restart=no $(docker ps -a -q)
Use the following to disable auto-restart on a SINGLE container.
docker update --restart=no the-container-you-want-to-disable-restart
Rationale:
Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker restarts. This is often very useful when Docker is running a key service.
Notes
If you are using docker-compose this might be useful to know.
no is the default restart policy, and it does not restart a container under any circumstance. When always is specified, the container always restarts. The on-failure policy restarts a container if the exit code indicates an on-failure error.
restart: "no"
restart: always
restart: on-failure
restart: unless-stopped
You can start your container with --restart=unless-stopped.
If you have a swarm restarting the containers, the swarm will restart any containers you stop or rm, irrespective of the restart option. That's a feature, not a bug.
Make sure you are not running a service you forgot about:
docker service ls
Then you can remove the service:
docker service rm <service id discovered with previous command>
Not a response to this question, but to "How to prevent docker from starting a container automatically on system startup?", which has been marked as a duplicate of this question.
If your container is started with restart=on-failure and has a faulty command that exits with a non-zero exit code when you stop the container with docker stop, it shows some weird behaviour: After stopping the container with docker stop, the container is stopped, but after restarting the docker daemon (or the system), it is started automatically again. To fix this, either fix the command of the container or use no or unless-stopped as the restart policy.
docker update --restart=<no|always|on-failure|unless-stopped> <container-name/containerId>
To change the restart policy of all docker containers...
Identify the docker containers that will start on boot
This shell script will identify any docker containers that have a restart policy other than "no".
As the root user
CONTAINERS=$(for f in /var/lib/docker/containers/*/hostconfig.json; do
  container=$(echo "$f" | rev | cut -d '/' -f 2 | rev)
  jq \
    --arg container "$container" \
    --arg file "$f" \
    '{RestartPolicy: .RestartPolicy.Name, file: $file, container: $container} | select(.RestartPolicy != "no")' "$f" | \
    jq -r .container | tr '\n' ' '
done)
or as NON-root...
CONTAINERS=$(for f in $(sudo sh -c "ls /var/lib/docker/containers/*/hostconfig.json"); do
  container=$(echo "$f" | rev | cut -d '/' -f 2 | rev)
  sudo jq \
    --arg container "$container" \
    --arg file "$f" \
    '{RestartPolicy: .RestartPolicy.Name, file: $file, container: $container} | select(.RestartPolicy != "no")' "$f" | \
    jq -r .container | tr '\n' ' '
done)
(optionally) View the list of selected containers
echo $CONTAINERS
Set all of the containers to "no" at once.
docker update --restart=no $CONTAINERS
Update only actively running containers
docker update --restart=no $(docker ps -q)
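As a quick cross-check (not part of the script above), you can also read every container's restart policy straight from the Docker CLI:

# print each container's name together with its restart policy
docker ps -aq | xargs docker inspect -f '{{.Name}} {{.HostConfig.RestartPolicy.Name}}'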