How to switch a running docker container to detached mode? - docker

If I have run a docker container / docker-compose service without the -d option, how do I change the running instance to detached mode? Is it possible?
# docker-compose up myservice &
[2] 12345

Docker supports a keyboard combination to gracefully detach from a container. Press Ctrl-P, followed by Ctrl-Q, to detach from your connection.
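Note that the detach keys only work when you are attached to a TTY, i.e. the container was started with -it. A quick illustration (the container name "web" is a placeholder):
docker run -it --name web ubuntu bash
# press Ctrl-P then Ctrl-Q here: you drop back to the host shell, "web" keeps running
docker attach web
# reattaches to the same session later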
To change the configuration (such as resource limits or the restart policy) of a previously created container, you can use docker update:
docker update [OPTIONS] CONTAINER [CONTAINER...]
Note:
The docker update and docker container update commands are not supported for Windows containers.
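For example (a hedged sketch; "web" is a hypothetical container name):
docker update --restart=unless-stopped web
docker update --memory 512m --memory-swap 1g web
Note that docker update changes resource limits and the restart policy; it cannot retroactively detach an attached container - use the Ctrl-P Ctrl-Q sequence above for that.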

Related

Restart a docker container from another running container

I am using docker-compose for deployment.
I want to restart my "centos-1" container from "centos-2" container. Both containers are running on the same host.
Please suggest how I could achieve this in the simplest and most automated way.
I followed How to run shell script on host from docker container? and tried to run a script on the host from the "centos-2" container, but the script executes inside the container, not on the host.
Script:
#!/bin/bash
sudo docker container restart centos-1
Error:
line 2: docker: command not found
(Docker isn't installed inside the centos-2 container.)
You need to:
Install the docker CLI (command-line interface) in the second container. Do not confuse this with a full-scale installation - you don't need the docker daemon, only the command-line tool (the docker executable).
Share your host's docker daemon (service) to make it accessible in the second container. That is achieved by simply sharing /var/run/docker.sock when launching the 2nd container, for example:
docker run ... -v "/var/run/docker.sock:/var/run/docker.sock" container2 ...
Now you can execute any docker command, like docker stop, from the second container, and these commands are happily passed to your main (and only) docker daemon.
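Putting both steps together, a minimal sketch (assumptions: the centos image ships curl and tar, and the pinned static CLI version below is arbitrary):
# launch centos-2 with the host's docker socket shared
docker run -d --name centos-2 -v /var/run/docker.sock:/var/run/docker.sock centos sleep infinity
# inside centos-2: install only the static docker CLI binary, then drive the host daemon
docker exec centos-2 sh -c 'curl -fsSL https://download.docker.com/linux/static/stable/x86_64/docker-20.10.9.tgz | tar xzf - --strip-components=1 -C /usr/local/bin docker/docker && docker restart centos-1'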
There is an approach from the CI context to control the Docker daemon on the system from a running container, called Docker-out-of-Docker (DooD):
You have to install docker inside your container
Map the docker daemon from your system inside your container using volumes:
-v /var/run/docker.sock:/var/run/docker.sock
Now each docker command inside your container is executed against the system docker installation. E.g. if you type docker image list inside your container, you should see the same list as if you typed the command on your system.
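A quick way to verify the mapping (the container name "mycontainer" is a placeholder):
docker exec mycontainer docker image list
# same output as running docker image list directly on the host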

Create docker container from within a container

I have docker on my host machine with a container running. I was wondering if it's possible, and what the best approach would be, to "trigger" a container creation from the running container.
Let's say my machine is host and I have a container called app (with id 123456789) running on host.
root#host $ docker container ls
123456789 app_mage .... app
I would like to create a container on host from within app
root#123456789 $ docker run --name app2 ...
root#host $ docker container ls
123456789 app_mage .... app
12345678A app_mage .... app2
What I need is for my app to be running on docker and to run arbitrary applications in an isolated environment (but I'd rather avoid docker-in-docker)
A majority of the Docker community will veer away from these types of designs; however, it is very doable.
Similar to Starting and stopping docker container from other container you can simply mount the docker.sock file from the host machine into the container, giving it privilege to access the docker daemon.
To make things more automated, you could use the docker-py SDK to start containers from inside a container, which would in turn access the Docker daemon on the host machine hosting the container that you are spawning more containers from.
For example:
docker run --name test1 -v /var/run/docker.sock:/var/run/docker.sock image1
----
import docker

def create_container():
    # from_env() talks to the daemon behind /var/run/docker.sock - i.e. the host daemon
    docker.from_env().containers.run("image2", name="test2")
This example starts container test1, and runs that method inside the newly created container, which in turn creates a new container test2 running on the same host as test1.

How to disown a docker container running inside an SSH session

I have accessed a remote machine (call it RM) through SSH (from my host), and I am running a docker image inside RM via my SSH session. Both are Ubuntu 16.04 based.
There are some processes running inside this docker container, so I can't exit the container.
So, how do I detach this SSH session from my host, so that those processes inside the docker container still run unaffected?
I am doing this because I have to restart my host machine for some purpose.
PS:
In this link, Correct way to detach from a container without stopping it, the docker container is not being run via an SSH session, so the two scenarios are different.
First, you have to start your Docker container in daemon (non-interactive) mode, using the -d argument and dropping -it. Don't forget to name your container for further use with the --name foo option.
After the container is started, you can control it using docker exec -it foo sh-or-whatever. If your SSH session terminates, the container will continue running; however, your docker exec session will be over.
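A minimal sketch of that flow ("myimage" is a placeholder):
docker run -d --name foo myimage
# the container now survives the SSH session ending
docker exec -it foo sh
# interactive shell on demand; exiting it leaves foo running
docker logs -f foo
# after reconnecting over SSH, confirm the container ran unaffected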

Containers disappeared from docker ps after system restart

I ran docker containers on a Linux system; docker ps showed all of them.
After restarting the system, docker ps no longer shows some containers, but docker ps -a does. Are those containers still running?
If you don't set the --restart=always option when running a docker container, the container will not be started automatically after you restart the system.
Restart policies (--restart)
always - Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
Refer: docker run - Restart policies (--restart)
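For example ("web" and the image are placeholders):
docker run -d --restart=always --name web nginx
# or add the policy to an already-created container:
docker update --restart=always web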

Docker swarm-manager displays old container information

I am using docker-machine with Google Compute Engine (GCE) to run a docker swarm cluster. I created a swarm successfully with 2 nodes (swnd-01 & swnd-02) in the cluster. I created a daemon container like this in the swarm-manager environment:
docker run -d ubuntu /bin/bash
docker ps shows the container running on swnd-01. When I tried executing a command on the container using docker exec, I got the error that the container is not running, while docker ps showed otherwise. I ssh'ed into swnd-01 via docker-machine, only to find that the container had exited as soon as it was created. I tried the docker run command inside swnd-01 but it still exits. I don't understand this behavior.
Any suggestions will be thankfully received.
The reason it exits is that the /bin/bash command completes, and a Docker container only runs as long as its main process (if you run such a container with the -it flags, the process will keep running while the terminal is attached).
As to why the swarm manager thought the container was still running, I'm not sure. I guess there is a short delay while Swarm updates the status of everything.
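A quick illustration of the difference:
docker run -d ubuntu /bin/bash
# bash gets no TTY and no commands to run: the container exits immediately
docker run -dit ubuntu /bin/bash
# a TTY is allocated, so bash stays alive and the container keeps running
docker run -d ubuntu sleep infinity
# a long-lived main process also keeps the container running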
