Are Docker containers destroyed when the machine is restarted?

I was having a network issue, so I restarted my local machine, which also aborted my Docker default VM. So I ran the command below to restart my VirtualBox instance:
docker-machine restart default
I previously had built containers on default, but I want to know: do I need to rebuild those same containers now that I restarted default, or can I just run docker-compose up?

They are not destroyed; they are only stopped. You can check with
docker ps -a
This will show all containers, stopped and running. To start a container:
docker start <container name or container id>
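For example, on a Docker Machine setup like the one in the question, the full sequence after a reboot might look like the sketch below (default is the VM name from the question; the last command is repeated per container):
docker-machine restart default        # bring the VM back up
eval "$(docker-machine env default)"  # point the docker CLI at that VM again
docker ps -a                          # list all containers, including stopped ones
docker start <container name or id>   # start a stopped container without rebuilding it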

Related

Docker Container restart after VM reboot

We have deployed a Docker container with the restart policy --restart unless-stopped, which should start the container after a VM reboot. The Docker service is also enabled to start after reboot.
The problem is that whenever a reboot happens, the lists of containers and images are all gone. The workaround is to restart the Docker service; after that the container comes up.
So the question is: why do we need to restart the Docker service even though it is enabled to start after reboot?
Any help is appreciated.
I don't know the true internals of Docker Engine, but here are my assumptions:
On a virtual machine (or native host) restart, the Docker engine is stopped, so the containers receive a "stop" signal too (in the background a systemctl stop docker is performed).
What you are doing with systemctl restart docker (or a similar command) is sending a "restart" signal to the Docker engine, not a "stop" one.
You should use --restart always if you want to be sure that the containers are restarted automatically when the VM is freshly started or restarted.
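As a rough sketch under those assumptions (the container name my-app and its image are placeholders, not from the question):
sudo systemctl enable docker                  # make sure the daemon itself starts on boot
docker run -d --restart always --name my-app my-app:latest
docker update --restart always my-app         # or change the policy on an existing container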

How to disown a docker container running inside SSH session

I have accessed a remote machine (call it RM) through SSH (from my host), and I am running a Docker image inside RM via my SSH session. Both are Ubuntu 16.04 based.
There are some processes running inside this Docker container, so I can't exit the container.
So, how do I detach this SSH session from my host so that those processes inside the Docker container keep running unaffected?
I am doing this because I have to restart my host machine for some purpose.
PS:
In this link, Correct way to detach from a container without stopping it, the Docker container is not run via an SSH session, so the two scenarios are different.
First, you have to start your Docker container in daemon (non-interactive) mode, using the -d argument and dropping -it. Don't forget to name your container for further use with the --name foo option.
After the container is started, you can control it using docker exec -it foo sh-or-whatever. If your SSH session terminates, the container will continue running; however, your docker exec session will be over.
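A minimal sketch of that workflow (the image my-image and the name foo are placeholders):
docker run -d --name foo my-image   # detached: no -it, so the container outlives the SSH session
docker exec -it foo sh              # attach a shell later, from this or any other session
exit                                # leaving the exec shell does not stop the container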

Rancher exited after host machine power off and reboot

Can someone explain to me why all containers are exited after a host machine reboot? How can I restart the containers, especially the Rancher containers, so that everything is as it was before?
You can use the Docker restart policy to control automatic container startup. Check Start containers automatically for more info.
As for the current containers that are stopped, you need to start them manually:
docker ps -a
docker start <container>
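If there are many stopped containers, a hedged shortcut (container names are whatever docker ps -a reports) is to start every exited container in one go and then opt each one into automatic startup:
docker start $(docker ps -aq --filter "status=exited")   # start all currently exited containers
docker update --restart unless-stopped <container>       # make a container survive the next reboot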

Containers disappeared from docker ps after restarting the system

I run Docker containers on a Linux system. With docker ps I can see all the processes.
After restarting the system and running docker ps, I can't see some containers, but with docker ps -a I can see them. Are those containers still running?
If you don't set the option --restart=always when you run the Docker container, these containers will not be started automatically after you restart the system.
Restart policies (--restart)
always - Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
Refer: docker run - Restart policies (--restart)
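For illustration (nginx and the name web are just examples, not from the question), the policy can be set when the container is created and verified afterwards:
docker run -d --restart=always --name web nginx              # container will also start on daemon startup
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}' web   # check which restart policy a container has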

Docker swarm-manager displays old container information

I am using docker-machine with Google Compute Engine (GCE) to run a Docker Swarm cluster. I created a swarm successfully with 2 nodes (swnd-01 & swnd-02) in the cluster. I created a daemon container like this in the swarm-manager environment:
docker run -d ubuntu /bin/bash
docker ps shows the container running on swnd-01. When I tried executing a command in the container using docker exec, I got an error that the container is not running, while docker ps showed otherwise. I SSH'ed into swnd-01 via docker-machine and found that the container had exited as soon as it was created. I tried the docker run command directly inside swnd-01, but the container still exits. I don't understand this behavior.
Any suggestions will be thankfully received.
The reason it exits is that the /bin/bash command completes immediately, and a Docker container only runs as long as its main process does (if you run such a container with the -it flags, the process will keep running while the terminal is attached).
As to why the swarm manager thought the container was still running, I'm not sure. I guess there is a short delay while Swarm updates the status of everything.
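A quick sketch of the difference (ubuntu is the image from the question; the long-running commands are just common ways to keep a container alive):
docker run -d ubuntu /bin/bash        # exits at once: bash has no TTY and nothing to run
docker run -dit ubuntu /bin/bash      # keeps running: -t allocates a TTY, so bash stays alive
docker run -d ubuntu sleep infinity   # keeps running: the main process never finishes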
