Docker swarm-manager displays old container information

I am using docker-machine with Google Compute Engine (GCE) to run a docker swarm cluster. I created a swarm successfully with two nodes (swnd-01 & swnd-02) in the cluster. I created a daemon container like this in the swarm-manager environment:
docker run -d ubuntu /bin/bash
docker ps shows the container running on swnd-01. When I tried executing a command on the container using docker exec, I got an error saying the container is not running, while docker ps showed otherwise. I ssh'ed into swnd-01 via docker-machine and found that the container had exited as soon as it was created. I tried the docker run command inside swnd-01, but the container still exits. I don't understand this behavior.
Any suggestions will be thankfully received.

The reason it exits is that the /bin/bash command completes, and a Docker container only runs as long as its main process. (If you run such a container with the -it flags, the process will keep running while the terminal is attached.)
As to why the swarm manager thought the container was still running, I'm not sure; I suspect there is a short delay while Swarm updates the status of everything.
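For what it's worth, a minimal way to keep such a container alive (a sketch; the name test-bash is illustrative) is to allocate a pseudo-TTY and keep stdin open so bash does not exit:

docker run -dit --name test-bash ubuntu /bin/bash
docker exec -it test-bash ls /

With -d alone and no attached terminal, bash has nothing to read and exits immediately, which is exactly the behavior described above.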

Related

Containers not running in detached mode

Today I tried to run my containers in detached mode and I ran into an issue.
When I ran the command docker container run -d nginx, the nginx image was pulled, and the container's output was not shown, since it was in detached mode.
Then I ran the command docker container ls, which we all know shows only running containers, and it showed my nginx container running.
Then I tried the same thing with the ubuntu image, i.e. docker container run -d ubuntu, but when I ran the docker container ls command my ubuntu container was not running; only the nginx container was.
Why is it so?
You don't see a running container with the ubuntu image because the container stops immediately after being started. While the nginx image starts an nginx server that keeps the container running, the ubuntu image just runs bash by default, and bash is not a process that keeps running when no terminal is attached. You will be able to see your stopped ubuntu container with docker ps -a.
If you want to keep the ubuntu container running, you need to pass it a command that starts a long-running process, e.g. docker run -d ubuntu tail -f /dev/null
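You can verify the difference yourself (a sketch; the names u1 and u2 are illustrative):

docker run -d --name u1 ubuntu
docker ps -a --filter name=u1    # STATUS column shows Exited (0)
docker run -d --name u2 ubuntu tail -f /dev/null
docker ps --filter name=u2       # STATUS column shows Up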

How to disown a docker container running inside an SSH session

I have accessed a remote machine (call it RM) through SSH (from my host), and I am running a docker image inside RM via my SSH session. Both are Ubuntu 16.04 based.
There are some processes running inside this docker container, so I can't exit the container.
So, how do I detach this SSH session from my host, so that those processes inside the docker container keep running unaffected?
I am doing this because I have to restart my host machine for some purpose.
PS:
In the link Correct way to detach from a container without stopping it, the docker container is not run via an SSH session, so the two scenarios are different.
First, you have to start your Docker container in daemon (non-interactive) mode, using the -d argument and dropping -it. Don't forget to name your container for further use with the --name foo option.
After the container is started, you can control it using docker exec -it foo sh (or whatever command you need). If your SSH session terminates, the container will continue running; however, your docker exec session will be over.
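A minimal sketch of that workflow (the name worker and the tail command are placeholders, not from the question):

docker run -d -t --name worker ubuntu tail -f /dev/null   # owned by the Docker daemon, not your SSH session
docker exec -it worker bash                               # interactive shell; safe to disconnect from
docker logs -f worker                                     # reattach to the output later

Dropping the SSH connection only ends the exec and logs sessions; the container itself keeps running under the daemon.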

How can I know the docker launcher command from an exited container in CoreOS?

I am using CoreOS as the host system and there are many docker containers running on it. How can I get the docker launch command for these containers? I found some ways that involve installing a third-party library to do the reverse-engineering work, but that doesn't work on CoreOS since I can only install docker containers there.
The reason I want to know the launch command is that I have a running container (it was launched by some other scripts). I attach to this container and fork a process. The container exits with code 137 if I kill that process. It works fine if I launch the container with this command: docker run -it -d $NAME bash. I am not sure why this happens; there must be something different about the launch command.
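One option that needs nothing beyond the stock docker CLI (a sketch; mycontainer is a placeholder name) is to reconstruct the launch options from docker inspect:

docker inspect --format '{{.Config.Entrypoint}} {{.Config.Cmd}}' mycontainer   # entrypoint and command
docker inspect --format '{{json .HostConfig}}' mycontainer                     # restart policy, mounts, port bindings

Comparing the Config section (which also records Tty and OpenStdin, i.e. whether -t and -i were used) between the working and failing containers may show what the launch scripts did differently.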

Docker ps output disappeared after restarting the system

I run docker containers on a Linux system. With docker ps I can see all of them.
After restarting the system, docker ps no longer shows some of the containers, but docker ps -a does show them. Are those containers still running?
If you don't set the option --restart=always when you run a docker container, the container will not be started automatically after you restart the system.
Restart policies (--restart)
always - Always restart the container regardless of the exit status. When you specify always, the Docker daemon will try to restart the container indefinitely. The container will also always start on daemon startup, regardless of the current state of the container.
Refer: docker run - Restart policies (--restart)
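For example (a sketch; the name web is illustrative):

docker run -d --restart=always --name web nginx   # comes back automatically after a reboot
docker update --restart=always existing_name      # add the policy to an already-created container

docker update also accepts stopped containers, so you can usually fix the policy without recreating anything.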

How to make docker stop a container when the host process attached to its tty terminates

I'm running docker containers interactively:
sudo docker run --rm -t -i CONTAINER_NAME bash
I need container instances to be purged after usage; a container also makes no sense once its tty is lost. When the session is closed from the container side (exit in bash) everything works fine, but if my SSH session to the host disconnects, the container stays running (it still shows in docker ps). This can also be reproduced by opening the container in a tmux window and then killing the window.
Is there a way to make docker stop a container if the host process (ssh session or tmux) attached to its tty terminates?
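One workaround (a sketch, not from the thread): start the container detached, then attach from a wrapper shell that traps its own exit and hangup, so losing the SSH or tmux session removes the container:

cid=$(sudo docker run -d -t CONTAINER_NAME bash)   # detached, with a tty
trap 'sudo docker rm -f "$cid"' EXIT HUP           # clean up when this shell exits or is hung up
sudo docker attach "$cid"

The trap fires when tmux kills the window or sshd sends SIGHUP, which the plain docker run client does not handle on your behalf.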
