How to stop containers started with `docker-compose run`

I'm trying to use docker-compose to orchestrate several containers. To troubleshoot, I frequently end up running bash from within a container by doing:
$ docker-compose run --rm web bash
I always try to pass the --rm switch so that these containers are removed when I exit the bash session. Sometimes, though, they remain, and I see them in the output of docker-compose ps.
Name                  Command                          State     Ports
----------------------------------------------------------------------------------
project_nginx_1       /usr/sbin/nginx                  Exit 0
project_nginx_run_1   bash                             Up        80/tcp
project_web_1         python manage.py runserver ...   Exit 128
project_web_run_1     bash                             Up        8000/tcp
At this point, I am trying to stop and remove these containers manually, but I cannot manage to do so. I tried:
$ docker-compose stop project_nginx_run_1
No such service: project_nginx_run_1
I also tried the other commands (rm, kill, etc.) with no success.
What should I do to get rid of these containers?
Edit:
Fixed the output of docker-compose ps.

Just stop those test containers with the docker stop command instead of using docker-compose.
docker-compose shines when it comes to starting many containers together, but using docker-compose to start containers does not prevent you from using the plain docker command to do whatever you need with individual containers.
docker stop project_nginx_run_1 project_web_run_1
Also, since you are debugging containers, I suggest using docker-compose exec <service id> bash to get a shell in a running container. This has the advantage of not starting a new container.
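For example, a typical debugging session could look like this (a sketch assuming a service named web, as in the question):
$ docker-compose exec web bash        # shell inside the already-running web container
$ docker-compose run --rm web bash    # or: shell in a brand-new container, removed on exit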

With docker-compose, services can be stopped in two ways, but I would like to add some detailed information about both options.
In short
docker-compose down
Stop and remove containers, networks, images, and volumes (by default only the containers and the network are removed; images and volumes are only removed with the --rmi and --volumes flags)
docker-compose stop
Stop services
In detail
If docker-compose run has started the containers project_nginx_run_1 and project_web_run_1, then
the docker-compose down log will be:
$ docker-compose down
Stopping project_nginx_run_1 ...
Stopping project_web_run_1 ...
.
. some service logs go here
Stopping project_web_run_1 ... done
Stopping project_nginx_run_1 ... done
Removing project_web_run_1 ... done
Removing project_nginx_run_1 ... done
Removing network project_default
The docker-compose stop log will be:
$ docker-compose stop
Stopping project_nginx_run_1 ...
Stopping project_web_run_1 ...
.
. some service logs go here
Stopping project_web_run_1 ... done
Stopping project_nginx_run_1 ... done

docker-compose, unlike docker, uses the service names defined in the yml file rather than the generated container names. Therefore, to stop just one container, the command will be:
docker-compose stop nginx_run
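That is also why the attempt in the question fails: compose expects a service name, never the generated container name (it is the plain docker CLI that takes container names):
$ docker-compose stop project_nginx_run_1   # container name: rejected by compose
No such service: project_nginx_run_1
$ docker stop project_nginx_run_1           # plain docker accepts the container name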

Running docker-compose down from within the directory where it was launched is the only way I managed to confirm the container was gone, as docker-compose ps no longer lists it!
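A quick sketch of that check:
$ docker-compose down
$ docker-compose ps
Name   Command   State   Ports
------------------------------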

Related

Stop docker container with docker-compose

docker-compose fails with a timeout:
docker-compose stop mycontainer
but docker succeeds:
docker stop mycontainer
My questions
What is the difference between docker-compose stop and docker stop?
Where can I get more detailed information about that problem? (I killed docker-compose after a few minutes)
How can I solve that problem with docker-compose?
docker-compose stop stops the running containers that were started with docker-compose up (or docker-compose start); it is based on the docker-compose file.
Take a look at the contents of the docker-compose file for more details.
Run docker ps to see which containers are running.
docker stop (more precisely, docker stop <running_container_id>) stops a single running container, regardless of how it was started.
The docs say that docker stop sends a SIGTERM and then a SIGKILL to the running container, while the docker-compose stop documentation does not mention that. Maybe that is the reason.
Docs:
Compose: https://docs.docker.com/compose/reference/stop/
Docker: https://docs.docker.com/engine/reference/commandline/stop/
You might want to check if you can reproduce that signalling with docker-compose.
Edit: More on signal-handling in Docker Compose: https://docs.docker.com/compose/faq/
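Both CLIs accept a stop timeout, so the signalling can be compared directly; a small sketch, using mycontainer as the service name as in the question:
$ docker stop -t 30 mycontainer           # SIGTERM, then SIGKILL after 30 seconds
$ docker-compose stop -t 30 mycontainer   # same flag; both default to 10 seconds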
Finally, I solved the problem by restarting the host VM.
Apparently the docker daemon was in trouble.
Nevertheless, I still wonder why docker-compose stop did not work.

Docker container keeps stopping after 'docker start'

I'm fairly new to Docker. I have a long Dockerfile that I inherited from a previous developer; it has many errors, and I'm trying to get it back to a working state. I commented out most of the file except for just the first line:
FROM ubuntu:14.04
I did the following:
docker build -t pm . to build the image - this works because I can see the image when I execute docker images
docker run <image-id> returns without error or any message. Now I'm expecting the container to be created from the image and started. But when I do docker ps -a, it shows the container exited:
CONTAINER ID   IMAGE          COMMAND       CREATED              STATUS                          PORTS   NAMES
b05f9727f516   f216cfb59484   "/bin/bash"   About a minute ago   Exited (0) About a minute ago           lucid_shirley
Not sure why I can't get a running container and why it keeps stopping after the docker run command.
Executing docker logs <container_id> displays nothing; it just returns without any output.
Your Docker image doesn't actually do anything; a container stops when it finishes its job. Since no foreground process is running here, it starts and then immediately stops.
To confirm your container has no issues, try putting the code below into a docker-compose.yml (in the same folder as the Dockerfile) and run docker-compose up; now you will see your container running without exiting.
version: '3'
services:
  my-service:
    build: .
    tty: true
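Once it is up, you can check it and attach to it, for example:
$ docker-compose up -d
$ docker-compose ps                    # my-service should show State "Up"
$ docker-compose exec my-service bash  # works because the tty keeps the container alive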
Have a look at the official Docker tutorial; it will guide you through working with Docker.
try
docker run -it <image> /bin/bash
to run a shell inside the container.
That won't do much for you, but it will show you what is happening: as soon as you exit the shell, the container exits too.
Your container basically doesn't do anything: it has an Ubuntu image but no ENTRYPOINT or CMD instruction to run 'something'.
Containers are ephemeral when run: they run a single command and exit when the command finishes.
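A quick way to see both behaviours (the container name keepalive is just for illustration):
$ docker run ubuntu:14.04 /bin/true               # the command finishes, so the container exits at once
$ docker run -d --name keepalive ubuntu:14.04 sleep infinity
$ docker ps                                       # keepalive stays Up: its command never returns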
Docker containers can be categorized in the following way.
Task-based: when the container starts, it does its processing, and once the process completes, it exits.
Background container: it keeps running and waits for requests.
Since your Dockerfile has only the one statement:
FROM ubuntu:14.04
your build command creates an image named pm.
Now when you run
docker run pm
it will start the container, which stops immediately, since you did not provide any entry point.
Now try this.
In one command prompt or terminal:
docker run -it pm /bin/bash
Open another terminal or command prompt.
docker ps (now you will see there is one running container).
If you want to see a container that keeps running, use the following image (this is just an example):
docker run -d -p 8099:80 nginx
The line above runs a container from the Nginx image, and when you open http://localhost:8099 in your browser you can see the response.
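To check it from the command line instead of the browser (the container name web is an assumption):
$ docker run -d -p 8099:80 --name web nginx
$ curl -s http://localhost:8099 | grep title    # <title>Welcome to nginx!</title>
$ docker stop web && docker rm web              # clean up afterwards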
Docker containers are closely tied to the process they are running. This process is specified by the CMD part of the Dockerfile and has PID 1. If you kill it, your container is killed; if you have no such process, your container stops instantly. In your case, you have to override the CMD. You can do that simply with docker run -it ubuntu:18.04 bash. The -it flags are mandatory, since they keep stdin attached to your container.
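You can see the PID 1 behaviour for yourself (a sketch; the prompt inside the container will differ):
$ docker run -it ubuntu:18.04 bash
root@container:/# echo $$    # prints 1: bash itself is PID 1
root@container:/# exit       # PID 1 ends, so the container stops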
Have fun with docker.
Each instruction in a Dockerfile creates a layer of the image. Your Dockerfile just loads Ubuntu, which completes within a fraction of a second when you run the container, and the container then exits because its process has finished. So if you want your container to run all the time, there must be a foreground process running in it.
For a quick test, run
docker run <imageid> echo hi
and if it returns the output, your container is fine.

Ignore container exit when using docker-compose

I am setting up a test infrastructure using docker-compose. I want to use the docker-compose option --exit-code-from to return the exit code from the container that runs the tests. However, I also have a container that runs migrations on my database container using the sequelize CLI. This migrations container exits with code 0 when the migrations are complete, and then my tests run. That causes an issue with both the --exit-code-from and --abort-on-container-exit options. Is there a way to ignore the migration container's exit?
Don't use docker-compose up for running one-off tasks. Use docker-compose run instead, as the documentation suggests:
The docker-compose run command is for running “one-off” or “adhoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
Source: https://docs.docker.com/compose/faq/
For example:
docker-compose build my_app
docker-compose run db_migrations # this starts the services it depends on, such as the db
docker-compose run my_app_tests
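Since run propagates the exit status, a test script can use it directly; a minimal sketch with the service names above:
$ docker-compose run --rm my_app_tests
$ echo $?    # exit status of the test process inside the container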
--exit-code-from implies --abort-on-container-exit, which according to the documentation:
--abort-on-container-exit Stops all containers if any container was stopped.
But you could try:
docker inspect <container ID> --format='{{.State.ExitCode}}'
You can get a list of all containers (including stopped ones) with
docker container ls -a
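Putting the two together, a small sketch that prints the exit code of every container:
$ docker container ls -a --format '{{.Names}}' |
    while read name; do
      echo "$name: $(docker inspect --format '{{.State.ExitCode}}' "$name")"
    done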
Here's a nice example: Checking the Exit Code of Stopped Containers

Difference between docker-compose run, start, up

I'm new to Docker.
What is the difference between these?
docker run 'an image'
docker-compose run 'something'
docker-compose start 'docker-compose.yml'
docker-compose up 'docker-compose.yml'
Thanks in advance.
https://docs.docker.com/compose/faq/#whats-the-difference-between-up-run-and-start
What’s the difference between up, run, and start?
Typically, you want docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “adhoc” tasks. It requires the service name you want to run and only starts containers for services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
The docker-compose start command is useful only to restart containers that were previously created, but were stopped. It never creates new containers.
Also: https://docs.docker.com/compose/reference/
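In practice, the three commands behave like this (a sketch assuming a compose file with a web service):
$ docker-compose up -d              # create and start all services in the background
$ docker-compose stop               # stop the containers but keep them around
$ docker-compose start              # restart the previously created containers
$ docker-compose run --rm web env   # one-off task in a fresh container of the web service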

How to stop all containers when one container stops with docker-compose?

Up until recently, when one did docker-compose up for a bunch of containers and one of the started containers stopped, all of the containers were stopped. This is not the case anymore since https://github.com/docker/compose/issues/741, and this is really annoying for us: we use docker-compose to run Selenium tests, which means starting the application server, starting the Selenium hub + nodes, starting the test driver, then exiting when the test driver stops.
Is there a way to get back old behaviour?
You can use:
docker-compose up --abort-on-container-exit
which will stop all containers if any one of your containers stops.
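If you also need the exit status of a particular container, --exit-code-from (which implies --abort-on-container-exit) does both at once; assuming the test driver service is named test_driver:
$ docker-compose up --exit-code-from test_driver
$ echo $?    # exit status of the test_driver container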
In your docker-compose file, set up your test driver container to depend on the other containers (with the depends_on parameter). Your docker-compose file should look like this:
services:
  application_server:
    ...
  selenium:
    ...
  test_driver:
    entrypoint: YOUR_TEST_COMMAND
    depends_on:
      - application_server
      - selenium
With dependencies expressed this way, run:
docker-compose run test_driver
and all the other containers will shut down when the test_driver container is finished.
This solution is an alternative to the docker-compose up --abort-on-container-exit answer: the latter will also shut down all other containers if any of them exits (not only the test driver). Which one is more appropriate depends on your use case.
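A minimal CI-style wrapper around this approach might look as follows (a sketch; the script name and the teardown step are assumptions):
#!/bin/sh
# run-tests.sh: start dependencies via depends_on, run the tests, always clean up
docker-compose run test_driver
status=$?            # run returns the test process's exit status
docker-compose down  # tear everything down regardless of the outcome
exit $status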
Did you try the workaround suggested in the link you provided?
Assuming your test script looked similar to this:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build
When the application tests end, compose would exit and the tests finish.
In this case, with the new docker-compose version, change your test container to have a default no-op command (something like echo, or true), and change your test script as follows:
$ docker-compose rm -f
$ docker-compose build
$ docker-compose up --timeout 1 --no-build -d
$ docker-compose run tests test_command...
$ docker-compose stop
Using run allows you to get the exit status from the test run, and you only see the output of the tests (not all the dependencies).
Reference
If this is not acceptable, you could use the Docker Remote API to watch for the stop event of the containers and act on it.
An example of this is the docker-gen tool, written in Go, which watches for container start events to automatically regenerate configuration files.
I'm not sure this is the perfect answer to your problem, but maestro for Docker lets you manage multiple Docker containers as a single unit.
It should feel familiar, as you group them using a YAML file.
