Why does Docker not remove stopped containers? [duplicate]

This question already has an answer here:
Why do I have to delete docker containers?
(1 answer)
Closed 1 year ago.
I am new to Docker and just getting started. I pulled a basic ubuntu image and started a few containers with it and stopped them. When I run the command to list all the docker containers (even the stopped ones) I get an output like this:
> docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
099c42011f24 ubuntu:latest "/bin/bash" 6 seconds ago Exited (0) 6 seconds ago sleepy_mccarthy
dde61c10d522 ubuntu:latest "/bin/bash" 8 seconds ago Exited (0) 7 seconds ago determined_rosalind
cd1a6fa35741 ubuntu:latest "/bin/bash" 9 seconds ago Exited (0) 8 seconds ago unruffled_lichterman
ff926b6eba23 ubuntu:latest "/bin/bash" 10 seconds ago Exited (0) 10 seconds ago cool_rosalind
8bd50c2c4729 ubuntu:latest "/bin/bash" 12 seconds ago Exited (0) 11 seconds ago cranky_darwin
My question is: is there a reason why Docker does not delete stopped containers by default?

The examples you've provided show that you're using an Ubuntu container just to run bash. While this is a fairly common pattern when learning Docker, it's not how Docker is used in production scenarios, which is what Docker cares about and optimizes for.
Docker is used to deploy an application within a container with a given configuration.
Say you spin up a database container to hold information for your application. If Docker removed stopped containers by default, the first time your Docker host restarted for some reason, that database would disappear. That would be a disaster.
It's therefore much safer for Docker to assume that you want to keep your containers, images, volumes, and so on, unless you explicitly ask for them to be removed and opt in to that behaviour when you start them, for example with docker run --rm <image>.
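To make the opt-in behaviour concrete, a minimal sketch: the first container below removes itself as soon as it exits, and stopped containers can always be cleaned up in bulk later.
docker run --rm ubuntu:latest echo "gone on exit"   # removed automatically when it exits
docker container prune                              # interactively removes all stopped containers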

In my opinion, there are some good reasons for this. Consider the following scenario:
I build my image and start a container in a production environment. For some reason I stop the current container, make some changes to the image, and run another instance, so a new container with a new name is running.
I see the new container does not work as expected, so, since I still have the old container, I can start the old one and stop the new one (a sketch of this follows below), and the clients will not face any issues.
But what if containers were automatically deleted when they were stopped?
Simple answer: I would have lost my clients (and maybe even my job) :) and one more person would have joined the unemployed :D
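A rough sketch of that rollback (the container names here are hypothetical):
docker stop myapp_new    # hypothetical name: the misbehaving replacement
docker start myapp_old   # hypothetical name: the previous, still-intact container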
As @msanford mentioned, Docker assumes you want to keep your data, volumes, and so on, so you'll probably re-use them when needed.
Since Docker is used to deploy and run applications (even something as simple as WordPress with MySQL, though with some differences from installing on shared hosting), it's usually not used just to run bash.
That said, running things like bash or sh to look at the contents of a container is certainly a good way to take your first steps with Docker.

Related

How to enable changes in AWX Containers?

I am trying to install additional Python packages in the AWX container awx_task so that Ansible modules like snow and ec2_elb_facts, which have Python modules as prerequisites, can run. I have made the changes in the container using:
# docker exec -it 80ab6bf562a9 bash
where 80ab6bf562a9 is the container ID of the awx_task container,
and then installed the required packages inside the custom virtual environment (as mentioned in the AWX documentation). After this, I made the changes permanent by creating a new image from the container's changes using:
# docker commit 80ab6bf562a9 ansible/awx_task:latest
After this, I ran the following command to start a new container from the newly created image containing those changes.
# docker run --name awx_task -d 5290f9b3268c
Following are the containers after the above changes. Here, the newly created container, started from the new image containing the changes, is 968fb2a7da2f.
# docker container ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
968fb2a7da2f 5290f9b3268c "/tini -- /bin/sh -c…" 2 days ago Exited (143) 2 days ago awx_task
80ab6bf562a9 535bb2b8e1f3 "/tini -- /bin/sh -c…" 3 weeks ago Up 2 days 8052/tcp awx_task_OLD
aea2551951d5 b7c261b76010 "/tini -- /bin/sh -c…" 3 weeks ago Up 2 days 0.0.0.0:80->8052/tcp awx_web
e789a4a82a9e memcached:alpine "docker-entrypoint.s…" 3 weeks ago Up 2 days 11211/tcp memcached
a8c74584255c ansible/awx_rabbitmq:3.7.4 "docker-entrypoint.s…" 3 weeks ago Up 2 days 4369/tcp, 5671-5672/tcp, 15671-15672/tcp, 25672/tcp rabbitmq
25f6f6ca7766 postgres:9.6 "docker-entrypoint.s…" 3 weeks ago Up 2 days 5432/tcp postgres
Following are my images after the above changes. Here, the newly created image (with the changes) is 5290f9b3268c.
# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ansible/awx_task latest 5290f9b3268c 2 days ago 1.48GB
postgres 9.6 106bdfb062df 8 weeks ago 235MB
ansible/awx_task <none> 535bb2b8e1f3 8 weeks ago 1.07GB
ansible/awx_web <none> b7c261b76010 8 weeks ago 1.04GB
hello-world latest 2cb0d9787c4d 2 months ago 1.85kB
memcached alpine b40e8fa7e3e5 2 months ago 8.69MB
ansible/awx_rabbitmq 3.7.4 e08fe791079e 6 months ago 85.6MB
The new container is properly mapped to the new image (which has the changes I wanted). The issue now is that when I stop the old container and start the new one, AWX doesn't work. I can view the UI, but if I run any tasks, such as executing templates, it just freezes. It appears that the new container/image is not talking to the other containers like awx_rabbitmq, postgres, etc. I have been reading multiple posts on this, but I couldn't find a single post that addresses it.
I basically want the changes in the awx_task container to work so that I can achieve the goal of making the custom modules work. Could anyone suggest what can be done so that the new awx_task container takes over the role of the older awx_task and AWX works normally?
Since I found a way to do this, I will share the steps to make the required changes.
The Python package versions can be controlled from the requirements directory; changes related to the AWX Task and AWX Web images can be applied in Dockerfile.j2 in the roles directory. Once the required changes are applied, we can run the setup using ansible-playbook install.yml -i inventory.
You should use install.yml to restart the awx_task container, since it ensures the right environment variables are set, the right volumes are mapped, and so on. It's the same command you used to install AWX:
ansible-playbook install.yml -i inventory
See here for a full list of arguments that are used.

Specify Container ID of docker process to attach

On my remote server, several developers run containers from the same Docker image, named "my_account/analysis". So, once detached from a container, it is a struggle to know which one is my own.
The result of docker ps is like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6968e76b3746 my_account/analysis "bash" 44 hours ago Up 44 hours 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:8887->8887/tcp modest_jones
42d970206a29 my_account/analysis "bash" 7 days ago Up 7 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:32771->8885/tcp gallant_chandrasekhar
ac9f804b7fe0 my_account/analysis "bash" 11 days ago Up 11 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:8798->8798/tcp suspicious_mayer
e8e260aab4fb my_account/analysis "bash" 12 days ago Up 12 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:32770->8885/tcp nostalgic_euler
In this case, because I remember that I started mine around 2 days ago, I attach to my container with docker attach 6968e. Usually, however, I forget which one it was.
What is the best practice to detect the container ID of mine under the situation that there are a lot of containers with the same Image name?
The simple way is to name the containers:
docker run --name my-special-container my_account/analysis
docker attach my-special-container
You can store the container ID in a file when the container launches:
docker run --cidfile ~/my-special-container my_account/analysis
docker attach $(cat ~/my-special-container)
You can add more detailed metadata with object labels, but labels are not as easily accessible as names:
docker run --label com.rkjt50r983.tag=special my_account/analysis
docker ps --filter 'label=com.rkjt50r983.tag=special'
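If you go the label route, you can still resolve the filter to a single ID for attach by adding -q to the filtered ps (a sketch, assuming the filter matches exactly one running container):
docker attach $(docker ps -q --filter 'label=com.rkjt50r983.tag=special')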

How to exit "docker run" containers once the script those containers execute calls exit()

I have a docker-compose setup, which is deployed in three steps:
Build all the containers and dc up -d (dc is an alias for docker-compose)
Create database with: dc run web /usr/local/bin/python create_db.py
Populate database with: dc run -d web /usr/local/bin/python -u manage.py populateDB
Steps 2 and 3 create new containers (see the first two):
~/Documents/Project » docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2ead532ea58b myproject_web "/usr/local/bin/pytho" 8 minutes ago Up 8 minutes 8000/tcp myproject_web_run_2
64e1f81ecd1a myproject_web "/usr/local/bin/pytho" 9 minutes ago Restarting (0) About a minute ago 8000/tcp myproject_web_run_1
9f5c670d4d7f myproject_nginx "/usr/sbin/nginx" 40 minutes ago Up 40 minutes 0.0.0.0:80->80/tcp myproject_nginx_1
46d3e8c09c03 myproject_web "/usr/local/bin/gunic" 40 minutes ago Up 40 minutes 8000/tcp myproject_web_1
ea876e68c8c6 postgres:latest "/docker-entrypoint.s" 40 minutes ago Up 40 minutes 0.0.0.0:5432->5432/tcp myproject_postgres_1
Which is all well and good, except they don't exit when their job is finished.
For example, the create_db.py script, as you can see, is always restarting after it has created the database. And once myproject_web_run_2 has finished populating the database, it will add a second copy of each record, then a third, and so on, forever.
On GitHub, it seems this behaviour was requested of Docker, and the docker run --rm flag handles it. But --rm and -d are incompatible, which I don't understand.
Do you know how to kill containers which have finished executing their functions? Specifically, how to get dc run web /usr/local/bin/python create_db.py to exit once create_db.py calls exit()? Or is there a better way?
I think you may be conflating two things here.
The --rm flag
This exists to clean up after a container is finished, so it doesn't hang around in the dead containers pool. As you already found, it is not compatible with -d. But in this case, you don't need it anyway.
The --restart flag
(Also available in docker-compose as the restart property.)
This flag sets the restart policy. By default it is set to no, but you can set it to a few other values, including always. I suspect you currently have it set to always, which forces the container to restart every time it stops on its own.
If you manually stop the container (docker stop ...), the auto-restart will not engage. But if the process exits on its own, or crashes, then the container will be restarted. This exists for the obvious reason: so your service starts up again if it crashes.
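You can check which policy a container was actually created with via docker inspect; a sketch using one of the container names from your output:
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}' myproject_web_run_1   # prints e.g. "always" or "no"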
How to proceed
I would say what you need is to use exec instead of run for these tasks.
First, run your container normally (i.e. docker-compose up -d).
Instead of using run to execute create_db.py, use exec.
docker-compose exec web /usr/local/bin/python create_db.py
This will use your already-running container, execute the script one time, and when the script exits, you're done. Since you did not create a new container (like run was doing), there is no cleanup to do afterward.
Note that you do not need the -it flag that is often used with docker exec. docker-compose emulates a tty on exec by default.
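The one-off containers that run already left behind can then be removed; the names below are taken from your docker ps output:
docker rm -f myproject_web_run_1 myproject_web_run_2   # -f also stops the restarting container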

How do I run a docker container based on its name from `docker ps -a`?

By default docker leaves a bunch of dead volumes around.
$ docker ps -a
61e99f563834 jolly_swanson user/name:version "command" 52 seconds ago Exited (130) 51 seconds ago
Why doesn't docker run jolly_swanson restart that container with its old data? I feel like I must be missing something from the documentation.
You seem to be confusing images and containers. Docker leaves dead containers around, not images (and not volumes either).
docker run creates a new container from an existing image. So docker run jolly_swanson does not work because jolly_swanson is the name of a container, not an image.
To start an existing container, use start, e.g. docker start jolly_swanson.
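If the container was created with an interactive terminal and you want your shell attached when it comes back up, start takes -a/-i:
docker start -ai jolly_swanson   # start the container, attach to it, and keep stdin open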

Why do docker-compose containers stay alive after finishing up command?

I use docker-compose for developing Django apps. I have a fairly simple setup: one service for web serving and one for the database. Sometimes I need to run manage.py commands through Docker.
I use following command:
$ docker-compose run web ./manage.py migrate
which works just fine. But the problem is that Docker keeps the containers around even after the command has finished:
$ docker-compose ps
Name Command State Ports
------------------------------------------------------------------------------------
seeder_data_1 sh Exit 0
seeder_postgres_1 /docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
seeder_web_1 fab run_local Up 0.0.0.0:8000->8000/tcp
seeder_web_run_14 ./manage.py migrate Up
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
61d3aee5bb39 seeder_web "./manage.py migrate" 4 minutes ago Restarting (2) 55 seconds ago seeder_web_run_14
85e91cb9383c seeder_web "fab run_local" 5 minutes ago Up 5 minutes 0.0.0.0:8000->8000/tcp seeder_web_1
565b01dedb7b postgres:latest "/docker-entrypoint.s" 5 minutes ago Up 5 minutes 0.0.0.0:5432->5432/tcp seeder_postgres_1
This is rather annoying: I don't want a ton of zombie containers that were used for only one command. Is there some way to automatically remove the containers after the command returns? I know I can remove them manually, but that is kind of annoying.
Or is there some logic behind this?
Sounds like you want the --rm flag for docker-compose run: https://docs.docker.com/compose/reference/run/. Making it the default has been proposed a couple of times (https://github.com/docker/compose/issues/2774 and https://github.com/docker/compose/issues/943), but it looks like both proposals lost steam.
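Applied to your migrate command from the question, that would look like this; the container is removed as soon as the command exits:
docker-compose run --rm web ./manage.py migrate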
