I'm toying around with Docker Swarm. I've deployed a service and performed a couple of updates to see how it works. I'm observing that Docker is keeping the old images around for the service.
How do I clean those up?
root@picday-manager:~# docker service ps picday
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
bk6pw0t8vw4r picday.1 ischyrus/picday:latest picday-manager Running Running 6 minutes ago
ocbcjpnc71e6 \_ picday.1 ischyrus/picday:latest picday-manager Shutdown Shutdown 6 minutes ago
lcqqhbp8d99q \_ picday.1 ischyrus/picday:latest picday-manager Shutdown Shutdown 11 minutes ago
db7mco0d4uk0 picday.2 ischyrus/picday:latest picday-manager Running Running 6 minutes ago
z43p0lcdicx4 \_ picday.2 ischyrus/picday:latest picday-manager Shutdown Shutdown 6 minutes ago
These are containers, not images. In Docker, there's a rather significant difference between the two (an image is the definition used to create a container). Inside a swarm service, these containers are referred to as tasks. To adjust how many stopped tasks Docker keeps, you can change the global threshold with:
docker swarm update --task-history-limit 1
The default value for this is 5.
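If you want to verify the limit that is currently in effect, docker info on a manager node prints it in its Swarm section (the grep below assumes the field is labelled "Task History Retention Limit", which is how recent Docker versions name it):
docker info | grep -i 'task history'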
To remove individual containers, you can remove the container from the host where it's running with:
docker container ls -a | grep picday
docker container rm <container id>
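If you want to clean up all of the stopped picday containers on a host in one go, something like the following should work; it's a sketch that assumes the task containers keep the default naming scheme, which starts with the service name:
docker container ls -a --filter name=picday --filter status=exited -q | xargs docker container rm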
I am new to Docker and just getting started. I pulled a basic ubuntu image, started a few containers with it, and stopped them. When I run the command to list all the Docker containers (even the stopped ones), I get output like this:
> docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
099c42011f24 ubuntu:latest "/bin/bash" 6 seconds ago Exited (0) 6 seconds ago sleepy_mccarthy
dde61c10d522 ubuntu:latest "/bin/bash" 8 seconds ago Exited (0) 7 seconds ago determined_rosalind
cd1a6fa35741 ubuntu:latest "/bin/bash" 9 seconds ago Exited (0) 8 seconds ago unruffled_lichterman
ff926b6eba23 ubuntu:latest "/bin/bash" 10 seconds ago Exited (0) 10 seconds ago cool_rosalind
8bd50c2c4729 ubuntu:latest "/bin/bash" 12 seconds ago Exited (0) 11 seconds ago cranky_darwin
My question is, is there a reason why docker does not delete the stopped containers by default?
The examples you've provided show that you're using an Ubuntu container just to run bash. While this is a fairly common pattern while learning Docker, it's not what Docker is used for in production scenarios, which is what Docker cares about and optimizes for.
Docker is used to deploy an application within a container with a given configuration.
Say you spin up a database container to hold information for your application. If stopped containers were removed automatically, a restart of your Docker host would make that database disappear. That would be a disaster.
It's therefore much safer for Docker to assume that you want to keep your containers, images, volumes, and so on, unless you explicitly ask for them to be removed and opt into that behaviour when you start them, for example with docker run --rm <image>.
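As a quick illustration (the container name keepme is just an arbitrary example), the first container below is removed as soon as it exits, while the second is kept and still shows up in the stopped list:
docker run --rm ubuntu echo hello
docker run --name keepme ubuntu echo hello
docker container ls -a --filter name=keepme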
In my opinion, there are a few reasons for this. Consider the following scenario:
I build my image and start a container in a production environment. For some reason I stop the current container, make some changes to the image, and run another instance, so a new container with a new name is running.
I then notice that the new container does not work as expected. Since I still have the old container, I can start the old one again and stop the new one, so the clients will not face any issues.
But what if containers were automatically deleted when they were stopped?
Simple answer: I would have lost my clients (and maybe even my job) :) and one more person would have joined the unemployed :D
As @msanford mentioned, Docker assumes you want to keep your data, volumes, etc., so you'll probably re-use them when needed.
Since Docker is used to deploy and run applications (something as simple as WordPress with MySQL, though rather different from installing them on shared hosting), it's usually not used just to run bash.
That said, running things like bash or sh to look at the contents of a container is a perfectly good way to learn Docker in the beginning.
I tried to start a Docker service in swarm mode, but I am not able to connect to port 8080.
~ $ docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
3tdzofpn6qo5 vigilant_wescoff replicated 0/1 shantanu/abc:latest *:8080->8080/tcp
~ $ docker service ps 3tdzofpn6qo5
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
iki0t3x1oqmz vigilant_wescoff.1 shantanuo/abc:latest ip-172-31-4-142.ap-south-1.compute.internal Ready Ready 1 second ago
z88nyixy7u10 \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-4-142.ap-south-1.compute.internal Shutdown Complete 5 minutes ago
zf4fac2a4dlh \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-4-142.ap-south-1.compute.internal Shutdown Complete 11 minutes ago
zzqj4lldmxox \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-6-134.ap-south-1.compute.internal Shutdown Complete 14 minutes ago
z8eknet7oirq \_ vigilant_wescoff.1 shantanu/abc:latest ip-172-31-20-50.ap-south-1.compute.internal Shutdown Complete 17 minutes ago
I used Docker for AWS (Community Edition):
https://docs.docker.com/docker-for-aws/#docker-community-edition-ce-for-aws
But I guess that should not make any difference and the container should work. I have tested the image with the docker run command and it works as expected.
In case of swarm mode, how do I know what exactly is going wrong?
You can use docker events on managers to see what the orchestrator is doing (but you can't see the history).
You can use docker events on workers to see what containers/networks/volumes etc. are doing (but you can't see the history).
You can look at docker service logs to see current and past container logs.
You can use docker container inspect to see the exit (error) code of the stopped containers in that service's task list.
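Putting that together for the service above, a minimal debugging sequence might look like this (the service name comes from your docker service ls output, and the container ID is whichever ID shows up on the node where the task actually ran):
docker service ps --no-trunc vigilant_wescoff   # full ERROR column for each task
docker service logs vigilant_wescoff            # stdout/stderr from the service's containers
docker container inspect --format '{{.State.ExitCode}} {{.State.Error}}' <container id>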
On my remote server, several developers run containers from the same Docker image named "my_account/analysis". So once I detach from my Docker process, it is hard to tell which container is mine.
The result of docker ps is like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6968e76b3746 my_account/analysis "bash" 44 hours ago Up 44 hours 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:8887->8887/tcp modest_jones
42d970206a29 my_account/analysis "bash" 7 days ago Up 7 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:32771->8885/tcp gallant_chandrasekhar
ac9f804b7fe0 my_account/analysis "bash" 11 days ago Up 11 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:8798->8798/tcp suspicious_mayer
e8e260aab4fb my_account/analysis "bash" 12 days ago Up 12 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:32770->8885/tcp nostalgic_euler
In this case, because I remember that I started my container around 2 days ago, I attach to it with docker attach 6968e. However, usually I forget this.
What is the best practice for finding my own container ID when there are many containers running from the same image name?
The simple way is to name the containers:
docker run --name my-special-container my_account/analysis
docker attach my-special-container
You can store the container ID in a file when it launches:
docker run --cidfile ~/my-special-container my_account/analysis
docker attach $(cat ~/my-special-container)
You can add more detailed metadata with object labels, but they are not as easily accessible as names:
docker run --label com.rkjt50r983.tag=special my_account/analysis
docker ps --filter 'label=com.rkjt50r983.tag=special'
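If you go the label route, you can combine the filter with -q to feed the matching container ID straight into attach (this assumes exactly one running container carries that label):
docker attach $(docker ps -q --filter 'label=com.rkjt50r983.tag=special')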
I currently have 8 containers across 4 different host machines in my docker setup.
ubuntu#ip-192-168-1-8:~$ docker service ls
ID NAME MODE REPLICAS IMAGE
yyyytxxxxx8 mycontainers global 8/8 myapplications:latest
Running a ps -a on the service yields the following.
ubuntu#ip-192-168-1-8:~$ docker service ps -a yy
NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
mycontainers1 myapplications:latest: ip-192-168-1-5 Running Running 3 hours ago
\_ mycontainers2 myapplications:latest: ip-192-168-1-6 Running Running 3 hours ago
\_ mycontainers3 myapplications:latest: ip-192-168-1-7 Running Running 3 hours ago
\_ mycontainers4 myapplications:latest: ip-192-168-1-8 Running Running 3 hours ago
\_ mycontainers1 myapplications:latest: ip-192-168-1-5 Running Running 3 hours ago
\_ mycontainers2 myapplications:latest: ip-192-168-1-6 Running Running 3 hours ago
\_ mycontainers3 myapplications:latest: ip-192-168-1-7 Running Running 3 hours ago
\_ mycontainers4 myapplications:latest: ip-192-168-1-8 Running Running 3 hours ago
My question is: how can I execute a restart of all the containers using the service ID? I don't want to manually log into every node and execute a restart.
In the latest stable version of Docker, 1.12.x, it is only possible to restart the containers by updating the service configuration. In Docker 1.13.0, which will be released soon, the containers can be restarted even if the service settings are unchanged, by specifying the --force flag. If you do not mind using the 1.13.0 RC4, you can do it now.
$ docker service update --force mycontainers
Update: Docker 1.13.0 has been released.
https://github.com/docker/docker/releases/tag/v1.13.0
Pre-Docker 1.13, I found that scaling all services down to 0, waiting for shutdown, then scaling everything back up to the previous level works.
docker service scale mycontainers=0
# wait
docker service scale mycontainers=8
If you update the existing service, swarm will recreate all of its containers. For example, you can simply update a property of the service to achieve a restart.
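On pre-1.13 versions, one way to do that is to change a harmless property, for example by adding or bumping a dummy environment variable (the variable name below is arbitrary and only acts as a restart trigger):
docker service update --env-add RESTART_TRIGGER=$(date +%s) mycontainers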
Does anyone use docker service create with a command, like docker run -it ubuntu bash?
e.g. docker service create --name test redis bash
I want to run a temporary container for debugging in a production environment, in swarm mode and on the same network.
This is my result:
user#ubuntu ~/$ docker service ps test
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR
bmig9qd9tihw7q1kff2bn42ab test.1 redis ubuntu Ready Ready 3 seconds ago
9t4za9r4gb03az3af13akpklv \_ test.1 redis ubuntu Shutdown Complete 4 seconds ago
1php2be7ilp7psulwp31b3ib4 \_ test.1 redis ubuntu Shutdown Complete 10 seconds ago
drwyjdggd13n1emb66oqchmuv \_ test.1 redis ubuntu Shutdown Complete 15 seconds ago
b1zb5ja058ni0b4c0etcnsltk \_ test.1 redis ubuntu Shutdown Complete 21 seconds ago
When you create a service whose command is bash, it will immediately stop, because the service runs in detached mode with no TTY attached, so bash exits right away.
You can see the same behavior if you run docker run -d ubuntu bash.
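If the goal is just a temporary debugging container attached to the same overlay network, one sketch that should work is to keep the task alive with a long-running command and then exec into it on the node where it lands (the network name my_net is only a placeholder for your service's network):
docker service create --name debug --network my_net ubuntu sleep infinity
docker exec -it $(docker ps -q --filter name=debug) bash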