I run docker ps and it shows 5 containers that have been running for three weeks.
I then run docker-compose down, but when I run docker ps again, they are all still running.
I have tried the following commands, but none seems to work:
kill
stop
down --rmi local
rm
down
How can I stop these? I tried just bringing up my new docker-compose.yml and ignoring the old one, but I get:
ERROR: for apache Cannot create container for service apache: Conflict. The container name "/apache" is already in use by container "70c570d60e1248292f279a37634fd8b4ce7e2535d2bfa14b2c6e4089652c0152". You have to remove (or rename) that container to be able to reuse that name.
What can I try to stop the old containers?
You can list containers:
(base) docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c788727f0f7b postgres:14 "docker-entrypoint.s…" 7 days ago Up 7 days 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp dev-db
88e8ddcb7d4e redis "docker-entrypoint.s…" 7 days ago Up 7 days 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp beautiful_neumann
Delete container:
(base) docker rm -f c788727f0f7b # container_id
c788727f0f7b
List containers:
(base) docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
88e8ddcb7d4e redis "docker-entrypoint.s…" 7 days ago Up 7 days 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp beautiful_neumann
As you can see, the container (c788727f0f7b) has been stopped and removed.
You can list stopped containers using:
docker container ls -f 'status=exited'
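In your case, the error message names the conflicting container /apache, so removing it by name should unblock the new compose file. A minimal sketch based on that error message:
docker rm -f apache   # force-remove: stops the container if it is running, then deletes it
docker-compose up -d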
I want running containers to be stopped and removed.
PS C:\Users\Bayram Eren> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
8d8299ddb6bf nginx "/docker-entrypoint.…" 8 minutes ago Up 8 minutes 80/tcp con1
554971502a96 nginx:latest "/docker-entrypoint.…" 17 minutes ago Up 17 minutes 80/tcp goofy_goldberg
PS C:\Users\Bayram Eren>
docker container prune -f
returns the result:
Total reclaimed space: 0B
Why is this happening?
You have to stop your containers before removing them; docker container prune only removes stopped containers, which is why it reclaims 0B while they are all still running:
docker stop 8d8299ddb6bf
docker stop 554971502a96
Or you can stop all running containers with one command:
docker stop $(docker ps -q -f status=running)
Then you can call:
docker container prune -f
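If you do not need a graceful shutdown, a single force-remove reaches the same end state. A minimal sketch (-a includes stopped containers, -q prints only the IDs):
docker rm -f $(docker ps -aq)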
I have created a container locally. Then, I run the following command:
docker ps -a
output is:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
abc6f4d50931 airflow "/bin/zsh" 17 hours ago Exited (137) 21 minutes ago xenodochial_mclaren
Then I try to run the container with the following command, but it creates a new container with the same IMAGE and a different container ID instead of opening the existing container:
docker run -p 8080:8080 -it airflow /bin/zsh/
The output of the docker images command is:
REPOSITORY TAG IMAGE ID CREATED SIZE
airflow airflow 63e2e36735a6 46 hours ago 704MB
airflow latest 63e2e36735a6 46 hours ago 704MB
docker/getting-started latest 083d7564d904 6 weeks ago 28MB
Why is this creating new containers?
If you run docker run ..., you spin up a new container from the image.
The status of your container is Exited as you can check from the docker ps -a output.
If you want to start the same container again, you can try docker start abc6f4d50931.
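Since the container was originally created with -it, you can start it and reattach to its shell in one step; -a attaches to the container's output and -i attaches your stdin:
docker start -ai abc6f4d50931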
I have just installed Ubuntu 20.04 and installed docker using snap. I'm trying to run some different docker images for HBase and RabbitMQ, but each time I start an image, it immediately exits with status 126.
$ sudo docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
4d58720fce3a dajobe/hbase "/opt/hbase-server" 5 seconds ago Exited (126) 4 seconds ago hbase-docker
b7a84731a05b harisekhon/hbase "/entrypoint.sh" About a minute ago Exited (126) 59 seconds ago optimistic_goldwasser
294b95ef081a harisekhon/hbase "/entrypoint.sh" About a minute ago Exited (126) About a minute ago goofy_tu
I have tried everything and tried to use docker inspect on separate images, but nothing gives away why the containers exit immediately. Any suggestions?
EDIT
When I start the container, I run the following:
$ sudo bash start-hbase.sh
It gives the output exactly like it should
Starting HBase container
Container has ID 3c3e36e1e0fbc59aa0783a4c7f3cb8690781b2d04e8f842749d629a9c25e0604
Updating /etc/hosts to make hbase-docker point to (hbase-docker)
Now connect to hbase at localhost on the standard ports
ZK 2181, Thrift 9090, Master 16000, Region 16020
Or connect to host hbase-docker (in the container) on the same ports
For docker status:
$ id=3c3e36e1e0fbc59aa0783a4c7f3cb8690781b2d04e8f842749d629a9c25e0604
$ docker inspect $id
I think the issue might be due to some permissions, because I tried to check the logs as suggested in the comments, and got this error:
/bin/bash: /opt/hbase-server: Permission denied
Check whether the filesystem is mounted with the noexec option, using the mount command or by inspecting /etc/fstab. If it is, remove the option and remount the filesystem (or reboot).
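A minimal check, assuming Docker's data lives in the default /var/lib/docker (a snap install may use /var/snap/docker instead):
findmnt -T /var/lib/docker -o TARGET,OPTIONS   # look for noexec in the OPTIONS column
mount | grep noexec                            # or scan every mount at once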
A quick workaround is to restart the docker and network-manager services.
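A sketch of that restart on a systemd-based host; note that a snap-installed Docker (as in the question) runs under the unit name snap.docker.dockerd rather than docker:
sudo systemctl restart docker
sudo systemctl restart NetworkManager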
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
210ebea2ef5f localhost.localdomain/foo "node app.js -C conf…" 12 minutes ago Restarting (1) 9 minutes ago foo
> docker stop 210ebea2ef5f
210ebea2ef5f
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
210ebea2ef5f localhost.localdomain/foo "node app.js -C conf…" 12 minutes ago Restarting (1) 9 minutes ago foo
huh?
> docker kill 210ebea2ef5f
Error response from daemon: Cannot kill container: 210ebea2ef5f: Container 210ebea2ef5f6f25265a3da88954fe111fabba99602ef628e0ee88630e26fd5d is not running
> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
210ebea2ef5f localhost.localdomain/foo "node app.js -C conf…" 12 minutes ago Restarting (1) 9 minutes ago foo
Does anybody have some enlightenment on what's going on here? I've started noticing this since I enabled restart policies on my containers. This is running in Docker for Windows (18.09.3). The restart policy is set using docker compose as follows:
version: '3'
services:
  foo:
    build: .
    image: localhost.localdomain/${repository_name}
    container_name: ${container_name}
    restart: unless-stopped
Are restart policies just buggy in Docker for Windows?
btw. a docker rm 210ebea2ef5f did finally remove the container from my docker ps list, but that's not the behavior I'd expect.
This looks like a bug with docker ps that is being fixed in 18.09.5. If you inspect the container with:
docker inspect 210ebea2ef5f --format '{{ json .State }}'
the status should show as exited.
The reason I suspect you're seeing this bug with docker ps is that the status shows 9 minutes ago, whereas a normal crash loop restarts within seconds. You can try the rc1 for 18.09.5 that was just pushed (this requires that you are pulling updates from the testing release), or wait for the final 18.09.5 to be released and update to that. It appears to be an issue with the ps output only, and to have no effect on the behavior of the containers themselves.
Your restart policy is doing exactly what you've asked it to do.
If you look at the STATUS column in the output of docker ps, you see:
Restarting (1) 9 minutes ago
This typically means that the container is not running successfully: it starts, then exits immediately, and is then immediately restarted by Docker. So there are good odds that when you run docker kill, the container is in fact not running.
You could run docker stop <id> to stop the container and prevent it from restarting.
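If you want to keep the container around but break the restart loop, you can also clear the policy in place with docker update:
docker update --restart=no 210ebea2ef5f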
You would need to investigate your logs and your Dockerfile to determine why the container is exiting in the first place.
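A minimal starting point for that investigation, assuming the service name foo from your compose file is also the container name:
docker logs --tail 50 foo                             # last output before the crash
docker inspect foo --format '{{ .State.ExitCode }}'   # exit code of the most recent run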
On my remote server, some developers run the same docker image named "my_account/analysis". So, once detached from the docker process, it is a struggle to know which container is mine.
The result of docker ps is like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6968e76b3746 my_account/analysis "bash" 44 hours ago Up 44 hours 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:8887->8887/tcp modest_jones
42d970206a29 my_account/analysis "bash" 7 days ago Up 7 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:32771->8885/tcp gallant_chandrasekhar
ac9f804b7fe0 my_account/analysis "bash" 11 days ago Up 11 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:8798->8798/tcp suspicious_mayer
e8e260aab4fb my_account/analysis "bash" 12 days ago Up 12 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:32770->8885/tcp nostalgic_euler
In this case, because I remember that I ran docker around 2 days ago, I attach to my container with docker attach 6968e. However, usually we forget this.
What is the best practice for finding my own container ID when there are a lot of containers running the same image?
The simple way is to name the containers:
docker run --name my-special-container my_account/analysis
docker attach my-special-container
You can store the container ID in a file when it launches:
docker run --cidfile ~/my-special-container my_account/analysis
docker attach $(cat ~/my-special-container)
You can add more detailed metadata with object labels, but they are not as easily accessible as names:
docker run --label com.rkjt50r983.tag=special my_account/analysis
docker ps --filter 'label=com.rkjt50r983.tag=special'
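Combining the two, the -q output of the label filter can feed straight into attach; a sketch reusing the hypothetical label key from above:
docker attach $(docker ps -q -f 'label=com.rkjt50r983.tag=special')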