I need to run more than 70 docker containers at once. Later, these containers need to be stopped.
At the moment I can docker stop all of them with the shell command docker stop $(docker ps -f since=<last docker before>). It works OK, but if any containers were started after mine, I have a problem, as the above command will stop them too.
Is there any way I can stop all of my running containers with some kind of specific search?
I know there is a docker ps -f label=<some label>, but I just haven't figured out how to use it yet.
If you're launching many containers at the same time, launch them all with
docker run --label=anyname other-docker-args-of-yours image:tag
And when you want to stop all of those containers, just do
docker stop $(docker ps -f label=anyname | awk 'NR>1 {print $1}')
where anyname is the label name you provided during the docker run command, and
awk 'NR>1 {print $1}' skips the header row (CONTAINER ID ...) and prints only the first column, i.e. the container IDs.
Edit-1:
I later realized that you can get the list of container IDs without awk as well. I'd use the line below instead.
docker stop `docker ps -qaf label=anyname`
If you want to remove all stopped containers as well, include -a in the options: use -qaf instead of -qf.
-q prints the container IDs alone.
-a includes all containers, even stopped ones.
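Putting the whole flow together, a minimal sketch (the label name mybatch and the nginx:alpine image are just placeholders):
# start several containers that all carry the same label
docker run -d --label=mybatch nginx:alpine
docker run -d --label=mybatch nginx:alpine
# later, stop exactly that batch, and remove the stopped containers
docker stop $(docker ps -qf label=mybatch)
docker rm $(docker ps -qaf label=mybatch)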
I successfully ran hello-world using the docker run command, but when I check running containers with docker ps, this container is not visible under running containers.
Any suggestions?
Thanks,
Rajendar
The default hello-world image from docker has no long-running service inside it and therefore exits after printing its default text. As such, you cannot view it using docker ps, which is the command for viewing currently running containers.
To view running/stopped containers, run docker ps -a
Compare the output of docker ps and docker ps -a to see how the two commands show different results for the hello-world image.
How did you run it? If I remember correctly, the hello-world example just echoes and exits, so running docker ps immediately afterwards won't show you anything.
Try this instead:
docker ps -n 1
That will essentially show you the most recent container you ran and its state.
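For instance, assuming you just ran the image:
docker run hello-world
docker ps -n 1    # the STATUS column will read something like "Exited (0) 5 seconds ago"
The Exited (0) status confirms the container ran to completion and exited cleanly.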
Just for fun, if you really want to watch the hello-world execution at runtime...
Open up a new terminal window and run the command docker events, then keep watching what happens when you run docker run hello-world in your original terminal window.
Magically, you will see your entire container life-cycle below:
1. container create (notice the funny name= attribute of your ephemeral container)
2. image pull
3. container init
4. container start
5. container attach
6. container die
7. container cleanup
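To cut the stream down to just container events, and assuming the template fields below behave as on recent docker versions, you can filter and format the output:
docker events --filter 'type=container' --format '{{.Time}} {{.Action}} {{.Actor.Attributes.name}}'
Run this in the second terminal before docker run hello-world and you'll see one line per life-cycle step.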
Enjoy!
When we restart a container using the docker restart command, docker first stops and then starts the container.
My question is about the moment the container is stopped: I want to know the exit status of the container.
I don't really get what you're trying to say, but if you want to know the exit status, you can just issue the
docker ps -a
command to list all exited containers with their status codes,
but if you want to check it with a more specific condition, you can use something like:
docker ps -a --filter 'exited=0', which means the container exited successfully,
or
docker ps -a --filter 'exited=137', where exit code 137 means a SIGKILL (9) killed them (137 = 128 + 9).
Here's more in the docker filtering reference.
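If you want the exit code of one specific container rather than a filtered list, docker inspect can print it directly (this works for stopped containers too):
docker inspect -f '{{.State.ExitCode}}' <container-name-or-id>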
Oh, try using some punctuation marks in your sentences next time.
The EXIT status can be seen by using
docker ps -a
And you can check the docker logs:
docker logs CONTAINER_NAME
For more information, check the Docker logs documentation.
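For example, to follow only the tail end of a container's log:
docker logs --tail 100 -f CONTAINER_NAME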
I have an app in a docker setup. I would like to run a script on the host that would run some commands in an existing (running container).
If I know the container id, say ... it's 50250e572090 ... then I can run the script like this
For example ...
#!/usr/bin/env bash
docker exec 50250e572090 example_command_1_here
docker exec 50250e572090 example_command_2_here
docker exec 50250e572090 example_command_3_here
docker exec 50250e572090 example_command_4_here
It's working great! ... but the thing here is that I only know the image name ... not the container id. To find the container id ... I use docker ps ... where I get something like this ...
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
50250e572090 aws_beanstalk/staging-app:latest "/sbin/my_init" 29 hours ago Up 29 hours 80/tcp, 443/tcp drunk_bardeen
Its output isn't something that I can use (pipe through). Which command can I run to get the container id as the output, which can then be piped into the script? Or, now that it's clear what I'm trying to achieve ... is there a better way?
PS: My context is that I'm on Elastic Beanstalk ... but I don't see how this changes anything. It might as well be the local host ... the problem is the same.
I was able to achieve this using the -q flag. Like so ...
#!/usr/bin/env bash
# note: docker ps -q lists every running container, so this assumes
# only one container is running; capture the id once and reuse it
CID=$(docker ps -q)
docker exec "$CID" example_command_1_here
docker exec "$CID" example_command_2_here
docker exec "$CID" example_command_3_here
docker exec "$CID" example_command_4_here
What you're requesting is not that easy. Multiple containers can use the same image.
You can use docker ps with a filter to only see containers derived from a specific image:
$ docker ps -q --filter "ancestor=aws_beanstalk/staging-app:latest"
Please note that this will return all running containers using the aws_beanstalk/staging-app:latest image which might be more than one.
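Building on that, a minimal sketch of the script from the question using the ancestor filter; it assumes a single matching container, and the example_command_* names are the question's placeholders:
#!/usr/bin/env bash
CID=$(docker ps -q --filter "ancestor=aws_beanstalk/staging-app:latest")
if [ -z "$CID" ]; then
    echo "no running container for that image" >&2
    exit 1
fi
docker exec "$CID" example_command_1_here
docker exec "$CID" example_command_2_here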
You can run the docker inspect command and grep out the Id of the container:
viswesn@viswesn-PC1:~$ docker inspect My_First_Docker | grep Id | awk '{print $2}'
"e3824f0121f24dded9792f133344a2d68b46ea13065481c30caf35d0ac6be40e",
I know this question is old, but I wanted a better answer than was given here, and I figured it out:
docker ps -q --no-trunc --format="{{.ID}}" --filter "ancestor=image/repo/and:tag"
You can leave off :tag if you want, or you can filter on something else entirely. The output will be the full, un-truncated ID of each matching container. No column headers or anything else extraneous.
If you only need the short version (first twelve hex digits) of the ID, leave off --no-trunc.
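If more than one container matches, one option is to run the command in each of them, e.g. via xargs (GNU xargs; -r skips the exec when nothing matches, and example_command_here is a placeholder):
docker ps -q --filter "ancestor=image/repo/and:tag" | xargs -r -I{} docker exec {} example_command_here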
When trying to stop or restart a docker container I'm getting the following error message:
$ docker restart 5ba0a86f36ea
Error response from daemon: Cannot restart container 5ba0a86f36ea: [2] Container does not exist: container destroyed
Error: failed to restart containers: [5ba0a86f36ea]
But when I run
$ docker logs -f 5ba0a86f36ea
I can see the logs, so obviously the container does exist. Any ideas?
Edit:
Sorry, I forgot to mention this:
When I run docker ps -a I see the container as up and running. However, the application inside it is malfunctioning, so I want to restart it, or just get a fresh version of that application online. But since I can't stop and remove the container, I also can't get a new application up and running that would listen on the same port.
I couldn't locate boot2docker on my machine, so I came up with something that worked for me.
$ sudo systemctl restart docker.socket docker.service
$ docker rm -f <container id>
Check if it helps you as well.
All of the docker
start | restart | stop | rm --force | kill commands
may not work if the container is stuck. You can always restart the docker daemon; however, if you have other containers running, that may not be an option. What you can do instead is:
ps aux | grep <<container id>> | awk '{print $1, $2}'
The output contains:
<<user>> <<process id>>
Then kill the process associated with the container like so:
sudo kill -9 <<process id from above command>>
That will kill the container and you can start a new container with the right image.
That looks like docker/docker/issues/12738, seen with docker 1.6 or 1.7:
Some containers fail to stop properly, and then restart.
We are seeing this issue a lot on our users' hosts when they upgraded from 1.5.0 to 1.6.0.
After the upgrade, some containers cannot be stopped (giving 500 Server Error: Internal Server Error ("Cannot stop container xxxxx: [2] Container does not exist: container destroyed")) or forced destroyed (giving 500 Server Error: Internal Server Error ("Could not kill running container, cannot remove - [2] Container does not exist: container destroyed")).
The processes are still running on the host.
Sometimes, it works after restarting the docker daemon.
There are some workarounds:
I've tried all the remote API calls for that unkillable container and here are the results:
json, stats, changes, top, logs returned valid responses
stop, pause, wait, kill reported 404 (!)
After I finished with the remote API, I double-checked docker ps (the container was still there), but then I retried docker kill and it worked! The container got killed and I could remove it.
Or:
What worked was to restart boot2docker on my host. Then docker rm -f
$ boot2docker stop
$ boot2docker start
$ docker rm -f 1f061139ba04
Worth knowing:
If you are running an ENTRYPOINT script ... the script will work with the shebang
#!/bin/bash -x
but will prevent the container from stopping with
#!/bin/bash -xe
Enjoy
sudo aa-remove-unknown
This is what worked for me.
Check if there is any zombie process using the top command.
docker ps | grep <<container name>>
Get the container id.
ps -ef | grep <<container id>>
ps -ef | grep defunct | grep java
Then kill the container by its parent PID.
For anyone on a Mac who has Docker Desktop installed: I was able to just click the tray icon and choose Restart Docker. Once it restarted, I was able to delete the containers.
If you're on a Mac and try this via Terminal: Use killall Docker to quit Docker.
Restart it in the Applications folder or with open /Applications/Docker.app.
Subsequently you can run a docker rm <id> for the concerned container.
I had the same problem on a windows host machine and none of the other options here worked for me. I ended up just needing to delete the physical container folder, which was located here:
C:\ProgramData\Docker\containers\[container guid]
I stopped the docker service first just to be safe, and when I restarted it, the broken containers were gone and I was able to create new ones. I suspect the same will work on a Linux host machine, but I don't know where the container folders are kept on that OS.
Ubuntu
Stop the container by using its system process ID.
Get the main process ID using:
docker inspect -f '{{.State.Pid}}' container-id
This will return an id such as 25430.
Kill this with the command
sudo kill -9 25430
In my case, I couldn't delete a container created with nomad jobs;
there's no output from docker logs <ContainerID> and, in general, it looks frozen.
So far the only solution is sudo service docker restart. Can someone suggest a better one?
I forgot that I had made the container start as a system service,
so if I stopped or killed the container, the service would bring it back.
If you are using systemctl, you can list all the running services with systemctl | grep running and find the name of the service.
Then use
sudo systemctl disable --now <your_service_name> to stop it and keep it from coming back.
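For instance, a small sketch with systemd (the unit name my-app-container.service is hypothetical):
# find the unit that keeps resurrecting the container
systemctl list-units --type=service --state=running | grep -i docker
# stop it now and prevent it from starting again
sudo systemctl disable --now my-app-container.service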
If you're on Ubuntu, make sure docker-compose isn't installed as a snap. This will cause all kinds of random issues, including the above.
Remove the snap:
sudo snap remove docker-compose
And install manually from the compose repository:
Docker Compose installation instructions
Sometimes this is caused by a problem with the docker daemon.
I solved it by restarting the docker service.
On Linux:
systemctl restart docker
In my case, docker rm $(docker ps -aq) worked.