On my remote server, several developers run containers from the same Docker image, named "my_account/analysis". So, once I have detached from my container, it is hard to tell which of the running containers is mine.
The result of docker ps is like this:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
6968e76b3746 my_account/analysis "bash" 44 hours ago Up 44 hours 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:8887->8887/tcp modest_jones
42d970206a29 my_account/analysis "bash" 7 days ago Up 7 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:32771->8885/tcp gallant_chandrasekhar
ac9f804b7fe0 my_account/analysis "bash" 11 days ago Up 11 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:8798->8798/tcp suspicious_mayer
e8e260aab4fb my_account/analysis "bash" 12 days ago Up 12 days 6023/tcp, 6073/tcp, 6800/tcp, 8118/tcp, 8888/tcp, 9050/tcp, 0.0.0.0:32770->8885/tcp nostalgic_euler
In this case, because I remember that I started my container around 2 days ago, I can attach to it with docker attach 6968e. Usually, however, we forget which one it was.
What is the best practice for finding my own container ID when there are many containers running from the same image?
The simple way is to name the containers:
docker run --name my-special-container my_account/analysis
docker attach my-special-container
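If you forget whether that container is still running, you can also look it up by name later (a quick sketch, reusing the example name from above):
docker ps --filter 'name=my-special-container'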
You can store the container ID in a file when it launches:
docker run --cidfile ~/my-special-container my_account/analysis
docker attach $(cat ~/my-special-container)
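Note that docker run will normally refuse to start if the file passed to --cidfile already exists, so when reusing the same path as above, remove the stale file first:
rm -f ~/my-special-container
docker run --cidfile ~/my-special-container my_account/analysis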
You can add more detailed metadata with object labels, but they are not as easily accessible as names:
docker run --label com.rkjt50r983.tag=special my_account/analysis
docker ps --filter 'label=com.rkjt50r983.tag=special'
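Combining the label filter with -q prints only the matching container IDs, which makes attaching a one-liner (a sketch, assuming exactly one running container carries this label):
docker attach $(docker ps -q --filter 'label=com.rkjt50r983.tag=special')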
I run docker ps and it shows 5 containers that have been running for three weeks.
I then run docker-compose down but when I run docker ps again, they are all still running.
I have tried the following commands but none of them seems to work:
kill
stop
down --rmi local
rm
down
How can I stop these? I tried just bringing up my new docker-compose.yml and ignoring the old one, but I get:
ERROR: for apache Cannot create container for service apache: Conflict. The container name "/apache" is already in use by container "70c570d60e1248292f279a37634fd8b4ce7e2535d2bfa14b2c6e4089652c0152". You have to remove (or rename) that container to be able to reuse that name.
What can I try to stop the old containers?
You can list containers:
(base) docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
c788727f0f7b postgres:14 "docker-entrypoint.s…" 7 days ago Up 7 days 0.0.0.0:5432->5432/tcp, :::5432->5432/tcp dev-db
88e8ddcb7d4e redis "docker-entrypoint.s…" 7 days ago Up 7 days 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp beautiful_neumann
Delete container:
(base) docker rm -f c788727f0f7b # container_id
c788727f0f7b
List containers again:
(base) docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
88e8ddcb7d4e redis "docker-entrypoint.s…" 7 days ago Up 7 days 0.0.0.0:6379->6379/tcp, :::6379->6379/tcp beautiful_neumann
As you can see, the container (c788727f0f7b) is gone: docker rm -f stops and removes it in one step.
Separately, if a container is only stopped (not removed), you can list it using:
docker container ls -f 'status=exited'
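For the name conflict in the error above, the same approach works with the container name instead of the ID, and docker container prune cleans up every stopped container in one go (a sketch; be sure you no longer need the old /apache container):
docker rm -f apache        # remove the conflicting container by name
docker container prune     # remove all stopped containers (asks for confirmation)
After that, docker-compose up should be able to recreate the apache service.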
I am new to Docker and just getting started. I pulled a basic ubuntu image and started a few containers with it and stopped them. When I run the command to list all the docker containers (even the stopped ones) I get an output like this:
> docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
099c42011f24 ubuntu:latest "/bin/bash" 6 seconds ago Exited (0) 6 seconds ago sleepy_mccarthy
dde61c10d522 ubuntu:latest "/bin/bash" 8 seconds ago Exited (0) 7 seconds ago determined_rosalind
cd1a6fa35741 ubuntu:latest "/bin/bash" 9 seconds ago Exited (0) 8 seconds ago unruffled_lichterman
ff926b6eba23 ubuntu:latest "/bin/bash" 10 seconds ago Exited (0) 10 seconds ago cool_rosalind
8bd50c2c4729 ubuntu:latest "/bin/bash" 12 seconds ago Exited (0) 11 seconds ago cranky_darwin
My question is, is there a reason why docker does not delete the stopped containers by default?
The examples you've provided show that you're using an Ubuntu container just to run bash. While this is a fairly common pattern when learning Docker, it's not what Docker is used for in production scenarios, which is what Docker cares about and optimizes for.
Docker is used to deploy an application within a container with a given configuration.
Say you spool up a database container to hold information about your application, and then your Docker host restarts for some reason. If containers were removed on stop by default, that database would simply disappear. That would be a disaster.
It's therefore much safer for Docker to assume that you want to keep your containers, images, volumes, and so on, unless you explicitly ask for them to be removed, or opt in to removal when you start them, for example with docker run --rm <image>.
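To illustrate the opt-in behaviour, here is a minimal sketch (the image and command are just examples): with --rm the container is removed as soon as it exits, so it never shows up in the stopped list.
docker run --rm ubuntu:latest echo "done and gone"
docker container ls -a     # the container from the previous command is no longer listed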
In my opinion, there may be several reasons. Consider the following scenario:
I build my image and start a container in a production environment. For some reason I stop the current container, make some changes to the image, and run another instance, so a new container with a new name is now running.
I then see that the new container does not work as expected. Since I still have the old container, I can start the old one again and stop the new one, so clients will not face any issues.
But what if containers were automatically deleted when they were stopped?
Simple answer: I would have lost my clients (maybe even my job) :) and one more person would have been added to the unemployed :D
As @msanford mentioned, Docker assumes you want to keep your data, volumes, etc., so you'll probably re-use them when needed.
Since Docker is used to deploy and run applications (even something as simple as WordPress with MySQL, although set up somewhat differently than on shared hosting), it's usually not used just to run bash.
Of course, it's fine when taking your first steps with Docker to run things like bash or sh to look at the contents of a container.
I made a goof while trying to rename an image by following the steps on this page, which say to create a tag and then delete the original:
Docker how to change repository name or rename image?
Now when I list the images it doesn't show up anymore. However, when I list the containers the image still shows up.
PS C:\Users\Grail> docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
ubuntu latest 157be28c0fe3 7 days ago 668MB
fedora latest a368cbcfa678 2 months ago 183MB
PS C:\Users\Grail> docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS
1ea5ffd50852 157be28c0fe3 "/bin/bash" 7 hours ago Exited (0) 7 hours ago
fb81990e756c 0d120b6ccaa8 "/bin/bash" 10 hours ago Exited (0) 24 minutes ago
081641b3e600 a368cbcfa678 "/bin/bash" 11 hours ago Exited (0) 31 minutes ago
Not only that, the image (0d120b6ccaa8) still shows up in my Docker Dashboard (running on Windows) and I can start/stop it without any problems.
Clearly the image still exists. Can I restore it such that I can see it when I list the images?
Can it be restored from the container?
If it's in a weird state/unrecoverable, how do I actually delete it so it's not taking up space?
Update:
Thanks to @prashanna I went down a path where I exported the container and imported it back to get the image:
docker export -o mycontainer.tar fb81990e756c
docker import mycontainer.tar
docker commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
is for creating a new image from a container, for example after you have updated its configuration or installed new software, thus creating a new template image.
ref: docker commit
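Applied to the question, a short sketch (the repository name my_account/restored is just an example; fb81990e756c is the container from the question's docker ps -a output): committing the container produces a tagged image that shows up in docker images again.
docker commit fb81990e756c my_account/restored:latest
docker images
Note that docker import mycontainer.tar without a REPOSITORY[:TAG] argument likewise creates an untagged <none> image, so it's worth supplying one there as well.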
I am a bit confused about the status of Docker containers, especially the CREATED status.
I know that when a container is in the running state it shows as below:
root@labadmin-VirtualBox:~/RAGHU/DOCKER# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1261afc2acc1 302fa07d8117 "/bin/bash" 43 minutes ago Up 43 minutes optimistic_thompson
And if the container is stopped, it shows as below:
root@labadmin-VirtualBox:~/RAGHU/DOCKER# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
935f902efac7 302fa07d8117 "/bin/bash" 44 minutes ago Exited (0) 44 minutes ago competent_golick
5eb1c2525e2e 302fa07d8117 "/bin/bash" 44 minutes ago Exited (0) 44 minutes ago friendly_saha
My confusion is: in what state does Docker show the status of a container as CREATED?
root@labadmin-VirtualBox:~/RAGHU/DOCKER# docker ps -a | grep -i created
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
01c63f92586b jenkins "/bin/tini -- /usr..." 5 weeks ago Created gloomy_jones
Docker status Created means that the container has been created from the image, but it has never been started.
This state can be reached in one of two ways:
The container was created with the docker create command (often done ahead of time to speed up a later container start).
The container was created with docker run, but it could not be started successfully.
For further information check docker create reference: https://docs.docker.com/engine/reference/commandline/create/
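A minimal sketch of the first case (the name warmup and the sleep command are just examples): docker create prepares the container, which then sits in the Created state until docker start is run.
docker create --name warmup ubuntu:latest sleep 60
docker ps -a --filter 'status=created'   # shows warmup with STATUS "Created"
docker start warmup                      # moves it to the Up state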
I noticed that Docker appears to be using a large amount of disk space. I can see the directory /Users/me/.docker/machine/machines/default is 27.4GB
I recently cleaned up all of the images and containers I wasn't using. Now, when I run docker ps I see that no containers are running.
~ $ docker ps
I can also see that I have 2 containers available.
~ $ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
42332s42d3 cds-abm "/bin/bash" 2 weeks ago Exited (130) 2 weeks ago evil_shockley
9ssd64ee41 tes-abm "/bin/bash" 2 weeks ago Exited (130) 2 weeks ago lonely_brattain
I can then see I have 3 images.
~ $ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
ghr/get latest 6708fffdd4dfa 2 weeks ago 2.428 GB
<none> <none> 96c5974ddse18 2 weeks ago 2.428 GB
fdbh/ere latest bf1da53766b1 2 weeks ago 2.225 GB
How can these be taking up nearly 30GB?
It is because the volumes created by your containers are not removed when you remove the containers. In the future, use -v when you remove a container:
docker rm -v <container-id>
Regarding cleaning up the space, you have to ssh into the docker-machine VM and remove all of the volumes that were created. To do so:
docker-machine ssh default
sudo -i # otherwise you may not have permission to enter the volumes directory
cd /var/lib/docker/volumes
rm -rf *
Make sure none of your containers are currently running. Also, make sure that you don't need any of these volumes for later use (for example, volumes used by database containers).
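On reasonably recent Docker versions there is also a built-in, safer alternative to deleting the directory by hand (a sketch; exact behaviour varies a little between versions, e.g. whether named volumes are pruned without an extra flag):
docker system df        # breakdown of space used by images, containers and volumes
docker volume ls        # review the volumes before deleting anything
docker volume prune     # removes local volumes not used by any container (asks for confirmation)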