I'm working with dockerized applications and docker-compose.
Sometimes I just run docker-compose up, and other times I run a particular service with docker-compose run service1 /bin/bash.
I'm noticing that using it this way accumulates a lot of different images and containers.
For example:
docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 7579570fc0f6 3 minutes ago 2.07GB
<none> <none> 5c4dff8b6808 8 minutes ago 1.34GB
<none> <none> abf3cb89f2fa 9 minutes ago 1.34GB
<none> <none> 7592dcccab3b 9 minutes ago 1.27GB
<none> <none> da2be213241c 9 minutes ago 1.27GB
<none> <none> 52bbbc8b88c8 4 weeks ago 1.96GB
<none> <none> 77a6403fe043 4 weeks ago 1.96GB
<none> <none> 4845935c3110 4 weeks ago 1.23GB
<none> <none> 48bca82f00c9 4 weeks ago 1.23GB
<none> <none> 63d77ddad079 4 weeks ago 1.94GB
<none> <none> 6729473d9848 4 weeks ago 1.94GB
<none> <none> e6ef1c44689f 4 weeks ago 1.23GB
The output of docker container ls -a looks similar.
It doesn't feel right, am I missing some good practice here? Should I add something to the docker-compose.yml to prevent this?
Update:
I have seen the comments about handling dangling images, but my question here is actually how to prevent this.
For example, I know there's a way to run containers with a remove flag so they are destroyed after they stop. Is that a good practice? I don't see it in the guides, so I'm not using it; however, the current scenario has me worried about disk space...
Typically you use docker-compose up. Use up to start or restart all the services defined in a docker-compose.yml. In the default “attached” mode, you see all the logs from all the containers. In “detached” mode (-d), Compose exits after starting the containers, but the containers continue to run in the background.
The docker-compose run command is for running “one-off” or “ad hoc” tasks. It requires the service name you want to run and only starts containers for the services that the running service depends on. Use run to run tests or perform an administrative task such as removing or adding data to a data volume container. The run command acts like docker run -ti in that it opens an interactive terminal to the container and returns an exit status matching the exit status of the process in the container.
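One way to prevent part of this buildup: docker-compose run accepts a --rm flag (like docker run --rm) that removes the one-off container when it exits, and docker-compose down removes the containers and networks that up created. A minimal sketch, reusing service1 from the question:
docker-compose run --rm service1 /bin/bash
docker-compose down
The <none> images are a separate issue: they are usually old builds that lost their tag to a newer build of the same image, and they can be pruned as covered below.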
I have made a Dockerfile to build an image which is based on another Dockerfile. It uses Alpine Linux as a base.
Everything works fine, but when I view the images with the --all switch, I get multiple instances of what appears to be my image and the image for Alpine Linux which it is based on:
$ docker image ls -a
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 5577c25bccd9 20 hours ago 137MB
<none> <none> 48f944e609b5 20 hours ago 137MB
paradroid/myapp latest f2a0f99986a6 20 hours ago 137MB
<none> <none> d846706db3f4 20 hours ago 137MB
<none> <none> f1410e1d307e 20 hours ago 137MB
<none> <none> e52e6203864a 20 hours ago 137MB
<none> <none> dd3be52289ce 20 hours ago 5.59MB
<none> <none> 8559960a8bd0 20 hours ago 5.59MB
<none> <none> 04020d8307b3 20 hours ago 5.59MB
<none> <none> fe0f1e73261c 20 hours ago 5.59MB
<none> <none> 12229a67b395 20 hours ago 5.59MB
alpine latest cc0abc535e36 13 days ago 5.59MB
This does not happen when building the Dockerfile which it is based on.
docker system prune does not remove them and any attempt to delete them individually gives this response:
$ docker image rm d846706db3f4
Error response from daemon: conflict: unable to delete d846706db3f4 (cannot be forced) - image has dependent child images
How can I find out what is causing this?
The <none>:<none> images are the intermediate layers that result from your docker build.
A Docker image is composed of layers, and each instruction in your Dockerfile results in a layer. These layers are reused between different images, which makes efficient use of disk space. So if a layer is being used by another image, you won't be able to delete that layer.
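To see how an image decomposes into layers, you can inspect it with docker history. A quick sketch, using the image name from the question above:
docker history paradroid/myapp
Each row corresponds to a Dockerfile instruction, and rows showing <missing> or untagged IDs are exactly the intermediate layers described here.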
You can run the following command to remove the layers that are not referenced by any image. These layers are called dangling.
docker rmi $(docker images -f "dangling=true" -q)
https://www.projectatomic.io/blog/2015/07/what-are-docker-none-none-images/
Use
docker image rm -f imageid
or, for multiple images:
docker image rm -f imageid imageid
I have a dockerized application. When I run it through docker-compose up, it runs fine and appears in docker images. But when I try to start a minikube cluster with --vm-driver=none, the cluster gives an error and does not start. However, when I quit my docker application and start the minikube cluster again, the cluster starts successfully. But then I can't find the image of the docker application I had just run. Instead I find images like the ones below:
k8s.gcr.io/coredns 1.2.2 367cdc8433a4 5 weeks ago 39.2MB
k8s.gcr.io/kubernetes-dashboard-amd64 v1.10.0 0dab2435c100 5 weeks ago 122MB
k8s.gcr.io/kube-proxy-amd64 v1.10.0 bfc21aadc7d3 6 months ago 97MB
k8s.gcr.io/kube-controller-manager-amd64 v1.10.0 ad86dbed1555 6 months ago 148MB
k8s.gcr.io/kube-apiserver-amd64 v1.10.0 af20925d51a3 6 months ago 225MB
k8s.gcr.io/kube-scheduler-amd64 v1.10.0 704ba848e69a 6 months ago 50.4MB
Is this expected behavior? If so, what is the reason?
minikube start --vm-driver=none
Update: I am working in an Ubuntu VM.
From the Minikube documentation:
minikube was designed to run Kubernetes within a dedicated VM, and assumes that it has complete control over the machine it is executing on. With the none driver, minikube and Kubernetes run in an environment with very limited isolation, which could result in:
Decreased security
Decreased reliability
Data loss
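If that isolation matters in your environment, a sketch of starting minikube with a dedicated VM driver instead (this assumes VirtualBox is installed on the host):
minikube start --vm-driver=virtualbox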
It is not expected behavior for Minikube to delete your docker images. I have tried to reproduce your issue. I had a few docker images on my Ubuntu VM:
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest e445ab08b2be 13 days ago 126MB
busybox latest db8ee88ad75f 2 weeks ago 1.22MB
perl latest bbac4a88d400 3 weeks ago 889MB
alpine latest b7b28af77ffe 3 weeks ago 5.58MB
Later I tried to run minikube:
$ sudo minikube start --vm-driver=none
😄 minikube v1.2.0 on linux (amd64)
💡 Tip: Use 'minikube start -p <name>' to create a new cluster, or 'minikube delete' to delete this one.
...
⌛ Verifying: apiserver proxy etcd scheduler controller dns
🏄 Done! kubectl is now configured to use "minikube"
I still have all docker images and minikube is working as expected.
$ kubectl get pods -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-5c98db65d4-4vd2q 1/1 Running 8 21d
coredns-5c98db65d4-xjx22 1/1 Running 8 21d
etcd-minikube 1/1 Running 5 21d
...
$ sudo docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
nginx latest e445ab08b2be 13 days ago 126MB
busybox latest db8ee88ad75f 2 weeks ago 1.22MB
perl latest bbac4a88d400 3 weeks ago 889MB
alpine latest b7b28af77ffe 3 weeks ago 5.58MB
After exiting minikube, I still had all my docker images.
As you mentioned in the original thread, you used minikube start --vm-driver=none. If you use minikube start without sudo, you will receive an error like:
$ minikube start --vm-driver=none
😄 minikube v1.2.0 on linux (amd64)
💣 Unable to load config: open /home/$user/.minikube/profiles/minikube/config.json: permission denied
or, if you want to stop minikube without sudo:
$ minikube stop
💣 Unable to stop VM: open /home/$user/.minikube/machines/minikube/config.json: permission denied
😿 Sorry that minikube crashed. If this was unexpected, we would love to hear from you:
👉 https://github.com/kubernetes/minikube/issues/new
Please try using sudo with your minikube commands.
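As a quick check (a sketch, assuming the cluster was created as root as above), confirm the cluster state with the same privileges:
sudo minikube status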
Let me know if that helped. If not, please provide your error message.
When I run docker image ls, I see this:
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> c012c1e2eb45 4 hours ago 2.59GB
<none> <none> a2013debf751 5 hours ago 2.59GB
<none> <none> 0bbb2f67083c 5 hours ago 2.59GB
<none> <none> af18b070061b 29 hours ago 182MB
<none> <none> 186a8fac122e 35 hours ago 1.67GB
<none> <none> 0f90c1bb12a3 35 hours ago 182MB
<none> <none> b94009e70416 13 days ago 631MB
<none> <none> 880d8e6713cf 2 weeks ago 631MB
b/2018-external latest 128d208a6c83 2 weeks ago 207MB
2018-external latest 128d208a6c83 2 weeks ago 207MB
b/2018-web <none> 128d208a6c83 2 weeks ago 207MB
nginx latest 9e7424e5dbae 2 weeks ago 108MB
node 8-alpine 4db2697ce114 4 weeks ago 67.7MB
b_web latest d5a0ea011c0a 5 weeks ago 182MB
<none> <none> 957c22ababec 5 weeks ago 182MB
docker_web latest 70b443ed0495 5 weeks ago 182MB
docker_app latest 509d58a68224 5 weeks ago 756MB
b_app latest 509d58a68224 5 weeks ago 756MB
mysql 5.6 96dc914914f5 5 weeks ago 299MB
mysql latest 5fac85ee2c68 8 weeks ago 408MB
redis latest 1fb7b6c8c0d0 2 months ago 107MB
alpine 3.6 76da55c8019d 2 months ago 3.97MB
nginx 1.13.3-alpine ba60b24dbad5 5 months ago 15.5MB
keymetrics/pm2-docker-alpine 6 4a09bfc067d6 5 months ago 75.3MB
dockercloud/cli latest 051238cd0a37 6 months ago 64.2MB
andrewmclagan/nginx-hhvm latest ec6cc741eb0e 7 months ago 580MB
nginx 1.10 0346349a1a64 8 months ago 182MB
php 7.0.8-fpm 75b880f3a420 17 months ago 375MB
tutum/haproxy latest 33bc771bec1e 18 months ago 232MB
php 7.0.4-fpm 81d7a2fdc6dc 21 months ago 494MB
How do I know which images are safe to remove?
My intention is to remove all of them, but I am not sure if I should be more careful before doing that.
You can use docker image prune to remove "dangling" images: those that are not tagged (shown as <none>) and are not referenced by any running container.
I find docker image prune -a more useful. It will remove any image that is not used by a running container. In that sense, prune -a is a cleanup step you can take once your environment is running correctly.
Images referred to as dangling are safe to remove. Those are the images that don't have a tag. They result when a new build of an image takes over the tag, leaving the old image with the <none>:<none> tag.
Dangling images can be listed using docker images --filter "dangling=true" and can be removed by running docker image prune.
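If a blanket prune feels too aggressive, docker image prune accepts filters. A sketch that only removes dangling images older than 24 hours:
docker image prune --filter "until=24h"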
This command also works for me:
docker rmi $(docker images -f "dangling=true" -q)
Purging All Unused or Dangling Images, Containers, Volumes, and Networks:
Clean up any resources (images, containers, volumes, and networks) that are dangling (not associated with a container):
docker system prune
Remove any stopped containers and all unused images (not just dangling images):
docker system prune -a
Remove one or more specific images: docker rmi Image Image
Remove dangling images: docker image prune
Remove images according to a pattern: docker images -a | grep "pattern" | awk '{print $3}' | xargs docker rmi
Remove all images: docker rmi $(docker images -a -q)
Remove one or more specific containers: docker rm ID_or_Name ID_or_Name
Remove a container upon exit (run and remove): docker run --rm image_name
Remove all exited containers: docker rm $(docker ps -a -f status=exited -q)
Remove containers using more than one filter: docker rm $(docker ps -a -f status=exited -f status=created -q)
Stop and remove all containers:
docker stop $(docker ps -a -q)
docker rm $(docker ps -a -q)
Source article: DigitalOcean.
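Putting several of these together, a small cleanup script you might run periodically (a sketch; the -f flags skip the confirmation prompts, so review it before adopting):
#!/bin/sh
# Remove all stopped containers
docker container prune -f
# Remove dangling images (add -a to remove all unused images)
docker image prune -f
# Remove unused volumes and networks
docker volume prune -f
docker network prune -f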
I am trying to deploy an application to a server, using Meteor Up for deployment. The deployment config I use is the one from this post: Meteor-up terminates after running deploy.
This is one of the issues I have raised, and none of the responses have solved it: https://github.com/zodern/meteor-up/issues/734. After much troubleshooting, I ran docker images; it had tagged the mongodb and abernix/meteord images with names, but not the image for the app I want to deploy. Why? Since containers are created from images, it obviously cannot create the container when there is no tagged image to reference.
See this:
root@hostname:~# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
<none> <none> 06f352515b0a 41 minutes ago 659MB
<none> <none> e29de6b55976 About an hour ago 659MB
abernix/meteord base 7941fc48936e 2 months ago 520MB
mongo 3.4.1 0dffc7177b06 8 months ago 402MB
When I ran docker ps, it showed that only the mongo container had been created:
root@christdoes:~# docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2e7f21a50dfb mongo:3.4.1 "/entrypoint.sh mo..." 2 hours ago Up 2 hours 127.0.0.1:27017->27017/tcp mongodb
How do I resolve this issue?
I am unable to explicitly delete some untagged docker images, specifically those shown with the tag <none>.
If I run docker images -a, I get something like this:
<none> <none> 91e54dfb1179 3 months ago 188.3 MB
<none> <none> d74508fb6632 3 months ago 188.3 MB
<none> <none> c22013c84729 3 months ago 188.3 MB
<none> <none> d3a1f33e8a5a 3 months ago 188.1 MB
<none> <none> 4a5e6db8c069 3 months ago 125.1 MB
<none> <none> 2c49f83e0b13 3 months ago
However, when I type docker rmi -f 2c49f83e0b13 I get:
Error response from daemon: Conflict, 2c49f83e0b13 wasn't deleted
Error: failed to remove images: [2c49f83e0b13]
Any idea what could be the problem?
These might be intermediate docker images for some images I'm actually using. But if that's the case, the completed docker images should already contain those prior layers, so I shouldn't need those intermediate images to exist separately.
Make sure the image is actually dangling, meaning it is not referenced by any other image and is not the parent of another image:
docker images --filter "dangling=true" -q --no-trunc
If it is dangling (and should be removed), then there are a couple of pending bugs reporting the impossibility of deleting such images: issue 13625 and issue 12487.
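To find out which images depend on a given layer, one diagnostic sketch (it relies on the Parent field, which is only populated on older, non-content-addressable image stores):
docker inspect --format='{{.Id}} {{.Parent}}' $(docker images -a -q) | grep 2c49f83e0b13
Any line whose second column contains the ID is a direct child of that image.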
Any container using the image should be stopped and removed before you can remove the image:
docker rm $(docker ps -a -q)
Source : https://github.com/docker/docker/pull/6112
It happened to me as well, and restarting the Docker engine and all the containers using the image solved the issue.