I build .NET projects based on the sdk and aspnet Docker images.
When building images on my PC, a large build cache of about 20 GB accumulates (roughly 3-5 image builds), and once the limit set in Docker Desktop is reached, no image can be built at all.
I keep an eye on disk usage with docker system df and reclaim space with docker system prune -a.
Today I found the command docker builder prune -f, which removes only the build cache, but...
Docker then downloads both the sdk and aspnet images again, and the build takes about 18 minutes instead of 9.
Each build produces 4 images of ~250 MB each, yet after a single build the cache reaches 3 GB:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 4 0 427.9MB 427.9MB (100%)
Containers 0 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 80 0 2.452GB 2.452GB
docker images -a
REPOSITORY TAG IMAGE ID CREATED SIZE
webuicontrol test 820995769b28 4 minutes ago 303MB
sqlservicecontrol test 42534d41499f 5 minutes ago 242MB
opcuacontrol test 234ab5c5a120 5 minutes ago 242MB
sharecontrol test ade90fad5138 6 minutes ago 265MB
The question is: how do I keep the cache from growing? Why does the cache grow every time the images are rebuilt? Any tips would be appreciated.
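One option, assuming BuildKit (the default builder in current Docker Desktop), is to cap the cache size instead of wiping it completely, so the base-image layers stay cached and the sdk/aspnet images are not re-downloaded. A sketch:
# Trim the build cache to at most 2 GB, dropping the least recently used entries:
docker builder prune --keep-storage 2GB -f
# Or let BuildKit garbage-collect on its own by adding this to daemon.json
# (Docker Desktop: Settings > Docker Engine):
#   "builder": { "gc": { "enabled": true, "defaultKeepStorage": "2GB" } }
The cache grows on every rebuild because each changed source file invalidates a layer, and BuildKit stores the new result alongside the old one; old entries are only removed when a garbage-collection limit like the above is reached.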
I want to run a container but receive an error:
docker run -ti --rm grafana/promtail:2.5.0 -config.file=/etc/promtail/config.yml
docker: Error response from daemon: mkdir /var/lib/docker/overlay2/0cad6a6645e2445a9985d5c9e9c6909fa74ee1a30425b407ddfac13684bd9d31-init: no space left on device.
At first, I thought I had a lot of cached volumes and images, so I cleaned Docker with:
docker system prune
docker builder prune
But after a while, the same error occurred. When I check my Docker Desktop configuration, I can see I am using all of the available disk size for images:
Disk image size:
59.6 GB (59.5 GB used)
I have 13 images on my system, and together they are less than 5 GB:
REPOSITORY TAG IMAGE ID CREATED SIZE
logstashloki latest 157966144f3b 3 days ago 761MB
minio/minio <none> 717586e37f7f 4 days ago 232MB
grafana/grafana <none> 31a8875955e5 9 days ago 277MB
docker.elastic.co/beats/filebeat 8.3.2 e7b210caf528 3 weeks ago 295MB
k8s.gcr.io/kube-apiserver v1.24.0 b62a103951f4 2 months ago 126MB
k8s.gcr.io/kube-scheduler v1.24.0 b81513b3bfb4 2 months ago 50MB
k8s.gcr.io/kube-controller-manager v1.24.0 59fad34d4fe0 2 months ago 116MB
k8s.gcr.io/kube-proxy v1.24.0 66e1443684b0 2 months ago 106MB
k8s.gcr.io/etcd 3.5.3-0 a9a710bb96df 3 months ago 178MB
grafana/promtail 2.5.0 aa21fd577ae2 3 months ago 177MB
grafana/loki 2.5.0 369cbd28ef9b 3 months ago 60MB
k8s.gcr.io/pause 3.7 e5a475a03805 4 months ago 514kB
k8s.gcr.io/coredns/coredns v1.8.6 edaa71f2aee8 9 months ago 46.8MB
The output of docker system df shows no suspicious sizes for containers, images, or volumes:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 13 13 2.35GB 69.57MB (2%)
Containers 21 21 35.15kB 0B (0%)
Local Volumes 2 0 2.186MB 2.186MB (100%)
Build Cache 20 0 0B 0B
I am new to macOS and cannot determine what is taking all my space, how to reclaim it, or where all that data is stored on the system.
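On Docker Desktop for Mac, all of this data (images, containers, volumes, build cache) lives inside a single virtual disk file, which is why it does not show up as ordinary files. Assuming the default installation path, you can compare its apparent and actual size like this:
# Apparent size (the configured "Disk image size" limit):
ls -lh ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw
# Actual space allocated on the host disk (the file is sparse):
du -h ~/Library/Containers/com.docker.docker/Data/vms/0/data/Docker.raw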
I have tried following the advice here, specifically:
ran docker system prune, which freed about 6 GB
increased the Disk image size in Docker Desktop preferences to 64 GB (43 GB used)
but I am still seeing this when running skaffold:
exiting dev mode because first build failed: couldn't build "user/orders": docker build: Error response from daemon: Error processing tar file(exit status 1): write /tsconfig.json: no space left on device
Another run of skaffold gave me this on another occasion:
exiting dev mode because first build failed: couldn't build "user/orders": unable to stream build output: failed to create rwlayer: mkdir /var/lib/docker/overlay2/7c6618702ad15fe0fa7d4655109aa6326fb4f954df00d2621f62d66d7b328ed9/diff: no space left on device
Also, when running docker system df, I see this:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 10 0 28.86GB 28.86GB (100%)
Containers 0 0 0B 0B
Local Volumes 30 0 15.62GB 15.62GB (100%)
Build Cache 0 0 0B 0B
I also have about 200GB of physical hard drive space available.
I'm hoping I don't have to manually run rm * as proposed here, which was for a Linux distro.
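Since docker system df reports both the images (28.86 GB) and the local volumes (15.62 GB) as 100% reclaimable, one full cleanup pass should free roughly 44 GB. A sketch, assuming nothing listed is still needed:
# Removes all stopped containers, unused networks, all unused images, and unused volumes:
docker system prune -a --volumes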
If you're running on a Mac and have 200 GB of physical space free, will increasing the disk image size actually help you?
I'm having an issue running Docker on a very powerful AWS Linux (Ubuntu) instance.
When I attempt to pull and extract an image, I get the following error:
docker: failed to register layer: Error processing tar file(exit status 1): write /opt/conda/lib/libmkl_mc3.so: no space left on device.
I'd like to increase the disk space available to Docker so this file can download (there's plenty of space on the machine as a whole), but I'm unsure how to do this. I've trawled through a number of similar problems on here and none of the provided solutions have worked for me.
Any advice would be appreciated.
Output of docker system df:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 0 0 0B 0B
Containers 0 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 0 0 0B 0B
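One common approach on Linux is to move Docker's storage root to a larger volume with the data-root daemon option. A sketch, where /mnt/bigdisk is just an example mount point:
# 1. Add to /etc/docker/daemon.json:  { "data-root": "/mnt/bigdisk/docker" }
sudo systemctl stop docker
sudo rsync -aP /var/lib/docker/ /mnt/bigdisk/docker/   # optionally carry over existing data
sudo systemctl start docker
docker info | grep "Docker Root Dir"                   # verify the new location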
I have Docker version 18.03.1-ce, which supports the command docker system df. Its output:
Images space usage:
REPOSITORY TAG IMAGE ID CREATED SIZE SHARED SIZE UNIQUE SIZE CONTAINERS
registry.gitlab.com/precisesale/app latest b7833546c2cf About an hour ago 252.1MB 123.8MB 128.4MB 1
healthdiary/app latest 565c6d3906e6 2 days ago 312.2MB 123.8MB 188.4MB 1
mongo latest f93ff881751f 5 days ago 367.6MB 0B 367.6MB 2
nginx latest b175e7467d66 6 weeks ago 108.9MB 0B 108.9MB 1
jwilder/docker-gen latest 8959ee34c769 2 months ago 19.91MB 4.148MB 15.77MB 1
jrcs/letsencrypt-nginx-proxy-companion latest 17939ceb7a52 2 months ago 86.86MB 4.148MB 82.71MB 1
Containers space usage:
CONTAINER ID IMAGE COMMAND LOCAL VOLUMES SIZE CREATED STATUS NAMES
c20dc3438552 healthdiary/app "./entrypoint.sh nod…" 0 0B 8 minutes ago Up 8 minutes healthdiary_app_1
bf8c4307dcbb mongo:latest "docker-entrypoint.s…" 1 0B 8 minutes ago Up 8 minutes healthdiary_mongo_1
47fced8d18fe registry.gitlab.com/precisesale/app "./entrypoint.sh nod…" 0 0B 9 minutes ago Up 9 minutes precisesale_app_1
597d97d5c1fa mongo:latest "docker-entrypoint.s…" 1 0B 9 minutes ago Up 9 minutes precisesale_db_1
b5bb14faa910 jwilder/docker-gen "/usr/local/bin/dock…" 0 0B 7 hours ago Up 19 minutes nginx-gen
8eee2bee084a nginx "nginx -g 'daemon of…" 0 2B 7 hours ago Up 19 minutes nginx-web
6b8b0cd5d938 jrcs/letsencrypt-nginx-proxy-companion "/bin/bash /app/entr…" 0 1.66kB 7 hours ago Up 19 minutes nginx-letsencrypt
Local Volumes space usage:
VOLUME NAME LINKS SIZE
0a40fac6ca98e776dad972c8193362a51a485b3305979e58996545d97310a3c7 1 0B
929b0b88849ad4d390efd4666e6a0e5f82e0e6dd34f7a09f609de90b190e6148 1 0B
Build cache usage: 0B
Even if I do not take into account the savings from the shared space of the first two images, the total size is 1147.5 MB.
But if I measure the size of Docker's overlay2 directory on disk with du, I get:
du -hs /var/lib/docker/overlay2/
2.7G /var/lib/docker/overlay2/
What is the reason for the difference between the container sizes measured by docker system df and by du?
I was wondering the same thing some time ago.
It’s not a bug, it’s a feature :-)
du -sh /var/lib/docker/overlay2
does not show an objective value: the containers' merged folders are mounted with the overlay driver, so du counts the same shared lower layers once per container, which is not the actual disk allocation.
You can see the actual disk allocation size by examining only the diff folders:
du -shc /var/lib/docker/overlay2/*/diff
You can test this in your environment. First run:
df -h /dev/sd*
du -shc /var/lib/docker/overlay2/*/diff
du -sh /var/lib/docker/overlay2
Now start 20 CentOS containers and observe what has changed:
for i in {1..20}; do docker run -itd centos bash; done
df -h /dev/sd*
du -shc /var/lib/docker/overlay2/*/diff
du -sh /var/lib/docker/overlay2
You can see that the actual disk allocation (the df command) is only about 200 MB more than before, but du on the whole folder reports 4.2 GB.
du on the diff folders shows 212 MB, which is correct.
This is how Docker works and what makes it great!
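You can also see the double counting directly: every running container adds one overlay mount whose merged directory re-exposes the same shared lower layers, so du counts those layers once per container:
mount -t overlay   # one line per running container, each with lowerdir= pointing at the shared image layers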
I need to mount the host directory /data into 6 containers: h1, h2, h3, h4, h5, h6.
/data is an external hard disk mounted on the host. The 6 containers can be started and stopped easily.
Each of the 6 containers will work in its own sub-directory of /data, analyzing data independently and producing new data locally. The sub-directories have nothing to do with each other.
A related question is here, but no accepted answer is given.
How can I do that? Below are the containers and images I have now.
$ docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d9bd9334a1e7 ubuntu "/usr/bin/bash" 19 hours ago Up 18 hours h6
23679fe7252b ubuntu "/usr/bin/bash" 19 hours ago Up 18 hours h5
e2864e38e746 ubuntu "/usr/bin/bash" 19 hours ago Up 18 hours h4
c8996a304638 ubuntu "/usr/bin/bash" 19 hours ago Up 18 hours h3
9acd2a223d86 ubuntu "/usr/bin/bash" 19 hours ago Up 18 hours h2
5690b8c7b6da ubuntu "/usr/bin/bash" 2 days ago Up 12 hours h1
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
docker.io/hello-world latest f2a91732366c 2 months ago 1.85 kB
docker.io/ubuntu 27 422dc563ca32 2 months ago 252 MB
docker.io/ubuntu latest 422dc563ca32 2 months ago 252 MB
If those containers are already running, you cannot easily add /data to them.
Except maybe with docker cp.
But the best practice remains either:
make images with /data already baked in (Dockerfile ADD or COPY),
or use the existing image and launch your containers with the -v (volume) option, as sketched below; see "Use volumes" in the Docker docs.
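A minimal sketch for this case, recreating the six containers with one bind mount each; the h1…h6 sub-directory names under /data are an assumption chosen to match the container names, so adjust them to the actual layout:
for i in 1 2 3 4 5 6; do
  docker rm -f "h$i"                                    # remove the old container
  docker run -itd --name "h$i" -v "/data/h$i:/data" ubuntu bash
done
Each container then sees only its own sub-directory as /data, which also keeps the containers isolated from one another.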