I'm having an issue running Docker on a very powerful AWS Linux (Ubuntu) instance.
When I attempt to pull a Docker image, layer extraction fails with the following error:
docker: failed to register layer: Error processing tar file(exit status 1): write /opt/conda/lib/libmkl_mc3.so: no space left on device.
I'd like to increase the amount of space Docker has to work with so this image can be pulled (there's plenty of space on the machine as a whole), but I'm unsure how to do this. I've trawled through a number of similar questions here, and none of the proposed solutions have worked for me.
Any advice would be appreciated.
Output of docker system df:
TYPE            TOTAL   ACTIVE  SIZE    RECLAIMABLE
Images          0       0       0B      0B
Containers      0       0       0B      0B
Local Volumes   0       0       0B      0B
Build Cache     0       0       0B      0B
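By default Docker stores all images and layers under its data root (usually /var/lib/docker), which on many cloud images sits on a small root filesystem even when the machine has large volumes elsewhere. A minimal sketch of checking where the space is going and relocating the data root to a bigger mount; /data here is a hypothetical mount point for the larger volume:

```shell
# Where does Docker keep its data, and how full is that filesystem?
docker info --format '{{ .DockerRootDir }}'   # typically /var/lib/docker
df -h /var/lib/docker

# Relocate the data root to a larger volume (assumes /data is a big mount)
sudo systemctl stop docker
sudo mkdir -p /data/docker
sudo rsync -a /var/lib/docker/ /data/docker/
echo '{ "data-root": "/data/docker" }' | sudo tee /etc/docker/daemon.json
sudo systemctl start docker
docker info --format '{{ .DockerRootDir }}'   # should now print /data/docker
```

After verifying the daemon works from the new location, the old /var/lib/docker contents can be removed to free the root volume.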
I build .NET projects based on the sdk and aspnet Docker images.
When building images on a PC, a large build cache of about 20 GB (roughly 3-5 image builds) accumulates, and once the limit in Docker Desktop's settings is reached, no image can be built.
I keep an eye on disk usage with docker system df, then run docker system prune -a.
Today I found the command docker builder prune -f, which removes only the cache, but...
But then Docker pulls both the sdk and aspnet images again, and the build takes about 18 minutes instead of 9.
A build produces 4 images of ~250 MB each, yet the cache after a single build reaches 3 GB:
TYPE            TOTAL   ACTIVE  SIZE      RECLAIMABLE
Images          4       0       427.9MB   427.9MB (100%)
Containers      0       0       0B        0B
Local Volumes   0       0       0B        0B
Build Cache     80      0       2.452GB   2.452GB
docker images -a
REPOSITORY          TAG    IMAGE ID       CREATED         SIZE
webuicontrol        test   820995769b28   4 minutes ago   303MB
sqlservicecontrol   test   42534d41499f   5 minutes ago   242MB
opcuacontrol        test   234ab5c5a120   5 minutes ago   242MB
sharecontrol        test   ade90fad5138   6 minutes ago   265MB
The question is: how do I keep the cache from growing?
Why does the cache grow every time an image is rebuilt?
Any tips would be appreciated.
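BuildKit keeps every intermediate layer it has ever produced until something evicts it, so each rebuild adds new cache entries on top of the old ones. Rather than wiping everything with docker builder prune -f (which is what forces the slow 18-minute rebuild afterwards), you can cap the cache at a size budget. A sketch, assuming a 5GB budget is acceptable for your builds:

```shell
# Keep the most recently used cache up to a size budget; entries beyond
# 5GB are evicted oldest-first, so recent layers stay warm.
docker builder prune --force --keep-storage 5GB

# To make the cap permanent, set the builder GC policy in
# /etc/docker/daemon.json (Docker Desktop: Settings > Docker Engine):
#   { "builder": { "gc": { "enabled": true, "defaultKeepStorage": "5GB" } } }
```

With a cap in place the base sdk and aspnet layers stay cached, so builds keep the ~9-minute path while the cache stops growing without bound.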
I had tried following the advice here, specifically:
ran docker system prune, which freed about 6GB
increased the disk image size in Docker Desktop's preferences to 64 GB (43 GB used)
but I'm still seeing this when running skaffold: exiting dev mode because first build failed: couldn't build "user/orders": docker build: Error response from daemon: Error processing tar file(exit status 1): write /tsconfig.json: no space left on device. Another run of skaffold gave me this on another occasion:
exiting dev mode because first build failed: couldn't build "user/orders": unable to stream build output: failed to create rwlayer: mkdir /var/lib/docker/overlay2/7c6618702ad15fe0fa7d4655109aa6326fb4f954df00d2621f62d66d7b328ed9/diff: no space left on device
Also, when running docker system df, I see this:
TYPE            TOTAL   ACTIVE  SIZE      RECLAIMABLE
Images          10      0       28.86GB   28.86GB (100%)
Containers      0       0       0B        0B
Local Volumes   30      0       15.62GB   15.62GB (100%)
Build Cache     0       0       0B        0B
I also have about 200GB of physical hard drive space available.
I'm hoping I don't have to manually run rm * as proposed here, which was for a Linux distro.
If you're running on a Mac and have 200GB free on the host drive, would increasing the Docker Desktop disk image size actually help you?
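For what it's worth, the docker system df output above shows the space is almost all reclaimable: 28.86GB of images plus 15.62GB of local volumes, and a plain docker system prune does not touch volumes at all. A sketch of a fuller cleanup (note that --volumes permanently deletes data in unused volumes, so check first that nothing in them is needed):

```shell
# 28.86GB (images) + 15.62GB (volumes) ≈ 44.48GB reclaimable.
# A plain `docker system prune` leaves volumes alone; add --volumes
# explicitly. This permanently deletes data in unused volumes.
docker system prune --all --volumes --force
docker system df   # RECLAIMABLE should now drop to (near) zero
```

On a Mac the 200GB of free physical disk doesn't help by itself, since everything lives inside Docker Desktop's fixed-size disk image; freeing space inside it is what matters.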
Good morning all,
In the process of trying to teach myself Docker, I've run into trouble.
I created a Docker container from a WordPress image via docker compose.
[root@vps672971 ~]# docker ps -a
CONTAINER ID   IMAGE              COMMAND                  CREATED        STATUS   PORTS                  NAMES
57bb123aa365   wordpress:latest   "docker-entrypoint.s…"   16 hours ago   Up 2     0.0.0.0:8001->80/tcp   royal-by-jds-wordpress-container
I would like to allocate more memory to this container; however, after executing the following command, the information returned by docker stats is not correct.
docker container update --memory 3GB --memory-swap 4GB royal-by-jds-wordpress-container
docker stats
CONTAINER ID   NAME                               CPU %   MEM USAGE / LIMIT     MEM %   NET I/O       BLOCK I/O   PIDS
57bb123aa365   royal-by-jds-wordpress-container   0.01%   9.895MiB / 1.896GiB   0.51%   2.68kB / 0B   0B / 0B     6
I also tried querying the Engine API to retrieve information about my container, but the limit it reports isn't correct either.
curl --unix-socket /var/run/docker.sock http://localhost/v1.21/containers/royal-by-jds-wordpress-container/stats
[...]
"memory_stats":{
"usage":12943360,
"max_usage":12955648,
"stats":{},
"limit":2035564544
},
[...]
It seems the change to the container's memory allocation didn't take effect.
Anyone have an idea?
Thank you in advance.
Maxence
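One thing worth checking: the 1.896GiB limit shown is suspiciously close to a 2GB Docker Desktop / VM memory cap, and a container can never be granted more memory than the VM the daemon runs in actually has. A sketch for comparing what the daemon recorded against what the kernel is actually enforcing (cgroup v1 path shown; under cgroup v2 read /sys/fs/cgroup/memory.max instead):

```shell
# What limit does the daemon think it applied? 3GB = 3221225472 bytes.
docker inspect --format '{{ .HostConfig.Memory }}' royal-by-jds-wordpress-container

# What limit is actually enforced in the container's cgroup?
docker exec royal-by-jds-wordpress-container \
  cat /sys/fs/cgroup/memory/memory.limit_in_bytes
```

If the cgroup value is stuck near 2GiB, raising the VM's memory allocation in the Docker settings (then re-running the update) is the likely fix.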
I have an application that calls pvcreate each time it runs.
I can see the volumes in my VM as follows:
$ pvscan
PV /dev/vda5 VG ubuntu-vg lvm2 [99.52 GiB / 0 free]
Total: 1 [99.52 GiB] / in use: 1 [99.52 GiB] / in no VG: 0 [0 ]
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5'
Can't initialize physical volume "/dev/vda5" of volume group "ubuntu-vg" without -ff
$ pvcreate --metadatasize=128M --dataalignment=256K '/dev/vda5' -ff
Really INITIALIZE physical volume "/dev/vda5" of volume group "ubuntu-vg" [y/n]? y
Can't open /dev/vda5 exclusively. Mounted filesystem?
I have also tried wipefs and observed the same result for the above commands:
$ wipefs -af /dev/vda5
/dev/vda5: 8 bytes were erased at offset 0x00000218 (LVM2_member): 4c 56 4d 32 20 30 30 31
How can I get pvcreate to run successfully?
Does anything need to be added to my VM?
It seems your disk (/dev/vda5) is already in use by your ubuntu-vg volume group. I think you cannot use the same partition in two different PVs, nor add it again.
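To expand on the answer: pvcreate only succeeds on a device that is not already part of a volume group and is not backing a mounted filesystem, which is why even -ff fails here ("Can't open /dev/vda5 exclusively. Mounted filesystem?"). A sketch of confirming what holds /dev/vda5, and initializing a genuinely free device instead; /dev/vdb is a hypothetical newly attached empty disk:

```shell
# Show that /dev/vda5 is already the PV behind ubuntu-vg, and that
# logical volumes carved from it are mounted (so it can't be reused):
pvs -o pv_name,vg_name /dev/vda5
lsblk /dev/vda5

# pvcreate works on a device with no VG and no mounted filesystem,
# e.g. a newly attached empty disk:
pvcreate --metadatasize=128M --dataalignment=256K /dev/vdb   # 256K = 262144-byte alignment
```

If the application must call pvcreate, point it at a spare disk or partition added to the VM rather than at the partition the running OS lives on.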
CONTAINER      CPU %   MEM USAGE / LIMIT     MEM %   NET I/O       BLOCK I/O     PIDS
48c16e180af6   0.20%   91.48MiB / 31.31GiB   0.29%   3.86kB / 0B   85.3MB / 0B   33
f734efe5a249   0.00%   472KiB / 31.31GiB     0.00%   3.97kB / 0B   12.3kB / 0B   1
165a7b031093   0.00%   480KiB / 31.31GiB     0.00%   9.49kB / 0B   3.66MB / 0B   1
Does anyone know how to get the resource consumption of a specific Docker container from within its own running environment?
From outside a container, we can get it easily with the docker stats command. But if I try to measure resource consumption from inside a container, I get the consumption (RAM, CPU) of the physical machine the container runs on.
Another option is the htop command, but its results don't match what docker stats reports.
If you want the processes' consumption inside the container, you can go into the container and monitor the processes:
docker exec -it <container-name> watch ps -aux
Note that with the above command, the container only sees its own processes and doesn't know about any Docker processes running on the host.
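Alternatively, a container can read its own cgroup accounting from inside, which is essentially where docker stats gets its numbers, rather than the host-wide figures htop shows. A sketch assuming cgroup v1 paths (on cgroup v2 hosts the files are memory.current, memory.max, and cpu.stat):

```shell
# Run inside the container: its own memory and CPU accounting.
cat /sys/fs/cgroup/memory/memory.usage_in_bytes   # bytes currently used
cat /sys/fs/cgroup/memory/memory.limit_in_bytes   # configured limit
cat /sys/fs/cgroup/cpuacct/cpuacct.usage          # total CPU time, ns
```

Sampling cpuacct.usage twice and dividing the delta by wall-clock time gives a CPU% comparable to what docker stats prints.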