I've got a huge problem: all my inodes seem to be used up.
I've cleaned all unused volumes and cleaned all containers and images with the command -> docker prune, but it still seems to stay full:
Filesystem Inodes IUsed IFree IUse% Mounted on
none 3200000 3198742 1258 100% /
tmpfs 873942 16 873926 1% /dev
tmpfs 873942 13 873929 1% /sys/fs/cgroup
/dev/sda1 3200000 3198742 1258 100% /images
shm 873942 1 873941 1% /dev/shm
tmpfs 873942 1 873941 1% /sys/firmware
docker info
Containers: 5
Running: 3
Paused: 0
Stopped: 2
Images: 23
Server Version: 17.06.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 53
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 6.668GiB
Name: serveur-1
ID: CW7J:FJAH:S4GR:4CGD:ZRWI:EDBY:AYBX:H2SD:TWZO:STZU:GSCX:TRIC
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
The only thing I can think of that could cause this is a build I'm doing on this machine.
This build runs an npm install with many files.
Can these files stay on the server?
Is there any chance I have to delete these temporary files?
Are there any dangling volumes left in the system? Dangling volumes can fill up your disk space.
List all dangling volumes
docker volume ls -q -f dangling=true
Remove all dangling volumes
docker volume rm `docker volume ls -q -f dangling=true`
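On Docker 1.13 and newer, the same cleanup is also available as a single built-in command; it asks for confirmation and only removes local volumes not used by any container:
docker volume prune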
Found the error:
this seems to be a Docker 17.06.1-ce bug.
This version does not correctly delete images and keeps files in /var/lib/docker/aufs/mnt/.
So just upgrade to a newer Docker version and this will be fine.
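Before upgrading, you can confirm that leftover aufs layer files are what is eating the inodes - a rough sketch (du --inodes needs GNU coreutils 8.22 or newer):
# total number of files (inodes) held under Docker's aufs directories
sudo find /var/lib/docker/aufs -xdev -type f | wc -l
# per-layer breakdown, biggest consumers last
sudo du --inodes -s /var/lib/docker/aufs/diff/* | sort -n | tail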
Now df shows me:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/sda1 51558236 3821696 45595452 8% /
udev 10240 0 10240 0% /dev
tmpfs 1398308 57696 1340612 5% /run
tmpfs 3495768 0 3495768 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 3495768 0 3495768 0% /sys/fs/cgroup
This is better :)
I had the same problem. Had Jenkins running inside Docker with a volume attached to it. After a few weeks Jenkins told me "npm WARN tar ENOSPC: no space left on device". After some googling I found out that all inodes are taken with sudo df -ih. With sudo find . -xdev -type f | cut -d "/" -f 2 | sort | uniq -c | sort -n I could locate the folder using up all the inodes and it was a certain build with npm. Deleted that folder and now I'm good to go again.
This can also be an effect of a lot of stopped containers, for example if there's a cron job that runs a container and the docker run command line used does not include --rm - in that case, every cron invocation will leave a stopped container on the filesystem.
In this case, the output of docker info will show a high number under Server -> Containers -> Stopped.
To cure this:
docker container prune
Add --rm to your docker run command line in the cron job.
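For example, a crontab entry could look roughly like this (the schedule and image name are made up for illustration):
# run the job hourly; --rm removes the container as soon as it exits
0 * * * * docker run --rm my-job-image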
In my case it was dangling build cache, because removing dangling images did not solve the issue.
This cache can be removed with the following command: docker system prune --all --force - but be careful, you may still need some of your volumes or images.
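If you only want to drop the build cache and keep your images, containers and volumes, a narrower option (assuming Docker 18.09 or newer with BuildKit) is:
# removes dangling build cache only; add --all to clear the whole cache
docker builder prune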
In my case, pods with ingress-nginx and ModSecurity active were creating a lot of directories and files in the container storage (the ModSecurity structure under /var/lib/docker/overlay2/...) after more than 80 days of execution.
Restarting the pods removes the problem.
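In a Kubernetes setup like this, the restart can be done as a rolling restart of the deployment - the deployment and namespace names below are only examples for a typical ingress-nginx install:
kubectl rollout restart deployment ingress-nginx-controller -n ingress-nginx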
This case can be generalized to other situations where a container's internal storage is not kept under control.
Related
I am trying to build the nodemcu firmware with Docker on a Windows 10 system.
When I try to build the nodemcu firmware, I get this error:
(...)
PRUNE libmain.a libc.a
/opt:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/nodemcu-firmware/tools/toolchains/esp8266-linux-x86_64-20190731.0/bin
xtensa-lx106-elf-ar: /opt/nodemcu-firmware/sdk/esp_iot_sdk_v3.0-e4434aa/lib/libc.a: No space left on device
Makefile:334: recipe for target '/opt/nodemcu-firmware/sdk/.pruned-3.0-e4434aa' failed
make: *** [/opt/nodemcu-firmware/sdk/.pruned-3.0-e4434aa] Error 1
make: Leaving directory '/opt/nodemcu-firmware'
I tried docker system prune but the error persists.
I ran docker info but it doesn't help me:
Client:
Debug Mode: false
Server:
Containers: 1
Running: 0
Paused: 0
Stopped: 1
Images: 12
Server Version: 19.03.1
Storage Driver: overlay2
Backing Filesystem: extfs
Supports d_type: true
Native Overlay Diff: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host ipvlan macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 894b81a4b802e4eb2a91d1ce216b8817763c29fb
runc version: 425e105d5a03fabd737a126ad93d62a9eeede87f
init version: fec3683
Security Options:
seccomp
Profile: default
Kernel Version: 4.14.131-linuxkit
Operating System: Docker Desktop
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.419GiB
Name: docker-desktop
ID: RCEX:VYON:IG6T:AXCF:CSLX:3NVK:V453:DTSP:HW4Y:EKBY:PCVW:2UOJ
Docker Root Dir: /var/lib/docker
Debug Mode: true
File Descriptors: 28
Goroutines: 44
System Time: 2020-12-21T02:04:22.387341144Z
EventsListeners: 1
Registry: https://index.docker.io/v1/
Labels:
Experimental: true
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
Product License: Community Engine
When I execute docker system df, I get this result:
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 4 1 673.8MB 209.8MB (31%)
Containers 1 0 0B 0B
Local Volumes 10 0 0B 0B
Build Cache 0 0 0B 0B
I am new to Docker and I don't know what to do from here. Can anyone help me?
Docker tends to need cleanup after you have worked with it for a while. There may be resources created in the past that are no longer in use; this is especially true if you are making use of volumes.
Sometimes images are not fully created and end up as untagged dangling images, which for the most part are undesirable. So you have to remove these leftovers that consume space.
1. docker image prune
docker image prune removes all dangling images.
2. docker volume prune
docker volume prune removes all volumes which are not attached to any container. This should always be run before pruning containers; otherwise the stopped containers get removed first, their volumes become unattached, and a later volume prune would delete volumes you may still want (container prune removes every container that is not running at the time the command is run). For the most part, in my experience, volumes are what consume the most space.
3. docker container prune
For the most part this is optional and you can choose whether to do it or not.
Even after deleting dangling images there might still be other images that don't get deleted but that you no longer need - for example, if you had an older version of the redis image and later downloaded the most recent version of it, you may want to remove the older one if you no longer use it. You will have to delete these manually. Use the following commands:
docker image ls
docker rmi <image id>
The first command gives you a list of all the images you have downloaded, and the second command deletes the one you specify. Sometimes you may have to use the -f flag on the delete command to force the deletion of an image:
docker rmi -f <image id>
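If you prefer to do all of the above in one shot, newer Docker versions (17.09+) also accept a volumes flag on system prune - read the confirmation prompt carefully, since unused volumes are deleted too:
# removes stopped containers, unused networks, dangling images and unused volumes
docker system prune --volumes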
I have purchased a volume for my droplet on DigitalOcean, and when I run docker compose build it takes up space on my current setup, so I am not able to build my images.
My current setup is on
`/dev/vda1 25227048 25191932 18732 100% /`
The full df output on Ubuntu is:
udev 2013884 0 2013884 0% /dev
tmpfs 404632 5672 398960 2% /run
/dev/vda1 25227048 25191932 18732 100% /
tmpfs 2023160 0 2023160 0% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 2023160 0 2023160 0% /sys/fs/cgroup
/dev/vda15 106858 3437 103421 4% /boot/efi
tmpfs 404632 0 404632 0% /run/user/0
/dev/sda 103081248 93980 97728004 1% /mnt/volume_lon1_01
How do I build so that it builds on my new volume?
`/dev/sda 103081248 93980 97728004 1% /mnt/volume_lon1_01`
Now it fails with this error:
Version: 18.03.0-ce
API version: 1.37
Go version: go1.9.4
Git commit: 0520e24
Built: Wed Mar 21 23:10:01 2018
OS/Arch: linux/amd64
Experimental: false
Orchestrator: swarm
Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
If you want to use your new disk only for docker, you need to mount it in the docker base directory: /var/lib/docker.
But before doing it, you need to:
Stop the Docker daemon completely: sudo systemctl stop docker
Sync everything in the current directory to the new disk: sudo rsync -aqxP /var/lib/docker/ /mnt/volume_lon1_01
Delete the old content: sudo rm -rf /var/lib/docker/*
mount the new volume to the right place: sudo mount /dev/sda /var/lib/docker
Start the Docker daemon: sudo systemctl start docker
Check that everything works properly - you can check that your volumes are still listed (docker volume ls), that your local images are there (docker image ls), and that you can start a new container (docker run -ti alpine)
Add the new mount definition into /etc/fstab*
You could also change the default directory of docker to use /mnt/volume_lon1_01.
If you want the second option, I recommend reading https://linuxconfig.org/how-to-move-docker-s-default-var-lib-docker-to-another-directory-on-ubuntu-debian-linux
*For modifying the fstab, if you are not familiar with it, you need a few pieces of information: the filesystem used by the partition, its path, and where you want to mount it.
After that, edit the file /etc/fstab and check if a line already exists with the partition path (/dev/sda for you). If not, add a new line; if yes, just edit it to change the mount path to the new one.
How to find the partition filesystem already mounted: mount
This will return one line per partition, and you need to check the type of the partition.
Example: in rootfs on / type lxfs (rw,noatime), the partition type is lxfs.
If you need to add a new line, it will be something like that:
/dev/sda /var/lib/docker <fs type> defaults 0 0
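For example, assuming the new volume turns out to be formatted as ext4 (an assumption - check yours first), it could look like this:
# check the filesystem type of the new volume
lsblk -no FSTYPE /dev/sda
# corresponding fstab line if it reports ext4
/dev/sda /var/lib/docker ext4 defaults 0 0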
While experimenting with Docker and Docker Compose I suddenly ran into "no space left on device" errors. I've tried to remove everything using methods suggested in similar questions, but to no avail.
Things I ran:
$ docker-compose rm -v
$ docker volume rm $(docker volume ls -qf dangling=true)
$ docker rmi $(docker images | grep "^<none>" | awk "{print $3}")
$ docker system prune
$ docker container prune
$ docker rm $(docker stop -t=1 $(docker ps -q))
$ docker rmi -f $(docker images -q)
As far as I'm aware there really shouldn't be anything left now. And it looks that way:
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
Same for volumes:
$ docker volume ls
DRIVER VOLUME NAME
And for containers:
$ docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Unfortunately, I still get errors like this one:
$ docker-compose up
Pulling adminer (adminer:latest)...
latest: Pulling from library/adminer
90f4dba627d6: Pulling fs layer
19ae35d04742: Pulling fs layer
6d34c9ec1436: Download complete
729ea35b870d: Waiting
bb4802913059: Waiting
51f40f34172f: Waiting
8c152ed10b66: Waiting
8578cddcaa07: Waiting
e68a921e4706: Waiting
c88c5cb37765: Waiting
7e3078f18512: Waiting
42c465c756f0: Waiting
0236c7f70fcb: Waiting
6c063322fbb8: Waiting
ERROR: open /var/lib/docker/tmp/GetImageBlob865563210: no space left on device
Some data about my Docker installation:
$ docker info
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 1
Server Version: 17.06.1-ce
Storage Driver: aufs
Root Dir: /var/lib/docker/aufs
Backing Filesystem: extfs
Dirs: 15
Dirperm1 Supported: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Log: awslogs fluentd gcplogs gelf journald json-file logentries splunk syslog
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 6e23458c129b551d5c9871e5174f6b1b7f6d1170
runc version: 810190ceaa507aa2727d7ae6f4790c76ec150bd2
init version: 949e6fa
Security Options:
apparmor
seccomp
Profile: default
Kernel Version: 4.10.0-32-generic
Operating System: Ubuntu 16.04.3 LTS
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.685GiB
Name: engelbert
ID: UO4E:FFNC:2V25:PNAA:S23T:7WBT:XLY7:O3KU:VBNV:WBSB:G4RS:SNBH
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
WARNING: No swap limit support
And my disk info:
$ df -h
Filesystem Size Used Avail Use% Mounted on
udev 3,9G 0 3,9G 0% /dev
tmpfs 787M 10M 778M 2% /run
/dev/nvme0n1p3 33G 25G 6,3G 80% /
tmpfs 3,9G 46M 3,8G 2% /dev/shm
tmpfs 5,0M 4,0K 5,0M 1% /run/lock
tmpfs 3,9G 0 3,9G 0% /sys/fs/cgroup
/dev/loop0 81M 81M 0 100% /snap/core/2462
/dev/loop1 80M 80M 0 100% /snap/core/2312
/dev/nvme0n1p1 596M 51M 546M 9% /boot/efi
/dev/nvme0n1p5 184G 52G 123G 30% /home
tmpfs 787M 12K 787M 1% /run/user/121
tmpfs 787M 24K 787M 1% /run/user/1000
And:
$ df -hi /var/lib/docker
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/nvme0n1p3 2,1M 2,0M 68K 97% /
As said, I'm still experimenting, so I'm not sure if I've posted all relevant info - let me know if you need more.
Does anyone have any idea what else could be the issue?
The problem is that /var/lib/docker is on the / filesystem, which is running out of inodes. You can check this by running df -i /var/lib/docker
Since /home's filesystem has sufficient inodes and disk space, moving Docker's working directory there should get it going again.
(Note that this assumes there is nothing valuable in the current Docker install.)
First stop the Docker daemon. On Ubuntu, run
sudo service docker stop
Then move the old /var/lib/docker out of the way:
sudo mv /var/lib/docker /var/lib/docker~
Now create a directory on /home:
sudo mkdir /home/docker
and set the required permissions:
sudo chmod 0711 /home/docker
Link the /var/lib/docker directory to the new working directory:
sudo ln -s /home/docker /var/lib/docker
Then restart the Docker daemon:
sudo service docker start
Then it should work again.
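As a quick sanity check (not part of the original steps), df should now report the /home filesystem, with plenty of free inodes, when pointed at the Docker directory:
df -i /var/lib/docker/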
For future reference, if you have removed all the containers you can also try docker system prune, which removes stopped containers, unused networks and dangling images in one go.
This may not directly answer the question, but it can be useful in general if the Dockerfile used to create the image is available.
In particular, make sure to limit the number of layers that will be generated; when writing the Dockerfile, avoid doing this:
RUN apt-get update && sudo apt-get install -y package1
RUN apt-get update && sudo apt-get install -y package2
RUN apt-get update && sudo apt-get install -y package3
and do this instead:
RUN apt-get update && sudo apt-get install -y \
package1 \
package2 \
package3
Doing so drastically reduced the size of the image as well as the inode usage, since fewer layers are generated. This helped address the issue in my case (where the inodes would all get used up).
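As an extra tweak (my own addition, not from the original answer), cleaning the apt lists in the same RUN instruction keeps those files out of the layer entirely, which also saves inodes:
RUN apt-get update && sudo apt-get install -y \
    package1 \
    package2 \
    package3 \
 && sudo rm -rf /var/lib/apt/lists/*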
Make sure to remove any intermediate image that was generated by the failed build to free up space: docker rmi <IMAGE_ID>.
For more tips, you can check out this site about Optimizing Docker images.
I'm learning Docker now and I've installed it on my server (CentOS 7). But when I follow the official tutorial, I hit a problem that stops me from continuing:
mkdir /var/lib/docker/overlay/7849ab40fd8072dcd724387dab14707bb4af0e94d9ab4f71795d0c478c3d49a9-init/merged/dev/shm: invalid argument
This appears when I build/run most images (the few images that didn't fail are the official python:latest and hello-world).
What I'm trying to do is pull the official image "docker/whalesay" and run it as follows:
docker run docker/whalesay
Unable to find image 'docker/whalesay:latest' locally
latest: Pulling from docker/whalesay
e190868d63f8: Pull complete
909cd34c6fd7: Pull complete
0b9bfabab7c1: Pull complete
a3ed95caeb02: Pull complete
00bf65475aba: Pull complete
c57b6bcc83e3: Pull complete
8978f6879e2f: Pull complete
8eed3712d2cf: Pull complete
Digest: sha256:178598e51a26abbc958b8a2e48825c90bc22e641de3d31e18aaf55f3258ba93b
Status: Downloaded newer image for docker/whalesay:latest
docker: Error response from daemon: mkdir /var/lib/docker/overlay/fb4b7f34f0963d158856dadccec49963e47716865c83066f7e6eaf0bae057a13-init/merged/dev/shm: invalid argument.
See 'docker run --help'.
Here is my docker info:
Containers: 5
Running: 0
Paused: 0
Stopped: 5
Images: 4
Server Version: 1.13.0
Storage Driver: overlay
Backing Filesystem: extfs
Supports d_type: true
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge host macvlan null overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Init Binary: docker-init
containerd version: 03e5862ec0d8d3b3f750e19fca3ee367e13c090e
runc version: 2f7393a47307a16f8cee44a37b262e8b81021e3e
init version: 949e6fa
Security Options:
seccomp
Profile: default
Kernel Version: 3.10.0-327.22.2.el7.x86_64
Operating System: CentOS Linux 7 (Core)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.702 GiB
Name: iZ25d1y69iaZ
ID: VJAP:FBMM:CQ5I:KIV5:FO47:VJUJ:ECU2:5TOS:JZBE:EUSH:HUFF:NCAZ
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: bridge-nf-call-iptables is disabled
WARNING: bridge-nf-call-ip6tables is disabled
Experimental: false
Insecure Registries:
127.0.0.0/8
Live Restore Enabled: false
I tried to search but didn't find the same issue. It looks like a filesystem problem, so here are my df -h and file -s /dev/vda1 outputs:
Filesystem Size Used Avail Use% Mounted on
/dev/vda1 40G 4.9G 33G 14% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.9G 0 1.9G 0% /dev/shm
tmpfs 1.9G 352K 1.9G 1% /run
tmpfs 1.9G 0 1.9G 0% /sys/fs/cgroup
tmpfs 380M 0 380M 0% /run/user/1000
.
/dev/vda1: Linux rev 1.0 ext4 filesystem data, UUID=80b9b662-0a1d-4e84-b07b-c1bf19e72d97 (needs journal recovery) (extents) (large files) (huge files)
I'm new to Docker, so this may be a configuration problem or a version issue, but I haven't figured it out.
I appreciate any suggestions and answers!
I was on SLES 12 and I fixed the issue as follows:
service docker stop
vi /etc/docker/daemon.json (this is a new file)
Add the following entry:
{
"storage-driver": "devicemapper"
}
service docker start
Edit: GitHub issue with the discussion of this problem.
I remember there was an issue with 3.10 kernel and overlayfs/ext4/xfs filesystems, and some people noticed that it started working again with a more recent kernel (I think it was in 3.18 that the overlayfs module was added to the kernel).
So if upgrading the kernel is an option for you, you can check whether overlayfs+ext4 works.
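To see what you are currently running and whether the overlay module is even available, something like this helps (a quick sketch):
# running kernel version
uname -r
# try loading the overlay module and confirm it is present
sudo modprobe overlay && lsmod | grep overlay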
If a kernel upgrade is not an option, then I guess your only option is to use another storage driver (aufs should not be available, so device mapper)
I have a server with a Docker registry, and I have pushed a build with the same :latest tag many times. Now my hard disk is full and I can't figure out how to slim it down.
The disk is full:
df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda1 48G 45G 397M 100% /
udev 10M 0 10M 0% /dev
tmpfs 794M 81M 713M 11% /run
tmpfs 2.0G 0 2.0G 0% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 2.0G 0 2.0G 0% /sys/fs/cgroup
/dev/dm-1 9.8G 56M 9.2G 1% /var/lib/docker/devicemapper/mnt/2e895760700ac3e1575e496a4ac6adde4de6129226febba8c0c3126af1655ad9
shm 64M 0 64M 0% /var/lib/docker/containers/5aa47e34d1b8be22deeae473729b4e587e6e4bfe7fb3e262eda891bad4b05042/shm
There are no dangling volumes or images:
# docker volume ls -qf dangling=true
#
# docker images -f "dangling=true" -q
#
docker images
[root@kvm22:/etc/cron.daily] # docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
jwilder/nginx-proxy 0.5.0 72b65b5a6f38 4 weeks ago 248.4 MB
registry 2 c9bd19d022f6 11 weeks ago 33.27 MB
registry 2.5 c9bd19d022f6 11 weeks ago 33.27 MB
Disk usage:
# du -h -d 7 /var/lib/docker/volumes/
12K /var/lib/docker/volumes/24000fbe2e81da06924be8f7ce81e07101824036bca5f87d4d811f2a6f7bfa7b/_data
16K /var/lib/docker/volumes/24000fbe2e81da06924be8f7ce81e07101824036bca5f87d4d811f2a6f7bfa7b
42G /var/lib/docker/volumes/registry_docker-registry-volume/_data/docker/registry/v2/blobs/sha256
42G /var/lib/docker/volumes/registry_docker-registry-volume/_data/docker/registry/v2/blobs
5.9M /var/lib/docker/volumes/registry_docker-registry-volume/_data/docker/registry/v2/repositories/labor-prod
5.9M /var/lib/docker/volumes/registry_docker-registry-volume/_data/docker/registry/v2/repositories
43G /var/lib/docker/volumes/registry_docker-registry-volume/_data/docker/registry/v2
43G /var/lib/docker/volumes/registry_docker-registry-volume/_data/docker/registry
43G /var/lib/docker/volumes/registry_docker-registry-volume/_data/docker
43G /var/lib/docker/volumes/registry_docker-registry-volume/_data
43G /var/lib/docker/volumes/registry_docker-registry-volume
43G /var/lib/docker/volumes/
Output of docker version:
# docker --version
Docker version 1.12.4, build 1564f02
Output of docker info:
# docker info
Containers: 4
Running: 1
Paused: 0
Stopped: 3
Images: 5
Server Version: 1.12.4
Storage Driver: devicemapper
Pool Name: docker-8:1-1184923-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: ext4
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 1.058 GB
Data Space Total: 107.4 GB
Data Space Available: 3.036 GB
Metadata Space Used: 2.142 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.145 GB
Thin Pool Minimum Free Space: 10.74 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Use `--storage-opt dm.thinpooldev` to specify a custom block storage device.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.90 (2014-09-01)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: bridge null host overlay
Swarm: inactive
Runtimes: runc
Default Runtime: runc
Security Options:
Kernel Version: 3.16.0-4-amd64
Operating System: Debian GNU/Linux 8 (jessie)
OSType: linux
Architecture: x86_64
CPUs: 2
Total Memory: 3.873 GiB
Name: kvm22
ID: G6OC:EKKY:ER4W:3JVZ:25BI:FF2Y:YXVA:RZRR:WPAP:SASB:AJJA:DM6J
Docker Root Dir: /var/lib/docker
Debug Mode (client): false
Debug Mode (server): false
Registry: https://index.docker.io/v1/
WARNING: No memory limit support
WARNING: No swap limit support
WARNING: No kernel memory limit support
WARNING: No oom kill disable support
WARNING: No cpu cfs quota support
WARNING: No cpu cfs period support
Insecure Registries:
127.0.0.0/8
I had the same problem. I can't believe there's no ready solution for this. Anyway I hacked a tool together and it seems to work.
You can find it here: https://github.com/Richie765/docker-tools
Basically it uses a bash script to find out which manifests are untagged and then deletes them through the registry API. Afterwards you can run a garbage collection to actually delete the data.
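With the official registry:2 image, the garbage-collection step usually looks roughly like this (the container name registry is an assumption; the config path is the image default):
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml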
I'm sure the script isn't perfect. Any improvements are welcome!
Found another tool here:
https://github.com/burnettk/delete-docker-registry-image
It includes a clean_old_version.py script.
I'll give it a try too.
In case anyone still has this problem:
This is the 'reset' way I solved it:
You can stop and delete the registry
docker stop registry && docker rm -v registry
and restart it afterwards:
docker run -d -p 5000:5000 --restart=always --name registry registry:2
Then you would have to rebuild your images locally and push them to the registry again.