How to delete images from a private Docker registry - docker

I have a private Docker registry set up, and I have pushed some images to it from another machine.
It is a v2 registry.
I don't know of a clean way to delete images from its repositories, since the pushed images don't show up in the CLI under "docker images".
Can anyone suggest the proper way to delete those images from disk?
Any help is appreciated.
Thanks

I have posted the same answer to another question; maybe it will be useful for you.
I faced the same problem with my registry, and then I tried the solution below, which I found on a blog post. It works.
Step 1: Listing catalogs
You can list your catalogs by calling this URL:
http://YourPrivateRegistryIP:5000/v2/_catalog
The response will be in the following format:
{
  "repositories": [
    <name>,
    ...
  ]
}
Step 2: Listing tags for the related catalog
You can list the tags of your catalog by calling this URL:
http://YourPrivateRegistryIP:5000/v2/<name>/tags/list
The response will be in the following format:
{
  "name": <name>,
  "tags": [
    <tag>,
    ...
  ]
}
Step 3: List the manifest value for the related tag
You can run this command in your Docker registry container:
curl -v --silent -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X GET http://localhost:5000/v2/<name>/manifests/<tag> 2>&1 | grep Docker-Content-Digest | awk '{print ($3)}'
The response will be in the following format:
sha256:6de813fb93debd551ea6781e90b02f1f93efab9d882a6cd06bbd96a07188b073
Then run the command given below with that manifest value:
curl -v --silent -H "Accept: application/vnd.docker.distribution.manifest.v2+json" -X DELETE http://127.0.0.1:5000/v2/<name>/manifests/sha256:6de813fb93debd551ea6781e90b02f1f93efab9d882a6cd06bbd96a07188b073
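If you prefer, both parts of step 3 can be combined into one small script. This is a hedged sketch that reads the Docker-Content-Digest header from a HEAD request and then deletes by that digest (the <name>/<tag> placeholders and the localhost:5000 address follow the commands above):
# Look up the digest for <name>:<tag>, then delete the manifest by that digest
digest=$(curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "http://localhost:5000/v2/<name>/manifests/<tag>" \
  | awk -F': ' 'tolower($1) == "docker-content-digest" {print $2}' | tr -d '\r')
curl -X DELETE "http://localhost:5000/v2/<name>/manifests/$digest"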
Step 4: Delete marked manifests
Run this command in your Docker registry container:
bin/registry garbage-collect /etc/docker/registry/config.yml
Here is my config.yml:
root@c695814325f4:/etc# cat /etc/docker/registry/config.yml
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
  headers:
    X-Content-Type-Options: [nosniff]
health:
  storagedriver:
    enabled: true
    interval: 10s
    threshold: 3
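If the registry itself runs as a container, garbage collection can also be triggered from the host; a minimal sketch, assuming the container is named registry (the name is illustrative):
# Run the garbage collector inside the registry container (avoid pushes while it runs)
docker exec -it registry bin/registry garbage-collect /etc/docker/registry/config.yml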

Currently you cannot delete an image from a Docker registry without an external tool. The easiest approach is to use this script, keeping in mind that it does require downtime.

Related

Get the Docker image size from JFrog Artifactory

I have Docker images stored in JFrog Artifactory and I need to get the size of those images without pulling them. I tried to figure out whether there is a direct REST API provided by JFrog, but couldn't find one.
I also tried one solution that makes a curl request to fetch all the Docker layers and sums their sizes, but I'm looking for a more practical solution than that.
You can try using regclient.
It lets you inspect images without pulling the layers, giving quick access to the image manifest and configuration.
And the manifest includes the size of the image.
Start with the regctl manifest command:
regctl manifest get <yourJFrogName>/<yourImageName>:<tag>
Or:
regctl image inspect --format '{{jsonPretty .}}' <yourJFrogName>/<yourImageName>:<tag>
It should include the total size:
$ regctl manifest get localhost:5000/artifact:demo
Name: localhost:5000/artifact:demo
MediaType: application/vnd.oci.image.manifest.v1+json
Digest: sha256:36484d44383fc9ffd34be11da4a617a96cb06b912c98114bfdb6ad2dddd443e2
Annotations:
  demo: true
  format: oci
Total Size: 64B
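For a rough total you can also sum the blob sizes straight from a V2 manifest yourself. A hedged curl/jq sketch follows (the registry URL, repository path, and any authentication are placeholders, the exact path depends on how your Artifactory Docker registry is exposed, and the result is the compressed size):
# Fetch the V2 manifest and add up the config blob size plus all (compressed) layer sizes
curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://<yourJFrogName>/v2/<yourImageName>/manifests/<tag>" \
  | jq '[.config.size, .layers[].size] | add'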
JFrog also provides a "get artifact details" REST API, with which you can get the file size of the image:
curl -u admin -X GET http://localhost:8082/artifactory/api/storage/docker-local/nginx/latest/manifest.json -H "Content-type: application/json"
This will give the following output, which includes the size:
{
  "repo" : "docker-local",
  "path" : "/nginx/latest/manifest.json",
  ----------
  "downloadUri" : "https://localhost:8082/artifactory/docker-local/nginx/latest/manifest.json",
  "mimeType" : "application/json",
  "size" : "1570",
  "checksums" : {
    "sha1" : "cc357b563ce443aa8270eb34e6755458a1a46869",
    --------
  },
  "originalChecksums" : {
    "sha256" : "89020cd33be2767f3f894484b8dd77bc2e5a1ccc864350b92c53262213257dfc"
  },
  "uri" : "https://localhost:8082/artifactory/api/storage/docker-local/nginx/latest/manifest.json"
}
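If you only need the size value from that response, here is a small follow-up sketch assuming jq is installed (same URL and credentials as above):
# Extract just the "size" field from the artifact-details response
curl -s -u admin -X GET "http://localhost:8082/artifactory/api/storage/docker-local/nginx/latest/manifest.json" | jq -r '.size'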

How to make Drone Docker Plugin use self-signed certs?

I'm facing the same problem as here - I have set up a private Docker Registry with TLS (certificates generated via Certbot), and I can interact with it directly via curl etc. (thus proving that the certificate is correct), but the Docker plugin in my Drone flow gives the error x509: certificate signed by unknown authority.
As per this StackOverflow answer, I believe that putting the certificate at /etc/docker/certs.d/<my_registry_address:port>/ca.crt should fix this problem, but it doesn't appear to (neither does adding the certificate to the standard /etc/ssl/certs/ca-certificates.crt location).
Demonstration that the certificates work as expected, having already built the Drone Docker plugin locally as per https://github.com/drone-plugins/drone-docker:
$ docker run --rm -v <path_to_directory_containing_pems>:/custom-certs -it --entrypoint /bin/sh plugins/docker
/ # ls /custom-certs
accounts archive csr keys live renewal renewal-hooks
/ # apk add curl
...
OK: 28 MiB in 56 packages
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog --cacert /custom-certs/live/docker-registry.scubbo.org/fullchain.pem
{"repositories":[...]}
/ # cat /custom-certs/live/docker-registry.scubbo.org/fullchain.pem >> /etc/ssl/certs/ca-certificates.crt
/ # curl https://docker-registry.scubbo.org:8843/v2/_catalog
{"repositories":[...]}
Here's my .drone.yml, for a Runner instantiated with --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,<path_to_directory_containing_pems>:/custom-certs:
kind: pipeline
name: hello-world
type: docker
platform:
  os: linux
  arch: arm64
steps:
  - name: copy-cert-into-place
    image: busybox
    volumes:
      - name: docker-cert-persistence
        path: /etc/docker/certs.d/
    commands:
      # https://stackoverflow.com/a/56410355/1040915
      # Note that we need to mount the whole `custom-certs` directory into the workflow and then copy the file to `/etc/...`,
      # rather than mounting the file directly into `/etc/...`, because the original file is a symlink and it's not possible (AFAIK)
      # to instruct Docker to "mount the eventual-target-of this symlink into <location>"
      - mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
      - cp -L /custom-certs/live/docker-registry.scubbo.org/fullchain.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
  - name: check-cert-persists-between-stages
    image: alpine
    volumes:
      - name: docker-cert-persistence
        path: /etc/docker/certs.d/
    commands:
      - apk add curl
      # The command below would fail if the cert was unavailable or invalid
      - curl https://docker-registry.scubbo.org:8843/v2/_catalog --cacert /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
  - name: build-image
    # ...contents irrelevant to this question...
  - name: push-built-image
    image: plugins/docker
    volumes:
      - name: docker-cert-persistence
        path: /etc/docker/certs.d/
    settings:
      repo: docker-registry.scubbo.org:8843/scubbo/blog_nginx
      tags: built_in_ci
      debug: true
      launch_debug: true
volumes:
  - name: docker-cert-persistence
    temp: {}
giving these logs from the push-built-image step, ending in...
+ /usr/local/bin/docker tag 472d41d9c03ee60fe9c1965ad9cfd36a1cdb6cbf docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
+ /usr/local/bin/docker push docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
The push refers to repository [docker-registry.scubbo.org:8843/scubbo/blog_nginx]
Get "https://docker-registry.scubbo.org:8843/v2/": x509: certificate signed by unknown authority
exit status 1
How should I go about providing the CA Certificate to my Drone Docker Plugin step to permit it to communicate over TLS with a secure Docker registry? This answer suggests simply reverting to insecure integration, which works but is unsatisfactory.
EDIT: After re-reading this documentation, I extended the copy-cert-into-place commands to copy all 3 certificate-related files:
commands:
  - mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
  - cp -L /custom-certs/live/docker-registry.scubbo.org/fullchain.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
  - cp -L /custom-certs/live/docker-registry.scubbo.org/privkey.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/client.key
  - cp -L /custom-certs/live/docker-registry.scubbo.org/cert.pem /etc/docker/certs.d/docker-registry.scubbo.org:8843/client.cert
but that did not resolve the problem - same x509: certificate signed by unknown authority error.
EDIT2: I directly confirmed (directly on a host, outside the context of a plugin or docker container) that adding the certificate to the path used above is sufficient to permit interaction with the registry:
$ docker pull docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
Error response from daemon: Get "https://docker-registry.scubbo.org:8843/v2/": x509: certificate signed by unknown authority
$ sudo cp -L <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem /etc/docker/certs.d/docker-registry.scubbo.org\:8843/ca.crt
$ docker pull docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
built_in_ci: Pulling from scubbo/blog_nginx
Digest: sha256:3a17f86f23050303d94443f24318b49fb1a5e2d0cc9228270678c8aa55b4d2c2
Status: Image is up to date for docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
docker-registry.scubbo.org:8843/scubbo/blog_nginx:built_in_ci
This isn't a complete answer, but I was able to get secure registry access working by switching from mounting a directory, to mounting the file directly:
I changed the docker run option to --env=DRONE_RUNNER_VOLUMES=/var/run/docker.sock:/var/run/docker.sock,$(readlink -f <path_to_directory_containing_pems>/live/docker-registry.scubbo.org/chain.pem):/registry_cert.crt
I changed the commands in copy-cert-into-place to:
- mkdir -p /etc/docker/certs.d/docker-registry.scubbo.org:8843
- cp /registry_cert.crt /etc/docker/certs.d/docker-registry.scubbo.org:8843/ca.crt
I don't consider this a complete answer (and would love further input or advice!), because:
I don't know why copying the file out of the mounted directory into /etc/docker/... (as in the original question) didn't work, but mounting the file directly from the host filesystem worked. (Note that the check-cert-persists-between-stages stage confirms that the certificate is correct, so it's not a mistake of copying a wrong or empty file)
I don't know how to mount the file directly into an in-stage path that contains a colon - this answer indicates how to mount a path containing a colon directly into a container, but in this case we're passing the path to DRONE_RUNNER_VOLUMES.

Problem with pulling image from private hub

I have the following problem: I am trying to pull a built Docker image from a private hub and run it as a service, but the following error appears:
Failed to launch container: Failed to run 'docker -H unix:///var/run/docker.sock pull r.cfcr.io/path/to/repo/': exited with status 1; stderr='Error: Cannot perform an interactive login from a non TTY device '
Here is the fetch[] config.json info that I am using to authenticate:
{
  "auths": {
    "r.cfcr.io": {
      "auth": "=auth_token="
    }
  },
  "HttpHeaders": {
    "User-Agent": "Docker-Client/18.06.1-ce (linux)"
  }
}
Do you have any idea how to resolve the problem?
Probably not linked to the problem here, but some people might encounter the exact same message when trying a docker login from a Linux-like terminal on Windows, such as Git Bash, the Docker Quickstart Terminal, or even Cygwin.
The trick here is to use winpty docker login,
or try this command:
docker login "${URL}" -u "${Username}" -p "${PASSWORD}"
You must keep the config.json file in a .docker directory at $MESOS_SANDBOX.
So create an archive of the .docker directory with the below listing of files:
$ tar tvf docker-login.tar
drwx------ parvez/parvez 0 2019-06-12 21:45 .docker/
-rw------- parvez/parvez 177 2019-06-12 21:45 .docker/config.json
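A minimal sketch of creating that archive, assuming it is run from the home directory that contains .docker/config.json:
# Create the tarball from the directory that holds .docker/config.json
cd ~ && tar cvf docker-login.tar .docker/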
Fetch and extract this archive via the Mesos configuration:
"fetch": [{
  "uri": "https://foo.com/docker-login.tar",
  "executable": false,
  "extract": true,
  "cache": true
}],
It will download and extract the archive at the $MESOS_SANDBOX path, and the docker pull should succeed.

Tagging a Docker image with an auto-incremented global version in GitLab YAML

I'm currently digging into GitLab CI. I would like a way, in my YAML files, to tag my Docker images with a version number composed in the following fashion: MajorVersion.MinorVersion.AutoIncrementedGlobalVersionNumber
I would like to auto-increment the globally defined variable "AutoIncrementedGlobalVersionNumber" each time I deploy.
I have used CI_PIPELINE_IID; however, it keeps incrementing for each pipeline run. I need something that keeps track of the version and increments only when I pack and deploy.
variables:
  CI_VERSION: "1.0.${CI_PIPELINE_IID}"

build-master:
  stage: build
  script:
    - docker build --pull -t "$CI_REGISTRY_IMAGE" -t "$CI_REGISTRY_IMAGE:$CI_VERSION" ./postfix
    - docker push "$CI_REGISTRY_IMAGE"
  only:
    - master
You probably can't do this with the default GitLab CI variables, but there could be a workaround along the lines of (untested):
Get the registry ID with something like:
$ registry_id=$(curl -s -XGET --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/$PROJECT_PATH/container_registry.json" | jq '.[].id')
Query said registry to get the name:
curl -s -XGET --header "PRIVATE-TOKEN: $TOKEN" "https://gitlab.com/$PROJECT_PATH/registry/repository/$registry_id/tags?format=json" | jq
e.g. it returns the following, and you can grep the name fields for the GlobalVersionNumber:
[
  {
    "name": "latest",
    "location": "registry.gitlab.com/mwasilewski/helm:latest",
    "revision": "85a403337a56e9e6409dfb8185bf9aa5c2135f9a437bd75da82d27471c71feb4",
    "short_revision": "85a403337",
    "total_size": 152246865,
    "created_at": "2016-12-11T08:31:30.126+00:00",
    "destroy_path": "/mwasilewski/helm/registry/repository/31074/tags/latest"
  }
]
Continue with your Docker build and push, after incrementing the GlobalVersionNumber you get back.
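A hedged sketch of that incrementing step, assuming jq is available, your tags follow a 1.0.N pattern, and $TOKEN, $PROJECT_PATH, and $registry_id are set as above:
# Find the highest existing 1.0.N tag, bump N, and tag/push with the new version
latest_patch=$(curl -s --header "PRIVATE-TOKEN: $TOKEN" \
  "https://gitlab.com/$PROJECT_PATH/registry/repository/$registry_id/tags?format=json" \
  | jq -r '.[].name' | grep -E '^1\.0\.[0-9]+$' | sort -t. -k3,3n | tail -n1 | cut -d. -f3)
CI_VERSION="1.0.$(( ${latest_patch:-0} + 1 ))"
docker build --pull -t "$CI_REGISTRY_IMAGE:$CI_VERSION" ./postfix
docker push "$CI_REGISTRY_IMAGE:$CI_VERSION"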
NB: this assumes you are using GitLab's Container Registry
Resources:
https://gitlab.com/gitlab-org/gitlab-ce/issues/40826

Why can't I delete a layer in my private Docker registry (v2)?

I have just installed docker-registry in stand-alone mode successfully, and I can use the following command
curl -X GET http://localhost:5000/v2/
to get the proper result.
However, when I use
curl -X DELETE http://localhost:5000/v2/<name>/blobs/<digest>
to delete a layer, it fails and I get:
{"errors":[{"code":"UNSUPPORTED","message":"The operation is unsupported."}]}
I use the default configuration from Docker Hub, and I studied the official configuration but failed to resolve the issue.
How can I sort this out?
You have to add the parameter delete: enabled: true in /etc/docker/registry/config.yml
to make it look like this:
version: 0.1
log:
  fields:
    service: registry
storage:
  cache:
    layerinfo: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
  delete:
    enabled: true
http:
  addr: :5000
Take a look here for more details.
Or add an environment variable to the container on boot:
-e REGISTRY_STORAGE_DELETE_ENABLED=true
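For example, a minimal sketch of starting the registry with deletion enabled (the container name and port mapping are illustrative):
docker run -d -p 5000:5000 --name registry \
  -e REGISTRY_STORAGE_DELETE_ENABLED=true \
  registry:2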
Either use:
REGISTRY_STORAGE_DELETE_ENABLED=true
or define:
REGISTRY_STORAGE_DELETE_ENABLED: "yes"
in docker-compose.
