Docker push re-sends layers to private repository

I have a base image from which I want to create new images for deploying. The images are built locally and deployed to an internally hosted repository on another server using basic auth. When making changes to the base/deployable images, I have observed that some layers are re-sent even though the repository has already seen them before.
Since layers are 'fixed' and the repository has already seen them pushed by my logged-in user, why does docker re-send them instead of sending only the new layers?

This appears to be a bug in the current Docker version, which may send some layers more than once. See: https://github.com/docker/docker/issues/12489

Related

Use multiple logins in the same docker-compose.yml file

I am trying to pull images from the same Artifactory repo using two different access tokens, because one image is available to one user and the other image to another user.
I tried using docker login, but I can log in to a registry only once at a time. Is there a way to specify, in the docker-compose.yml file, a user and token that Compose should use to pull each image?
The docker-compose file specification does not support providing credentials per service / image.
But putting this technicality aside, the described use case clearly indicates there is a single user (the one running Compose) who needs access to both images...
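That said, a common workaround is to pull each image up front under its own isolated DOCKER_CONFIG directory, since Compose will reuse images already in the local cache. The sketch below assumes placeholder registry, user, and image names, and is not Artifactory-specific:

```python
import os
import subprocess
import tempfile

def pull_commands(registry: str, user: str, image: str) -> list:
    """Return the docker commands needed to log in and pull one image."""
    return [
        ["docker", "login", registry, "--username", user, "--password-stdin"],
        ["docker", "pull", image],
    ]

def pull_with_token(registry: str, user: str, token: str, image: str) -> None:
    # An isolated config dir per login prevents the two tokens from
    # overwriting each other in ~/.docker/config.json.
    with tempfile.TemporaryDirectory() as cfg_dir:
        env = {**os.environ, "DOCKER_CONFIG": cfg_dir}
        login_cmd, pull_cmd = pull_commands(registry, user, image)
        subprocess.run(login_cmd, input=token.encode(), env=env, check=True)
        subprocess.run(pull_cmd, env=env, check=True)
```

After both images are pulled this way, `docker-compose up` finds them in the local image cache and needs no credentials of its own.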

Getting image pull history using Registry API

I am trying to create a python script to see when my image was last pulled from a container registry. I went through the Registry API under the API Reference and tried the following endpoints:
To list repo:
GET /v2/_catalog
To pull an image manifest:
GET /v2/<name>/manifests/<reference>
To pull a layer:
GET /v2/<name>/blobs/<digest>
After pulling these, I can see a lot of information, including history, but not when image:tag was last pulled.
How can I find out when my image:tag was last pulled?
It would be great if someone could point me to the right APIs.
Thanks
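For reference, the endpoints above can be called from Python like this. This is a minimal sketch assuming an unauthenticated local registry at localhost:5000; real registries typically require a Bearer token in an Authorization header:

```python
import json
import urllib.request

REGISTRY = "http://localhost:5000"  # assumption: local, unauthenticated registry

def manifest_url(registry: str, name: str, reference: str) -> str:
    return f"{registry}/v2/{name}/manifests/{reference}"

def blob_url(registry: str, name: str, digest: str) -> str:
    return f"{registry}/v2/{name}/blobs/{digest}"

def get_json(url: str, accept: str = "") -> dict:
    req = urllib.request.Request(url)
    if accept:
        req.add_header("Accept", accept)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def list_repositories() -> list:
    # GET /v2/_catalog
    return get_json(f"{REGISTRY}/v2/_catalog")["repositories"]

def get_manifest(name: str, reference: str) -> dict:
    # The Accept header selects the schema-2 manifest instead of the legacy format.
    return get_json(manifest_url(REGISTRY, name, reference),
                    accept="application/vnd.docker.distribution.manifest.v2+json")
```

None of these responses carry pull timestamps, which matches the answer below: that data simply is not part of the standard registry API.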
The registry is a content-addressable store. The digest of what was pushed is the same as the digest when you pull it, and that digest identifies the content of the image. Usage statistics therefore cannot be stored in the image itself, since adding them would mutate what is being pulled.
Registries may provide separate APIs for usage statistics. Those will be custom to each registry because, as of the date of this post, OCI has not standardized any API for fetching that metadata.
Looking at the GCR and GAR REST API docs, I'm not seeing any metadata details in there:
https://cloud.google.com/container-analysis/docs/reference/rest
https://cloud.google.com/artifact-registry/docs/reference/rest

Nexus 3 Docker Content Selector selects too many images

I am using Nexus 3 as a docker repository and want to create a user that has only read-only access to a specific docker image (and its related tags)
For this I created a Content Selector with the following query (The name of the image is test for demonstration purposes):
format == "docker" and path =~ "^(/v2/|/v2/library/)?(test(/.*)?)?$".
Then I created a Privilege with the action read, bound that to a role and added it to the user.
So far so good: with the limited user I can pull the image but not push.
However, I can still pull images I should not be able to pull.
Consider the following: I create an image called testaaa:1 on the docker registry. Afterwards I docker login to the registry using my user with read-only access. I am suddenly able to run docker pull hub.my-registry.com/testaaa:1 even though, according to the query, I should not be able to.
I tested the query in a Java regex tester, and it does not select testaaa. Am I missing something? I am having a hard time finding clues on this topic.
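For what it's worth, the same check can be reproduced with Python's re module (the pattern behaves identically under Java and Python regex semantics), confirming that the selector query itself does not match testaaa:

```python
import re

# The Content Selector path expression from the question.
pattern = re.compile(r'^(/v2/|/v2/library/)?(test(/.*)?)?$')

# Paths the Docker client requests during a pull.
print(bool(pattern.match("/v2/test/manifests/1")))     # True: allowed
print(bool(pattern.match("/v2/testaaa/manifests/1")))  # False: should be denied
```

So the unexpected pulls must be authorized by something outside the Content Selector.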
EDIT: Some more testing reveals that my user is actually able to pull all images from this registry. The Content Selector query I used is exactly the one suggested by the Sonatype documentation, Content Selectors and Docker - REST API vs Docker Client.
I have figured it out. The issue was not the Content Selector query, but a capability that I previously added. The capability granted any authenticated user the role nx-anonymous which lets anyone view any repository in Nexus. This meant that any authenticated user was allowed to read/pull any image from the repository.
This error was entirely on my part. In case anyone has similar issues, go have a look at Settings -> System -> Capabilities in Nexus and check whether any capabilities grant your users unwanted roles.

How to reset the root key / offline key in docker?

According to Docker Documentation: Manage keys for content trust, the root key is :
Root of content trust for an image tag. When content trust is enabled, you create the root key once. Also known as the offline key, because it should be kept offline.
I don't know the exact meaning of "once". Do I get only one chance to set the root key? Leaving aside the consequences for repositories already signed and uploaded, what should I do to reset it?
The keys are trust on first use, so if you change the root key for a repo, anyone who has previously trusted that key would need to have that trust information removed, which means updating every machine that has previously pulled this image. The notary server itself also needs to have its data for this repository purged. It may be easier to create a new repository.
Note that Content Trust currently points to Notary v1, which is soon to be phased out. Project sigstore has cosign already available, Notary v2 is being designed, and I've yet to come across a significant production infrastructure using Content Trust. Even the images in the Docker Library haven't been signed in over a year, so if you enable Content Trust, you'll find that image pulls revert to very old images missing any recent security patches.

How can I output all network requests to pull a Docker image?

In order to request access to a docker image on a public container registry from within a corporate network, I need to obtain a list of all the URLs that will be requested during the pull. From what I can see, the initial call returns a JSON manifest, and further requests follow from it.
How can I get visibility of all the URLs requested when invoking docker pull my-image?
The registry API that Docker uses is publicly documented, so each possible API call is spelled out. What you should see is:
A GET to the /v2/ API to check authorization settings
A query to the auth server to get a token if using an external auth server
A GET for the image manifest
A GET for the image config
A series of GET requests, one for each layer
The digests for the config and each layer will change with each image pushed, so best to whitelist the entire repository path for GET requests.
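As a sketch, once the manifest JSON is in hand, the full URL list for one pull can be derived from it. The function name and the registry/repo values are illustrative placeholders:

```python
def pull_urls(registry: str, repo: str, tag: str, manifest: dict) -> list:
    """List every URL a docker pull of repo:tag touches, given its manifest."""
    urls = [
        f"https://{registry}/v2/",                        # authorization check
        f"https://{registry}/v2/{repo}/manifests/{tag}",  # image manifest
        # image config blob
        f"https://{registry}/v2/{repo}/blobs/{manifest['config']['digest']}",
    ]
    # one blob request per layer
    urls += [f"https://{registry}/v2/{repo}/blobs/{layer['digest']}"
             for layer in manifest["layers"]]
    return urls
```

Since the config and layer digests change on every push, the only stable part of these URLs is the `/v2/<repo>/` prefix, which is why whitelisting the repository path is the practical choice.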
Note that many will take a different approach to this, setting up a local registry that all nodes in the network can pull from, and pushes to update that registry are done from a controlled node that performs all the security checks before ingesting new images. This handles the security needs, controlling what enters the network, without needing to whitelist individual URLs to an external resource.