Add remote tag to a docker image - docker

On a private registry (myregistry.com), say I have an image tagged with 'v1.2.3'. I then push it by:
docker push myregistry.com/myimage:v1.2.3
If I want to associate another tag, say 'staging', and push that tag to my registry, I can:
docker tag myregistry.com/myimage:v1.2.3 myregistry.com/myimage:staging
docker push myregistry.com/myimage:staging
Though this works, the second docker push still runs through each layer, attempting to push it (albeit skipping the upload). Is there a better way to just add a remote tag?

The way you've stated, docker tag ...; docker push ... is the best way to add a tag to an image and share it.
In the specific example you've given, both tags are in the same repo (myregistry.com/myimage). In this case you can just docker push myregistry.com/myimage and the docker daemon will push all the tags for that repo at the same time, saving the per-layer iteration for shared layers. (Note that on Docker 20.10 and later, pushing a bare repo name defaults to :latest, so you need docker push --all-tags myregistry.com/myimage to push every tag.)
You can use the same process (docker tag ...; docker push ...) to tag images between repositories as well, e.g.
docker tag myregistry.com/myimage:v1.2.3 otherregistry.com/theirimage:v2
docker push otherregistry.com/theirimage

You can achieve this with docker buildx imagetools create
docker buildx imagetools create myregistry.com/myimage:v1.2.3 --tag myregistry.com/myimage:staging
This will simply download the image manifest of myregistry.com/myimage:v1.2.3 and re-tag (and push) it as myregistry.com/myimage:staging.
NOTE: this will also retain the multi-platform manifest list when you "re-tag" (e.g. when your image is built for both linux/arm64 and linux/amd64), whereas the conventional docker pull/push strategy only retains the image manifest for the platform/architecture of the machine you do the pull/push from.
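If you want to verify that the multi-platform list survived, one option (using the same image names as above) is to inspect the new tag:
docker buildx imagetools inspect myregistry.com/myimage:staging
This prints the manifest list, including the per-platform entries.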

The pull/tag/push method has time and network costs; you can instead tag your image remotely through the registry API.
The answer at https://stackoverflow.com/a/38362476/8430173 works when only the tag changes, but I wanted to change the repository name too. Thanks to that answer (and its author's GitHub project), I managed to change the repository name as well:
1- GET the manifest (in v2 schema)
2- POST (cross-mount) every layer digest into the new repo
3- POST (cross-mount) the config digest
4- PUT the whole manifest to the new repo
details:
1- GET the manifest from reg:5000/v2/{oldRepo}/manifests/{oldTag} with header Accept: application/vnd.docker.distribution.manifest.v2+json
2- for every layer: POST reg:5000/v2/{newRepo}/blobs/uploads/?mount={layer.digest}&from={oldRepoNameWithoutTag}
3- POST reg:5000/v2/{newRepo}/blobs/uploads/?mount={config.digest}&from={oldRepoNameWithoutTag}
4- PUT reg:5000/v2/{newRepo}/manifests/{newTag} with header Content-Type: application/vnd.docker.distribution.manifest.v2+json and the body from the step 1 response
5- enjoy!
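As a rough illustration, the same steps with curl might look like the following, assuming a plain-HTTP registry at reg:5000 with no authentication and hypothetical repo/tag names; <digest> stands for each digest found in the manifest:
# 1- fetch the manifest for the old repo/tag
curl -s -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  http://reg:5000/v2/oldrepo/manifests/oldtag -o manifest.json
# 2- and 3- cross-mount every layer digest and the config digest into the new repo
curl -s -X POST "http://reg:5000/v2/newrepo/blobs/uploads/?mount=<digest>&from=oldrepo"
# 4- push the unchanged manifest body under the new repo/tag
curl -s -X PUT -H "Content-Type: application/vnd.docker.distribution.manifest.v2+json" \
  --data-binary @manifest.json http://reg:5000/v2/newrepo/manifests/newtag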

For single platform images, you can use
docker pull repo:oldtag
docker tag repo:oldtag repo:newtag
docker push repo:newtag
However, there are a few downsides.
You pull all the layers even if you don't need to run the image locally.
You are dereferencing multi-platform images to your local platform.
This can be done with curl (as sketched after this list), especially if you stay within the same repository; when you go across repositories, you also need to copy all the blobs. However, that approach has a few challenges of its own:
You need to accept all of the possible media types for the image you are pulling, track the media type you received, and use that same media type when pushing the manifest back to the registry. There are at least 6 media types I'm familiar with:
a signed and unsigned docker schema v1
the common manifest and manifest list in docker schema v2
the OCI image and index
If you go across repositories, you need to parse the manifest for the included blobs, which can be recursive for docker manifest lists and OCI indexes. You may be able to do a server side blob mount to avoid pulling and pushing the blob, but that will depend on the server support. And we're also going to see anonymous blob mounts come to registries, which allows a blob mount even if you don't know the source repo on that registry (useful for everyone pushing to a cloud registry with an image based on an official docker image from Docker Hub).
Authorization gets complicated, particularly if you have bearer tokens to request and maintain between commands.
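For the media-type challenge, a sketch of requesting a manifest with several acceptable media types (hypothetical image name, auth omitted) looks like:
curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json, application/vnd.docker.distribution.manifest.list.v2+json, application/vnd.oci.image.manifest.v1+json, application/vnd.oci.image.index.v1+json" https://registry.example.com/v2/myimage/manifests/oldtag
The Content-Type header in the response tells you which media type you actually received; that is the value to send back when you PUT the manifest under the new tag.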
The solution I'd recommend is using a tool that handles the registry API for you. I've been working on my own tooling for this, regclient, and there are other similar projects like Google's crane and RedHat's skopeo. These should each handle the media types, copying blobs when needed, and authorization issues that would complicate the curl command.
As an example with regclient's regctl command, you'd run:
regctl image copy repo:oldtag repo:newtag

With Google's crane you just do
crane tag myregistry.com/myimage:v1.2.3 staging
It works with both docker images and OCI images; no image is downloaded locally, and it even skips layer verification, since the layers are guaranteed to already be in the repository.
It's even available in a docker image: gcr.io/go-containerregistry/crane
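A sketch of running that image directly, assuming your registry credentials live in the default Docker config file (the mount path inside the container is an assumption):
docker run --rm -v "$HOME/.docker/config.json:/root/.docker/config.json" gcr.io/go-containerregistry/crane tag myregistry.com/myimage:v1.2.3 staging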
Note that there are other similar tools, like regctl or skopeo

There is a simpler method with the new experimental manifest Docker commands. It only requires downloading and uploading the manifest file (JSON overview) of an image. The commands below have been tested with the GitLab registry. First build and push a Docker image in some previous stage:
docker build -t registry.gitlab.com/<group>/<project>/<image-name>:<tag-a> .
docker push registry.gitlab.com/<group>/<project>/<image-name>:<tag-a>
Then at a later stage where the image has not been pulled:
docker manifest create registry.gitlab.com/<group>/<project>/<image-name>:<tag-b> \
registry.gitlab.com/<group>/<project>/<image-name>:<tag-a>
docker manifest push registry.gitlab.com/<group>/<project>/<image-name>:<tag-b>
You can then pull the image with the new tag. The first step did not seem to work with public Docker Hub images, but any suggestions are welcome. Additionally, to see the manifest itself, run:
docker manifest inspect registry.gitlab.com/<group>/<project>/<image-name>:<tag-a>
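If your Docker CLI still gates these subcommands behind the experimental flag, one way to enable them (assuming a reasonably recent CLI) is:
export DOCKER_CLI_EXPERIMENTAL=enabled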

Related

Republish Docker Image with Preserved Digest to Different Registry

I pull images from public registries such as DockerHub and push them to a single private registry. This is a simple process for images in the format image:tag, but not so for those of the form image@digest.
I want to re-publish, or push in Docker's terminology, images from a public registry to my private registry whilst maintaining the integrity of the exact immutable image. I want to preserve the digest so there's no abstraction between the digest referenced from my private registry to the image's source in a public registry.
I attempted to perform the same docker push command that works for image:tag on image#digest, but to no avail.
image:tag push
docker login -u usr -p psw registry.io
docker image pull docker.io/alpine:3.17.0
docker image push registry.io/alpine:3.17.0
...
ok
image@digest push
docker login -u usr -p psw registry.io
docker image pull docker.io/alpine@sha256:c0d488a800e4127c334ad20d61d7bc21b4097540327217dfab52262adc02380c
docker image push registry.io/alpine@sha256:c0d488a800e4127c334ad20d61d7bc21b4097540327217dfab52262adc02380c
...
cannot push a digest reference
I want to re-publish the image from source to target as-is. I could perform a re-tag, or a push with a different ID, but both alter the referenceable digest and add a level of abstraction that seems unnecessary.
What you want to do is rename the image and then push it. When you push an image to a repository you are pushing it to the repository specified by the image name. So first rename (changing the tag really) to include the registry you want to push to, then do the push.
docker image pull centos:centos7
docker tag centos:centos7 my-registry.io/centos7:centos7
docker push my-registry.io/centos7:centos7
I wasn't able to solve this using Docker. Instead, I investigated Skopeo. v1.6.0 introduced the --preserve-digests flag that sounded laughably perfect. It was.
I implemented it as follows:
skopeo copy \
--dest-creds='usr:psw' \
--preserve-digests \
--override-arch amd64 \
docker://docker.io/alpine@sha256:c0d488a800e4127c334ad20d61d7bc21b4097540327217dfab52262adc02380c \
docker://registry.io/alpine@sha256:c0d488a800e4127c334ad20d61d7bc21b4097540327217dfab52262adc02380c
After I'd completed the digest-preserving copy, there were some additional bits I wanted to add, but I found the documentation a bit hard to navigate. I eventually found all of the answers and supplemental flags I was looking for, so I thought I should post them here in case they help someone else. Reading through the available transports (repository types) for Skopeo's copy command is where I discovered --dest-creds and --preserve-digests, and it took me a while to realise that --override-arch is a global flag.
There are a variety of tools for managing images on registries. And most support copying the image, including the multi-platform manifest, without modifying the digest. A few examples include:
google/go-containerregistry's crane (crane copy)
RedHat's skopeo (skopeo copy)
My own regclient's regctl (regctl image copy)
Using the docker CLI won't work for this, because pulling an image into docker dereferences the multi-platform manifest to your platform's image. And when docker pushes an image, it may regenerate the manifest JSON, possibly losing data like annotations or changing whitespace, and any single-character change in the marshaled JSON changes the digest.
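For instance, a sketch with regctl that copies the alpine image by digest from the question into a private registry unchanged (registry.io is the hypothetical target; log in first with regctl registry login registry.io):
regctl image copy docker.io/library/alpine@sha256:c0d488a800e4127c334ad20d61d7bc21b4097540327217dfab52262adc02380c registry.io/alpine:3.17.0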


How to copy multi-arch docker images to a different container registry?

There's a well-known approach to have docker images copied from one container registry to another. In case the original registry is dockerhub, the typical workflow would be something like this:
docker pull <image:tag>
docker tag <image:tag> <new-reg-url/uid/image:tag>
docker push <new-reg-url/uid/image:tag>
Now, how do you do the above when dealing with images that have multi-architecture manifests?
As per the information in this link, you can rely on buildx to construct multi-arch images and, while doing that, also upload them to whichever repo you wish. But how do I do this without having to build the images first?
It looks like the buildx CLI has unnecessarily (?) coupled the upload process to the build. Any suggestions?
Thanks!
While the docker pull ...; docker tag ...; docker push ... syntax is the easy way to move images between registries, it has a couple drawbacks. First, as you've seen, is that it dereferences a multi-platform image to a single platform. And the second is that it pulls all layers to the docker engine even if the remote registry already has those layers, making it a bad method for ephemeral CI workers that would always need to pull every layer.
To do this, I prefer talking directly to the registry servers rather than the docker engine itself. You don't need the functionality from the engine to run the images, all you need is the registry API. Docker has documented the original registry API and OCI recently went 1.0 on the distribution-spec which should get us some standardization.
There's a variety of tooling based on those specs, from the docker engine itself and containerd, to skopeo, Google's crane, and I've also been working on regclient. Doing this with regclient's regctl command looks like:
regctl image copy <source_image:tag> <target_image:tag>
And the result is the various layers, image config, manifests, and multi-platform manifest list will be copied between registries, but only for the layers that don't already exist on the target registry.
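For comparison, skopeo can do the same multi-platform copy if you pass --all, which copies every entry in the manifest list instead of only your platform (the image names below are the placeholders from the question):
skopeo copy --all docker://docker.io/<image>:<tag> docker://<new-reg-url>/<uid>/<image>:<tag>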
2022 (docker builtin) solution
It's possible to perform the copy using the not-well-documented built-in command docker buildx imagetools create with --tag
# i.e.
OLD_TAG=registry.example.com/namespaced/repository/example-image:old-tag
NEW_TAG=registry.example.com/namespaced/repository/example-image:new-tag
# we can
docker buildx imagetools create --tag "$NEW_TAG" "$OLD_TAG"
Reference documentation
IMPORTANT NOTE: There is currently no support for performing this operation across different repositories. Given tags like
OLD_TAG=registry.example.com/namespaced/repository/example-image:latest
NEW_TAG=registry.example.com/other-repository/example-image:latest
You end up with an error like
error: multiple repositories currently not supported
For that situation I'm going to try the currently accepted answer.
As @laconbass has written, this can be done with docker buildx imagetools create. The ability to do this over multiple repos was added in this PR
docker buildx imagetools create -t <NEW-TAG> <OLD-TAG>

Difference between Docker registry and repository

I'm confused as to the difference between docker registries and repositories. It seems like the Docker documentation uses the two words interchangeably. Also, repositories are sometimes referred to as images, such as this from their docs:
In order to push a repository to its registry, you need to have named
an image or committed your container to a named image as we saw here.
Now you can push this repository to the registry designated by its
name or tag.
How can you push a repository to a registry? Aren't you pushing the image to the repository?
A Docker registry is a service that stores your Docker images.
A Docker registry can be hosted by a third party, as a public or private registry, for example one of the following:
Docker Hub,
Quay,
Google Container Registry,
AWS Container Registry
or you can host the docker registry by yourself
(see https://docs.docker.com/ee/dtr/ for more details).
A Docker repository is a collection of different Docker images with the same name but different tags. A tag is an alphanumeric identifier of an image within a repository.
For example, see https://hub.docker.com/r/library/python/tags/. There are many different tags for the official python image; these tags are all members of the official python repository on Docker Hub, a Docker registry hosted by Docker.
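For instance, both of the following pulls fetch an image from that same python repository on the Docker Hub registry; only the tag differs (the exact tags available change over time):
docker pull python:3.12
docker pull python:3.12-slim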
To find out more read:
https://docs.docker.com/registry/
https://github.com/docker/distribution
From the book Using Docker, Developing and deploying Software with Containers
Registries, Repositories, Images, and Tags
There is a hierarchical system for storing images.
The following terminology is used:
Registry
A service responsible for hosting and distributing images. The default registry is the Docker Hub.
Repository
A collection of related images (usually providing different versions of the same application or service).
Tag
An alphanumeric identifier attached to images within a repository (e.g., 14.04 or stable).
So the command docker pull amouat/revealjs:latest will download the image tagged latest within the amouat/revealjs repository from the Docker Hub registry.
Complementing the information:
You usually push a repository to a registry (and all images that are part of it). But you can push a single image to a registry. In all cases, you use docker push.
An image has a 12-hex-digit Image ID, but is also identified by: namespace/repo-name:tag
The image full name can be optionally prefixed by the registry host name and port: myregistryhost:5000/namespace/repo-name:tag
A common naming convention is to use your registry user-name as what I called "namespace".
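Putting those pieces together, a hypothetical fully qualified name might be tagged and pushed like this (the image ID and names are made up for illustration):
# <registry host:port>/<namespace>/<repo-name>:<tag>
docker tag 3f2c9a1b4de5 myregistryhost:5000/mynamespace/myapp:1.0
docker push myregistryhost:5000/mynamespace/myapp:1.0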
A docker repository is a cute combination of registry and image.
docker tag foo <registry>/<image>:<tag>
is the same as
docker tag foo <repository>:<tag>
A Docker registry is a service which you can either host yourself (trusted and private) or let Docker Hub host for you. Usually, if your software is commercial, you will host it as a private, trusted registry. For Java developers, this is somewhat analogous to a Maven/Artifactory setup.
A Docker repository is a set of tagged images. For example, you might have created 5 different tagged images based on ubuntu:latest:
a) Nano editor (image1_tag:v1)
b) A specific software 1 (image1_tag:v2)
c) Sudo (image1_tag:v3)
d) apache http daemon (image1_tag:v4)
e) tomcat (image1_tag:v5)
You can use the docker push command to push each of the above images to your repository. As long as the repository names match, they will be pushed successfully and appear under your chosen repository, correctly tagged.
Now, your question is, "So where is this repository hosted, and who manages the service?" That is where the Docker registry comes into the picture. By default you get the Docker Hub registry, which you can use to keep your private/public repositories. So, without any modification, your images will be pushed to your repository on Docker Hub. Example output when pushing your image tags:
docker@my-docker-vm:/$ docker push mydockerhub/my-helloworld-repo:my_tag
The push refers to repository [docker.io/mydockerhub/my-helloworld-repo]
bf41e934d39d: Pushed
70d93396f87f: Pushed
6ec525dfd060: Pushed
705419d10b13: Pushed
a4aaef726d02: Pushed
04964fddc946: Pushed
latest: digest: sha256:eb93c92351bce785aa3ec0de489cfeeaafd55b7d90adf95ecea02629b376e577 size: 1571
docker@my-docker-vm:/$
And if you then run docker images --digests -a, you can confirm that your pushed image tags now show the new digest against the repository managed by the Docker Hub registry.
A Docker image registry is the place to store all your Docker images. The image registry allows you to push and pull the container images as needed.
Registries can be private or public. When the registry is public, the images are shared with the whole world whereas in the private registry the images are shared only amongst the members of an enterprise or a team.
A registry makes it possible for the Docker daemon to easily pull and run your Docker images.
Docker Hub and other third party repository hosting services are called “registries”. A registry stores a collection of repositories.
A registry can have many repositories, and a repository can have many different versions of the same image, which are individually identified with tags.
The confusion starts with this definition of a tag: "An alphanumeric identifier attached to images in a repository"
I'd rather call that alphanumeric identifier that you append with a ':' a tag-suffix for now. When somebody says "'latest' is the default tag", then this kind of tag-suffix is meant.
In reality, the ':latest' suffix is technically part of the tag. The entire name is a tag. All these are tags (possibly referring to the same image):
myimagename
myimagename:latest
username/theirimagename:1.0
myrepo:5000/username/imagename:1.0
(I say imagename here, just to illustrate the other main source of confusion. That's the repositoryname, of course. Sorry.)
Examples:
a) When you want to name your image while building, you use docker build -t thisname ... -- that is, -t for tag (not -n for name).
b) When you want to push that image to a registry, you need to have the full URL (starting with registryname and ending with a tag-suffix) as a tag:
docker tag thisname mylocalregistry:5000/username/repoimagething:1.0
Now you push the image known as thisname by saying:
docker push mylocalregistry:5000/username/repoimagething:1.0
Naming things is hard.
Alas! A repository is not a "container" (aaargh...) where you put things in, that is what muggles think...

How to share my Docker-Image without using the Docker-Hub?

I'm wondering where Docker's images are exactly stored to in my local host machine.
Can I share my Docker-Image without using the Docker-Hub or a Dockerfile but the 'real' Docker-Image? And what is exactly happening when I 'push' my Docker-Image to Docker-Hub?
Docker images are stored as filesystem layers. Every command in the Dockerfile creates a layer. You can also create layers by using docker commit from the command line after making some changes (via docker run probably).
These layers are stored by default under /var/lib/docker. While you could (theoretically) cherry-pick files from there and install them on a different docker server, it is probably a bad idea to play with the internal representation used by Docker.
When you push your image, these layers are sent to the registry (the docker hub registry by default, unless you tag your image with another registry prefix) and stored there. When pulling, the layer id is used to check whether you already have the layer locally or it needs to be downloaded. You can use docker history to peek at which layers (other images) are used (and, to some extent, which command created the layer).
As for options to share an image without pushing to the docker hub registry, your best options are:
docker save an image or docker export a container. This will output a tar file to standard output, so you will likely want to do something like docker save 'dockerizeit/agent' > dk.agent.latest.tar. Then you can use docker load (for a saved image) or docker import (for an exported container) on a different host.
Host your own private registry (note: parts of this are outdated). See the docker registry image. We have built an s3-backed registry which you can start and stop as needed (all state is kept in the s3 bucket of your choice) and which is trivial to set up. This is also an interesting way of watching what happens when pushing to a registry.
Use another registry like quay.io (I haven't personally tried it), although whatever concerns you have with the docker hub will probably apply here too.
Based on this blog, one could share a docker image without a docker registry by executing:
docker save --output latestversion-1.0.0.tar dockerregistry/latestversion:1.0.0
Once this command has been completed, one could copy the image to a server and import it as follows:
docker load --input latestversion-1.0.0.tar
Sending a docker image to a remote server can be done in 3 simple steps:
Locally, save docker image as a .tar:
docker save -o <path for created tar file> <image name>
Locally, use scp to transfer .tar to remote
On remote server, load image into docker:
docker load -i <path to docker image tar file>
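Those three steps can also be collapsed into a single pipeline, a sketch assuming you have ssh access to the remote Docker host (names are placeholders):
docker save <image name> | gzip | ssh user@remote-host "gunzip | docker load"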
[Update]
More recently, there is Amazon AWS ECR (Elastic Container Registry), which provides a Docker image registry to which you can control access by means of the AWS IAM access management service. ECR can also run a CVE (vulnerabilities) check on your image when you push it.
Once you create your ECR, and obtain the "URL" you can push and pull as required, subject to the permissions you create: hence making it private or public as you wish.
Pricing is by amount of data stored, and data transfer costs.
https://aws.amazon.com/ecr/
[Original answer]
If you do not want to use the Docker Hub itself, you can host your own Docker repository under Artifactory by JFrog:
https://www.jfrog.com/confluence/display/RTF/Docker+Repositories
which will then run on your own server(s).
Other hosting suppliers are available, e.g. CoreOS:
http://www.theregister.co.uk/2014/10/30/coreos_enterprise_registry/
which bought quay.io
