Push existing image to another registry (without mounting docker.sock or using docker:dind) - docker

Is it actually possible to push an existing image to another Docker registry without either mounting docker.sock or starting docker:dind?
I'm running docker build in a cluster (with kaniko) and the image needs to be pushed to another repository.
I haven't found an option for kaniko to do that. The only way would be to start a new build (am I correct?).
Is there another alternative? Pulling and pushing should actually be easier than building, and shouldn't require access to the Docker daemon at all.

The docker registry has a documented API, and OCI is close to finishing their distribution-spec release, so it's possible to interact with the registry directly rather than going through the docker engine. I've been doing exactly that with regclient, which includes a regctl image copy command that likely does exactly what you're looking to achieve.
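A minimal sketch of that approach with regctl (the registry hostnames and image names below are placeholders; the login steps can be skipped for public images):
# authenticate against both registries, then copy the image directly between them
regctl registry login source-registry.example.com
regctl registry login target-registry.example.com
regctl image copy source-registry.example.com/team/app:v1 target-registry.example.com/team/app:v1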

Related

Dockerfile FROM command - Does it always download from Docker Hub?

I just started working with Docker this week and came across a Dockerfile. I was reading up on what this file does, and the official documentation basically mentions that the FROM keyword is needed to specify a "base image". These base images are pulled (i.e. downloaded) from Docker Hub.
Silly question - are base images always pulled from Docker Hub?
If so, and if I understand correctly, I'm assuming that building an image from the Dockerfile is not done very often (only when the image needs to be created), and once the image is created it's the image that gets run all the time?
So the Dockerfile can then be migrated to whichever environment and things can be set up all over again quickly?
Pardon the silly question, I'm just trying to understand the overall flow and how the Dockerfile fits into things.
If the local (on your host) Docker daemon already has a copy of the container image specified by FROM in a Dockerfile (i.e. it's been docker pull'd), then it's cached and won't be re-pulled.
Container images include a tag (be wary of ever using latest); the image name, e.g. foo, combined with the tag (which defaults to latest if not specified) is the full name that's checked, i.e. if you have foo:v0.0.1 locally and the Dockerfile says FROM foo:v0.0.1, the local copy is used, but FROM foo:v0.0.2 will pull foo:v0.0.2.
There's an implicit docker.io prefix, i.e. docker.io/foo:v0.0.1, that references the Docker registry being used.
You could repeatedly docker build container images on the machines where the container is run, but this is inefficient; the more common mechanism is that, once a container image is built, it is pushed to a registry (e.g. Docker Hub) and then pulled from there by whatever machines need it.
There are many container registries: Docker Hub, Google Artifact Registry, Quay etc.
There are tools other than docker that can be used to interact with containers, e.g. (Red Hat's) Podman.
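To make that name resolution concrete, here's a small sketch; ubuntu is just an example image, and for single-name official images the fully-qualified reference also includes an implicit library namespace:
# these two commands resolve to the same image on Docker Hub
docker pull ubuntu:22.04
docker pull docker.io/library/ubuntu:22.04
# a fully-qualified name pulls from a different registry instead (placeholder reference)
docker pull quay.io/someorg/someimage:v1.0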

How to copy multi-arch docker images to a different container registry?

There's a well-known approach to copying docker images from one container registry to another. In case the original registry is Docker Hub, the typical workflow would be something like this:
docker pull <image:tag>
docker tag <image:tag> <new-reg-url/uid/image:tag>
docker push <new-reg-url/uid/image:tag>
Now, how do you do the above when dealing with images with multi-architecture layers?
As per the information in this link, you can rely on buildx to construct multi-arch images, and while doing that, you can also upload them to whichever repo you wish, but how do I do this without having to first build the images?
It looks like the buildx CLI has unnecessarily (?) coupled the uploading process with the building one. Any suggestions?
Thanks!
While the docker pull ...; docker tag ...; docker push ... syntax is the easy way to move images between registries, it has a couple of drawbacks. First, as you've seen, it dereferences a multi-platform image to a single platform. Second, it pulls all layers to the docker engine even if the remote registry already has those layers, making it a bad method for ephemeral CI workers that would always need to pull every layer.
To do this, I prefer talking directly to the registry servers rather than the docker engine itself. You don't need the engine's functionality to run the images; all you need is the registry API. Docker has documented the original registry API, and OCI recently went 1.0 on the distribution-spec, which should get us some standardization.
There's a variety of tooling based on those specs, from the docker engine itself and containerd, to skopeo, Google's crane, and regclient, which I've been working on. Doing this with regclient's regctl command looks like:
regctl image copy <source_image:tag> <target_image:tag>
And the result is that the various layers, image config, manifests, and multi-platform manifest list will be copied between registries, but only the layers that don't already exist on the target registry are transferred.
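skopeo can do the same registry-to-registry copy; a quick sketch, where the image references are placeholders and --all copies every platform in the manifest list rather than only the one matching your host:
skopeo copy --all \
  docker://docker.io/library/alpine:3.18 \
  docker://registry.example.com/mirror/alpine:3.18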
2022 (docker built-in) solution
It's possible to perform the copy using the not-well-documented built-in command docker buildx imagetools create with --tag
# i.e.
OLD_TAG=registry.example.com/namespaced/repository/example-image:old-tag
NEW_TAG=registry.example.com/namespaced/repository/example-image:new-tag
# we can
docker buildx imagetools create --tag "$NEW_TAG" "$OLD_TAG"
Reference documentation
IMPORTANT NOTE: There is no support at the moment to perform this operation across different repositories. Given tags like
OLD_TAG=registry.example.com/namespaced/repository/example-image:latest
NEW_TAG=registry.example.com/other-repository/example-image:latest
You end up with an error like
error: multiple repositories currently not supported
For this situation I'm going to test the currently accepted answer.
As @laconbass has written, this can be done with docker buildx imagetools create. The ability to do this across multiple repositories was added in this PR:
docker buildx imagetools create -t <NEW-TAG> <OLD-TAG>
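Using the example tags from the note above, a cross-repository copy then looks like this (the registry and repository names are still placeholders):
OLD_TAG=registry.example.com/namespaced/repository/example-image:latest
NEW_TAG=registry.example.com/other-repository/example-image:latest
docker buildx imagetools create -t "$NEW_TAG" "$OLD_TAG"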

Syncing docker images

I have 2 machines (separate hosts) running docker and I am using the same image on both machines. How do I keep the two images in sync? For example, suppose I make changes to the image on one of the hosts and want the changes to be reflected on the other host as well. I can commit the image and copy it over to the other host. Is there any other, more efficient way of doing this?
Some ways I can think of:
1. with a Docker registry
the workflow here is:
HOST A: docker commit, docker push
HOST B: docker pull
2. by saving the image to a .tar file
the workflow here is:
HOST A: docker save
HOST B: docker load
(concrete commands for these two workflows are sketched right after this list)
3. with a Dockerfile and by building the image again
the workflow here is:
provide a Dockerfile together with your code / files required
every time your code has changed and you want to make a release, use docker build to create a new image.
on the hosts that should receive the update, you will have to get the updated source code (for example by using a version control system like Git), and then docker build the image
4. CI/CD pipeline
you can see a video here: docker.com/use-cases/cicd
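A minimal sketch of the first two workflows; the container, image, registry, and host names below are placeholders:
# 1. via a registry
# on HOST A
docker commit my-container registry.example.com/myorg/myimage:v2
docker push registry.example.com/myorg/myimage:v2
# on HOST B
docker pull registry.example.com/myorg/myimage:v2
# 2. via a .tar file
# on HOST A
docker save -o myimage.tar registry.example.com/myorg/myimage:v2
scp myimage.tar hostB:
# on HOST B
docker load -i myimage.tar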
Keep in mind that containers are considered to be ephemeral. This means that updating the image on another host will then require:
to stop and remove any old container (running with the outdated image)
to run a new one (with the updated image)
I quote from: Best practices for writing Dockerfiles
General guidelines and recommendations
Containers should be ephemeral
The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.
You can perform docker push to upload your image to a docker registry and perform a docker pull to get the latest image on another host.
For more information please look at this

Clone an image from a docker registry to another

I have a private registry with a set of images. It can be thought of as a store of applications.
My app can take these applications and run them on other machines.
To achieve this, my app first pulls the image from the private registry and then copies it to a local registry for later use.
The steps are as follows:
docker pull privateregistry:5000/company/app:tag
docker tag privateregistry:5000/company/app:tag localregistry:5000/company/app:tag
docker push localregistry:5000/company/app:tag
Then, later, on a different machine in my network:
docker pull localregistry:5000/company/app:tag
Is there a way to efficiently copy an image from one repository to another without using a docker client in between?
You can use docker save to save the image to a tar archive, then copy the tar to the new host and use docker load to load it there.
Read the link below for more:
https://docs.docker.com/engine/reference/commandline/save/
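As a hedged sketch, the archive can also be streamed over ssh so no intermediate file is written; otherhost is a placeholder and the image reference reuses the one from the question:
# stream the image from this host straight into the docker engine on the other host
docker save privateregistry:5000/company/app:tag | ssh otherhost docker load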
Is there a way to efficiently copy an image from one repository to another without using a docker client in between?
Yes, there's a variety of tools that implement this today. Red Hat has been pushing their skopeo, Google has crane, and I've been working on my own with regclient. Each of these tools talks directly to the registry server without needing a docker engine. And at least with regclient (I haven't tested the others), these will only copy the layers that are not already in the target registry, avoiding the need to pull layers again. Additionally, you can move a multi-platform image, retaining all of the available platforms, which you would lose with a docker pull since that dereferences the image to a single platform.
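For instance, with Google's crane the copy is a single command that talks directly to both registries and keeps every platform of a multi-platform image; the references below reuse the example names from the question:
crane copy privateregistry:5000/company/app:tag localregistry:5000/company/app:tag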

Is it possible using the Docker remote API to copy an image to another host?

Suppose I have two servers, let's call them A and B. Both run Docker as a daemon, and I have created an image on A.
So, of course, I am able to run this image as a container on host A.
Suppose I want to move that image to B - how do I do this? Is this possible via Docker's remote API? Or by any other way?
Right now, you can use docker save, scp, then docker load.
Alternatively, you can docker push then docker pull.
If you are not comfortable using the public index, you can spawn a private registry. Take a look at https://github.com/dotcloud/docker-registry. The easiest way to do so is docker run stackbrew/registry. You will find more details in the GitHub readme.
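A sketch of the private-registry route, using the current official registry image (registry:2) rather than the old stackbrew/registry image mentioned above; the hostnames and image name are placeholders:
# on server A: run a registry and push the image to it
docker run -d -p 5000:5000 --name registry registry:2
docker tag myimage:latest serverA:5000/myimage:latest
docker push serverA:5000/myimage:latest
# on server B: pull from that registry
# (for plain HTTP you may need to add serverA:5000 to insecure-registries in the daemon config)
docker pull serverA:5000/myimage:latest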
