Syncing docker images - docker

I have 2 machines (separate hosts) running Docker and I am using the same image on both machines. How do I keep both images in sync? For example, suppose I make changes to the image on one of the hosts and want the changes to be reflected on the other host as well. I can commit the image and copy it over to the other host. Is there a more efficient way of doing this?

Some ways I can think of (a command-line sketch of options 1 and 2 follows the list):
1. with a Docker registry
the workflow here is:
HOST A: docker commit, docker push
HOST B: docker pull
2. by saving the image to a .tar file
the workflow here is:
HOST A: docker save
HOST B: docker load
3. with a Dockerfile and by building the image again
the workflow here is:
provide a Dockerfile together with your code / files required
every time your code changes and you want to make a release, use docker build to create a new image.
On the hosts that should receive the update, get the updated source code (maybe by using a version control system like Git), and then docker build the image.
4. CI/CD pipeline
you can see a video here: docker.com/use-cases/cicd
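A rough command-line sketch of options 1 and 2 (the registry address, host names and image names here are made up for illustration):

# Option 1: via a registry (assumes a registry reachable at registry.example.com:5000)
# HOST A
docker commit mycontainer registry.example.com:5000/myimage:latest
docker push registry.example.com:5000/myimage:latest
# HOST B
docker pull registry.example.com:5000/myimage:latest

# Option 2: via a tar file
# HOST A
docker save -o myimage.tar myimage:latest
scp myimage.tar hostb:/tmp/
# HOST B
docker load -i /tmp/myimage.tar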
Keep in mind that containers are considered to be ephemeral. This means that updating the image on another host will then require:
to stop and remove any old container (running with the outdated image)
to run a new one (with the updated image)
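For example, on the host that takes the update, the refresh could look something like this (container and image names are hypothetical):

docker pull registry.example.com:5000/myimage:latest                  # get the updated image
docker stop myapp && docker rm myapp                                   # stop and remove the old container
docker run -d --name myapp registry.example.com:5000/myimage:latest   # start a new one from the updated image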
I quote from: Best practices for writing Dockerfiles
General guidelines and recommendations
Containers should be ephemeral
The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.

You can perform docker push to upload your image to a Docker registry, and on the other host perform a docker pull to get the latest image.
For more information please look at this

Related

Dockerfile FROM command - Does it always download from Docker Hub?

I just started working with Docker this week and came across a 'Dockerfile'. I was reading up on what this file does, and the official documentation basically mentions that the FROM keyword is used to specify a "base image". These base images are pulled/downloaded from Docker Hub.
Silly question - are base images always pulled from Docker Hub?
If so, and if I understand correctly, I am assuming that building an image from the Dockerfile is not done very often (only when the image needs to be created), and once the image is created, the image is what's run all the time?
So the Dockerfile can then be migrated to whichever environment and things can be set up all over again quickly?
Pardon the silly question, I am just trying to understand the overall flow and how the Dockerfile fits into things.
If the local (on your host) Docker daemon already has a copy of the container image specified by FROM in a Dockerfile (i.e. it has already been docker pull'd), then it's cached and won't be re-pulled.
Container images include a tag (be wary of ever using latest). The image name, e.g. foo, combined with the tag (which defaults to latest if not specified) is the full name of the image that's checked, i.e. if you have foo:v0.0.1 locally and the Dockerfile says FROM foo:v0.0.1, then the local copy is used, but FROM foo:v0.0.2 will pull foo:v0.0.2.
There's an implicit docker.io prefix, i.e. docker.io/foo:v0.0.1, that references the Docker registry being used.
You could repeatedly docker build container images on the machines where the container is run but this is inefficient and the more common mechanism is that, once a container image is built, it is pushed to a registry (e.g. DockerHub) and then pulled from there by whatever machines need it.
There are many container registries: DockerHub, Google Artifact Registry, Quay etc.
There are tools other than docker that can be used to interact with containers e.g. (Red Hat's) Podman.
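As a small illustration of the caching behaviour described above (foo and myapp are hypothetical names):

docker pull foo:v0.0.1          # the image is now in the local cache
docker build -t myapp .         # a Dockerfile starting with "FROM foo:v0.0.1" reuses the cached copy
docker build --pull -t myapp .  # --pull forces the daemon to check the registry for a newer image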

Shared volume during build?

I have a docker-compose environment set up like so:
Oracle
Filesystem
App
...
etc...
The filesystem container downloads the latest code from our repo and exposes its volume for other containers to mount. This works great except that containers that need to use the code to do builds can't access it since the volume isn't mounted until the containers are run.
I'd like to avoid checking out/downloading the code since the codebase is over 3 GB right now... hence trying to do something spiffier.
Is there a better way to do this?
As you mentioned, Docker volumes won't work here, since volumes are only mounted when the container starts.
The best solution for your situation is to use Docker multi-stage builds. The idea is to have one image that contains the code base, so that other images can copy the code directly from it.
You basically have an image that is responsible for pulling the code:
FROM alpine/git
RUN git clone ...
You then build this image, either separately or as the first image in a compose file.
Other images can then use this image like so:
FROM code-image as code
COPY --from=code /git/<code-repository> /code
This will make the code available to all the images, and it will only be pulled once from the remote repo.
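A possible way to wire this together on the command line (the directory layout and tags here are assumptions, with the image names taken from the snippets above):

# build the image that holds the code (the first Dockerfile above)
docker build -t code-image ./code

# build an application image whose Dockerfile starts with
# "FROM code-image as code" and copies the code out of it
docker build -t app-image ./app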

Install Docker image from a local Dockerfile

I'm working from my local laptop and preparing a Dockerfile that I want to use for deployment later on the server. The problem is that the server contains only the Docker client/daemon, but has no connectivity to the official Docker registry, nor does it provide its own image registry.
Is it possible to build my image locally, ship it to the server and run a container on it without going through the trouble of creating my own image registry?
You can save an image using docker save imagename, which creates a tar file, and then use docker load to create an image on the server from that tar file.
Don't confuse this with docker export, which creates a tar from a container. See Difference between save and export in Docker. As shown in that link, an exported container might be smaller because it flattens the layers. If size matters, you might consider committing a container and exporting it right afterwards.
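A minimal sketch of that save/load round trip, assuming the server is reachable over SSH and the image is called myimage (a hypothetical name):

docker save -o myimage.tar myimage:latest        # create the tar file locally
scp myimage.tar user@server:/tmp/                # ship it to the server
ssh user@server docker load -i /tmp/myimage.tar  # recreate the image there

# or stream it in one step without an intermediate file
docker save myimage:latest | ssh user@server docker load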

Clone an image from a docker registry to another

I have a private registry with a set of images. It can be visualized as a store of applications.
My app can take these applications and run them on other machines.
To achieve this, my app first pulls the image from the private registry and then copies it to a local registry for later use.
The steps are as follows:
docker pull privateregistry:5000/company/app:tag
docker tag privateregistry:5000/company/app:tag localregistry:5000/company/app:tag
docker push localregistry:5000/company/app:tag
Then later on a different machine in my network:
docker pull localregistry:5000/company/app:tag
Is there a way to efficiently copy an image from a repository to another without using a docker client in between ?
You can use docker save to save the images to a tar archive, then copy the tar to the new host and use docker load to load it there.
Read the link below for more:
https://docs.docker.com/engine/reference/commandline/save/
Is there a way to efficiently copy an image from a repository to another without using a docker client in between?
Yes, there's a variety of tools that implement this today. RedHat has been pushing their skopeo, Google has crane, and I've been working on my own with regclient. Each of these tools talks directly to the registry server without needing a docker engine. And at least with regclient (I haven't tested the others), these will only copy the layers that are not already in the target registry, avoiding the need to pull layers again. Additionally, you can move a multi-platform image, retaining all of the available platforms, which you would lose with a docker pull since that dereferences the image to a single platform.
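For example, with skopeo or crane the copy is a single registry-to-registry command (registry and image names taken from the question):

skopeo copy docker://privateregistry:5000/company/app:tag docker://localregistry:5000/company/app:tag
# plain-HTTP registries may also need --src-tls-verify=false / --dest-tls-verify=false

# or with crane
crane copy privateregistry:5000/company/app:tag localregistry:5000/company/app:tag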

Clone docker images from a local server?

I was wondering if there was a way to clone images from a local server.
The servers running containers will be hosted behind a bandwidth-constrained connection. It would be great if there was a way to pull the needed images to one server and then pull from that initial local server to update the containers on the remaining servers.
You could pull the images you want, give them a new tag, and put them in your own registry.
For instance, let's say you pulled down the official registry image and stood it up at myregistry.internal.mycompany.com. Now, if you wanted to have a CentOS image available for all of your servers but didn't want to pull them all from the official repo (incurring the bandwidth charges), then you could pull a CentOS image (let's say centos:latest - docker pull centos) and give that image a new tag, like this:
docker tag centos:latest myregistry.internal.mycompany.com/centos:latest
Now from your other servers you just pull 'myregistry.internal.mycompany.com/centos:latest'
Setting up your own registry is really easy, since it runs as a Docker container itself. You can pull the image and learn more at https://registry.hub.docker.com/_/registry/
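Roughly, standing up that registry and pushing to it could look like this (the hostname is taken from the answer above; the registry image and port are the defaults from the linked page, and a plain-HTTP registry additionally has to be listed under insecure-registries in the daemon config on each server):

# on the registry host
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# on a machine with internet access: tag and push, including the port
# unless something maps the registry to the standard HTTPS port
docker pull centos
docker tag centos:latest myregistry.internal.mycompany.com:5000/centos:latest
docker push myregistry.internal.mycompany.com:5000/centos:latest

# then, on any other server
docker pull myregistry.internal.mycompany.com:5000/centos:latest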
I think you have a few options. If what you actually want to manage is images rather than containers:
You could set up a private Docker registry, and then push to/pull from that local repository. This may ultimately be the easiest if that is something that you want to do fairly often, because you're just using standard docker push/docker pull commands.
You could use docker save to save images on one server and docker load to load the images on another server.
If you are actually trying to move containers around:
You could use docker export on one server and docker import on another server.
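For completeness, the container-level variant looks roughly like this (container and image names are hypothetical):

docker export mycontainer > mycontainer.tar     # flattens the container's filesystem into a tar (layers and metadata are lost)
docker import mycontainer.tar myimage:imported  # creates a new image from that tar on the other server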
