Is it possible using the Docker remote API to copy an image to another host? - docker

Suppose I have two servers, let's call them A and B. Both run Docker as a daemon, and I have created an image on A.
So, of course, I am able to run this image as a container on host A.
Suppose I want to move that image to B - how do I do this? Is this possible via Docker's remote API? Or in any other way?

Right now, you can use docker save, scp, then docker load.
Alternatively, you can docker push then docker pull.
If you are not comfortable using the public index, you can spin up a private registry. Take a look at https://github.com/dotcloud/docker-registry. The easiest way to do so is docker run stackbrew/registry. You will find more details in the GitHub readme.
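For example, a minimal sketch of the save/scp/load route (the image name myimage and the host alias hostB are placeholders):

# On host A: export the image to a tarball
$ docker save -o myimage.tar myimage
# Copy the tarball to host B
$ scp myimage.tar user@hostB:/tmp/
# On host B: load the image into the local Docker daemon
$ docker load -i /tmp/myimage.tar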

Related

Push existing image to another registry (without mounting docker.sock or using docker:dind)

Is it actually possible to push an existing image to another Docker registry without either mounting docker.sock or starting docker:dind?
I'm running docker build in a cluster (with kaniko) and the image needs to be pushed to another repository.
I haven't found an option for kaniko to do that. The only way would be to start a new build (am I correct?).
Is there another alternative? Pulling and pushing should actually be easier than building, and should not require access to the Docker daemon.
The Docker registry has a documented API, and OCI is close to finishing their distribution-spec release, so it's possible to interact with the registry directly rather than going through the Docker engine. I've been doing exactly that with regclient, which includes a regctl image copy command that likely does exactly what you're looking to achieve.
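For example, a registry-to-registry copy with regctl might look like this (both registry hostnames and the image name are placeholders):

# Copy an image directly between registries, without a local Docker daemon
$ regctl image copy source.example.com/myapp:1.0 target.example.com/myapp:1.0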

How to get transferable docker compose stack without dockerhub

I have a few Docker images composed together in a stack using docker-compose.yml.
Now I want to transfer the whole Docker Compose stack to another host machine without uploading it to Docker Hub,
and deploy it on Docker Swarm.
I saw there is a thing called docker-compose bundle; would that help?
If you’re deploying on a multi-host swarm (or something similar like Kubernetes or Nomad) you all but need a Docker registry. It doesn’t specifically have to be Docker Hub — quay.io, Amazon’s ECR, Google’s GCR, and self-hosted registries all work fine — but you do need to have pushed the built images somewhere where the orchestrator can retrieve them by name.
I’ve never used docker-compose bundle myself, but its documentation also notes that its operation “requires interaction with a Docker registry”.
The only real alternative is using docker save and docker load to manually move images between machines, but as a manual process it will get tedious very quickly, and you need to make sure an identical set of images is on every machine for consistency. Using a registry will be vastly easier.
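If you do go the docker save route, a rough sketch looks like this (the image names are placeholders for whatever your docker-compose.yml references):

# On the build host: bundle every image the stack uses into one tarball
$ docker save -o stack.tar myapp-web:latest myapp-worker:latest redis:6
# Transfer it, then load the images into the daemon on each swarm node
$ scp stack.tar user@node1:/tmp/
$ docker load -i /tmp/stack.tar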
The easiest way to do it is to use a Docker registry. The problem with Docker Hub is that the free plan only gives you one private repository; the rest must be public or paid.
Thankfully, there are other (free) alternatives:
Deploy your own private registry (see the sketch below). Here is a nice tutorial where you can try it in the browser.
Use a free private registry. I personally use Codefresh. It can automatically build your image from a private repo (like Bitbucket, which has a free plan too), but you can also just use it as a "simple" Docker registry and push and pull your Docker images there.
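A minimal sketch of the self-hosted option, with a placeholder hostname myhost.example.com (note that a registry served over plain HTTP must also be listed under insecure-registries in each daemon's configuration):

# Run the official registry image on a host every machine can reach
$ docker run -d -p 5000:5000 --name registry registry:2
# Tag and push an image to it
$ docker tag myapp:latest myhost.example.com:5000/myapp:latest
$ docker push myhost.example.com:5000/myapp:latest
# Pull it from any other host
$ docker pull myhost.example.com:5000/myapp:latest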

Syncing docker images

I have 2 machines (separate hosts) running Docker and I am using the same image on both machines. How do I keep both images in sync? For example, suppose I make changes to the image on one of the hosts and want the changes to be reflected on the other host as well. I can commit the image and copy it over to the other host. Is there any other efficient way of doing this?
Some ways I can think of:
1. with a Docker registry
the workflow here is:
HOST A: docker commit, docker push
HOST B: docker pull
2. by saving the image to a .tar file
the workflow here is:
HOST A: docker save
HOST B: docker load
3. with a Dockerfile and by building the image again
the workflow here is:
provide a Dockerfile together with your code and any required files
every time your code changes and you want to make a release, use docker build to create a new image
on the hosts that should take the update, get the updated source code (for example with a version control system like Git) and then docker build the image (see the sketch after this list)
4. CI/CD pipeline
you can see a video here: docker.com/use-cases/cicd
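A minimal sketch of option 3, assuming the code lives in a hypothetical Git repository with a Dockerfile at its root (the remote, branch, and image names are placeholders):

# On each host that should take the update
$ git pull origin main
$ docker build -t myapp:latest .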
Keep in mind that containers are considered to be ephemeral. This means that updating the image on another host will then require (see the example below):
to stop and remove any old container (running with the outdated image)
to run a new one (with the updated image)
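For example (the container, image, and registry names are placeholders):

# Replace a running container after pulling the updated image
$ docker pull myregistry.example.com/myapp:latest
$ docker stop myapp && docker rm myapp
$ docker run -d --name myapp myregistry.example.com/myapp:latest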
I quote from: Best practices for writing Dockerfiles
General guidelines and recommendations
Containers should be ephemeral
The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.
You can perform docker push to upload your image to a Docker registry and perform a docker pull to get the latest image from another host.
For more information, please look at this.

Docker: Push data container onto Docker Hub

I am really new at this Docker stuff, and even newer at Docker Hub, so please bear with me …
I have created a data container to use with my docker image (specifically, a data container to store data for a running mssql-server-lnux image). I know where it is on my local system.
I have a newly created account on Docker Hub and I think I want to push the data container to the Hub. I say "I think" because I'm not sure that it's the right way to go about it: I want to be able to use the data container from different machines.
If what I have said so far is in the right direction, then how do I push the docker image to the Hub, and how do I then access it later?
You can't push containers, only images; the distinction is important.
An image is akin to a class, and a container is essentially an instance of that image.
So if you want to push a container to share your database, it's not a good idea - you would have to docker commit first, and that would get ugly really fast.
But if you just want to start new instances of your database on different machines with fresh data containers (there will be no data initially), then go ahead and push the data container image.
Hope this helps.
Okay, here are a few steps. Please check if this helps.
Tag your image first. Let's say your image name is 'myapplication' and your Docker Hub username is 'dockerhubusername'.
$ docker tag myapplication dockerhubusername/myapplication
Log in to Docker Hub. Enter your username ('dockerhubusername') and then the password of your Docker Hub account.
$ docker login
Now run the push command.
$ docker push dockerhubusername/myapplication
Now, log in to Docker Hub and check that your image is there. Remember, it is images that are pushed to a registry/repository like Docker Hub, not containers.
Assuming you have tagged your image,
Use docker login to configure your Docker Hub credentials and docker push to push your image to Docker Hub.
$ docker login
$ docker push dockerhub_username/mssql-server-lnux
If you have not yet tagged the image,
$ docker tag mssql-server-lnux dockerhub_username/mssql-server-lnux
To access your image later,
$ docker pull dockerhub_username/mssql-server-lnux
References
Docker Pull
Docker Login

clone docker images from local server?

I was wondering if there was a way to clone images from a local server.
The servers running containers will be hosted behind a bandwidth-constrained connection. It would be great if there were a way to pull the given images onto one server and then pull from that initial local server to update the containers on the remaining servers.
You could pull the images you want, give them a new tag, and put them in your own registry.
For instance, let's say you pulled down the official registry image and stood it up at myregistry.internal.mycompany.com. Now, if you wanted to have a CentOS image available for all of your servers but didn't want to pull it from the official repo on each of them (incurring the bandwidth charges), then you could pull a CentOS image (let's say centos:latest - docker pull centos) and give that image a new tag, like this:
docker tag centos:latest myregistry.internal.mycompany.com/centos:latest
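Then push the retagged image to your registry; this step is needed before the other servers can pull it:
$ docker push myregistry.internal.mycompany.com/centos:latest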
Now from your other servers you just pull 'myregistry.internal.mycompany.com/centos:latest'
Setting up your own registry is really easy, since it runs as a Docker container itself. You can pull the image and learn more at https://registry.hub.docker.com/_/registry/
I think you have a few options. If what you actually want to manage is images rather than containers:
You could set up a private Docker registry, and then push to/pull from that local repository. This may ultimately be the easiest if that is something that you want to do fairly often, because you're just using standard docker push/docker pull commands.
You could use docker save to save images on one server and docker load to load the images on another server.
If you are actually trying to move containers around:
You could use docker export on one server and docker import on another server, as sketched below.
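A rough sketch of the export/import path (the container and image names are placeholders); note that export/import flattens the container's filesystem into a single layer and drops metadata such as CMD, ENV, and image history:

# On server 1: export the container's filesystem to a tarball
$ docker export mycontainer > mycontainer.tar
# On server 2: import the tarball as a new image
$ docker import mycontainer.tar myimage:imported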
