What is a good style to deploy with Docker when Docker Hub is not available?

I want to deploy my project while avoiding Docker Hub. Is it a good idea to send an image as a tar file, or would a Dockerfile combined with all local dependencies be preferable (in which case the user will need to build the image on their own)? Is there a way to deploy the container itself?

If you cannot run a local Docker registry, then sending the image itself is preferable, as it won't force the user to fetch any dependencies.
Sending the Dockerfile along with it (to document the image) is good as well.
The main alternative, though, is to use a Docker registry. If you don't want to depend on local registry storage, you can back the registry with remote storage such as S3, or an S3-compatible server like MinIO.
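For instance, a minimal sketch of sending the image itself (the image name and file paths are placeholders):
docker save -o my-project.tar my-project:latest   # export the image and all its layers to a tar file
# transfer my-project.tar to the user (scp, USB drive, etc.), then on their machine:
docker load -i my-project.tar                     # import the image into their local Docker engine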

Related

How do I docker buildx build into a local "registry" container

I am trying to build a multi-arch image but would like to avoid pushing it to Docker Hub. I've had a lot of trouble finding out how to control the export options. Is there a way to make --push push to a registry of my choosing?
Any help is appreciated.
Docker provides a container image for a registry server that you can run yourself, even on localhost; see: Deploying a registry server.
There are other servers and services that implement the registry API (see below), but this is a good place to start.
Conventionally, pushes and pulls default to Docker's registry; unless a registry is explicitly specified, an image such as your-image:your-tag defaults to docker.io/your-image:your-tag. In my opinion, it's good practice to always include this default prefix to be more transparent about it.
If you run Docker's registry image on localhost on the default port 5000, you'll need to tag your images as localhost:5000/your-image:your-tag to ensure that, when you docker push localhost:5000/your-image:your-tag, the CLI can determine that your local registry is the intended destination.
Similarly, if you use e.g. the Quay registry, images must be prefixed with quay.io; with Google Artifact Registry, images are prefixed with ${REGION}-docker.pkg.dev/${PROJECT}/${REPOSITORY}, etc.
IIRC it's not possible to push to Docker's registry (aka Docker Hub) without an account, so, as long as you ensure you're not logged in, you should not accidentally push images to it.
NOTE You only need a registry to ease distribution of container images between machines. If you're only interested in local(host) development, you can docker run ... immediately after a successful docker build without any pushing or pulling (beyond interim images, e.g. FROM).
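Putting that together, a minimal sketch (the image name, tag, and platform list are placeholders; with the docker-container driver, network=host lets the builder reach the registry on localhost):
docker run -d -p 5000:5000 --name registry registry:2                            # start a local registry on port 5000
docker buildx create --use --driver docker-container --driver-opt network=host   # builder that can reach localhost
docker buildx build --platform linux/amd64,linux/arm64 \
  -t localhost:5000/your-image:your-tag --push .                                 # --push targets the registry named in the tag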

Can a docker registry be copied from one machine to another?

Can a docker registry populated on one host be 'tree-copied' to another machine and be 'turned on' as a pre-populated docker registry served by the new host?
I am working on a project providing Platform-as-a-Service which includes a docker registry service. These run in disconnected environments (not connected to the Internet). One very time-consuming aspect of each deployment is creating an empty registry and then loading, tagging, and pushing hundreds of docker images (tens of gigabytes of data) into it from a compressed tar.
I am thinking it would be faster to do this differently. Instead of a tarball of docker files, could we at 'build time' create and populate the docker registry and then compress that? At deploy time we just unpack the registry into /var/lib/registry or wherever...
But, I don't know if any of the data in the registry is dependent upon, say, a machine ID, certificate, or other aspect of the host upon which the registry was first running.
It seems to me an equivalent question is, if I populate two docker registries running on different machines with the same set of docker images in the same order, will the file contents of the registry folder be the same (or similar, allowing for timestamps and such?)
Every time I search for "docker registry transfer" or "move docker registry to new machine" or similar terms, I am flooded with answers about moving single docker images from one machine or registry to another, but don't see anything about docker registry migration or portability.
I haven't had the time or resources to test this out; maybe someone already expert in docker registry structures could clue me in that this is practical (or can say it absolutely will not work) so I can make a better decision about whether to pursue getting the time and machines to demonstrate this approach.
Thank you.
"I don't know if any of the data in the registry is dependent upon, say, a machine ID, certificate, or other aspect of the host upon which the registry was first running."
The configuration will be tied to the registry; for example, if you are running a secure registry you will have added certs to it.
In that case, you will have to configure the registry on the new machine in the same manner as on the previous one; use a configuration manager (like Ansible) for that.
"Instead of a tarball of docker files, could we at 'build time' create and populate the docker registry and then compress that? At deploy time we just unpack the registry into /var/lib/registry or wherever..."
Adding to what @DazWilkin already mentioned in the comments, a storage location can be configured for the registry, which can be:
1. filesystem: the rootdirectory, which defaults to /var/lib/registry
2. cloud-provider storage, if the registry is deployed on a private cloud (for example, an S3 bucket on AWS)
You can back up that storage (the rootdirectory, in the filesystem case) or attach the same storage location to the new registry.
Words of caution
Use the exact same configuration and version of the Docker registry on the new machine.
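For the filesystem case, a hypothetical sketch of such a copy (assuming the default rootdirectory /var/lib/registry and the registry:2 image on both hosts):
docker stop registry                                   # quiesce the registry before copying its storage
tar -czf registry-data.tar.gz -C /var/lib/registry .   # archive the populated registry data
scp registry-data.tar.gz user@new-host:/tmp/
# on the new host:
mkdir -p /var/lib/registry
tar -xzf /tmp/registry-data.tar.gz -C /var/lib/registry
docker run -d -p 5000:5000 --name registry \
  -v /var/lib/registry:/var/lib/registry registry:2    # serve the pre-populated data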

Syncing docker images

I have 2 machines (separate hosts) running Docker, and I am using the same image on both machines. How do I keep both images in sync? For example, suppose I make changes to the image on one of the hosts and want the changes to reflect on the other host as well. I can commit the image and copy it over to the other host. Is there any other, more efficient way of doing this?
Some ways I can think of:
1. with a Docker registry
the workflow here is (see the sketch after this list):
HOST A: docker commit, docker push
HOST B: docker pull
2. by saving the image to a .tar file
the workflow here is:
HOST A: docker save
HOST B: docker load
3. with a Dockerfile and by building the image again
the workflow here is:
provide a Dockerfile together with your code / files required
every time your code has changed and you want to make a release, use docker build to create a new image.
on the hosts that should receive the update, you will have to get the updated source code (perhaps with a version control system like Git), and then docker build the image
4. CI/CD pipeline
you can see a video here: docker.com/use-cases/cicd
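To make workflow 1 concrete, a minimal sketch (the registry address, image, and container names are placeholders):
# HOST A: bake the container's changes into an image and publish it
docker commit app-container your-registry:5000/your-image:your-tag
docker push your-registry:5000/your-image:your-tag
# HOST B: fetch the updated image
docker pull your-registry:5000/your-image:your-tag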
Keep in mind that containers are considered to be ephemeral. This means that updating an image on another host will then require you:
to stop and remove any old container (running with the outdated image)
to run a new one (with the updated image)
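For example, continuing the sketch above (names are again placeholders):
# HOST B: replace the running container with one based on the updated image
docker stop app-container && docker rm app-container
docker run -d --name app-container your-registry:5000/your-image:your-tag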
I quote from: Best practices for writing Dockerfiles
General guidelines and recommendations
Containers should be ephemeral
The container produced by the image your Dockerfile defines should be as ephemeral as possible. By “ephemeral,” we mean that it can be stopped and destroyed and a new one built and put in place with an absolute minimum of set-up and configuration.
You can perform a docker push to upload your image to a Docker registry and a docker pull to get the latest image on another host.
For more information please look at this.

Clone an image from a docker registry to another

I have a private registry with a set of images. It can be visualized as a store of applications.
My app can take these applications and run them on other machines.
To achieve this, my app first pulls the image from the private registry and then copies it to a local registry for later use.
The steps are as follows:
docker pull privateregistry:5000/company/app:tag
docker tag privateregistry:5000/company/app:tag localregistry:5000/company/app:tag
docker push localregistry:5000/company/app:tag
Then later on a different machine in my network:
docker pull localregistry:5000/company/app:tag
Is there a way to efficiently copy an image from a repository to another without using a docker client in between ?
You can use docker save to save the image to a tar archive, then copy the tar to the new host and use docker load to import it.
Read the link below for more:
https://docs.docker.com/engine/reference/commandline/save/
Is there a way to efficiently copy an image from a repository to another without using a docker client in between?
Yes, there's a variety of tools that implement this today. RedHat has been pushing their skopeo, Google has crane, and I've been working on my own with regclient. Each of these tools talks directly to the registry server without needing a docker engine. And at least with regclient (I haven't tested the others), these will only copy the layers that are not already in the target registry, avoiding the need to pull layers again. Additionally, you can move a multi-platform image, retaining all of the available platforms, which you would lose with a docker pull since that dereferences the image to a single platform.
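For example, a hypothetical registry-to-registry copy with skopeo or crane, reusing the registry names from the question (skopeo's --all retains every platform of a multi-arch image; plain-HTTP registries may additionally need --src-tls-verify=false/--dest-tls-verify=false, or crane's --insecure):
skopeo copy --all docker://privateregistry:5000/company/app:tag docker://localregistry:5000/company/app:tag
# or, equivalently, with crane:
crane copy privateregistry:5000/company/app:tag localregistry:5000/company/app:tag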

How to share my Docker-Image without using the Docker-Hub?

I'm wondering where exactly Docker's images are stored on my local host machine.
Can I share my Docker image without using Docker Hub or a Dockerfile, i.e. share the 'real' Docker image itself? And what exactly happens when I 'push' my Docker image to Docker Hub?
Docker images are stored as filesystem layers. Every command in the Dockerfile creates a layer. You can also create layers by using docker commit from the command line after making some changes (via docker run probably).
These layers are stored by default under /var/lib/docker. While you could (theoretically) cherry-pick files from there and install them on a different Docker server, it is probably a bad idea to play with the internal representation used by Docker.
When you push your image, these layers are sent to the registry (the docker hub registry, by default… unless you tag your image with another registry prefix) and stored there. When pulling, the layer id is used to check if you already have the layer locally or it needs to be downloaded. You can use docker history to peek at which layers (other images) are used (and, to some extent, which command created the layer).
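As a hypothetical illustration of commit-created layers (the image and container names are placeholders):
docker run --name tmp your-image:your-tag touch /marker   # make a filesystem change inside a container
docker commit tmp your-image:modified                     # capture that change as a new layer
docker history your-image:modified                        # list the layers and the commands that created them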
As for options to share an image without pushing to the docker hub registry, your best options are:
docker save an image or docker export a container. This will output a tar file to standard output, so you will want to do something like docker save 'dockerizeit/agent' > dk.agent.latest.tar. Then you can use docker load or docker import on a different host.
Host your own private registry (outdated, see comments). See the docker registry image. We have built an S3-backed registry which you can start and stop as needed (all state is kept in the S3 bucket of your choice) and which is trivial to set up. This is also an interesting way of watching what happens when pushing to a registry.
Use another registry like quay.io (I haven't personally tried it), although whatever concerns you have with the docker hub will probably apply here too.
Based on this blog, one could share a docker image without a docker registry by executing:
docker save --output latestversion-1.0.0.tar dockerregistry/latestversion:1.0.0
Once this command has been completed, one could copy the image to a server and import it as follows:
docker load --input latestversion-1.0.0.tar
Sending a docker image to a remote server can be done in 3 simple steps:
Locally, save docker image as a .tar:
docker save -o <path for created tar file> <image name>
Locally, use scp to transfer .tar to remote
On remote server, load image into docker:
docker load -i <path to docker image tar file>
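Concretely, the three steps might look like this (the paths, image name, and remote host are placeholders):
docker save -o /tmp/my-image.tar my-image:latest        # step 1: export the image to a tar file
scp /tmp/my-image.tar user@remote-host:/tmp/            # step 2: transfer the tar to the remote server
ssh user@remote-host docker load -i /tmp/my-image.tar   # step 3: import the image on the remote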
[Update]
More recently, there is Amazon AWS ECR (Elastic Container Registry), which provides a Docker image registry to which you can control access by means of the AWS IAM access management service. ECR can also run a CVE (vulnerabilities) check on your image when you push it.
Once you create your ECR repository and obtain its URL, you can push and pull as required, subject to the permissions you create: hence making it private or public as you wish.
Pricing is by amount of data stored, and data transfer costs.
https://aws.amazon.com/ecr/
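A hypothetical push to ECR (the account ID, region, and repository name are placeholders):
aws ecr get-login-password --region us-east-1 \
  | docker login --username AWS --password-stdin 123456789012.dkr.ecr.us-east-1.amazonaws.com
docker tag my-image:latest 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest
docker push 123456789012.dkr.ecr.us-east-1.amazonaws.com/my-image:latest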
[Original answer]
If you do not want to use the Docker Hub itself, you can host your own Docker repository under Artifactory by JFrog:
https://www.jfrog.com/confluence/display/RTF/Docker+Repositories
which will then run on your own server(s).
Other hosting suppliers are available, e.g. CoreOS:
http://www.theregister.co.uk/2014/10/30/coreos_enterprise_registry/
which bought quay.io
