How to share my Docker image without using Docker Hub? - docker

I'm wondering where exactly Docker images are stored on my local host machine.
Can I share my Docker image without using Docker Hub or a Dockerfile, i.e. share the 'real' Docker image itself? And what exactly happens when I push my Docker image to Docker Hub?

Docker images are stored as filesystem layers. Every instruction in the Dockerfile creates a layer. You can also create layers by using docker commit from the command line after making some changes (probably via docker run).
These layers are stored by default under /var/lib/docker. While you could (theoretically) cherry-pick files from there and install them on a different Docker server, it is probably a bad idea to play with the internal representation used by Docker.
When you push your image, these layers are sent to the registry (the Docker Hub registry by default, unless you tag your image with another registry prefix) and stored there. When pulling, the layer id is used to check whether you already have the layer locally or it needs to be downloaded. You can use docker history to peek at which layers (other images) are used (and, to some extent, which command created each layer).
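For example (the image and container names below are only examples), you can inspect an image's layers and create a new layer from a modified container like this:
docker history ubuntu:22.04
docker run -it --name mycontainer ubuntu:22.04 bash
docker commit mycontainer myimage:customized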
As for options to share an image without pushing to the docker hub registry, your best options are:
docker save an image or docker export a container. This will output a tar file to standard output, so you will likely want to do something like docker save 'dockerizeit/agent' > dk.agent.latest.tar. Then you can use docker load or docker import on a different host.
Host your own private registry (outdated, see comments). See the docker registry image. We have built an S3-backed registry which you can start and stop as needed (all state is kept in the S3 bucket of your choice) and which is trivial to set up. This is also an interesting way of watching what happens when pushing to a registry.
Use another registry like quay.io (I haven't personally tried it), although whatever concerns you have with the docker hub will probably apply here too.

Based on this blog, one could share a docker image without a docker registry by executing:
docker save --output latestversion-1.0.0.tar dockerregistry/latestversion:1.0.0
Once this command has been completed, one could copy the image to a server and import it as follows:
docker load --input latestversion-1.0.0.tar

Sending a docker image to a remote server can be done in 3 simple steps:
Locally, save docker image as a .tar:
docker save -o <path for created tar file> <image name>
Locally, use scp to transfer the .tar to the remote server (see the example after these steps)
On remote server, load image into docker:
docker load -i <path to docker image tar file>
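For example, assuming the image was saved as myimage.tar and the remote host is reachable as user@remote-host (both are placeholders):
scp myimage.tar user@remote-host:/tmp/
ssh user@remote-host docker load -i /tmp/myimage.tar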

[Update]
More recently, there is Amazon AWS ECR (Elastic Container Registry), which provides a Docker image registry to which you can control access by means of the AWS IAM access management service. ECR can also run a CVE (vulnerabilities) check on your image when you push it.
Once you create your ECR repository and obtain its "URL", you can push and pull as required, subject to the permissions you create: hence making it private or public as you wish.
Pricing is by amount of data stored, and data transfer costs.
https://aws.amazon.com/ecr/
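As a sketch (the account ID, region, repository and tag below are placeholders), authenticating against ECR and pushing an image looks roughly like this:
aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 123456789012.dkr.ecr.eu-west-1.amazonaws.com
docker tag myimage:1.0.0 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myimage:1.0.0
docker push 123456789012.dkr.ecr.eu-west-1.amazonaws.com/myimage:1.0.0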
[Original answer]
If you do not want to use the Docker Hub itself, you can host your own Docker repository under Artifactory by JFrog:
https://www.jfrog.com/confluence/display/RTF/Docker+Repositories
which will then run on your own server(s).
Other hosting providers are available, e.g. CoreOS, which bought quay.io:
http://www.theregister.co.uk/2014/10/30/coreos_enterprise_registry/

Related

Dockerfile FROM command - Does it always download from Docker Hub?

I just started working with Docker this week and came across a 'Dockerfile'. I was reading up on what this file does, and the official documentation basically mentions that the FROM keyword is needed to build a "base image". These base images are pulled (i.e. downloaded) from Docker Hub.
Silly question - are base images always pulled from Docker Hub?
If so, and if I understand correctly, I am assuming that running the Dockerfile to create an image is not done very often (only when the image needs to be created), and once the image is created, the image is what's run all the time?
So the Dockerfile can then be migrated to whichever environment, and things can be set up all over again quickly?
Pardon the silly question I am just trying to understand the over all flow and how dockerfile fits into things.
If the local (on your host) Docker daemon already has a copy of the container image specified by FROM in a Dockerfile (i.e. it's been docker pull'd), then it's cached and won't be re-pulled.
Container images include a tag (be wary of ever using latest), and the image name, e.g. foo, combined with the tag (which defaults to latest if not specified) is the full name of the image that's checked, i.e. if you have foo:v0.0.1 locally and FROM foo:v0.0.1, then the local copy is used, but FROM foo:v0.0.2 will pull foo:v0.0.2.
There's an implicit docker.io prefix i.e. docker.io/foo:v0.0.1 that references the Docker registry that's being used.
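For instance, because of the implicit docker.io prefix and the library/ namespace for official images, these two pulls are equivalent (alpine:3.19 is just an example image):
docker pull alpine:3.19
docker pull docker.io/library/alpine:3.19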
You could repeatedly docker build container images on the machines where the container is run but this is inefficient and the more common mechanism is that, once a container image is built, it is pushed to a registry (e.g. DockerHub) and then pulled from there by whatever machines need it.
There are many container registries: DockerHub, Google Artifact Registry, Quay etc.
There are tools other than docker that can be used to interact with containers e.g. (Red Hat's) Podman.
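For example, Podman uses essentially the same verbs, here with a fully qualified image name (the image is only illustrative):
podman pull docker.io/library/alpine:3.19
podman run --rm docker.io/library/alpine:3.19 echo hello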

docker difference between private registry and the local image registry?

I have something on my mind that is bugging me. When running docker images I see a list of the local images I have in my Docker environment. When pulling images, I pull them from a registry, and more specifically I pull the specified tag managed by the repository.
So there is the registry as the big hub that stores all image repositories,
and the repository stores the commits/tagged versions of a specific image.
But what is docker images then? It's a registry as well, isn't it? It holds all images that I've built locally or pulled.
If my claim is valid:
How does it comply with running a private registry (mentioned here https://docs.docker.com/registry/deploying/)
Running this docker run -d -p 5000:5000 --restart=always --name registry registry:2
Would deploy this new registry into my docker images...
So now I have a registry within my registry... registception?
What is the difference besides the custom registry is deployable?
It's not a local image registry, as pointed out in other questions. It is an image cache. The purpose of the image cache is to avoid having to download the same image every time you do a docker run.
docker images simply lists all the cached images on the machine. Whenever there is a newer image in the registry, the image (some layers) is downloaded and cached when doing docker pull .... Also, when a layer exists in the local cache, docker tells you so, for example:
Step 2/2 : CMD /bin/bash
---> Using cache
On the other hand, a docker registry is a central repository to store images. It provides a remote API to pull and push images. The local image cache does not have this feature. Images in the local cache are read and stored using local docker commands that simply read files under /var/lib/docker/...
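For example, if you run a registry locally on port 5000 (as with the docker run command above), you can talk to its HTTP API directly; my-image below is a placeholder repository name:
curl http://localhost:5000/v2/_catalog
curl http://localhost:5000/v2/my-image/tags/list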
To make things clear, think of Docker remote registries (such as Docker Hub) as remote Git repositories. You pull the Docker images (like git repositories) that you need and you play with them.
Like remote Git hosting services such as GitHub or BitBucket, Docker registries can be public or private. Public registries are for public usage and open-source projects; Docker Hub is an example. Private registries are for organizational use or for your own; examples include Azure Container Registry, EC2 Container Registry, etc.
The official Docker Registry image is just a Docker registry for your own system; you can't share its images with others unless you have a server or a public Internet IP address. Think of it as Bonobo Private Git Server for Windows.
Your local image registry, as you mentioned, is all those images that you have built locally or pulled from a registry, public or private. You can see it as a local cache of images that you can reuse without downloading or rebuilding them each time.
Running the registry actually spins up a server that implements the Docker Registry API, which allows users to push, pull and delete images and handles the storage of these images and their layers. See it as a central repository, like npm or Nexus.
For example if you run the registry in your.registry.com:5000
You can do things like
docker build -t your.registry.com:5000/my-image:tag .
docker push your.registry.com:5000/my-image:tag
So others that have access to your server can pull it
docker pull your.registry.com:5000/my-image:tag

Install Docker image from a local Dockerfile

I'm working from my local laptop and preparing a Dockerfile that I want to use for deployment later on the server. The problem is that the server contains only the docker client/daemon, but has no connectivity to the official docker registry, and neither does it provide its own image registry.
Is it possible to build my image locally, ship it to the server and run a container on it without going through the trouble of creating my own image registry?
You can save an image using docker save -o imagename.tar imagename, which creates a tar file, and then use docker load on the server to create an image from that tar file.
Don't confuse this with docker export, which creates a tar from a container. See Difference between save and export in Docker. As shown in that link, an exported container might be smaller because it flattens the layers. If size matters, you might consider committing a container and exporting it right afterwards, as in the sketch below.
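A rough sketch of that flatten-by-export approach (the container and image names are placeholders):
docker run --name tempcontainer myimage:latest true
docker export tempcontainer > myimage-flat.tar
docker import myimage-flat.tar myimage:flat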

Clone an image from a docker registry to another

I have a private registry with a set of images. It can be visualized as a store of applications.
My app can take these applications and run them on other machines.
To achieve this, my app first pulls the image from the private registry and then copies it to a local registry for later use.
The steps are as follows:
docker pull privateregistry:5000/company/app:tag
docker tag privateregistry:5000/company/app:tag localregistry:5000/company/app:tag
docker push localregistry:5000/company/app:tag
Then later on a different machine in my network:
docker pull localregistry:5000/company/app:tag
Is there a way to efficiently copy an image from a repository to another without using a docker client in between ?
You can use docker save to save the images to a tar archive, then copy the tar to the new host and use docker load to load the images from it.
Read the link below for more:
https://docs.docker.com/engine/reference/commandline/save/
Is there a way to efficiently copy an image from a repository to another without using a docker client in between?
Yes, there's a variety of tools that implement this today. RedHat has been pushing their skopeo, Google has crane, and I've been working on my own with regclient. Each of these tools talks directly to the registry server without needing a docker engine. And at least with regclient (I haven't tested the others), these will only copy the layers that are not already in the target registry, avoiding the need to pull layers again. Additionally, you can move a multi-platform image, retaining all of the available platforms, which you would lose with a docker pull since that dereferences the image to a single platform.
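For instance, with skopeo (reusing the registry names from the question), a registry-to-registry copy looks roughly like this:
skopeo copy docker://privateregistry:5000/company/app:tag docker://localregistry:5000/company/app:tag
or, with crane:
crane copy privateregistry:5000/company/app:tag localregistry:5000/company/app:tag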

What is a good style to deploy with docker when docker hub is not available?

I want to deploy my project while avoiding the use of Docker Hub. Is it a good idea to send an image as a tar file, or would a Dockerfile combined with all local dependencies be preferable (in which case the user will need to build the image on their own)? Is there a way to deploy the container itself?
If you cannot run a local Docker Registry, then sending the image itself is preferable, as it won't force the user to access any dependencies.
Sending the Dockerfile as a complement (to document the image) is good as well.
But the main alternative is to use a docker registry. If, however, you don't want to depend on local registry storage, you can back the registry with remote storage like S3, or an S3-compatible store like minio, as sketched below.
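As a rough sketch (the bucket, region and credentials are placeholders, and the exact variable names should be checked against the registry image's documentation), the registry can be pointed at an S3 bucket via environment variables:
docker run -d -p 5000:5000 --restart=always --name registry \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_BUCKET=my-registry-bucket \
  -e REGISTRY_STORAGE_S3_REGION=eu-west-1 \
  -e REGISTRY_STORAGE_S3_ACCESSKEY=<access key> \
  -e REGISTRY_STORAGE_S3_SECRETKEY=<secret key> \
  registry:2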
