How to optimise docker pull speed - docker

docker pull can sometimes be slow.
How can this best be optimised?
Is it possible to set mirrors?
Any ideas appreciated. I appreciate it can sometimes just be a slow network, but it would be great to speed this up as much as possible.

Not exactly a mirror, but you can set up a registry as a pull-through cache:
By running a local registry mirror, you can keep most of the redundant image fetch traffic on your local network.
In this mode a Registry responds to all normal docker pull requests but stores all content locally.
The first time you request an image from your local registry mirror, it pulls the image from the public Docker registry and stores it locally before handing it back to you.
On subsequent requests, the local registry mirror is able to serve the image from its own storage.
You will need to pass the --registry-mirror option to the Docker daemon on startup:
dockerd --registry-mirror=https://<my-docker-mirror-host>
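A minimal sketch of the whole setup (the mirror host name is a placeholder): run Docker's own registry:2 image as a pull-through cache of Docker Hub, then point each client daemon at it, either with the flag above or via registry-mirrors in /etc/docker/daemon.json:

# on the mirror host: run the official registry image as a pull-through cache of Docker Hub
docker run -d -p 5000:5000 --restart=always --name registry-mirror \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# /etc/docker/daemon.json on each client, then restart the daemon
{
  "registry-mirrors": ["http://<my-docker-mirror-host>:5000"]
}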

Related

Change default registry for docker push commands

There is an option to set registry-mirrors in docker's daemon.json to configure the default registry when pulling images.
Is there a similar way to push docker images to a local registry by default (without specifying the local registry URL)?
To give more detail, we have two Nexus repositories: one is a proxy of the default registry (so we pull images through it), and the other is a hosted repository.
We want all developers to push their images to our hosted registry, not Docker Hub.
registry-mirrors configures mirrors of Docker Hub, so the design is to push to the authoritative source; and if a pull from one of the Hub mirrors fails, Docker also falls back to Hub.
It looks like what you are trying to do is merge multiple namespaces into one: the namespace for Hub and the namespace for your local Nexus repositories. Doing that is dangerous, because the same image reference may resolve to different content on different machines, or whenever the Nexus instance is unavailable, which opens the door to a dependency confusion attack.
The design of image references is to always specify the registry when you don't want to use Docker Hub. This avoids the dependency confusion attacks seen with other package repositories.
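For example (the Nexus hostname and repository path here are placeholders), pushing to the hosted registry is just a matter of tagging with its name:
docker tag myapp:1.0 nexus.example.com:8082/myteam/myapp:1.0
docker push nexus.example.com:8082/myteam/myapp:1.0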
If you're worried about developers pushing images to Hub that they shouldn't be, then I'd recommend setting up an HTTP proxy that rejects POST and PUT requests to registry-1.docker.io (a pull uses GET and HEAD requests), and make sure all developers use that proxy (typically via a network policy that doesn't allow direct access without the proxy).
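As a rough sketch of that proxy idea with Squid (an assumption, not the only option; it also assumes the proxy terminates TLS, e.g. via ssl_bump, so it can see request methods rather than just CONNECT tunnels):
# squid.conf fragment: block uploads to Docker Hub, allow pulls
acl docker_hub dstdomain registry-1.docker.io
acl upload_methods method POST PUT PATCH
http_access deny docker_hub upload_methods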

What is the best way to deliver docker containers to remote host?

I'm new to docker and docker-compose. I'm using a docker-compose file with several services. I have containers and images on my local machine when I work with docker-compose, and my task is to deliver them to a remote host.
I found several possible solutions:
1. Build my images and push them to some registry, then pull them on the production server. But for this option I need a private registry, and as far as I can tell a registry is an unnecessary extra component; I want to run the containers directly.
2. Save each docker image to a tar archive and load it on the remote host. I saw the post Moving docker-compose containersets around between hosts, but in this case I need shell scripts. Or I can use docker directly (Docker image push over SSH (distributed)), but then I lose the benefits of docker-compose.
3. Use docker-machine (https://github.com/docker/machine) with the generic driver. But in this case I can only deploy from one machine, or I need to configure certificates (How to set TLS Certificates for a machine in docker-machine). Again, this isn't a simple solution for me.
4. Use docker-compose with the host parameter (-H). But with this option I need to build images on the remote host. Is it possible to build an image on the local machine and push it to the remote host? I could use docker-compose push (https://docs.docker.com/compose/reference/push/) to a remote host, but for this I need to create a registry on the remote host and pass the hostname as a parameter to docker-compose every time.
What is the best practice to deliver docker containers to a remote host?
Via a registry (your first option). All container-oriented tooling supports it, and it's essentially required in cluster environments like Kubernetes. You can use Docker Hub, or an image registry from a public-cloud provider, or a third-party option, or run your own.
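A rough sketch of that workflow, assuming a hypothetical registry at registry.example.com and that each service in docker-compose.yml has both build: and image: set:
# on the build machine
docker-compose build
docker-compose push
# on the remote host, after copying docker-compose.yml there
docker-compose pull
docker-compose up -d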
If you can't use a registry then docker save/docker load is the next best choice, but I'd only recommend it if you're in something like an air-gapped environment where there's no network connectivity between the build system and the production systems.
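If you do go that route, it can be a single pipeline over SSH (the image name and host are placeholders):
docker save myapp:1.0 | gzip | ssh user@remote-host 'gunzip | docker load'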
There's no way to directly push an image from one system to another. You should avoid enabling the Docker network API for security reasons: anyone who can reach a network-exposed Docker socket can almost trivially root its host.
Independently of the images, you will also need to transfer the docker-compose.yml file itself, plus any configuration files you bind-mount into the containers. Ordinary scp or rsync works fine here; there is no way to transfer these within the pure Docker ecosystem.
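For example (host and paths are placeholders):
scp docker-compose.yml user@remote-host:/srv/myapp/
rsync -av ./config/ user@remote-host:/srv/myapp/config/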

Is Azure Container Registry multi-region?

We use Azure Container Registry to pull a large image (~6 GB) to launch a cluster of many instances, and it takes unusually long to pull the image.
We were wondering if Azure Container Registry is a truly multi-region service, or at least has a front-end CDN that has per-region local caches?
Have a look at
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-geo-replication
This allows you to bring your images closer to the regions where your clusters are created.
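A minimal sketch with the Azure CLI (the registry name and region are placeholders; note that geo-replication requires the Premium SKU):
az acr replication create --registry myregistry --location westeurope
az acr replication list --registry myregistry --output table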

how to pull docker images from localhost docker private registry to GKE?

I have my own private docker registry created on my host machine [localhost], and I intend to use this localhost private registry to pull images in Google Kubernetes Engine.
How do I make this happen?
You won't be able to use either your locally built docker images (the ones listed by running docker images on your local machine) or your locally set up docker private registry (unless you expose it under some public IP, which doesn't make much sense if it's your home computer). Those images can be used by your local kubernetes cluster, but not by GKE.
In GKE we generally use GCR (Google Container Registry) for storing the images used by Kubernetes Engine. You can build them directly from code (e.g. pulled from your GitHub account) on a Cloud Shell VM (simply click the Cloud Shell icon in your GCP Console) and push them to your GCR from there.
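For instance, from Cloud Shell (PROJECT_ID and the image name are placeholders):
gcloud builds submit --tag gcr.io/PROJECT_ID/myapp:1.0 .
# or build and push with the docker CLI
gcloud auth configure-docker
docker build -t gcr.io/PROJECT_ID/myapp:1.0 .
docker push gcr.io/PROJECT_ID/myapp:1.0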
Alternatively, if you build your images locally, where "locally" this time means on the nodes where kubernetes is installed (so in the case of GKE they need to be present on every worker node), you can also use them without pulling them from any external registry. The only requirement is that they are available on all kubernetes worker nodes. You can force kubernetes to always use the local images present on your nodes, instead of trying to pull them from a registry, by specifying:
imagePullPolicy: Never
in your Pod or Deployment specification. You can find more details on that in this answer.
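A minimal Pod spec sketch (the image name is a placeholder; the image must already be present on the node):
apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  containers:
  - name: myapp
    image: myapp:1.0        # already present on the node, never pulled
    imagePullPolicy: Never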

Pre-build a docker container, so you can use it in multiple instances without having to wait for the build each time

I'm having memory issues on my machine because I need to run many instances of a heavy container.
I was wondering whether it is possible to pre-build a container image once and then reuse it for multiple containers on different ports, as needed?
You can. In fact that's how Docker images are meant to be distributed.
After you build and tag the image, use docker push to push it to a remote registry; you can then pull the image from that registry on any other machine.
There are many options for private registries, the best-known being:
Artifactory
Nexus
AWS ECR
Or if you don't mind your image being public, just push to the official Docker registry.
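A sketch of that flow (the registry and image names are placeholders): build and push once, then start as many containers from the same image as you need:
docker build -t registry.example.com/myteam/heavy-app:1.0 .
docker push registry.example.com/myteam/heavy-app:1.0
# on any host that can reach the registry
docker run -d -p 8081:8080 registry.example.com/myteam/heavy-app:1.0
docker run -d -p 8082:8080 registry.example.com/myteam/heavy-app:1.0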
