How to increase throughput for Docker client pull?

I need Docker pull to be as fast as possible. I'm using an EC2 machine to pull a ~13.2 GB image from ECR (Amazon's container registry) in about 3m10s (~70 MB/s).
Can I tune the Docker client to use more system resources (threads, connections) so the pull completes faster?
For example: can I tune Docker to download more layers in parallel, and/or to use multi-part downloads?
Notes:
I can't change the image itself.
CPU/disk/network are mostly idle: there's enough network bandwidth between the client and the Docker registry (both in AWS), enough disk I/O headroom (SSD), and enough spare CPU cores.
I assume the repository server can support more connections.
As I see it, the pull involves two phases: 1) network transfer (network + disk) and 2) extracting the layers (CPU + disk).
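As far as I know, the closest daemon-level knob is max-concurrent-downloads (dockerd's default is 3), which controls how many layers are fetched in parallel; note it only helps phase 1, since layer extraction stays sequential. A minimal sketch, assuming a Linux host where dockerd reads /etc/docker/daemon.json:

    # /etc/docker/daemon.json — raise parallel layer downloads
    # (the dockerd default is 3; 8 here is an arbitrary example value)
    {
      "max-concurrent-downloads": 8
    }

    # restart the daemon to apply the change
    sudo systemctl restart docker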

Related

Launch new Docker image when memory limit is reached

Sorry if this is a dumb question but I'm quite new to Docker.
I understand that, if the --memory parameter is set and the container uses all of that memory, Docker will kill the container.
I wonder if it's possible to create a new container (without killing the previous one) when the container reaches a certain memory limit defined by me.
Docker does not have built-in service scaling.
Most implementations I've seen for Docker that do this use:
Prometheus, a monitoring server that can scrape Docker container metrics.
Alertmanager, a server that, given metrics to monitor on a Prometheus server, can raise alerts when thresholds are reached.
A custom piece of code using the Docker Go SDK that increases or decreases the number of service replicas in response to alerts (a sketch of that step follows below).
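A minimal sketch of just the scaling step, assuming Swarm mode and a hypothetical service named web; in a real setup this would run inside a webhook receiver that Alertmanager calls when the memory alert fires:

    # read the current replica count of the (hypothetical) service "web"
    current=$(docker service inspect web --format '{{.Spec.Mode.Replicated.Replicas}}')
    # add one replica instead of killing the existing container
    docker service scale web=$((current + 1))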

Is Azure Container Registry multi-region?

We use Azure Container Registry to pull a large image (~6 GB) to launch a cluster of many instances, and it takes unusually long to pull the image.
We were wondering if Azure Container Registry is a truly multi-region service, or at least has a front-end CDN that has per-region local caches?
Have a look at
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-geo-replication
This will allow you to bring your images closer to the regions where your clusters are created.
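If you go that route, here is a minimal sketch with the Azure CLI, assuming a registry with the hypothetical name myregistry; note that geo-replication requires the Premium SKU:

    # replicate the registry into the region where the cluster runs
    az acr replication create --registry myregistry --location westeurope
    # verify the replicas
    az acr replication list --registry myregistry --output table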

How long does it take you to create a container from a 4GB docker image?

I just need some kind of reference for how long it should take to create a container based on a 4 GB Docker image. On my computer it is currently taking >60 seconds, which causes docker-compose to time out. Is this normal for a modern workstation with SSD disks and a decent CPU? Why does it take so long?
The build context is ~6 MB, so that should not be the issue here, though I know it could be if the context were larger.
It's running on a Linux host, so it's also not the I/O-overhead tax you pay when running Docker in a VM, as Docker for Mac does.
I just don't understand why it's so slow, whether that's expected for images this large, or whether I should try some other technology instead of Docker (like a virtual machine, Ansible scripts or whatever).
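One way to narrow this down, assuming a locally available image with the hypothetical tag big-image:latest and docker-compose v1, whose client timeout defaults to 60 seconds (which would match the >60s symptom) and can be raised via COMPOSE_HTTP_TIMEOUT:

    # time pure container creation, with no pull and no start
    time docker create big-image:latest
    # as a workaround, give compose a longer client timeout (in seconds)
    COMPOSE_HTTP_TIMEOUT=180 docker-compose up -d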

How can the docker daemon limit a container's disk bandwidth, like docker run --device-read-bps?

We can limit a container's disk bandwidth when creating it by using docker run --device-read-bps. I'm actually using Kubernetes to create containers, and I want every container on my node to use only 50 MB/s of disk bandwidth.
Is there any way to configure this on the Docker daemon itself, like docker run --device-read-bps does per container?
Kubernetes supports CPU and Memory limits, but as far as I know, it does not handle any disk quota or limits at this time.
In PersistentVolumes, you can specify a StorageClass, but that only seems to distinguish slow from fast disks (i.e. HDD or SSD) and does not impose any bandwidth limitation.
So in short I do not think it is possible.
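For reference, the per-container form of the limit is straightforward; a sketch assuming the container's data sits on /dev/sda (adjust the device for your host):

    # cap reads from /dev/sda at 50 MB/s for this container
    docker run -it --device-read-bps /dev/sda:50mb ubuntu /bin/bash
    # writes can be capped the same way
    docker run -it --device-write-bps /dev/sda:50mb ubuntu /bin/bash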

How to optimise docker pull speed

Docker pull can be slow sometimes
How can this best be optimised?
Is it possible to set up mirrors?
Any ideas appreciated. I realise it can sometimes just be a slow network, but it would be great to speed this up as much as possible.
Not exactly a mirror, but you can set up a registry as a pull-through cache:
By running a local registry mirror, you can keep most of the redundant image fetch traffic on your local network.
In this mode a Registry responds to all normal docker pull requests but stores all content locally.
The first time you request an image from your local registry mirror, it pulls the image from the public Docker registry and stores it locally before handing it back to you.
On subsequent requests, the local registry mirror is able to serve the image from its own storage.
You will need to pass the --registry-mirror option to your Docker daemon on startup:
dockerd --registry-mirror=https://<my-docker-mirror-host>
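On current Docker releases the mirror is usually configured in daemon.json instead of on the command line; a minimal sketch, assuming a pull-through cache reachable at the hypothetical host mirror.example.internal with TLS already set up:

    # run the mirror itself as a pull-through cache of Docker Hub
    docker run -d -p 5000:5000 --name mirror \
      -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
      registry:2

    # /etc/docker/daemon.json on each client
    {
      "registry-mirrors": ["https://mirror.example.internal:5000"]
    }

    # restart the daemon to apply
    sudo systemctl restart docker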
