Is there a way to make Docker download the layers of an image sequentially instead of in parallel? I need this because our repository is very strict (or dodgy) about networking issues, and I get a lot of EOF errors like:
time="2016-06-14T13:15:52.936846635Z" level=debug msg="Error contacting registry http://repo.server/v1/: Get http://repo.server/v1/images/b6...be/layer: EOF"
time="2016-06-14T13:15:52.936924310Z" level=error msg="Download failed: Server error: Status 0 while fetching image layer (b6...be)"
This happens when running Docker 1.11.2 on Windows.
On a CentOS 7 VM it all works fine with the default 1.9.1.
One difference I noticed is that 1.9.1 does the downloads sequentially. So I tried to install 1.9.1 on Windows, but the Quickstart Terminal automatically downloaded and installed the 1.11.2 version of the boot2docker ISO.
So is there some arg, config, or environment variable I can set to make docker download the layers one at a time?
Or am I jumping to the wrong conclusion assuming the concurrent downloads are causing my network errors?
Thanks
It seems that a max-concurrent-downloads option was recently added to the configuration of the docker daemon. Here is the link to the docs, although I have not had a chance to test it myself yet.
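For what it's worth, a minimal sketch of that configuration, assuming the default /etc/docker/daemon.json location on Linux (untested, as noted above):

{
    "max-concurrent-downloads": 1
}

Restart the daemon afterwards, e.g. sudo service docker restart, for the setting to take effect.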
I am setting up an Airflow k8s cluster using a kind deployment on a WSL2 setup. When I execute the standard helm install $RELEASE_NAME apache-airflow/airflow --namespace $NS it fails. Further investigation shows that the cluster worker node cannot connect to registry-1.docker.io.
Error log for one of the image pulls:
Failed to pull image "redis:6-buster": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/library/redis:6-buster": failed to resolve reference "docker.io/library/redis:6-buster": failed to do request: Head "https://registry-1.docker.io/v2/library/redis/manifests/6-buster": dial tcp: lookup registry-1.docker.io on 172.19.0.1:53: no such host
I can access all other websites from this node, e.g. google.com, yahoo.com, merriam-webster.com, etc.; even docker.com works. This issue is very specific to registry-1.docker.io.
All the search results and links I have found deal with general internet connectivity issues.
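If getent is available in the node image, a quick check like this can confirm that the failure is specific to that hostname (kind-worker is an assumed node container name; use whatever docker ps shows for your cluster):

docker exec -it kind-worker getent hosts docker.com
docker exec -it kind-worker getent hosts registry-1.docker.io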
Current solution:
If I manually change /etc/resolv.conf on the kind worker node to point to the nameserver IP from /etc/resolv.conf of the main WSL2 Debian instance, then it works.
But this is a dynamic cluster with dynamic nodes, and I cannot do this every time. I am currently searching for a way to make it part of the cluster configuration, so that it works just by saying kind create cluster and one can use kubectl or helm by default; a scripted version of the manual fix is sketched below.
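Until a cleaner cluster-level configuration turns up, a sketch of scripting that manual fix right after kind create cluster (my-cluster is an assumed cluster name; the nameserver is read from the WSL2 host's /etc/resolv.conf, exactly as in the manual workaround):

# grab the first nameserver from the WSL2 host's resolv.conf
WSL_DNS=$(awk '/^nameserver/ {print $2; exit}' /etc/resolv.conf)
# overwrite resolv.conf on every node of the cluster
for node in $(kind get nodes --name my-cluster); do
    docker exec "$node" sh -c "echo 'nameserver $WSL_DNS' > /etc/resolv.conf"
done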
However, I am more interested in figuring out why this network setup fails specifically for registry-1.docker.io. Is there some configuration that avoids changing the DNS to the host IP or Google DNS? The current network configuration seems to work for pretty much the rest of the internet.
I have documented all the steps and investigation details, including some of the network configuration, in a GitHub repository. If you need any further information to help solve the issue, please let me know. I will keep updating the GitHub documentation as I make progress.
Setup:
Windows 11 with WSL2 without any Docker desktop
WSL2 image : Debian bullseye (11) with docker engine on linux
Docker version : 20.10.2
Kind version : 0.11.1
Kind image: kindest/node:v1.20.7@sha256:cbeaf907fc78ac97ce7b625e4bf0de16e3ea725daf6b04f930bd14c67c671ff9
I am not sure if this is an answer or not. After spending two days trying to find a solution, I thought to change the node image version. The Kind release page lists 1.21 as the latest image for kind version 0.11.1. I had problems even starting the cluster with 1.21, and 1.20 faced this strange DNS issue, so I went with 1.23. It all worked fine with this image.
However, to my surprise, when I changed the cluster configuration back to 1.20, the DNS issue was gone. So I do not know what changed due to the switch of images, but I cannot reproduce the issue again! Maybe it will help someone else.
I think I have found the correct workaround for this bug: switching iptables to legacy mode fixed it for me.
https://github.com/docker/for-linux/issues/1406#issuecomment-1183487816
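On Debian-based systems, the switch looks something like this (followed by a daemon restart so Docker rebuilds its rules):

sudo update-alternatives --set iptables /usr/sbin/iptables-legacy
sudo update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
sudo systemctl restart docker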
I'm trying to spin up a fresh server using the AzerothCore Docker installation guide. I have completed all of the early installation steps, up until running the containers. Upon running the containers (for worldserver and authserver) I see the following output from the containers. It appears the destination of the world and auth servers in dist/bin is missing; how may I resolve this issue?
Check your Docker settings and make sure you have enough memory. If the containers have too little memory, they will not finish the compile. Also check whether you have build issues.
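A quick way to check what the daemon actually has available (both fields come from docker info):

docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'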
I'm having problems with a custom Docker image in which some files have whitespace in their names. When I execute the docker push command I get this error:
$> docker push example.azurecr.io/myimage
The push refers to repository [example.azurecr.io/myimage]
ecaa33aa3064: Pushing [==================================================>] 59MB/59MB
3f06df57be30: Pushing [==================================================>] 21.31MB/21.31MB
ca31a9af4714: Layer already exists
09eb78ab1afc: Layer already exists
62386d2295bd: Layer already exists
f7afe9869eba: Layer already exists
e2eb06d8af82: Layer already exists
svm.runProcess: command cat /tmp/d2/app/wwwroot/fonts/FranziskaWeb W03 BlackItalic.ttf failed with exit code 1
I run the Docker engine on Windows Server 2019 with the Linux containers feature enabled.
Unfortunately I'm not able to write a Dockerfile that reproduces this error.
Someone else on the Internet got this same error, but I found no solution. As far as you know, does Docker have any problem pushing images containing files with whitespace in their names?
If you are not using the latest version, there is a chance that updating will fix it. It has been a known problem, based on a comment in the code.
It seems to have been fixed in February 2019.
If you already have the latest version, or you cannot update, then I fear you are out of luck unless you can rename these specific files to remove the whitespace.
svm.runProcess seems to be part of LCOW image support, which is experimental and deprecated. See the multiple pull requests which removed this feature recently, and the initial one. Containers in the Windows space are moving towards WSL2; however, it also seems that now (July 2021) they are moving away from WSL support on Windows Server, based on this GitHub comment:
"The Windows Desktop SKUs (where WSL 2 is supported) are recommended SKUs for these scenarios. For those who would like to use Linux in production scenarios in a server environment we recommend using server products such as Hyper-V VMs, and AKS on Azure Stack HCI. As always, we welcome any further Windows Server feedback through our Feedback Hub."
There is a bigger issue explaining the situation with WSL 2 on Windows Server, available here.
I have a problem with Docker that seems to happen when I change the machine type of a Google Compute Engine VM instance. Images that were fine fail to run, fail to delete, and fail to pull, all with various obscure messages about missing keys (this on Linux), duplicate or missing layers, and others I don't recall.
The errors don't always happen. One that occurred just now, with an image that ran a couple hundred times yesterday on the same setup, though before a restart, was:
$ docker run --rm -it mbloore/model:conda4.3.1-aq0.1.9
docker: Error response from daemon: layer does not exist.
$ docker pull mbloore/model:conda4.3.1-aq0.1.9
conda4.3.1-aq0.1.9: Pulling from mbloore/model
Digest: sha256:4d203b18fd57f9d867086cc0c97476750b42a86f32d8a9f55976afa59e699b28
Status: Image is up to date for mbloore/model:conda4.3.1-aq0.1.9
$ docker rmi mbloore/model:conda4.3.1-aq0.1.9
Error response from daemon: unrecognized image ID sha256:8315bb7add4fea22d760097bc377dbc6d9f5572bd71e98911e8080924724554e
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$
So it thinks it has no images, but the Docker folders are full of files, and it does know some hashes. It looks like some index has been damaged.
I restarted that instance, and then Docker seemed to be normal again without any special action on my part.
The only workarounds I have found so far are to restart and hope, or to delete several large Docker directories and recreate them empty. Then, after a restart, pull and run work again. But I'm now not sure that they always will.
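For reference, the blunt version of that reset looks something like this (it destroys all local images, containers, and volumes; /var/lib/docker is the default data root):

sudo systemctl stop docker
sudo rm -rf /var/lib/docker
sudo systemctl start docker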
I am running with Docker version 17.05.0-ce on Debian 9. My images were built with Docker version 17.03.2-ce on Amazon Linux, and are based on the official Ubuntu image.
Has anyone had this kind of problem, or know a way to reset the state of Docker without deleting almost everything?
Two points:
1) It seems that changing the VM had nothing to do with it. On some boots Docker worked, on others not, with no change in configuration or contents.
2) At Google's suggestion I installed Stackdriver monitoring and logging agents, and I haven't had a problem through seven restarts so far.
My first guess is that there is a race condition on startup, and adding those agents altered it in my favour. Of course, I'd like to have a real fix, but for now I don't have the time to pursue the problem.
I have the latest Docker for Mac installed, and I'm running into a problem where it appears that docker-compose up is stuck in a Downloading state for one of the containers:
± |master ✗| → docker-compose up --build
Pulling container (repo.io/company/container:prod)...
prod: Pulling from company/container
somehash: Already exists
somehash: Already exists
somehash: Already exists
somehash: Already exists
somehash: Pulling fs layer
somehash: Already exists
somehash: Already exists
somehash: Downloading [=================================================> ] 234.6 MB/239.3 MB
somehash: Download complete
somehash: Download complete
^^ this is literally what it looks like on my command line. Stopping and starting hasn't helped; it immediately prints this same output.
I've tried to rm the container, but I guess it doesn't exist yet; that just returns No stopped containers. --force-recreate also gets stuck in the same place. And perhaps I'm not googling the right terminology, but I haven't found anything useful to try. Any pointers?
I just needed to restart Docker.
Linux users can use sudo service docker restart.
Docker for Mac has a handy button for this in the Docker widget in the macOS menu bar.
If you happen to be using Docker Toolbox, try docker-machine restart.
I faced the same problem! Restarting the service didn't help, and downloading again didn't help. It would get stuck at random points, leaving me with no option but to kill the pull.
One thing which worked for me was to download one file at a time. For Ubuntu users, you can use the following steps:
Stop the service:
sudo service docker stop
Start docker with max concurrent download set as 1:
sudo dockerd --max-concurrent-downloads 1
Download the required image:
sudo docker pull <image_name>
Once the images are downloaded, stop the foreground daemon (Ctrl+C) and start the service again as it was earlier:
sudo service docker start
I had a similar situation this morning when my network suddenly went down and I was forced to power cycle the modem while docker-compose was still in the middle of downloading stuff from Docker Hub.
Yes, bouncing the docker daemon process seems to resolve this.
For Linux users - do sudo service docker restart to fix it.
Go to the Docker Preferences from its menu bar icon. In there is a "bug" icon; click on that and then "Clean / Purge data".
I'm running OSX and restarting Docker for Mac didn't help. Neither did a full restart or upgrading VirtualBox. What did work was turning my wifi interface on and off every time it got stuck. I had to do this repeatedly, but it eventually downloaded the entire image.
Directly download the necessary images using docker, e.g.
docker pull company/container
and then run
docker-compose up
again. Worked for me on macOS.
I found a possible workaround.
I have Docker Engine installed in a snap environment on Ubuntu 18.04.
Searching some forums, I discovered that users relate this behavior to limited download bandwidth.
In my case, part of the downloads got stuck partway through and never moved, so I finally cancelled the process with Ctrl+C.
I added two parameters, or flags, to the configuration file that controls the Docker daemon's behavior: max-concurrent-downloads 1 and max-concurrent-uploads 1.
Remember that in my case I am working in a snap environment; there the file is located at /var/lib/docker/current/config/daemon.json.
Remember to stop all Docker processes before modifying the file, and create a backup of it first.
Adding the two settings limits the downloads to one at a time.
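The resulting file should end up containing something like this (a sketch; merge these keys with whatever is already in your daemon.json):

{
    "max-concurrent-downloads": 1,
    "max-concurrent-uploads": 1
}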
This is the process that helped me resolve the problem; after restarting the daemon, the download completed successfully.
I had this issue in my VirtualBox VM when doing a docker pull: it got stuck at a specific position and never moved from there. The issue turned out to be the network adapter in my VM. I was using NAT by default; when I switched it to "Bridged Adapter", the issue went away.
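For reference, the same switch can be made from the command line while the VM is powered off (MyVM and eth0 are placeholders for your VM name and host interface):

VBoxManage modifyvm "MyVM" --nic1 bridged --bridgeadapter1 eth0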
I had a similar problem on Docker for Windows for a couple of days, and when I tried to connect to the virtual machine (via Hyper-V Manager) the downloads started speeding along. I have no idea why, but it worked for me...
I tried to restart Docker and to update Docker, but that didn't help. What did work for me:
Completely remove Docker.
Install Docker again.
It should work now.
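On a Debian or Ubuntu host, that remove-and-reinstall looks something like this (package names per Docker's official repository; get.docker.com is Docker's convenience install script):

sudo apt-get purge docker-ce docker-ce-cli containerd.io
curl -fsSL https://get.docker.com | sh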