When I try to push new Docker images to gcr.io using gcloud docker push, the push frequently makes some progress before stalling out:
$ gcloud docker push gcr.io/foo-bar-1225/baz-quux:2016-03-23
The push refers to a repository [gcr.io/foo-bar-1225/baz-quux]
762ab2ceaa70: Pushing [> ] 556 kB/154.4 MB
2220ee6c7534: Pushing [===> ] 4.82 MB/66.11 MB
f99917176817: Layer already exists
8c1b4a49167b: Layer already exists
5f70bf18a086: Layer already exists
1967867932fe: Layer already exists
6b4fab929601: Layer already exists
550f16cd8ed1: Layer already exists
44267ec3aa94: Layer already exists
bd750002938c: Layer already exists
917c0fc99b35: Layer already exists
The push stays in this state indefinitely (I've left it for an hour without a byte of progress). If I kill the process with Ctrl-C and rerun it, it gets to exactly the same point and again makes no progress.
The only workaround I've found is to restart my computer and re-run "Docker Quickstart Terminal". Then the push succeeds.
Is there a workaround for stalled pushes that doesn't require frequently rebooting my computer? (I'm on Mac OS X.)
This seems to be an issue that Docker users on Mac have run into before, as can be seen in this Docker thread: https://github.com/docker/docker/issues/5113
While there is no clear fix, a slightly better workaround is to restart the Docker Machine VM rather than your computer each time.
You can run docker-machine restart default to reset Docker to a working state.
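For example, with Docker Toolbox the full sequence might look like this; the env line, which re-points the shell at the restarted VM, is my addition rather than something from the question:
docker-machine restart default
eval $(docker-machine env default)
gcloud docker push gcr.io/foo-bar-1225/baz-quux:2016-03-23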
Hope that helps.
Related
My OS: Ubuntu 20.04 LTS
Docker Version: 20.10.17, build 100c701
Symptoms: I do a docker push to either Docker Hub or AWS ECR. One or more of the layers will fail and retry, fail and retry, fail and retry. Sometimes it looks like it's nearly done; sometimes it fails sooner. Eventually the entire command fails.
If I retry the command, the same problem layers remain problem layers.
If I rebuild my image, the problem may get better or worse. It does appear to move around between builds, but at one point I was pushing a base image (and it was failing), so I pushed a child image instead, and that push failed on the same layer as the base image while being peachy-keen with the other layers.
Web searches suggest fixes from 2020 or 2021, which certainly should be in the mainstream now, although perhaps both Amazon and Docker Hub are running ancient (and broken) versions.
Additional Info:
Tried from my Mac. Same failure.
ca1399d10d43: Layer already exists
b74197196d00: Layer already exists
2c9fd6cbb874: Retrying in 7 seconds
d79f7f0a3cf1: Layer already exists
36eb8e32aa2f: Layer already exists
It's not an authentication problem. And it's really quite consistent -- some layers upload, some don't. So I don't see how it can be a network issue.
I'm having problems with a custom Docker image in which some files have whitespace in their names. When I execute the docker push command I get this error:
$> docker push example.azurecr.io/myimage
The push refers to repository [example.azurecr.io/myimage]
ecaa33aa3064: Pushing [==================================================>] 59MB/59MB
3f06df57be30: Pushing [==================================================>] 21.31MB/21.31MB
ca31a9af4714: Layer already exists
09eb78ab1afc: Layer already exists
62386d2295bd: Layer already exists
f7afe9869eba: Layer already exists
e2eb06d8af82: Layer already exists
svm.runProcess: command cat /tmp/d2/app/wwwroot/fonts/FranziskaWeb W03 BlackItalic.ttf failed with exit code 1
I run the Docker engine on Windows Server 2019 with the Linux containers feature enabled.
Unfortunately I'm not able to write a Dockerfile that reproduces this error.
Someone else on the Internet got this same error, but I found no solution. As far as you know, does Docker have any problem pushing images that contain files with whitespace in their names?
If you are not using the latest version, there is a chance that updating will fix it. This has been a known problem, based on comments in the code.
It seems to have been fixed in February 2019.
If you already have the latest version, or you can't update, then I fear you are out of luck unless you can rename these specific files so they don't contain whitespace.
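If renaming is feasible, a sketch like the following replaces the spaces with underscores. The path is an assumption based on the error message above, so adjust it to wherever the affected files actually live; it could run inside the container before committing, or as a RUN step in the Dockerfile:
# Rename every file whose name contains a space, replacing spaces with underscores.
find /app/wwwroot/fonts -depth -name '* *' \
  -exec sh -c 'mv "$1" "$(dirname "$1")/$(basename "$1" | tr " " "_")"' _ {} \;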
svm.runProcess seems to be part of LCOW image support, which is experimental and deprecated. See the multiple pull requests that removed this feature recently, and the initial one. Containers on Windows are moving towards WSL 2; however, it also seems that now (July 2021) Microsoft is moving away from WSL support on Windows Server, based on this GitHub comment:
"The Windows Desktop SKUs (where WSL 2 is supported) are recommended SKUs for these scenarios. For those who would like to use Linux in production scenarios in a server environment we recommend using server products such as Hyper-V VMs, and AKS on Azure Stack HCI. As always, we welcome any further Windows Server feedback through our Feedback Hub."
There is a bigger issue discussing WSL 2 on Windows Server, available here.
I have a problem with Docker that seems to happen when I change the machine type of a Google Compute Engine VM instance. Images that were fine fail to run, fail to delete, and fail to pull, all with various obscure messages about missing keys (this on Linux), duplicate or missing layers, and others I don't recall.
The errors don't always happen. One that occurred just now, with an image that ran a couple hundred times yesterday on the same setup, though before a restart, was:
$ docker run --rm -it mbloore/model:conda4.3.1-aq0.1.9
docker: Error response from daemon: layer does not exist.
$ docker pull mbloore/model:conda4.3.1-aq0.1.9
conda4.3.1-aq0.1.9: Pulling from mbloore/model
Digest: sha256:4d203b18fd57f9d867086cc0c97476750b42a86f32d8a9f55976afa59e699b28
Status: Image is up to date for mbloore/model:conda4.3.1-aq0.1.9
$ docker rmi mbloore/model:conda4.3.1-aq0.1.9
Error response from daemon: unrecognized image ID sha256:8315bb7add4fea22d760097bc377dbc6d9f5572bd71e98911e8080924724554e
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$
So it thinks it has no images, but the Docker folders are full of files, and it does know some hashes. It looks like some index has been damaged.
I restarted that instance, and then Docker seemed to be normal again without any special action on my part.
The only workarounds I have found so far are to restart and hope, or to delete several large Docker directories and recreate them empty. Then, after a restart, pull and run work again. But I'm now not sure that they always will.
I am running with Docker version 17.05.0-ce on Debian 9. My images were built with Docker version 17.03.2-ce on Amazon Linux, and are based on the official Ubuntu image.
Has anyone had this kind of problem, or know a way to reset the state of Docker without deleting almost everything?
Two points:
1) It seems that changing the VM had nothing to do with it. On some boots Docker worked, on others not, with no change in configuration or contents.
2) At Google's suggestion I installed Stackdriver monitoring and logging agents, and I haven't had a problem through seven restarts so far.
My first guess is that there is a race condition on startup, and adding those agents altered it in my favour. Of course, I'd like to have a real fix, but for now I don't have the time to pursue the problem.
I'm trying to push a docker container to a private registry on the Google Cloud Platform:
gcloud docker -- push gcr.io/<project-name>/<container-name>
and a checksum fails:
e9a19ae6509f: Pushing [========================================> ] 610.9 MB/752.4 MB
xxxxxxxxxxxx: Layer already exists
...
xxxxxxxxxxxx: Layer already exists
file integrity checksum failed for "var/lib/postgresql/9.5/main/pg_xlog/000000010000000000000002"
Then I deleted that file (and more) from the container, committed the change, and tried to push the new image. I got the same error.
Is there some way to push up my image without pushing up the commit that contains the broken file? Any insight into why the new commit fails in the same way?
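One likely explanation: image layers are immutable, and docker commit only adds a new layer on top, so deleting the file merely records the deletion in the new layer; the original layer containing the corrupt file is still part of the image and still gets pushed. A hedged sketch of a way around this is to flatten the container's filesystem into a single fresh layer with export/import (note this discards image metadata such as ENV and CMD, which you would need to re-specify):
docker create --name tmp gcr.io/<project-name>/<container-name>
docker export tmp | docker import - gcr.io/<project-name>/<container-name>
docker rm tmp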
FWIW, that looks like a local daemon error prior to contacting the registry, so I very much doubt there is anything we will be able to do on our side. That said, if you reach out to us as Jake (jsand) suggests, we can hopefully try to help you resolve the issue.
I have the latest Docker for Mac installed, and I'm running into a problem where it appears that docker-compose up is stuck in a Downloading state for one of the containers:
± |master ✗| → docker-compose up --build
Pulling container (repo.io/company/container:prod)...
prod: Pulling from company/container
somehash: Already exists
somehash: Already exists
somehash: Already exists
somehash: Already exists
somehash: Pulling fs layer
somehash: Already exists
somehash: Already exists
somehash: Downloading [=================================================> ] 234.6 MB/239.3 MB
somehash: Download complete
somehash: Download complete
^^ this is literally what it looks like on my command line. Stopping and starting hasn't helped; it immediately produces this same output.
I've tried to rm the container, but I guess it doesn't exist yet; it returns No stopped containers. --force-recreate also gets stuck at the same place. Perhaps I'm not googling the right terminology, but I haven't found anything useful to try. Any pointers?
I just needed to restart Docker.
Linux users can use sudo service docker restart.
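On systemd-based distributions the equivalent is:
sudo systemctl restart docker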
Docker for Mac has a handy button for this in the Docker widget in the macOS menu bar.
If you happen to be using Docker Toolbox, try docker-machine restart.
I faced the same problem! Restarting the service didn't help, and downloading again didn't help. The pull used to get stuck at random points, leaving me with no option but to kill it.
One thing that worked for me was downloading one layer at a time. For Ubuntu users, you can use the following steps:
Stop the service:
sudo service docker stop
Start the Docker daemon manually with the maximum concurrent downloads set to 1:
sudo dockerd --max-concurrent-downloads 1
Download the required image:
sudo docker pull <image_name>
Once the images are downloaded, stop the temporary daemon (Ctrl+C in that terminal) and start the service again as before:
sudo service docker start
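If you'd rather make the limit persistent than pass it on the command line each time, the same option can live in the daemon configuration file; the standard path on a non-snap install is /etc/docker/daemon.json (create the file if it doesn't exist, then restart the service to pick up the change):
{
  "max-concurrent-downloads": 1
}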
I had a similar situation this morning: my network suddenly went down and I was forced to power-cycle the modem while docker-compose was still in the middle of downloading images from Docker Hub.
Yes, bouncing the docker daemon process seems to resolve this.
For Linux users - do sudo service docker restart to fix it.
Go to the Docker Preferences from its menu-bar icon. Inside there is a "bug" icon; click on that, then "Clean / Purge data".
I'm running OS X, and restarting Docker for Mac didn't help. Neither did a full restart or upgrading VirtualBox. What did work was turning my Wi-Fi interface off and on every time it got stuck. I had to do this repeatedly, but it eventually downloaded the entire image.
Directly download the necessary images using docker, e.g.
docker pull company/container
and then run
docker-compose up
again. Worked for me on macOS.
I found a possible workaround.
I have my Docker engine installed in an Ubuntu 18.04 snap environment.
Searching some forums, I discovered that users relate this behavior to limits on download bandwidth. In my case, part of the download got stuck and never progressed, and I finally cancelled the process with Ctrl+C.
I added two parameters (flags) to the configuration file that controls the Docker daemon's behavior: max-concurrent-downloads 1 and max-concurrent-uploads 1.
Remember that in my case I am working in a snap environment, where this file is located at /var/lib/docker/current/config/daemon.json.
Remember to stop all Docker processes before modifying the file, and create a backup of it first.
Add the two settings shown below; this limits the downloads and uploads to one at a time.
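In a standard daemon.json format, the two settings look like this (merge them into whatever the file already contains):
{
  "max-concurrent-downloads": 1,
  "max-concurrent-uploads": 1
}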
This is the process that helped me resolve this problem; after restarting the daemon, the download completed successfully.
I had this issue in my VirtualBox VM when doing a docker pull: the image got stuck at a specific position and never moved from there. The issue was due to the network adapter setting in my VM. I was using NAT by default; when I switched it to "Bridged Adapter", the issue went away.
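For reference, the same change can be made with VirtualBox's command-line tool while the VM is powered off; the VM and adapter names below are placeholders to substitute with your own:
VBoxManage modifyvm "<vm-name>" --nic1 bridged --bridgeadapter1 "<host-adapter>"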
I had a similar problem on docker for windows for a couple of days and when I tried to connect to the virtual machine (via Hyper-V Manager) the downloads started speeding along. I have no idea why but it worked for me...
Completely remove Docker.
Install Docker again.
It should work now.
I tried restarting Docker and updating Docker, but that didn't help.