We are using a self-hosted version of JFrog Artifactory. For some reason Artifactory went down, which we were able to resolve by restarting the server. After the reboot everything seemed to come back up correctly, but we then realized that anything using a particular layer, ddad3d7c1e96 (which pulls down alpine 3.11.11 from Docker Hub), was getting stuck at "Pulling fs layer" and would just sit there retrying.
We tried "Zapping Caches" and running the maintenance option for Garbage Collection/Unused Artifacts, but we were still unable to download any image that included this particular layer.
Is it possible that this layer was somehow cached in Artifactory and ended up corrupted? We also looked through the filestore but were unable to find the blob.
After a few days the issue resolved itself and images can now be pulled, but we are left without an explanation...
Does anyone have an idea of what could have happened, and how we can clear out cached parent images which are not directly in our container registry but are pulled from Docker Hub or another registry?
We ran into a similar issue - zapping caches worked for us, however it took some time to complete.
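If zapping caches isn't enough, another option is to delete the cached copy of the offending layer directly from the remote repository's cache so the next pull re-fetches it from Docker Hub. This is a rough sketch using Artifactory's REST API; the repository name docker-remote, the credentials, and the exact path to the blob are assumptions and will differ in your instance.

# List what Artifactory has cached for that tag (a remote repo's cache is exposed as <repo-key>-cache)
curl -u admin:<password> \
  "https://artifactory.example.com/artifactory/api/storage/docker-remote-cache/library/alpine/3.11.11"

# Delete the cached layer blob so it is re-fetched from Docker Hub on the next pull
curl -u admin:<password> -X DELETE \
  "https://artifactory.example.com/artifactory/docker-remote-cache/library/alpine/3.11.11/sha256__<full-layer-digest>"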
Sometimes my internet connection gets slow and unstable, and when I use docker pull image_name_here for images that are large in volume, I see that they get stuck in the middle of the download process. Or the internet connection is lost and the pull times out or exits with other errors.
But when I execute that pull command again, the layers that were already downloaded aren't kept on my drive; they get downloaded all over again.
This means for large images on an unstable network I literally can't pull the image.
Is there a way for me to somehow resume the pull process from where it was interrupted?
Is there a third-party app that does that?
I'm on Linux (Ubuntu & Debian)
I've been using Nexus as a Docker repository for a while to mitigate a flaky internet connection. However, I've recently hit an issue that seems a bit weird. So, if I run:
docker pull server:8042/alpine:3.16.2
it works fine and it all gets cached. However, if I try and run
docker pull server:8042/sameersbn/gitlab:15.0.3
I get the following error:
Error response from daemon: unknown image in /v1.41/images/create?fromImage=server%3A8042%2Fsameersbn%2Fgitlab&tag=15.0.3
Running a direct pull from Docker Hub works fine, but going through the cache, any nested tag with a username fails. I'm using engine 20.10.20 if that helps.
Thanks in advance
This appears to be a bug introduced somewhere between engine 20.10.17 and 20.10.20. Rolling back to an earlier version makes everything work again. I have also reported this to Docker, but as I'm not a paid member, I suspect it will go unnoticed.
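In case it helps anyone else, this is roughly how we rolled the engine back on Ubuntu. The exact package version strings below are placeholders and depend on your distribution release, so check what apt-cache madison reports first.

# List the engine versions available from the Docker apt repository
apt-cache madison docker-ce

# Install a specific older version (version strings are illustrative)
sudo apt-get install docker-ce=5:20.10.17~3-0~ubuntu-focal \
  docker-ce-cli=5:20.10.17~3-0~ubuntu-focal containerd.io

# Stop apt from upgrading the engine again until the bug is fixed
sudo apt-mark hold docker-ce docker-ce-cli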
My OS: Ubuntu 20.04 LTS
Docker Version: 20.10.17, build 100c701
Symptoms: I do a docker push to either Docker Hub or AWS ECR. One or more of the layers will fail and retry, fail and retry, fail and retry. Sometimes it looks like it's nearly done, sometimes it fails much sooner. Eventually the entire command fails.
If I retry the command, the same problem layers remain problem layers.
If I rebuild my image, the problem may get better or worse. It does appear to move around with a different image build, but at one point I was pushing a base image (and it was failing), so I pushed a child image instead, and that push failed on the same layer as the base image while being peachy-keen with the other layers.
Web searches suggest fixes from 2020 or 2021, which certainly should be in the mainstream by now, although perhaps both Amazon and Docker Hub are running ancient (and broken) versions.
Additional Info:
Tried from my Mac. Same failure.
ca1399d10d43: Layer already exists
b74197196d00: Layer already exists
2c9fd6cbb874: Retrying in 7 seconds
d79f7f0a3cf1: Layer already exists
36eb8e32aa2f: Layer already exists
It's not an authentication problem. And it's really quite consistent -- some layers upload, some don't. So I don't see how it can be a network issue.
GitHub Packages started returning error pulling image configuration: unknown blob this weekend when trying to pull Docker images. Pushing images to the registry still works. I haven't found any information pointing to problems at GitHub.
000eee12ec04: Pulling fs layer
db438065d064: Pulling fs layer
e345d85b1d3e: Pulling fs layer
f6285e273036: Waiting
2354ee191574: Waiting
69189c7cf8d6: Waiting
771c701acbb7: Waiting
error pulling image configuration: unknown blob
How do I troubleshoot this?
This is the result of a failed push: the push appears to have been successful, but something went wrong on the registry side and a blob is missing.
To fix it, build your image again and push it again.
While this is likely a rare situation, you can test for it by deleting your image locally after pushing and then pulling it again, to ensure pulls work as expected.
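A minimal sketch of that verification loop, with the registry, owner, image, and tag as placeholders:

# Rebuild and push the image
docker build -t <registry>/<owner>/<image>:<tag> .
docker push <registry>/<owner>/<image>:<tag>

# Remove the local copy so the next pull must come from the registry
docker rmi <registry>/<owner>/<image>:<tag>

# If this succeeds, the configuration blob made it to the registry intact
docker pull <registry>/<owner>/<image>:<tag>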
One possible cause of failures when pulling or pushing image layers is an unreliable network connection, as outlined in this blog. By default the Docker engine runs 5 parallel upload operations.
You can tell the engine to use only a single upload or download operation at a time by setting max-concurrent-downloads (for pulls) or max-concurrent-uploads (for pushes) in the daemon configuration.
On Windows, update this via C:\Users\{username}\.docker\daemon.json or via the Docker Desktop GUI:
{
...
"max-concurrent-uploads": 1
}
On *nix, open /etc/docker/daemon.json (if the file doesn't exist in /etc/docker/, create it) and add the following values as needed:
{
...
"max-concurrent-uploads": 1
}
Then restart the Docker daemon.
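For example, a minimal /etc/docker/daemon.json that serializes both directions, followed by a daemon restart on a systemd-based Linux system (on Docker Desktop, restart via the GUI instead):

{
  "max-concurrent-downloads": 1,
  "max-concurrent-uploads": 1
}

sudo systemctl restart docker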
Note: it is currently not possible to specify these options on the docker push or docker pull command line, as per this post.
We have been using a pinned version of the Minio image (RELEASE.2016-10-07T01-16-39Z), but now it seems to have been removed.
I'm getting this from Docker:
Pulling minio (minio/minio:RELEASE.2016-10-07T01-16-39Z)...
Pulling repository docker.io/minio/minio
ERROR: Tag RELEASE.2016-10-07T01-16-39Z not found in repository docker.io/minio/minio
I'm finding Docker Hub hard to navigate. Where can I find a list of available versioned images, or a mirror of my exact image?
You can find the available tags for minio/minio on that repository's tag page.
If you already have the image you want on any of your systems, you can push it to Docker Hub yourself, then pull it onto your other systems. This has the benefit that you control whether that image ever gets deleted (it's your account, not someone else's).
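Roughly, assuming your Docker Hub account is named yourname and the image is still present locally on one of your machines:

# Retag the locally cached image under your own account
docker tag minio/minio:RELEASE.2016-10-07T01-16-39Z yourname/minio:RELEASE.2016-10-07T01-16-39Z

# Push it to your repository; it can then be pulled onto your other systems
docker login
docker push yourname/minio:RELEASE.2016-10-07T01-16-39Z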
You can also use a private registry, which would prevent the image from being deleted out from under you. But that is extra work you may not wish to do (you would have to host the registry yourself, set it up, maintain it...)
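A minimal sketch of that self-hosted option using the open-source registry image; the host name and port are placeholders, and a plain-HTTP registry would also need to be listed under insecure-registries in daemon.json (or be given TLS):

# Run a private registry on your own server
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Mirror the pinned image into it
docker tag minio/minio:RELEASE.2016-10-07T01-16-39Z myserver:5000/minio:RELEASE.2016-10-07T01-16-39Z
docker push myserver:5000/minio:RELEASE.2016-10-07T01-16-39Z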
We removed that Docker image version due to incompatibilities; with the recent releases this won't happen again.