Nexus and Docker caching error for images with a username

I've been using Nexus as a Docker repository for a while to mitigate flaky internet. However, recently I've hit an issue that seems a bit weird. If I run:
docker pull server:8042/alpine:3.16.2
it works fine and everything gets cached. However, if I try to run
docker pull server:8042/sameersbn/gitlab:15.0.3
I get the following error:
Error response from daemon: unknown image in /v1.41/images/create?fromImage=server%3A8042%2Fsameersbn%2Fgitlab&tag=15.0.3
A direct pull from Docker Hub works fine, but through the cache, any image whose name includes a username/namespace fails. I'm using Engine 20.10.20, if that helps.
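A quick way to check whether the proxy or the engine is at fault is to request the manifest directly over the registry v2 API. A minimal sketch, reusing the server:8042 host from above (http vs https depends on how the Nexus connector is configured, and anonymous pull may need to be enabled):
# ask the Nexus Docker connector for the manifest directly; a successful response
# here while `docker pull` still fails points at the engine rather than the proxy
curl -sI http://server:8042/v2/sameersbn/gitlab/manifests/15.0.3 \
  -H 'Accept: application/vnd.docker.distribution.manifest.v2+json'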
Thanks in advance

This appears to be a bug introduced somewhere between Engine 20.10.17 and 20.10.20. Rolling back to an earlier version makes everything work again. I have also reported this to Docker, but as I'm not a paying customer, I suspect it will go unnoticed.
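For anyone needing the same workaround on a Debian/Ubuntu host, the rollback roughly looks like this (a sketch; the version string shown is illustrative, so use whatever apt-cache madison actually prints for 20.10.17 on your distro):
# list the engine versions apt knows about, then pin an earlier one
apt-cache madison docker-ce
sudo apt-get install --allow-downgrades \
  docker-ce=5:20.10.17~3-0~ubuntu-jammy docker-ce-cli=5:20.10.17~3-0~ubuntu-jammy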

Related

docker image stuck at pulling fs layer

We are using a self-hosted version of JFrog Artifactory. For some reason Artifactory went down, which we were able to resolve by restarting the server. After the reboot everything seemed to come back up correctly, but we then realized that anything using a particular layer, ddad3d7c1e96 (which is actually pulling down alpine 3.11.11 from Docker Hub), was getting stuck at "pulling fs layer" and would just sit there and keep retrying.
We tried "Zapping Caches" and running the maintenance option for Garbage Collection/Unused artifacts but we were still unable to download any image that had this particular layer.
Is it possible that this layer was somehow cached in Artifactory and ended up being corrupted? We also looked through the filestore and were unable to find the blob.
After a few days, the issue resolved itself and images can now be pulled but we are now left without an explanation...
Does anyone have an idea of what could have happened, and how we can clear out cached parent images that are not directly in our container registry but are pulled from Docker Hub or another registry?
We ran into a similar issue. Zapping caches worked for us too, although it took some time to complete.
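If it happens again, one way to confirm that a specific cached blob is the culprit is to ask the remote repository for it directly over the registry v2 API. A sketch only: the host, repository path, and credentials below are placeholders, and the full sha256 digest has to come from the image's manifest (the short ID in the error is not enough):
# HEAD the suspect layer blob on the Artifactory remote repo; an error here
# while Docker Hub serves the same blob fine points at a stuck cache entry
curl -sI -u user:password \
  "https://artifactory.example.com/v2/docker-remote/library/alpine/blobs/sha256:<full-layer-digest>"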

Docker fails on changed GCP virtual machine?

I have a problem with Docker that seems to happen when I change the machine type of a Google Cloud Platform VM instance. Images that were fine fail to run, fail to delete, and fail to pull, all with various obscure messages about missing keys (this is on Linux), duplicate or missing layers, and others I don't recall.
The errors don't always happen. One that occurred just now, with an image that ran a couple hundred times yesterday on the same setup, though before a restart, was:
$ docker run --rm -it mbloore/model:conda4.3.1-aq0.1.9
docker: Error response from daemon: layer does not exist.
$ docker pull mbloore/model:conda4.3.1-aq0.1.9
conda4.3.1-aq0.1.9: Pulling from mbloore/model
Digest: sha256:4d203b18fd57f9d867086cc0c97476750b42a86f32d8a9f55976afa59e699b28
Status: Image is up to date for mbloore/model:conda4.3.1-aq0.1.9
$ docker rmi mbloore/model:conda4.3.1-aq0.1.9
Error response from daemon: unrecognized image ID sha256:8315bb7add4fea22d760097bc377dbc6d9f5572bd71e98911e8080924724554e
$ docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
$
So it thinks it has no images, but the Docker folders are full of files, and it does know some hashes. It looks like some index has been damaged.
I restarted that instance, and then Docker seemed to be normal again without any special action on my part.
The only workarounds I have found so far are to restart and hope, or to delete several large Docker directories and recreate them empty. Then, after a restart, pull and run work again. But I'm now not sure that they always will.
I am running with Docker version 17.05.0-ce on Debian 9. My images were built with Docker version 17.03.2-ce on Amazon Linux, and are based on the official Ubuntu image.
Has anyone had this kind of problem, or know a way to reset the state of Docker without deleting almost everything?
Two points:
1) It seems that changing the VM had nothing to do with it. On some boots Docker worked, on others it didn't, with no change in configuration or contents.
2) At Google's suggestion I installed Stackdriver monitoring and logging agents, and I haven't had a problem through seven restarts so far.
My first guess is that there is a race condition on startup, and adding those agents altered it in my favour. Of course, I'd like to have a real fix, but for now I don't have the time to pursue the problem.
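For reference, the "delete several large Docker directories" workaround mentioned above usually boils down to resetting the daemon's state directory. A destructive sketch, assuming a systemd-based Debian and that losing all local images, containers, and volumes is acceptable:
# stop the daemon, move the old state aside, and start clean
sudo systemctl stop docker
sudo mv /var/lib/docker /var/lib/docker.bak
sudo systemctl start docker
# re-pull whatever you need afterwards
docker pull mbloore/model:conda4.3.1-aq0.1.9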

Docker image created from environment, pushed to a registry, pulled from a server... now what?

I started using Docker a few days ago, so I'm still a newbie in this domain. I deeply apologize if my questions seem obvious, because so far most of them aren't obvious to me.
My goal is to create a custom image from a Rails application, push it up to Docker Hub, then pull it onto a server and simply run it.
I used this doc to create my image, except that I chose to use MariaDB (works fine). So far, my project only contains a CRUD/scaffold that works nicely.
I then pushed it to a private repository on Docker Hub using this link. Again, no problem: the hub tells me the push went okay, and so does my console.
Then I connected to a private server running Debian, pulled the project from the hub, and made sure it existed using docker images.
My main question is the following: what should I do next?
If I refer to the first link, I create the Rails project from a close-to-empty Gemfile, then synchronise the local files with the image. However, on my server, all I have is an empty directory. If I'm not mistaken, redoing Docker's tutorial will "reset" my image.
This is where I'm currently lost: what should I do now? I don't believe that running docker run repo/my-image rails server is the right solution here.
Thank you in advance
You are doing fine so far. Now think about why you pushed the image to a private repository: you and anyone else with access to the repo should be able to get the image and create containers from it.
The point where you are lost is exactly what you should do next, i.e. execute docker run.
redoing Docker's tutorial will "reset" my image.
Docker is smart enough to download an image once and reuse it. Resetting will remove your locally downloaded images, but it won't remove them from the private repo.
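In practice, the server-side steps usually look something like this (a sketch only; the repository name, port, database URL, and server command are placeholders for whatever your app actually uses):
# pull the image you pushed, then start a container from it, publishing the Rails port
docker pull yourhubuser/rails-app:latest
docker run -d --name rails-app -p 3000:3000 \
  -e DATABASE_URL=mysql2://user:pass@db-host/app_production \
  yourhubuser/rails-app:latest \
  bundle exec rails server -b 0.0.0.0
After that, docker ps and docker logs rails-app tell you whether the app actually came up.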

Docker cannot pull any image

I've installed the newest version of Docker for Windows. Everything works except docker pull. When I try to pull any image, I always get a network timeout, even though I know my network is fine. I'm also not behind any proxy. I don't know what's wrong with this. Does anybody have any idea?
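Two things worth checking are basic reachability of the registry and DNS resolution inside the daemon, which is a common cause of these timeouts. A sketch:
# should return an HTTP response (401 Unauthorized is fine) if the registry is reachable
curl -sI https://registry-1.docker.io/v2/
# if name resolution is the problem, try pinning a public DNS server in the daemon
# configuration (Settings > Docker Engine) and restarting Docker, e.g.:
#   "dns": ["8.8.8.8", "1.1.1.1"]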

404 when pulling a private Docker repo from Hub

Today I deployed a few new servers and ran into a strange issue. On one of our private hub repos, I suddenly got a 404. It's strange, since it has worked fine in the past. Moreover, all of the other (private) repos under the same account work fine.
root@some-server:~# docker pull foo/bar
Pulling repository foo/bar
1112f98a0e3d: Error pulling image (latest) from foo/bar, HTTP code 404
511136ea3c5a: Download complete
2758ea31b20b: Error pulling dependent layers
2014/08/23 12:59:58 Error pulling image (latest) from foo/bar, HTTP code 404
The dockercfg is in place, and works fine for the other repos
root@some-server:~# cat ~/.dockercfg
{"https://index.docker.io/v1/":{"auth":"abc123","email":"docker-deploy@foobar.net"}}
I've also triple-checked to make sure that the group the account ('docker-deploy@foobar.net') belongs to has read/write access to this particular repo.
My gut feeling tells me that it is something on Docker's end.
What makes it even more strange is that I can pull the same repo without any issues from another account.
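One quick check that rules out stale credentials on the deploy account (foo/bar is the placeholder repo name from the output above; on clients of this era docker login rewrites ~/.dockercfg):
# re-authenticate on the failing server, then retry the pull
docker login
docker pull foo/bar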
Closing this, as the problem is reported on Docker's status page and is hence an upstream issue on Docker's end.
