404 when pulling a private Docker repo from Hub - docker

Today I deployed a few new servers and ran into a strange issue. On one of our private Hub repos, I suddenly got a 404. It's strange, since it has worked fine in the past. Moreover, all of the other (private) repos under the same account work fine.
root@some-server:~# docker pull foo/bar
Pulling repository foo/bar
1112f98a0e3d: Error pulling image (latest) from foo/bar, HTTP code 404
511136ea3c5a: Download complete
2758ea31b20b: Error pulling dependent layers
2014/08/23 12:59:58 Error pulling image (latest) from foo/bar, HTTP code 404
The .dockercfg is in place and works fine for the other repos:
root@some-server:~# cat ~/.dockercfg
{"https://index.docker.io/v1/":{"auth":"abc123","email":"docker-deploy@foobar.net"}}
I've also triple-checked to make sure that the group the account ('docker-deploy@foobar.net') belongs to has read/write access to this particular repo.
My gut feeling tells me that it is something on Docker's end.
What makes it even more strange is that I can pull the same repo without any issues from another account.
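One way to check whether the 404 comes from the Hub index itself rather than from the client is to query the index directly. A sketch against the v1 index endpoint the client used in that era; the exact endpoint and the user:pass values are assumptions on my part:
# Ask the v1 index for the repo's image list; a 404 here points at Docker's end:
curl -s -o /dev/null -w '%{http_code}\n' -u user:pass \
  https://index.docker.io/v1/repositories/foo/bar/images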

Closing this issue, as the problem is reported on Docker's status page and is hence upstream on Docker's end.

Related

Nexus and Docker Caching error for images with username

I've been using Nexus as a Docker repository for a while to mitigate flaky internet. However, recently I've hit an issue that seems a bit weird. If I run:
docker pull server:8042/alpine:3.16.2
it works fine and it all gets cached. However, if I try to run
docker pull server:8042/sameersbn/gitlab:15.0.3
I get the following error:
Error response from daemon: unknown image in /v1.41/images/create?fromImage=server%3A8042%2Fsameersbn%2Fgitlab&tag=15.0.3
Running a direct pull from Docker Hub works fine, but through the cache, any nested tag with a username fails. I'm using engine 20.10.20, if that helps.
Thanks in advance
This appears to be a bug introduced somewhere between engine 20.10.17 and 20.10.20; rolling back to an earlier version makes everything work again. I have also reported this to Docker, but as I'm not a paid member, I suspect it will go unnoticed.
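If you need to stay on a working engine until a fix lands, pinning the package version is one option. A hedged sketch for Debian/Ubuntu; the exact version string varies by distro, so check what apt actually offers first:
# List the engine versions your apt repo provides:
apt-cache madison docker-ce
# Illustrative version string for Ubuntu 22.04; substitute what madison prints:
sudo apt-get install --allow-downgrades \
  docker-ce=5:20.10.17~3-0~ubuntu-jammy docker-ce-cli=5:20.10.17~3-0~ubuntu-jammy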

docker pull ending in unexpected EOF

I have a quirky bug with 'docker pull'.
I create two images for a project on one local server, as two repos on the same Docker registry service; for this description, we'll call them api and templates. I then log into remote servers to pull and deploy those images as containers.
For the first one, I do a pull like:
docker pull 10.9.8.7:5000/api:api-1
api-1 is the tag, and I can pull that one just fine.
For the templates pull,
docker pull 10.9.8.7:5000/templates:template-1
The pull starts and then goes into this hellish waiting/retry routine, finally ending in: unexpected EOF
I'm pulling from the same Docker registry service, so it seems like I should have problems with both or neither.
I've seen this bug in my searches, and there are so many different suggestions; I'm looking for any input on why it works for one and not the other.
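One way to narrow this down (a sketch, not from the original thread): ask the registry's v2 API for the failing manifest directly. If the manifest and blobs download cleanly outside the engine, the retries point at the daemon or something in between; plain http is an assumption here, based on the bare IP:port registry in the question:
# Fetch the manifest for the failing repo straight from the registry's v2 API:
curl -v http://10.9.8.7:5000/v2/templates/manifests/template-1 \
  -H 'Accept: application/vnd.docker.distribution.manifest.v2+json'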

docker image stuck at pulling fs layer

We are using a self-hosted version of JFrog Artifactory. For some reason Artifactory went down, which we were able to resolve by restarting the server. After the reboot everything seemed to come back up correctly; however, we realized that anything using a particular layer, ddad3d7c1e96 (which actually pulls alpine 3.11.11 from Docker Hub), was now getting stuck at 'pulling fs layer' and would just sit there retrying.
We tried "Zapping Caches" and running the maintenance option for Garbage Collection/Unused Artifacts, but we were still unable to download any image that contained this particular layer.
Is it possible that this layer was somehow cached in Artifactory and ended up corrupted? We also looked through the filestore and were unsuccessful in finding the blob.
After a few days, the issue resolved itself and images can now be pulled but we are now left without an explanation...
Does anyone have an idea of what could have happened, and how we can clear out cached parent images which are not directly in our container registry but pulled from Docker Hub or another registry?
We ran into a similar issue; zapping caches worked for us, although it took some time to complete.
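If this happens again, one way to tell whether the layer is corrupt in Artifactory or unavailable upstream is to fetch the blob straight from Docker Hub. A sketch: the full sha256 digest has to come from the image manifest (the short ID in the question isn't enough), and jq is an assumption:
# Get an anonymous pull token for library/alpine from Docker Hub's auth service:
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:library/alpine:pull" | jq -r .token)
# Ask Hub for the layer blob directly; a 200 means the layer is fine upstream:
curl -s -o /dev/null -w '%{http_code}\n' -L \
  -H "Authorization: Bearer $TOKEN" \
  "https://registry-1.docker.io/v2/library/alpine/blobs/sha256:<full-layer-digest>"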

Docker hub cache with Harbor

I need to cache Docker images pulled from Docker Hub in my Harbor "Proxy Cache" project. I have therefore configured a project with the Proxy Cache option. In the Registries section I also added a new registry endpoint with "Docker Hub" as the provider. I added the following configuration to the Docker daemon:
cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://registry.harbor.com"]
}
When I pull images from Docker Hub, they are not being cached in my Harbor project. I need help resolving this and understanding how my request is handled by Harbor.
Example: my cache project name is 'proxy', and I need to pull httpd:latest.
This method is not working either.
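One easy thing to rule out first (an assumption on my part, since the question doesn't mention it): the daemon only picks up changes to registry-mirrors after a reload or restart.
# Restart the daemon so edits to /etc/docker/daemon.json take effect:
sudo systemctl restart docker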
Updated TL;DR:
At the time I originally answered this question, there wasn't a good solution. You can read my original answer, or just scroll down to the update section, where I note that Harbor v2.1's blog says they now support this.
Original Answer
I can answer part of your problem, but the answer to part 2 is that you can't. I can link you to the issue showing that they explicitly chose not to, due to technical limitations. The good news is that they are aware this is still something the community wants.
Part 1
One thing you may not know: repos on Docker Hub that do not have a project group (like docker pull nginx) still need a matching project in your Harbor. It will match on the project name library, so make sure you have a project named library. Not having this library project probably won't affect pass-through caching, but it definitely affects replication.
My setup contains:
harbor url: harbor.mydomain.com
project:
library
cache_proxy-hub-docker
I got my pulls to work with the following example:
docker pull harbor.mydomain.com/cache_proxy-hub-docker/goharbor/redis-photon:v2.1.0
v2.1.0: Pulling from cache_proxy-hub-docker/goharbor/redis-photon
b2823a5a3d08: Pull complete
...omitted...
369af38cd511: Pull complete
Digest: sha256:11bf4d11d81ef582401928b85aa2e325719b125821a578c656951f48d4c716be
Remember, for something like docker pull nginx, you have to do it as if it were actually library/nginx:
docker pull harbor.mydomain.com/cache_proxy-hub-docker/library/nginx
Using default tag: latest
latest: Pulling from cache_proxy-hub-docker/library/nginx
d121f8d1c412: Pull complete
...omitted...
Digest: sha256:fc66cdef5ca33809823182c9c5d72ea86fd2cef7713cf3363e1a0b12a5d77500
When I look in projects/cache_proxy-hub-docker I see:
cache_proxy-hub-docker/library/nginx
cache_proxy-hub-docker/goharbor/redis-photon
Please also remember: the pull command with the prefix is also what the image will be known as on your machine after the pull. You'll have to retag it to what you're expecting it to really be. That's why the docker daemon solution is so appealing...
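For the record, that retag step would look something like this (illustrative, using the names from the example above):
# Retag the proxied pull back to the name the rest of your tooling expects:
docker tag harbor.mydomain.com/cache_proxy-hub-docker/library/nginx:latest nginx:latest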
Part 2
I went around in circles on this same issue. Finally, I suspected they didn't implement it this way. That is correct:
https://github.com/goharbor/harbor/issues/8082#issuecomment-698012277
question:
Is there anyway to configure harbor 2.1 as a transparent docker hub mirror? ...
answer:
not at this time ... we couldn't find a good enough solution in 2.1, but this requirement is known to us.
UPDATE
The Harbor blog for v2.1 indicates that they have now fully added this feature. My answer above is accurate for versions prior to 2.1. I haven't personally tested this, but I will link the blog post talking about it.
Blog: https://goharbor.io/blog/harbor-2.1/
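If the blog is right, the asker's original setup should reduce to something like this on 2.1+ (an untested sketch, using the 'proxy' project name and Harbor host from the question):
# Pull httpd through the Harbor proxy-cache project; note the library/ prefix
# for official images, as explained above:
docker pull registry.harbor.com/proxy/library/httpd:latest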

JFrog Artifactory error: Pushing Docker images with manifest v2 schema 1 to this repository is blocked

I have two Docker repositories running on the same JFrog cloud account/instance: one for internal release candidates and the other for potentially external GC releases. I want to be able to build the Docker images, push to the internal repository, let QA/UAT go to town, and then copy the image to the release repository. I don't want to rebuild the image from source. Unfortunately, when I try to pull, tag, and then push the image, I get an error:
unauthorized: Pushing Docker images with manifest v2 schema 1 to this repository is blocked.
Both repositories block schema 1 manifests, but I am pushing fine to the internal repository, so it doesn't make much sense that I wouldn't be able to push the same image to the release repository.
I've set up a pretty simple test to confirm (actual repository URLs censored):
% docker pull hello-world:latest
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
...
% docker tag hello-world:latest internal-rc.jfrog.io/hello-world:1.0.0-beta
% docker push internal-rc.jfrog.io/hello-world:1.0.0-beta
The push refers to repository [internal-rc.jfrog.io/hello-world]
9c27e219663c: Pushed
...
% docker system prune -a
...
Total reclaimed space: 131.8MB
% docker image pull internal-rc.jfrog.io/hello-world:1.0.0-beta
1.0.0-beta: Pulling from hello-world
0e03bdcc26d7: Pull complete
...
% docker image tag internal-rc.jfrog.io/hello-world:1.0.0-beta docker-release.jfrog.io/hello-world:1.0.0
% docker image push docker-release.jfrog.io/hello-world:1.0.0
The push refers to repository [docker-release.jfrog.io/hello-world]
9c27e219663c: Layer already exists
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the docker-release.jfrog.io registry NOW to avoid future disruption. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
unauthorized: Pushing Docker images with manifest v2 schema 1 to this repository is blocked. For more information visit https://www.jfrog.com/confluence/display/RTF/Advanced+Topics#AdvancedTopics-DockerManifestV2Schema1Deprecation
So I can upload the image fine to the first repository, and confirm that it is using schema 2:
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 7004,
    "digest": "sha256:66f750f4871ba45724699d7341ee7135caba46f63fb205351197464a66b55eff"
...
Does that mediaType being v1 matter? It seems like the manifest itself is version 2... But I don't know how I would change that, or why it would be allowed in one repository but not the other.
I'm using, I believe, the latest version of Docker: version 19.03.8, build afacb8b.
Anyone have any idea what's going on there? Is the schema version being changed between when I upload it the first time and when I download it? Or is it when I tag it or upload it the second time?
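For anyone wanting to check which schema a registry actually serves for a tag, explicitly asking for the v2 media type works. A sketch; the exact Artifactory path layout and the user:password placeholders are assumptions:
# Request the manifest with the schema-2 media type; "schemaVersion" in the
# response should read 2 if the registry serves schema 2 for this tag:
curl -s -u user:password \
  -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
  https://internal-rc.jfrog.io/v2/hello-world/manifests/1.0.0-beta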
The root of this problem can probably be classified as user error. Specifically, the user I'm using had somehow had its permissions removed from the release repository. Once those were restored, everything worked as expected.
I say "probably" because the error message has nothing to do with the actual problem, and it cost me 2-3 hours of wild-goose chasing.
So... if you see this error, go ahead and double-check everything around permissions/access before trying to figure out whether there's something actually wrong with your image's schema version.
We had a different case today with a similar error. I'm adding it here because this is the top Google result at the moment.
Pulling Docker images with manifest v2 schema 1 to this repository is blocked.
The fix was to change a setting on the remote repository.
Via UI:
Artifactory Admin -> Repositories -> Repositories -> Remote tab
Then select your Docker Hub repo, whatever you named it, then under Basic settings -> Docker Settings, uncheck the checkbox labeled
Block pulling of image manifest v2 schema 1
After that our images began pulling properly again.
There is a similar checkbox on local repos for pushing.
For what it's worth, we're on Artifactory version 7.18.5 rev 71805900
edit: The surprising behavior in our particular case is (potentially) explained in some more detail here: https://www.jfrog.com/jira/browse/RTFACT-2591
Docker pull requests fail due to a change in Docker Hub behavior. Now Docker Hub HTTP response headers return in lower case, for example, 'content-type' instead of 'Content-Type', causing Artifactory to fail to download and cache Docker images from Docker Hub.
but we have not yet tested whether an upgrade allows us to re-enable the aforementioned checkbox.
I had been getting the errors below while pulling and pushing Docker images from our build servers. There is a proxy in this environment that is used to reach the Docker registry, and my DNS server was resolving the proxy's FQDN to a non-functional IP address. (I have 4 DNS servers and multiple proxy servers, depending on region.) Once DNS was updated and a functional proxy was returned, pulls started working again. So check the network side too; it may solve the issue. The error messages were misleading: initially I thought it was a Docker layer issue or a credential issue, but it was a network issue. The errors were:
error pulling image configuration: unknown blob
or
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the docker registry NOW to avoid future disruption. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
manifest invalid: manifest invalid
. Will start No.6 try.
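A couple of quick network-side checks along those lines (the proxy FQDN and port below are placeholders, not from the original post):
# Does every DNS server resolve the proxy to a live address?
dig +short proxy.example.internal
# Can the proxy actually reach Docker Hub's registry? A 401 from /v2/ is
# expected for unauthenticated requests and still proves connectivity:
curl -x http://proxy.example.internal:3128 -sI https://registry-1.docker.io/v2/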
