GitHub Packages started returning error pulling image configuration: unknown blob this weekend when I try to pull Docker images. Pushing images to the registry still works. I haven't found any information pointing to problems at GitHub.
000eee12ec04: Pulling fs layer
db438065d064: Pulling fs layer
e345d85b1d3e: Pulling fs layer
f6285e273036: Waiting
2354ee191574: Waiting
69189c7cf8d6: Waiting
771c701acbb7: Waiting
error pulling image configuration: unknown blob
How do I troubleshoot this?
This is the result of a failed push: the push appears to have succeeded, but something went wrong on the registry side and a blob is missing.
To fix it, rebuild your image and push it again.
While this is likely a rare situation, you can test for it by deleting your image locally after pushing and then pulling it again, to ensure pulls work as expected.
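For example, a quick end-to-end check of a push could look like this (the image name is a placeholder):
# push the freshly built image
docker push ghcr.io/myorg/myimage:latest
# remove the local copy so the next pull really has to hit the registry
docker rmi ghcr.io/myorg/myimage:latest
# pull it back; if this fails with "unknown blob", the push was incomplete
docker pull ghcr.io/myorg/myimage:latest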
One possible cause of failures when pulling or pushing image layers is an unreliable network connection, as outlined in this blog. By default the Docker engine runs 5 upload operations in parallel.
You can configure the engine to use only a single upload or download operation at a time by setting max-concurrent-downloads for downloads or max-concurrent-uploads for uploads.
On Windows, update C:\Users\{username}\.docker\daemon.json or use the Docker Desktop GUI:
{
  ...
  "max-concurrent-uploads": 1
}
On *nix, open /etc/docker/daemon.json (if the daemon.json file doesn't exist in /etc/docker/, create it) and add the following values as needed:
{
  ...
  "max-concurrent-uploads": 1
}
Then restart the daemon.
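On a systemd-based Linux distribution, for example, that would be:
sudo systemctl restart docker
On Docker Desktop for Windows, restart Docker Desktop itself instead.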
Note: it is currently not possible to specify these options on the docker push or docker pull command line, as per this post.
Related
We are using a self-hosted version of JFrog Artifactory. For some reason Artifactory went down, which we were able to resolve by restarting the server. After the reboot everything seemed to come back up correctly, but we then realized that anything using a particular layer, ddad3d7c1e96 (which actually pulls down alpine 3.11.11 from Docker Hub), was now getting stuck at pulling fs layer and would just sit and keep retrying.
We tried "Zapping Caches" and ran the maintenance option for Garbage Collection/Unused Artifacts, but we were still unable to download any image that included this particular layer.
Is it possible that this layer was somehow cached in Artifactory and ended up corrupted? We also looked through the filestore but were unable to find the blob.
After a few days, the issue resolved itself and images can now be pulled but we are now left without an explanation...
Does anyone have an idea of what could have happened, and how we can clear out cached parent images that are not directly in our container registry but are pulled from Docker Hub or another registry?
We ran into a similar issue - zapping caches worked for us too; however, it took some time to complete.
In the latest Docker, I encountered an issue like this.
docker pull mongo:4.0.10
4.0.10: Pulling from library/mongo
f7277927d38a: Pull complete
8d3eac894db4: Downloading
edf72af6d627: Download complete
3e4f86211d23: Download complete
5747135f14d2: Download complete
f56f2c3793f6: Download complete
f8b941527f3a: Download complete
4000e5ef59f4: Download complete
ad518e2379cf: Download complete
919225fc3685: Download complete
45ff8d51e53a: Download complete
4d3342ddfd7b: Download complete
26002f176fca: Download complete
4.0.10: Pulling from library/mongo
f7277927d38a: Pulling fs layer
8d3eac894db4: Pulling fs layer
edf72af6d627: Pulling fs layer
When I pull an image, Docker pulls it from my registry-mirrors first (quickly), then from the official hub (I guess; very slowly).
I did not have this problem before.
The Docker version I'm using at the moment (Docker for Windows):
docker -v
Docker version 19.03.13-beta2, build ff3fbc9d55
Update: it occurred again today. I'm not sure whether something changed its config and affected Docker; I had been playing with Minikube and Kind in the days before.
Update: I created an issue (moby/moby#41547); please vote for it if you are encountering the same problem.
I have the same issue as you (I'm in China).
After some research, here is why Docker pulls twice. Note this line:
8d3eac894db4: Downloading
This means the layer cannot be downloaded from your registry mirror.
So after the download times out, Docker pulls the mongo image from the official Docker Hub instead.
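If you suspect the mirror, check the registry-mirrors entry in your daemon.json (the mirror URL below is a placeholder):
{
  ...
  "registry-mirrors": ["https://mirror.example.com"]
}
Removing the entry (and restarting the daemon) makes Docker pull directly from Docker Hub instead of waiting for the mirror to time out first.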
I have two Docker repositories running on the same JFrog cloud account/instance: one for internal release candidates and the other for potentially external GC releases. I want to be able to build the Docker images, push them to the internal repository, let QA/UAT go to town, and then copy the image to the release repository. I don't want to rebuild the image from source. Unfortunately, when I try to pull, tag, and then push the image, I get an error:
unauthorized: Pushing Docker images with manifest v2 schema 1 to this repository is blocked.
Both repositories block schema 1 manifests, but I am pushing fine to the internal repository, so it doesn't make much sense I wouldn't be able to push the same image to the release repository.
I've set up a pretty simple test to confirm (actual repository URLs censored):
% docker pull hello-world:latest
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
...
% docker tag hello-world:latest internal-rc.jfrog.io/hello-world:1.0.0-beta
% docker push internal-rc.jfrog.io/hello-world:1.0.0-beta
The push refers to repository [internal-rc.jfrog.io/hello-world]
9c27e219663c: Pushed
...
% docker system prune -a
...
Total reclaimed space: 131.8MB
% docker image pull internal-rc.jfrog.io/hello-world:1.0.0-beta
1.0.0-beta: Pulling from hello-world
0e03bdcc26d7: Pull complete
...
% docker image tag internal-rc.jfrog.io/hello-world:1.0.0-beta docker-release.jfrog.io/hello-world:1.0.0
% docker image push docker-release.jfrog.io/hello-world:1.0.0
The push refers to repository [docker-release.jfrog.io/hello-world]
9c27e219663c: Layer already exists
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the docker-release.jfrog.io registry NOW to avoid future disruption. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
unauthorized: Pushing Docker images with manifest v2 schema 1 to this repository is blocked. For more information visit https://www.jfrog.com/confluence/display/RTF/Advanced+Topics#AdvancedTopics-DockerManifestV2Schema1Deprecation
So I can upload the image fine to the first repository, and confirm that it is using schema 2:
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 7004,
    "digest": "sha256:66f750f4871ba45724699d7341ee7135caba46f63fb205351197464a66b55eff"
  },
  ...
Does that config mediaType being v1 matter? The manifest itself seems to be version 2... but I don't know how I would change that, or why it would be allowed in one repository but not the other.
I believe I'm using the latest version of Docker: Docker version 19.03.8, build afacb8b.
Anyone have any idea what's going on here? Is the schema version being changed between when I upload the image the first time and when I download it? Or when I tag it or upload it the second time?
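For anyone who wants to reproduce the manifest check above: one way to inspect what a registry actually stores is the registry v2 API, requesting the schema 2 media type explicitly (credentials are placeholders, and the exact path can differ between Artifactory layouts):
curl -s -u "$USER:$PASS" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  https://internal-rc.jfrog.io/v2/hello-world/manifests/1.0.0-beta
# "schemaVersion": 2 in the response means the stored manifest is schema 2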
The root of this problem can probably be classified as user error. Specifically, the user I was pushing as somehow had permissions removed from the release repository. Once those were restored, everything worked as expected.
I say "probably" because the error message has nothing to do with the actual problem, and it cost me 2-3 hours of wild goose chasing.
So... if you see this error, go ahead and double-check everything around permissions/access before trying to figure out whether there's actually something wrong with your image's schema version.
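For example, a quick sanity check (tag name is a throwaway placeholder):
# confirm your credentials are accepted by the target registry
docker login docker-release.jfrog.io
# try pushing a tiny throwaway tag; if even this fails with the same
# error, suspect permissions rather than the image itself
docker tag hello-world:latest docker-release.jfrog.io/hello-world:permission-test
docker push docker-release.jfrog.io/hello-world:permission-test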
We had a different case today with a similar error. I'm adding here because this is the top google result at the moment.
Pulling Docker images with manifest v2 schema 1 to this repository is blocked.
The fix was to change a setting on the remote repository.
Via UI:
Artifactory Admin -> Repositories -> Repositories -> Remote tab
Then select your Docker Hub repo, whatever you named it, then under Basic settings -> Docker Settings, uncheck the checkbox labeled
Block pulling of image manifest v2 schema 1
After that our images began pulling properly again.
There is a similar checkbox on local repos for pushing.
For what it's worth, we're on Artifactory version 7.18.5 rev 71805900
edit: The surprising behavior in our particular case is (potentially) explained in more detail here: https://www.jfrog.com/jira/browse/RTFACT-2591
Docker pull requests fail due to a change in Docker Hub behavior. Now Docker Hub HTTP response headers return in lower case, for example, 'content-type' instead of 'Content-Type', causing Artifactory to fail to download and cache Docker images from Docker Hub.
but we have not yet tested whether an upgrade allows us to re-enable the aforementioned checkbox.
I have been getting the errors below while pulling or pushing Docker images from our build servers. I have a proxy in the environment that is used to reach the Docker registry. My DNS server, when resolving the proxy's FQDN, was returning a non-functional IP address. I have 4 DNS servers and multiple proxy servers based on region. Once DNS was updated and a working, functional proxy was returned, everything started working. So check the network side too; it may solve the issue. The error messages were misleading: initially I thought it was a Docker layer issue or a credential issue, but no, it was a network issue. The errors were:
error pulling image configuration: unknown blob or
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the docker registry NOW to avoid future disruption. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
manifest invalid: manifest invalid
. Will start No.6 try.
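In my case, a few basic network checks would have surfaced the problem much sooner (proxy host and port are placeholders):
# does DNS return a sane address for the proxy?
nslookup proxy.example.com
# can the registry be reached through the proxy at all?
# a 401 response here is fine; a timeout is not
curl -v -x http://proxy.example.com:3128 https://registry-1.docker.io/v2/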
I'm trying to push a docker container to a private registry on the Google Cloud Platform:
gcloud docker -- push gcr.io/<project-name>/<container-name>
and a checksum fails:
e9a19ae6509f: Pushing [========================================> ] 610.9 MB/752.4 MB
xxxxxxxxxxxx: Layer already exists
...
xxxxxxxxxxxx: Layer already exists
file integrity checksum failed for "var/lib/postgresql/9.5/main/pg_xlog/000000010000000000000002"
Then I deleted that file (and more) from the container, committed the change, and tried to push the new image. I got the same error.
Is there some way to push up my image without pushing up the commit that contains the broken file? Any insight into why the new commit fails in the same way?
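A workaround worth considering (my own assumption, not something the thread confirms): committing only adds a new layer on top, so the corrupted parent layer is still part of the image and still gets pushed. Flattening the container into a single fresh layer sidesteps that, at the cost of discarding image history and metadata such as ENV and CMD (container ID and image name are placeholders):
# export the container's filesystem and re-import it as a single-layer image
docker export <container-id> | docker import - gcr.io/<project-name>/<container-name>:flattened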
FWIW, that looks like a local daemon error prior to contacting the registry, so I very much doubt there is anything we will be able to do on our side. That said, if you reach out to us as Jake (jsand) suggests, we can hopefully try to help you resolve the issue.
I am using Docker on Ubuntu 12.04. I used Docker 0.7.2 to modify a container that I had created with Docker 0.7.1, and when I tried to push the committed changes, I got this Failed to upload error (tried twice):
avilella@ubuntu64:~/src/docker$ sudo docker push avilella/basespace-playground
The push refers to a repository [avilella/basespace-playground] (len: 1)
Sending image list
Pushing repository avilella/basespace-playground (1 tags)
5c7f024259a7: Image already pushed, skipping
[...]
04869f04a8c9: Pushing 2.601 MB/16.55 MB 2m16s
[...]
2014/01/02 23:16:54 Failed to upload layer: Put https://registry-1.docker.io/v1/images/cdf6082e5d472d18c0540c43224f4c9b8d1264a2bb3c848a5b5e5a3b00efbf1a/layer: archive/tar: invalid tar header
Any ideas?
I upgraded Docker from 0.7.3 to 0.7.5 and this error stopped.
ALSO POSTED ON GITHUB ISSUE:
I don't have the time to go through a lot of code right now; if one of the devs doesn't get on it, I'll look into it later. But it appears to be an issue or change with the registry archive auto-detection settings, or with the tar file headers being used, probably changed in the new version you are using.
SEE SIMILAR ISSUE:
http://lists.busybox.net/pipermail/busybox/2011-February/074737.html
If you did not put too much work into the new layer, I would pull your previously pushed image from the registry, redo the new layer's work on top of it, and then push. You probably did not pull from the registry, but instead built the new layer on the last local commit, which had different headers. It is probably a good idea, whenever you upgrade, to push your work first, then upgrade, then pull and continue working from that pull, since things like this can happen when headers and formats differ between versions. Hope that helps.
This got fixed in today's version of docker (obtained via apt-get in my case).