Unable to push to dockerhub : 504 gateway timeout - docker

I'm having a problem pushing a Docker image for a Maven project to Docker Hub.
It says 504 Gateway Timeout, even though I can pull images.
I thought at first it was because of the image size, so I tried to push a 14 KB image.
Still the same error.

Related

JFrog Artifactory error: Pushing Docker images with manifest v2 schema 1 to this repository is blocked

I have two docker repositories running on the same JFrog cloud account/instance. One for internal release candidates and the other for potentially external GC releases. I want to be able to build the docker images and push to the internal repository, let QA/UAT go to town, and then copy the image to the release repository. I don't want to rebuild the image from source. Unfortunately, when I try to pull, tag and then push the image, I'm getting an error:
unauthorized: Pushing Docker images with manifest v2 schema 1 to this repository is blocked.
Both repositories block schema 1 manifests, but I am pushing fine to the internal repository, so it doesn't make much sense I wouldn't be able to push the same image to the release repository.
I've set up a pretty simple test to confirm (actual repository URLs censored):
% docker pull hello-world:latest
latest: Pulling from library/hello-world
0e03bdcc26d7: Pull complete
...
% docker tag hello-world:latest internal-rc.jfrog.io/hello-world:1.0.0-beta
% docker push internal-rc.jfrog.io/hello-world:1.0.0-beta
The push refers to repository [internal-rc.jfrog.io/hello-world]
9c27e219663c: Pushed
...
% docker system prune -a
...
Total reclaimed space: 131.8MB
% docker image pull internal-rc.jfrog.io/hello-world:1.0.0-beta
1.0.0-beta: Pulling from hello-world
0e03bdcc26d7: Pull complete
...
% docker image tag internal-rc.jfrog.io/hello-world:1.0.0-beta docker-release.jfrog.io/hello-world:1.0.0
% docker image push docker-release.jfrog.io/hello-world:1.0.0
The push refers to repository [docker-release.jfrog.io/hello-world]
9c27e219663c: Layer already exists
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the docker-release.jfrog.io registry NOW to avoid future disruption. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
unauthorized: Pushing Docker images with manifest v2 schema 1 to this repository is blocked. For more information visit https://www.jfrog.com/confluence/display/RTF/Advanced+Topics#AdvancedTopics-DockerManifestV2Schema1Deprecation
So I can upload the image fine to the first repository, and confirm that it is using schema 2:
{
"schemaVersion": 2,
"mediaType": "application/vnd.docker.distribution.manifest.v2+json",
"config": {
"mediaType": "application/vnd.docker.container.image.v1+json",
"size": 7004,
"digest": "sha256:66f750f4871ba45724699d7341ee7135caba46f63fb205351197464a66b55eff"
...
Does that mediaType being v1 matter? It seems like the manifest itself is version 2... But I don't know how I would change that, or why it would be allowed in one repository but not the other.
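For reference, one way to check which manifest media type a registry actually serves for a given tag is to ask for it directly. A sketch, assuming you are already logged in to that registry; the URL layout depends on how your Artifactory Docker registry is exposed, docker manifest inspect may need experimental CLI features enabled on Docker 19.03, and depending on the registry's auth setup the curl call may require a bearer token rather than basic auth:
docker manifest inspect internal-rc.jfrog.io/hello-world:1.0.0-beta
curl -u user:password -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://internal-rc.jfrog.io/v2/hello-world/manifests/1.0.0-beta
The top-level schemaVersion and mediaType of the returned manifest tell you whether the registry is serving schema 1 or schema 2 for that tag.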
I'm using what I believe is the latest version of Docker: Docker version 19.03.8, build afacb8b.
Anyone have any idea what's going on there? Is the schema version being changed between when I upload it the first time and when I download it? Or is it when I tag it or upload it the second time?
The root of this problem can probably be classified as user error. Specifically the user I'm using somehow had permissions removed from the release repository. Once that was restored everything works as expected.
I say "probably" because the error message has nothing to do with the actual problem, and cost me 2-3 hours worth of wild goose chasing.
So... If you see this error, go ahead and double check everything else around permissions/access before trying to figure out if there's something actually wrong with your image schema version.
We had a different case today with a similar error. I'm adding it here because this is the top Google result at the moment.
Pulling Docker images with manifest v2 schema 1 to this repository is blocked.
The fix was to change a setting on the remote repository.
Via UI:
Artifactory Admin -> Repositories -> Repositories -> Remote tab
Then select your Docker Hub repo, whatever you named it, then under Basic settings -> Docker Settings, uncheck the checkbox labeled
Block pulling of image manifest v2 schema 1
After that our images began pulling properly again.
There is a similar checkbox on local repos for pushing.
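If you prefer to check or change this outside the UI, the repository configuration can also be fetched and updated over Artifactory's REST API. A sketch with placeholder host, repository key, and credentials; the exact JSON field that controls schema-1 blocking may differ between Artifactory versions, so inspect the returned configuration rather than trusting a hard-coded name:
curl -u admin:password https://artifactory.example.com/artifactory/api/repositories/docker-hub-remote
# edit the schema-1 blocking flag in the returned JSON, save it as repo.json, then push it back:
curl -u admin:password -X POST -H "Content-Type: application/json" -d @repo.json https://artifactory.example.com/artifactory/api/repositories/docker-hub-remote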
For what it's worth, we're on Artifactory version 7.18.5 rev 71805900
edit: Why our particular issue was so surprising is (potentially) explained in more detail here: https://www.jfrog.com/jira/browse/RTFACT-2591
Docker pull requests fail due to a change in Docker Hub behavior. Now Docker Hub HTTP response headers return in lower case, for example, 'content-type' instead of 'Content-Type', causing Artifactory to fail to download and cache Docker images from Docker Hub.
but we have not yet tested whether an upgrade allows us to re-enable the aforementioned checkbox.
I had been getting the errors below while pulling or pushing Docker images from build servers. There is a proxy in our environment that is used to reach the Docker registry, and when resolving the proxy FQDN my DNS server was returning a non-functional IP address. We have 4 DNS servers and multiple proxy servers based on region. Once DNS was updated and a working proxy was returned, pushing and pulling started working again. So check the network side too; it may solve the issue. The error messages were misleading: initially I thought it was a Docker layer issue or a credential issue, but it was a network issue (some quick network checks are sketched after the error list below). The errors were:
error pulling image configuration: unknown blob
or
[DEPRECATION NOTICE] registry v2 schema1 support will be removed in an upcoming release. Please contact admins of the docker registry NOW to avoid future disruption. More information at https://docs.docker.com/registry/spec/deprecated-schema-v1/
manifest invalid: manifest invalid
. Will start No.6 try.
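Some quick checks along those lines, sketched with placeholder proxy hostname and port (substitute your environment's proxy FQDN and the registry you actually use):
nslookup proxy.corp.example.com                                                   # does DNS return a sensible address?
nc -vz proxy.corp.example.com 3128                                                # does that address accept connections on the proxy port?
curl -x http://proxy.corp.example.com:3128 -I https://registry-1.docker.io/v2/    # can the registry be reached through the proxy? (a 401 here is fine; it proves connectivity)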

Why does pulling images using `--ignore-pull-failures` show 404 failures the first time and "done" the second time?

I have this docker-compose.yml with some local images and some from a remote repository.
When running docker-compose pull --ignore-pull-failures the first time, it shows this error for the local images (which is normal):
ERROR: 404 Client Error: Not Found ("pull access denied for my-local-image, repository does not exist or may require 'docker login'")
Pulling my-local-image ... done
When running docker-compose pull --ignore-pull-failures the second time, it only shows:
Pulling my-local-image ... done
When should the 404 error show up, and why?
If your remote repository is on Docker Hub and the repo is private, log in before doing a pull:
docker login
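For reference, a minimal reproduction of the scenario with hypothetical image names: my-local-image is only built locally and never pushed anywhere, while something like redis:6-alpine comes from Docker Hub, so only the former can ever 404 on pull:
docker login                                    # needed first if any of the remote images are private
docker-compose pull --ignore-pull-failures      # remote images pull; local-only images report the 404 shown above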

GitHub Packages Docker - Error pulling image configuration: unknown blob

GitHub Packages started returning error pulling image configuration: unknown blob this weekend when trying to pull Docker images. Pushing images to the registry still works. I haven't found any information pointing to problems at GitHub.
000eee12ec04: Pulling fs layer
db438065d064: Pulling fs layer
e345d85b1d3e: Pulling fs layer
f6285e273036: Waiting
2354ee191574: Waiting
69189c7cf8d6: Waiting
771c701acbb7: Waiting
error pulling image configuration: unknown blob
How do I troubleshoot this?
This is the result of a failed push: the push appears to have been successful, but something went wrong on the registry side and something is missing.
To fix it, build your container again and push it again.
While this is likely a rare situation, you can test for it by deleting your image locally after pushing and then pulling it again to ensure pulls work as expected.
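A sketch of that rebuild-push-verify cycle, with placeholder owner/repo/image/tag names (GitHub Packages Docker used the docker.pkg.github.com registry at the time):
docker build -t docker.pkg.github.com/OWNER/REPO/IMAGE:TAG .
docker push docker.pkg.github.com/OWNER/REPO/IMAGE:TAG
docker image rm docker.pkg.github.com/OWNER/REPO/IMAGE:TAG    # drop the local copy so the next pull hits the registry
docker pull docker.pkg.github.com/OWNER/REPO/IMAGE:TAG        # if this succeeds, the pushed blobs are all there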
One possible cause of failures when pulling or pushing image layers is an unreliable network connection, as outlined in this blog. By default the Docker engine uses 5 parallel upload operations.
You can configure the Docker engine to use only a single upload or download operation by setting max-concurrent-downloads for downloads or max-concurrent-uploads for uploads.
On Windows, update C:\Users\{username}\.docker\daemon.json or use the Docker Desktop GUI:
{
...
"max-concurrent-uploads": 1
}
On *nix, open /etc/docker/daemon.json (if the daemon.json file doesn't exist in /etc/docker/, create it) and add the following values as needed:
{
...
"max-concurrent-uploads": 1
}
Then restart the Docker daemon.
Note: it is currently not possible to specify these options on the docker push or docker pull command line, as per this post.
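Putting it together, a complete /etc/docker/daemon.json might look like the following (a sketch; the restart command assumes a systemd-based Linux host):
{
  "max-concurrent-downloads": 1,
  "max-concurrent-uploads": 1
}
sudo systemctl restart docker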

Nexus 3: "Remote Connection Pending..." for docker hub

I followed the instructions shown here for setting up Docker Hub as a proxy repository, but it appears to be stuck with the "Remote Connection Pending..." status. What am I missing?
I'm using Nexus 3 milestone 6. The Dockerfile I am using is here: https://github.com/baselibrary/docker-nexus/blob/801465b9593afcd1533acf020c529767096b223c/3.0/Dockerfile
The video instructions above are effectively the same as the ones listed in the documentation here: https://books.sonatype.com/nexus-book/3.0/reference/docker.html#docker-proxy
The "connection pending" message is normal in 3.0m6. It just means nothing has been downloaded through the proxy repository yet. Try pulling an image from dockerhub, the status will change once the first file of the image is downloaded.
Your Nexus Repository Manager might be deployed behind a proxy server and therefore cannot connect to Docker Hub.
Have you tried to pull an image from the repository yet?
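A sketch of such a test pull, with placeholder hostname and HTTP connector port (whether you need docker login, and whether the library/ prefix is required for official images, depends on your Nexus configuration):
docker login nexus.example.com:8082
docker pull nexus.example.com:8082/library/alpine:latest
Once the first file comes down through the proxy, the "Remote Connection Pending..." status should change.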

Why is my Docker image not being pushed to Docker Hub?

I have a Docker image that I'd like to push to Docker Hub:
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mattthomson/hadoop-java8 0.1 d9926f422c14 11 days ago 857.9 MB
I've run docker login, logged in as mattthomson, and run docker push mattthomson/hadoop-java8:0.1. This takes a while, showing a progress bar of the upload.
However, it seems not to have worked. If I run docker pull mattthomson/hadoop-java8:0.1 from another computer, I get "Tag 0.1 not found in repository mattthomson/hadoop-java8". The image doesn't show up here, either.
What am I doing wrong?
I had to confirm my email address before my repositories would show up on Docker Hub. Simple, but I didn't notice it at first.
I experienced this when my Docker Hub organisation had reached its limit on the number of private repositories I was allowed to create (and I also got an email from Docker Hub about reaching this limit).
To solve the problem I upgraded my Docker Hub subscription to allow more repositories.
It would be nice if the error message from Docker Hub contained some hint to the cause.
I retried a number of times before the push went through successfully. I was misled by the fact that the upload was failing partway through without displaying an error message, just a timestamp; the error code was non-zero, though.
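For what it's worth, a minimal retry sketch for such flaky pushes, using the image name from the question; it simply repeats docker push until the exit code is zero:
for i in 1 2 3 4 5; do
  docker push mattthomson/hadoop-java8:0.1 && break
  echo "push attempt $i failed, retrying..." >&2
  sleep 10
done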
