Docker Server message error: You have reached your pull rate limit - docker

I'm trying to install k8s; after installing some tools, kubelet throws this error:
Warning Failed 10s (x3 over 70s) kubelet Failed to pull image "kubesphere/ks-controller-manager:v3.3.1": rpc error: code = Unknown desc = failed to pull and unpack image "docker.io/kubesphere/ks-controller-manager:v3.3.1": failed to copy: httpReadSeeker: failed open: unexpected status code https://registry-1.docker.io/v2/kubesphere/ks-controller-manager/manifests/sha256:47a8ae9cb4f6f044aaa554727c81bafd67b5c05b5d90fbc707ac67938e62c6d7: 429 Too Many Requests - Server message: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
After that I tried to google this error and found a suggestion to log in to a Docker Hub account on every machine, but I still have the problem.
Does anyone know what the solution is?

Authenticated users on Docker Hub are limited to 200 container image pulls per six hours.
So if you need more than this limit, you need a Pro account on Docker Hub.
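If you stay on the free tier, it also helps to make sure the kubelet pulls as an authenticated user instead of anonymously. A minimal sketch of that, assuming Docker Hub credentials and using placeholder secret and namespace names (not anything from the question):

# Create a registry credential secret (fill in your own Docker Hub credentials):
kubectl create secret docker-registry dockerhub-cred \
  --docker-server=https://index.docker.io/v1/ \
  --docker-username=<your-dockerhub-user> \
  --docker-password=<your-dockerhub-password> \
  -n kubesphere-system
# Attach it to the default service account so pods in that namespace use it automatically:
kubectl patch serviceaccount default -n kubesphere-system \
  -p '{"imagePullSecrets": [{"name": "dockerhub-cred"}]}'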
You can mitigate this limitation by using a local registry as a pull-through cache, as described here:
https://github.com/t83714/docker-registry-mirror
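A minimal sketch of that approach, assuming the nodes run Docker (containerd needs its own mirror config) and using a placeholder mirror hostname:

# On a host reachable by all nodes, run the stock registry image as a Docker Hub pull-through cache:
docker run -d --restart=always --name hub-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2
# On each node, point the Docker daemon at the mirror (merge with any existing daemon.json):
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{ "registry-mirrors": ["http://hub-mirror.example.local:5000"] }
EOF
sudo systemctl restart docker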

Related

I'm getting an error whenever I pull any image from Docker Hub

C:\Users\D>docker pull nextcloud:latest
Error response from daemon: Head "https://registry-1.docker.io/v2/library/nextcloud/manifests/latest": net/http: TLS handshake timeout
I get this error whenever I try to pull any image from Docker Hub!
I'm using Windows.
It is very hard to tell from your specific issue, but after a quick search on the internet I've found a few links that could help you. Please check them out and give them a try:
Docker not able to pull images behind proxy TLS handshake timeout
https://serverfault.com/questions/908141/docker-pull-tls-handshake-timeout
https://www.devopsroles.com/docker-pull-issues-tls-handshake-timeout/
All of the above describe a similar issue, and each has at least one fix.
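If the timeout turns out to be proxy-related, the usual fix on a systemd-based Linux host is to give the Docker daemon the proxy settings explicitly (the proxy address below is a placeholder); on Docker Desktop for Windows the equivalent lives under Settings > Resources > Proxies.

sudo mkdir -p /etc/systemd/system/docker.service.d
cat <<'EOF' | sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:3128"
Environment="HTTPS_PROXY=http://proxy.example.com:3128"
Environment="NO_PROXY=localhost,127.0.0.1"
EOF
sudo systemctl daemon-reload
sudo systemctl restart docker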

Failed to pull image "mcr.microsoft.comoss/calico/pod2daemon-flexvol:v3.18.1" (missing "/")

Executive summary
For several weeks we sporadically see the following error on all of our AKS Kubernetes clusters:
Failed to pull image "mcr.microsoft.comoss/calico/pod2daemon-flexvol:v3.18.1
Obviously there is a missing "/" after "mcr.microsoft.com".
The problem started after upgrading the clusters from 1.17 to 1.20.
Where does this spelling error come from? Is there anything WE can do about it?
Some details
The full error is:
Failed to pull image "mcr.microsoft.comoss/calico/pod2daemon-flexvol:v3.18.1": rpc error: code = Unknown desc = failed to pull and unpack image "mcr.microsoft.comoss/calico/pod2daemon-flexvol:v3.18.1": failed to resolve reference "mcr.microsoft.comoss/calico/pod2daemon-flexvol:v3.18.1": failed to do request: Head https://mcr.microsoft.comoss/v2/calico/pod2daemon-flexvol/manifests/v3.18.1: dial tcp: lookup mcr.microsoft.comoss on 168.63.129.16:53: no such host
In 50% of the cases, the following is also logged:
Pod 'calico-system/calico-typha-685d454c58-pdqkh' triggered a Warning-Event: 'FailedMount'. Warning Message: Unable to attach or mount volumes: unmounted volumes=[typha-ca typha-certs calico-typha-token-424k6], unattached volumes=[typha-ca typha-certs calico-typha-token-424k6]: timed out waiting for the condition
There seems to be no measurable effect on cluster health apart from the warnings; I see no correlated errors in any services.
We did not find a trigger that causes the behavior. It does not seem to be correlated with any change we make on our side (deployments, scaling, ...).
There also seems to be no pattern to the frequency: sometimes there is no problem for several days, and then the error pops up 10 times a day.
Another observation is that the calico-kube-controller and several other pods were restarted, while the ReplicaSets and Deployments did not change.
Restart time
Since all the pods of the DaemonSet are eventually running, the problem seems to resolve itself after some time.
Are you behind a firewall, and did you use this link to set it up?
https://learn.microsoft.com/en-us/azure/aks/limit-egress-traffic
If so, add HTTP to the rule for mcr.microsoft.com; it looks like MS missed the 's' in a recent update.
Paul
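For reference, adding such a rule with the Azure CLI looks roughly like the sketch below; the resource group, firewall name, rule collection, priority, and source range are placeholders to adapt to the setup from the link above.

# Allow both HTTP and HTTPS egress to MCR through Azure Firewall:
az network firewall application-rule create \
  --resource-group myResourceGroup \
  --firewall-name myFirewall \
  --collection-name 'aks-egress' \
  --name 'allow-mcr' \
  --protocols 'http=80' 'https=443' \
  --source-addresses '10.0.0.0/16' \
  --target-fqdns 'mcr.microsoft.com' '*.data.mcr.microsoft.com' \
  --action allow \
  --priority 110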

OpenShift 4 error: Error reading manifest

During OpenShift installation from a local mirror registry, after I started the bootstrap machine I see the following error in the journal log:
release-image-download.sh[1270]:
Error: error pulling image "quay.io/openshift-release-dev/ocp-release#sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129":
unable to pull quay.io/openshift-release-dev/ocp-release#sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129: unable to pull image:
Error initializing source docker://quay.io/openshift-release-dev/ocp-release#sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129:
(Mirrors also failed: [my registry:5000/ocp4/openshift4#sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129: Error reading manifest
sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129 in my registry:5000/ocp4/openshift4: manifest unknown: manifest unknown]):
quay.io/openshift-release-dev/ocp-release#sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129: error pinging docker registry quay.io:
Get "https://quay.io/v2/": dial tcp 50.16.140.223:443: i/o timeout
Does anyone have any idea what it could be?
The answer is here in the error:
... dial tcp 50.16.140.223:443: i/o timeout
Try this on the command line:
$ podman pull quay.io/openshift-release-dev/ocp-release@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129
You'll need to be authenticated to actually download the content (this is what the pull secret does). However, if you don't even get the "unauthorized" error, that points more solidly to a network configuration issue.
That IP resolves to a quay host (you can verify that with "curl -k https://50.16.140.223"). Perhaps you have an internet filter or firewall in place that's blocking egress?
Resolutions:
fix your network issue, if you have one
look at doing a disconnected/air-gapped install -- https://docs.openshift.com/container-platform/4.7/installing/installing-mirroring-installation-images.html has more details on that
(If you're already doing an air-gapped install and it's your local mirror that's failing, then the local mirror itself is what needs fixing; see the sketch below.)
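One quick way to tell whether it's the mirror or the network, assuming skopeo is available on the install host (the registry hostname and auth file below are placeholders):

# Does the mirror actually serve the release manifest referenced in the error?
skopeo inspect --authfile pull-secret.json --tls-verify=false \
  docker://registry.example.local:5000/ocp4/openshift4@sha256:999a6a4bd731075e389ae601b373194c6cb2c7b4dadd1ad06ef607e86476b129
# Can the host reach quay.io at all?
curl -k --connect-timeout 10 https://quay.io/v2/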

Kubernetes executor pod creation failing with intermittent 404 and 'unauthorized' errors

I'm trying to run PySpark on a Kubernetes cluster on AWS.
I'm submitting to the cluster with the spark-submit command and viewing the results in the Kubernetes dashboard.
The driver pod is getting created fine, but the executors frequently fail to spin up, failing with either of the following errors:
Failed to pull image "docker.io/joemalt/[image-name]:[tag]": rpc error: code = Unknown desc = Error response from daemon: unauthorized: authentication required
Failed to pull image "docker.io/joemalt/[image name]:[tag]": rpc error: code = Unknown desc = Error response from daemon: error parsing HTTP 404 response body: invalid character 'p' after top-level value: "404 page not found\n"
Kubernetes attempts to recreate the pods, but the errors are frequent enough that it often doesn't manage to get any executor pods working at all.
Neither of these errors occurs when setting up the driver pod, or when pulling the image manually. The repository is public, so the "authentication required" error in particular doesn't make any sense to me. I've tried replacing the Kubernetes cluster, with no success.
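(For context, a typical spark-submit invocation for this kind of setup looks roughly like the sketch below; the API server address, namespace, and image reference are placeholders rather than the actual values, and spark.kubernetes.container.image.pullSecrets is only needed if the registry really requires credentials.)

spark-submit \
  --master k8s://https://<api-server-host>:6443 \
  --deploy-mode cluster \
  --name pyspark-job \
  --conf spark.kubernetes.namespace=default \
  --conf spark.kubernetes.container.image=docker.io/joemalt/[image-name]:[tag] \
  --conf spark.kubernetes.container.image.pullPolicy=IfNotPresent \
  --conf spark.kubernetes.container.image.pullSecrets=dockerhub-cred \
  --conf spark.executor.instances=2 \
  local:///opt/spark/work-dir/job.py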

docker pull generates 403 error message on latest version

I recently updated to Docker version 1.8.2, build 0a8c2e3, but when I execute any docker pull, the output shows a 403 error while trying to download the image layers.
Output:
docker pull cassandra
Using default tag: latest
Pulling repository docker.io/library/cassandra
f86e3cc71c14: Error pulling image (latest) from docker.io/library/cassandra, Server error: Status 403 while fetching image layer (756acc691e31cf79b1a74a404f91b2f4365cba936cec3f6eb4bc94ef419b33da)
8c00acfb0175: Download complete
756acc691e31: Error pulling dependent layers
Error pulling image (latest) from docker.io/library/cassandra, Server error: Status 403 while fetching image layer (756acc691e31cf79b1a74a404f91b2f4365cba936cec3f6eb4bc94ef419b33da)
I had the same problem: the new Docker registry on Docker Hub seems to use an external service on cloudfront.net, and that site forbids access from my country. The full error was:
Error statting layer: Head https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/a3/a3ed95caeb02ffe68cdd9fd84406680ae93d633cb16422d00e8a7c22955b46d4/data?Expires=1443470694&Signature=U11dGhTtNemJC-r1jR7fVmd5nlEq~imRzqgQKAmhmmxWLpLnN0Eb7iprdGvbD49Bc65j7omMZQG5cZnO6B3kcvMGF96z0pKJ8rHYJSZZgg4Wv6YoLfuvH~Wr2Sa11vW3ZvfssoK0NfVTsTFvq801TEAQ0g74gN8A6IrsZ8x0RH8_&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q: net/http: TLS handshake timeout
I found this by running the Docker daemon with -D (debug) and reading the log at /var/log/upstart/docker.log. Also, if you're behind a proxy, verify that it isn't denying access.
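A sketch of how to do that, assuming an Ubuntu host where Docker runs under upstart (as the log path above suggests):

# Enable daemon debug output and restart Docker:
echo 'DOCKER_OPTS="-D"' | sudo tee -a /etc/default/docker
sudo service docker restart
# Re-run the failing pull, then watch the debug log:
docker pull cassandra
sudo tail -f /var/log/upstart/docker.log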
