Gcloud instances have no Docker Hub RateLimit

Recently Docker introduced a rate limit for Docker Hub: https://docs.docker.com/docker-hub/download-rate-limit
On my local machine and on DigitalOcean I can see it in action when running:
TOKEN=$(curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1 | grep RateLimit
I see, for example (w=21600 is the rate-limit window in seconds, i.e. six hours):
RateLimit-Limit: 500;w=21600
RateLimit-Remaining: 491;w=21600
But this isn't the case when running the same commands on a fresh GCP instance: there, the RateLimit headers are not returned at all. Any idea why this could be?

At least two alternatives:
Google's infrastructure is (inadvertently) stripping the headers.
Docker is not applying the limits (and so not adding the headers) to requests from Google's IP blocks.
I suspect the latter is more probable, because Docker may be concerned about unintentionally rate limiting by (shared) IPs. However, I tried an authenticated (to Docker) test too, which could have used my identity for rate limiting, but that response did not include the headers either (see the sketch below).
If you suspect the former, you should submit a support ticket to Google and have a support engineer trace the request for you.
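For reference, the authenticated variant of the same check; a sketch, assuming $DOCKER_USER and $DOCKER_PASS hold valid Docker Hub credentials:
# request a pull token using Docker Hub credentials instead of anonymously
TOKEN=$(curl --user "$DOCKER_USER:$DOCKER_PASS" "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
# same HEAD request as before; authenticated accounts get their own (higher) limit
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1 | grep RateLimit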
NOTE: I used a Cloud Shell VM.

Related

ORY Hydra introspect token from external client

I managed to set up ORY Hydra in a Docker container and first tests show that it is working fine. In particular, I can issue an access token for a client and later introspect that token using the hydra command-line interface. I can even introspect the token with a simple HTTP request from a shell on the Docker host machine, like:
curl -X POST -d 'token=Gatyew_trJ8rHo0OEqPU6D6a5-Zwma79ak7KffqT7rA.U7F43t5o0ax_qdj9EBFS8ulR2R1GaCzkaiFPAIE-5d4' http://127.0.0.1:9001/oauth2/introspect
where I use the published port of the introspection endpoint.
Now, when it comes to introspecting the token with the same curl call from a different machine, like
curl -X POST -d 'token=Gatyew_trJ8rHo0OEqPU6D6a5-Zwma79ak7KffqT7rA.U7F43t5o0ax_qdj9EBFS8ulR2R1GaCzkaiFPAIE-5d4' http://snowflake:9001/oauth2/introspect
the introspection is denied due to missing authorization. This is also indicated in the hydra log. Note that the same call works when issued from a shell on the Docker host machine itself, even without authorization. But called from a different machine, the call is denied, even when I use (as a test) basic authentication, like
curl -X POST -H "Authorization: Basic some-consumer:some-secret" -d 'token=Gatyew_trJ8rHo0OEqPU6D6a5-Zwma79ak7KffqT7rA.U7F43t5o0ax_qdj9EBFS8ulR2R1GaCzkaiFPAIE-5d4' http://snowflake:9001/oauth2/introspect
(Note that the hydra server is by default configured for basic authentication only).
What would I have to do to be authorized to introspect the token with a call from a different machine? And how and why can hydra distinguish the two identical calls (one from the Docker host machine, one from the other machine) and treat one as authorized and the other as not?
Found it: I had to pass client-id:client-secret base64-encoded; then it works.
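For reference, the value in the Authorization header is just the base64 of client-id:client-secret; a quick way to produce it, using the example credentials from above:
# -n: suppress the trailing newline, which would corrupt the encoding
echo -n 'some-consumer:some-secret' | base64
# c29tZS1jb25zdW1lcjpzb21lLXNlY3JldA==
Alternatively, curl -u some-consumer:some-secret builds the header for you.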
Create a bearer token:
curl -H "Authorization: Basic c29tZS1jb25zdW1lcjpzb21lLXNlY3JldA==" -d "grant_type=client_credentials" http://snowflake:9000/oauth2/token
8SVvB9PTyvGU-td4-VH3BcRMquUFMWG_umFyzQaKAMo.vJfXfIUDzNmmcMqa4_HExREdcmU7iW4CqK9v_qN4Jdg
Introspect the token:
curl -H "Authorization: Basic c29tZS1jb25zdW1lcjpzb21lLXNlY3JldA==" -d "token=8SVvB9PTyvGU-td4-VH3BcRMquUFMWG_umFyzQaKAMo.vJfXfIUDzNmmcMqa4_HExREdcmU7iW4CqK9v_qN4Jdg" http://snowflake:9001/oauth2/introspect
{"active":true,"client_id":"some-consumer","sub":"some-consumer","exp":1612965583,"iat":1612961983,"iss":"http://snowflake:9000/","token_type":"access_token"}
But I still wonder why the introspection request works on the Docker host machine without the Authorization header.

Kubernetes: is there an analogue to docker diff?

I'm trying to figure out where a program running in a container stores its logs, but I don't have SSH access to the machine the container is deployed on, only kubectl. If I had SSH access, I'd do something like this:
ssh machine-running-docker "docker diff \
  $(kubectl describe pod pod-name | \
  grep 'Container ID' | sed -E 's#^[^/]+//(.+)$#\1#')"
(The regex may be imprecise, just to give the idea).
Well, for starters, an app in a container should not store its logs in files inside the container. That said, it is sometimes hard to avoid when you work with third-party apps that are not configured for logging to stdout or some logging service.
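For apps that do log to stdout/stderr, no node access is needed at all; the output is available directly:
# stream the main container's stdout/stderr (pod-name is a placeholder)
kubectl logs pod-name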
Good old find to the rescue: just kubectl exec into the pod/container, and find / -mmin -1 will give you all files modified in the last minute. That should narrow the list enough for you (assuming the container has already lived for a few minutes).
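A concrete invocation, as a sketch (pod-name is a placeholder; add -c <container> if the pod has more than one container):
# list files modified in the last minute, silencing permission-denied noise
kubectl exec pod-name -- find / -mmin -1 2>/dev/null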

Does Google Container Registry support the Docker Registry v2 API?

I'm storing my Docker images in my private Google Container Registry and I want to interact with the images through the Registry v2 API, such as getting the tags of an image (/v2/:imageName/tags/list). I believe this is supported, according to this link, but I cannot find related documentation. Can anyone help me?
Just got an answer from Google support; hope this helps others:
$ export NAME=project-id/image
$ export BEARER=$(curl -u _token:$(gcloud auth print-access-token) "https://gcr.io/v2/token?scope=repository:$NAME:pull" | cut -d'"' -f 10)
$ curl -H "Authorization: Bearer $BEARER" https://gcr.io/v2/$NAME/tags/list
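The same bearer token works for the other pull-scoped v2 endpoints; for example, fetching a manifest (a sketch reusing $NAME and $BEARER from above):
$ curl -H "Authorization: Bearer $BEARER" https://gcr.io/v2/$NAME/manifests/latest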
Indeed it is (including that endpoint). You should be able to interact with it after authenticating in the standard way outlined here.
Feel free to reach out at gcr-contact@google.com if you have any trouble.
To add to Quyen Nguyen Tuan's answer: if you don't want to use gcloud at all, create a service account, pass the username _json_key, and use the service account's JSON key as the password instead:
$ export NAME=project-id/image
$ export BEARER=$(curl -u "_json_key:$(cat path/to/json/key.json)" "https://gcr.io/v2/token?scope=repository:$NAME:pull" | cut -d'"' -f 10)
$ curl -H "Authorization: Bearer $BEARER" https://gcr.io/v2/$NAME/tags/list
and remember to prefix appropriately (e.g. eu.gcr.io) if that's where your repo is.
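If jq is available, parsing the token is less brittle than counting quote-delimited fields; a sketch, assuming the response follows the registry token spec and carries a token field:
$ export BEARER=$(curl -s -u "_json_key:$(cat path/to/json/key.json)" "https://gcr.io/v2/token?scope=repository:$NAME:pull" | jq -r .token)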

Use Docker in a restricted internet environment

I'm planning to use Docker in an environment with restricted internet access, controlled by a Squid proxy, and I can't find a way to retrieve the URLs Docker uses under the hood when pulling an image.
Could you please help me find these URLs so I can add rules for the Docker repositories?
I guess it's quite difficult to find out the exact URLs that are used when Docker performs an image pull, but at least there is a workaround that can give you the list of external servers Docker interacts with:
# Console #1
sudo tcpdump | grep http | awk '{ gsub(":",""); print $3 "\n" $5 }' | grep -v "$YOUR_OWN_FQDN" > servers 2>&1
# Console #2
docker pull debian
# Console #1
sed -e 's/\.http\(s\)\?//g' servers | sort -u
I ended up with this list (unfortunately I'm not sure whether it's consistent or region-independent):
104.16.105.85
ec2-54-152-161-54.compute-1.amazonaws.com
ec2-54-208-130-47.compute-1.amazonaws.com
ec2-54-208-162-63.compute-1.amazonaws.com
server-205-251-219-168.arn1.r.cloudfront.net
server-205-251-219-226.arn1.r.cloudfront.net
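Once the allow-list is in place, the Docker daemon itself can be pointed at the Squid proxy. A sketch using the systemd drop-in mechanism Docker documents for proxy configuration; the proxy host and port are placeholders for your environment:
# create a drop-in that exports proxy variables to the docker daemon
sudo mkdir -p /etc/systemd/system/docker.service.d
sudo tee /etc/systemd/system/docker.service.d/http-proxy.conf <<'EOF'
[Service]
Environment="HTTP_PROXY=http://your-squid-host:3128"
Environment="HTTPS_PROXY=http://your-squid-host:3128"
EOF
# reload units and restart the daemon so pulls go through the proxy
sudo systemctl daemon-reload
sudo systemctl restart docker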

Searching the Google Container Registry

I've been setting up my Kubernetes cluster and have been using the Google Container Registry for storing images.
As part of my setup I am building some tooling where I need to search the remote repository for images, including tags.
So my question is:
How do I search Google Cloud Registry for images?
I've tried, without luck, to use the docker CLI to do the search:
$ docker search eu.gcr.io/project-101
Error response from daemon: Unexpected status code 403
$ gcloud docker search eu.gcr.io/project-101
Error response from daemon: Unexpected status code 403
$ docker login -e not@val.id -u _token -p mytoken https://eu.gcr.io
WARNING: login credentials saved in /Users/drwho/.docker/config.json
Login Succeeded
$ docker search eu.gcr.io/project-101
Error response from daemon: Unexpected status code 403
$ docker search eu.gcr.io/not-known
Error response from daemon: Unexpected status code 404
As you can see, I've tried a good deal of different approaches. The last option could be to use the Google Storage bucket API and search the file system "manually".
Strangely, gcr.io doesn't support search in any obvious way. Use https://console.cloud.google.com/gcr/images/google-containers/GLOBAL to search.
Docker clients through 1.7.0 only support unauthenticated search. For example, if you search for:
$ docker search gcr.io/google-containers/kube
NAME                                        DESCRIPTION   STARS   OFFICIAL   AUTOMATED
google-containers/hyperkube                               0
google-containers/kube-apiserver                          0
google-containers/kube-controller-manager                 0
google-containers/kube-scheduler                          0
google-containers/kube-ui                                 0
google-containers/kube2sky                                0
google-containers/kubectl                                 0
I submitted a PR for 1.8.0 that will authenticate searches, so they work similarly for private repositories.
Today, the 403 (Forbidden) is because the Docker client isn't actually passing the authentication that is set up via gcloud docker or docker login.
EDIT: As a workaround, it isn't too bad to just curl the endpoint:
$ curl -u _token:TOKEN "https://gcr.io/v1/search?q=QUERY" | \
python -mjson.tool
You can generate TOKEN with gcloud auth print-access-token.
We support queries of the form <project-id>[/image-substring], e.g. google-containers/ube would match google-containers/kube2sky.
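Putting the pieces together, the token lookup and the search can be a single command (a sketch reusing the example query from above):
$ curl -u "_token:$(gcloud auth print-access-token)" "https://gcr.io/v1/search?q=google-containers/kube" | python -mjson.tool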
