ORY Hydra introspect token from external client - oauth-2.0

I managed to set up ORY Hydra in a docker container, and first tests show that it is working fine. In particular, I can issue an access token for a client and later introspect that token using the hydra command line interface. I can even introspect the token with a simple HTTP request from a shell on the docker host machine, like:
curl -X POST -d 'token=Gatyew_trJ8rHo0OEqPU6D6a5-Zwma79ak7KffqT7rA.U7F43t5o0ax_qdj9EBFS8ulR2R1GaCzkaiFPAIE-5d4' http://127.0.0.1:9001/oauth2/introspect
where I use the published port of the introspection endpoint.
Now when it comes to introspect the token with the same curl call from a different machine, like
curl -X POST -d 'token=Gatyew_trJ8rHo0OEqPU6D6a5-Zwma79ak7KffqT7rA.U7F43t5o0ax_qdj9EBFS8ulR2R1GaCzkaiFPAIE-5d4' http://snowflake:9001/oauth2/introspect
the introspection is denied due to missing authorization, which is also shown in the hydra log. Note that the same call works when issued from a shell on the docker host machine itself, even without authorization. But called from a different machine, the call is denied, even when, as a test, I use basic authentication, like
curl -X POST -H "Authorization: Basic some-consumer:some-secret" -d 'token=Gatyew_trJ8rHo0OEqPU6D6a5-Zwma79ak7KffqT7rA.U7F43t5o0ax_qdj9EBFS8ulR2R1GaCzkaiFPAIE-5d4' http://snowflake:9001/oauth2/introspect
(Note that the hydra server is by default configured for basic authentication only).
What would I have to do to be authorized to introspect the token with a call from a different machine? And how and why can hydra distinguish the two identical calls (either from the docker host machine or from the other machine) and recognize the one as authorized and the other not?

Found it: I had to pass the client-id:client-secret base64-encoded, then it works.
Create a bearer token:
curl -H "Authorization: Basic c29tZS1jb25zdW1lcjpzb21lLXNlY3JldA==" -d "grant_type=client_credentials" http://snowflake:9000/oauth2/token
8SVvB9PTyvGU-td4-VH3BcRMquUFMWG_umFyzQaKAMo.vJfXfIUDzNmmcMqa4_HExREdcmU7iW4CqK9v_qN4Jdg
Introspect the token:
curl -H "Authorization: Basic c29tZS1jb25zdW1lcjpzb21lLXNlY3JldA==" -d "token=8SVvB9PTyvGU-td4-VH3BcRMquUFMWG_umFyzQaKAMo.vJfXfIUDzNmmcMqa4_HExREdcmU7iW4CqK9v_qN4Jdg" http://snowflake:9001/oauth2/introspect
{"active":true,"client_id":"some-consumer","sub":"some-consumer","exp":1612965583,"iat":1612961983,"iss":"http://snowflake:9000/","token_type":"access_token"}
But I still wonder why the introspection request works on the docker host machine without the Authorization header.
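As an aside, curl can build the base64-encoded Basic header itself via the -u flag; a small sketch showing that the shorthand and the manually encoded header are equivalent (using the client credentials from above):

```shell
# curl's -u flag base64-encodes "user:password" and sends it as the
# Basic Authorization header, so you don't have to encode it yourself.
CREDS=$(printf '%s' 'some-consumer:some-secret' | base64)
echo "$CREDS"   # c29tZS1jb25zdW1lcjpzb21lLXNlY3JldA==
# The two calls below are therefore equivalent (same hypothetical host as above):
# curl -u some-consumer:some-secret -d "token=..." http://snowflake:9001/oauth2/introspect
# curl -H "Authorization: Basic $CREDS" -d "token=..." http://snowflake:9001/oauth2/introspect
```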

Related

Google cloud run: Can a service know its own url?

I'm wondering whether a container deployed on Cloud Run can somehow obtain its own service URL, or is that impossible?
I want to know this because I'd like a Cloud Run worker that creates Google Cloud Tasks for itself.
If it is possible, how can it be done?
Use namespaces.services.get to retrieve the Cloud Run service info. The call requires some information:
Service name: Cloud Run provides this in the default environment variable K_SERVICE
Project ID
Region
Access token
Project ID, region, and access token can all be retrieved from the metadata server:
PROJECT_ID=$(curl "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REGION=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/region" -H "Metadata-Flavor: Google" | sed "s/.*\///")
TOKEN=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r '.access_token')
Then you can use namespaces.services.get to retrieve the Cloud Run service info as JSON, extract the url with jq, and export an environment variable for application use:
export PUBLIC_URL=$(curl -s "https://${REGION}-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/${PROJECT_ID}/services/${K_SERVICE}" -H "Authorization: Bearer ${TOKEN}" | jq -r '.status.url')
curl and jq may need to be installed; for Alpine: apk add --no-cache curl jq
The Cloud Run service account requires the run.services.get permission to call namespaces.services.get.
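The region endpoint of the metadata server returns a full resource path rather than a bare region name, which is what the sed in the snippet above strips off; a minimal illustration with a made-up project number:

```shell
# The metadata server returns the region as a full resource path such as
# projects/PROJECT_NUMBER/regions/REGION; keeping only the part after the
# last slash yields the bare region name. Project number is made up.
SAMPLE="projects/123456789/regions/us-central1"
REGION=$(printf '%s' "$SAMPLE" | sed "s/.*\///")
echo "$REGION"   # us-central1
```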
I wrote an article about having a Cloud Run service call itself to prevent cold starts. The code, written in Go, is on my GitHub.
The idea is to call the metadata server to find the project number and the region (so you don't have them hardcoded or in an env var), and then to call the namespaces API.
If you need help to write it in another language, let me know.
If you know the service name, you can make a GET HTTP request to https://{endpoint}/apis/serving.knative.dev/v1/{name}
Method: namespaces.services.get
For example :
curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/your-project/services/your-service | grep url
"url": "https://cloud-run-xxxxxxxxxx-uc.a.run.app"

How to log in to the JFrog container registry?

I start the container registry:
docker run --name artifactory -d -p 8081:8081 -p 8082:8082 docker.bintray.io/jfrog/artifactory-jcr:latest
I was able to login using the UI and create a repository etc.
Now I want to login using the CLI:
docker login localhost:8082
Username: admin
Password:
Error response from daemon: Get http://localhost:8082/v2/: received unexpected HTTP status: 503 Service Unavailable
What am I doing wrong? I got the same error when I use my local 192.168.x.x address (and after adding it to my insecure registries).
I tried it too and had to search for a while.
Using the API I saw: "message" : "status code: 503, reason phrase: In order to use Artifactory you must accept the EULA first"
I didn't find how to sign it using the UI but it worked this way:
$ curl -XPOST -vu admin:password http://localhost:8082/artifactory/ui/jcr/eula/accept
After that I was able to login:
docker login localhost:8081/docker/test
Username: admin
Password:
Login Succeeded
First, let us test whether the docker client can reach the JCR by running the curl below (with the admin credentials):
curl -u admin:password http://localhost:8082/artifactory/api/docker/docker/v2/token
Moreover, it looks like the docker client isn't resolving localhost to the docker container's IP but to the server's host. To check this, add the following line to the /etc/hosts file:
127.0.0.1 myartifactory
Then access it as myartifactory:8082 through the UI; if that is accessible, log in with "docker login myartifactory:8082".
Because each repo can have different authentication or authorization, you need to log in to a specific repo.
Say you created a docker repo "myrepo"; you can then log in as follows:
docker login localhost:8082/myrepo
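Putting the pieces together, a sketch of the one-time unlock sequence, assuming the container from the question and the default admin account (credentials here are placeholders):

```shell
# JCR answers 503 on the v2 API until the EULA has been accepted, which is
# why docker login fails while the UI works.
HOST=localhost
PORT=8082
EULA_URL="http://${HOST}:${PORT}/artifactory/ui/jcr/eula/accept"
echo "$EULA_URL"
# 1. Accept the EULA once (replace admin:password with your credentials):
#    curl -XPOST -u admin:password "$EULA_URL"
# 2. Afterwards the registry accepts logins:
#    docker login "${HOST}:${PORT}"
```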

Gcloud Instances have no Docker Hub RateLimit

Recently Docker introduced rate limit for the Docker Hub: https://docs.docker.com/docker-hub/download-rate-limit
On my local machine and DigitalOcean I can see these in action when running:
TOKEN=$(curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1 | grep RateLimit
I see for example:
RateLimit-Limit: 500;w=21600
RateLimit-Remaining: 491;w=21600
But this isn't the case when running this on a fresh GCP Gcloud instance. There the headers for RateLimit are not returned. Any idea why this could be?
There are at least two possible explanations:
Google's infrastructure is (inadvertently) stripping the headers
Docker is not applying the limits (nor adding the headers) to requests from Google's IP blocks
I suspect the latter is more probable, because Docker may be concerned about unintentionally rate limiting users behind shared IPs. However, I also tried a test authenticated to Docker, which could have used my identity to rate limit me, but that response did not include the headers either.
If you suspect the former, you should submit a support ticket to Google and have a support engineer trace the request for you.
NOTE: I used a Cloud Shell VM.
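For reference, the RateLimit headers encode a count plus a window length in seconds; a small parsing sketch using the sample values from the question:

```shell
# RateLimit headers use the form "<count>;w=<window-in-seconds>", so
# "RateLimit-Limit: 500;w=21600" means 500 pulls per 6-hour window.
HEADER="RateLimit-Remaining: 491;w=21600"
COUNT=$(printf '%s' "$HEADER" | sed 's/.*: *\([0-9]*\);.*/\1/')
WINDOW=$(printf '%s' "$HEADER" | sed 's/.*w=//')
echo "$COUNT pulls remaining in a $WINDOW second window"
```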

Docker Registry v2 api return strange code

The screenshot shows what I get from the HTTP API.
When I enter:
curl -X GET registry.yiqixie.com:5000/v2/
it returns something that I can't read.
For a remote registry, you should access it over https.
You can also add -v in order to see the encoding of the answer:
curl -k -v -X GET https://registry.yiqixie.com:5000/v2/_catalog
Make sure your bash supports UTF-8.
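Once reachable over https, /v2/_catalog returns JSON of the form {"repositories":[...]}; a sketch of making that readable with jq (repository names below are made up):

```shell
# The Docker Registry v2 catalog endpoint returns a JSON repository list;
# piping it through jq prints one repository per line instead of raw JSON.
SAMPLE='{"repositories":["alpine","myapp"]}'
printf '%s\n' "$SAMPLE" | jq -r '.repositories[]'
```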

Simplest way to log in to the Salesforce API? OAuth?

Forgive the complete newbie questions. I'm very new to the Salesforce API.
I'm attempting to connect to one specific account where I have the login/password info. This app will not be for public use. I've done a lot of research, and it seems I do not need OAuth 2.0 and can instead use OAuth.
Well, there is a huge tangle of different identifiers needed to make this work, including username, password, consumer key, secret, and token.
I created a test Connected App in order to obtain the customer key and secret and then attempted to curl directly from the shell like this (got example from https://www.salesforce.com/us/developer/docs/api_rest/):
curl https://login.salesforce.com/services/oauth2/token -d "grant_type=password" -d "customer_key" -d "client_secret=secret" -d "username=abc@def.com" -d "password=xxxxx"
but I get an error that
{"error_description":"authentication failure","error":"invalid_grant"}
Is the token needed? I've seen some info that if the IP range is set and the connection is from that range then it should not be appended to the password.
All I want to do is connect to this account via the API so I can pull in data that will be used elsewhere. This seems needlessly complex and error-prone. How can I easily connect?
Here is what I use:
curl https://login.salesforce.com/services/oauth2/token \
-d "grant_type=password" \
-d "client_id=YOUR_CLIENT_ID" \
-d "client_secret=YOUR_CLIENT_SECRET" \
-d "username=YOUR_USERNAME" \
-d "password=YOUR_PASSWORD_AND_SECURITY_TOKEN"
Maybe you are forgetting to append your security token to the end of your password?
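For what happens after the token call: a successful password-grant response is JSON containing (among other fields) access_token and instance_url, and subsequent REST calls go to the instance URL with a Bearer header. A sketch with made-up field values:

```shell
# Simulated token-endpoint response; real values come from the curl above.
RESPONSE='{"access_token":"00Dxx.FakeToken","instance_url":"https://na1.salesforce.com"}'
ACCESS_TOKEN=$(printf '%s' "$RESPONSE" | jq -r '.access_token')
INSTANCE_URL=$(printf '%s' "$RESPONSE" | jq -r '.instance_url')
echo "$INSTANCE_URL"
# Example follow-up REST call (API version is illustrative):
# curl "$INSTANCE_URL/services/data/v52.0/query?q=SELECT+Id+FROM+Account" \
#   -H "Authorization: Bearer $ACCESS_TOKEN"
```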
