I'm storing my Docker images in my private Google Container Registry and I want to interact with the images through the registry V2 APIs, such as getting the tags of an image (/v2/:imageName/tags/list). I believe that it is supported, according to this link, but I cannot find the related documentation. Can anyone help me?
Just got an answer from Google support; hope this helps others:
$ export NAME=project-id/image
$ export BEARER=$(curl -u _token:$(gcloud auth print-access-token) https://gcr.io/v2/token?scope=repository:$NAME:pull | cut -d'"' -f 10)
$ curl -H "Authorization: Bearer $BEARER" https://gcr.io/v2/$NAME/tags/list
Indeed it is (including that endpoint). You should be able to interact with it after authenticating in the standard way outlined here.
Feel free to reach out on gcr-contact@google.com also if you have any troubles.
To add to Quyen Nguyen Tuan's answer, if you don't want to have to use gcloud at all, create a service account, pass the username _json_key and use the service account's json key as the password instead:
$ export NAME=project-id/image
$ export BEARER=$(curl -u "_json_key:$(cat path/to/json/key.json)" "https://gcr.io/v2/token?scope=repository:$NAME:pull" | cut -d'"' -f 10)
$ curl -H "Authorization: Bearer $BEARER" https://gcr.io/v2/$NAME/tags/list
and remember to prefix appropriately (e.g. eu.gcr.io) if that's where your repo is.
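The same bearer token also works for the other V2 endpoints; for example, fetching the manifest of a tag looks roughly like this (a sketch; latest is just an example tag):
$ # Fetch the v2 manifest for a tag using the same bearer token
$ curl -H "Authorization: Bearer $BEARER" -H "Accept: application/vnd.docker.distribution.manifest.v2+json" https://gcr.io/v2/$NAME/manifests/latest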
I'm wondering if a container deployed on Cloud Run can somehow obtain its own service URL, or is that impossible?
I want to know this because I want a Cloud Run worker that creates Google Cloud Tasks for itself.
If it is possible, how can it be done?
Use namespaces.services.get to retrieve the Cloud Run service info. This API requires some information:
Service name (Cloud Run provides the default environment variable K_SERVICE)
Project ID
Region
Access token
Project ID, region and access token can be retrieved from the metadata server:
PROJECT_ID=$(curl "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REGION=$(curl "http://metadata.google.internal/computeMetadata/v1/instance/region" -H "Metadata-Flavor: Google" | sed "s/.*\///")
TOKEN=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r '.access_token')
Then you can use namespaces.services.get to retrieve the Cloud Run service info as JSON, extract the URL with jq, and export an environment variable for the application to use:
export PUBLIC_URL=$(curl -s "https://${REGION}-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/${PROJECT_ID}/services/${K_SERVICE}" -H "Authorization: Bearer ${TOKEN}" | jq -r '.status.url')
curl and jq may need to be installed; for Alpine: apk add --no-cache curl jq
The Cloud Run service account requires the run.services.get permission to call namespaces.services.get.
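Putting this together, one option is a small entrypoint script that resolves the URL before starting the application; a sketch (assuming curl and jq are installed; /app/server is a hypothetical binary, replace it with your own command):
#!/bin/sh
set -e
# Resolve project ID, region and an access token from the metadata server
PROJECT_ID=$(curl -s "http://metadata.google.internal/computeMetadata/v1/project/project-id" -H "Metadata-Flavor: Google")
REGION=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/region" -H "Metadata-Flavor: Google" | sed "s/.*\///")
TOKEN=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token" -H "Metadata-Flavor: Google" | jq -r '.access_token')
# Look up this service's own URL and expose it to the application
export PUBLIC_URL=$(curl -s "https://${REGION}-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/${PROJECT_ID}/services/${K_SERVICE}" -H "Authorization: Bearer ${TOKEN}" | jq -r '.status.url')
exec /app/server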
I wrote an article about calling a Cloud Run service from itself to prevent cold starts. The code that I wrote in Go is on my GitHub.
The idea is to call the metadata server to find the project number and the region (this way you don't have them hardcoded or in env vars), and then call the namespaces API.
If you need help to write it in another language, let me know.
If you know the service name, you can make a GET HTTP request to https://{endpoint}/apis/serving.knative.dev/v1/{name}
Method: namespaces.services.get
For example:
curl -X GET -H "Authorization: Bearer $(gcloud auth print-access-token)" https://us-central1-run.googleapis.com/apis/serving.knative.dev/v1/namespaces/your-project/services/your-service | grep url
"url": "https://cloud-run-xxxxxxxxxx-uc.a.run.app"
Recently Docker introduced a rate limit for Docker Hub: https://docs.docker.com/docker-hub/download-rate-limit
On my local machine and on DigitalOcean I can see it in action when running:
TOKEN=$(curl "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1 | grep RateLimit
I see for example:
RateLimit-Limit: 500;w=21600
RateLimit-Remaining: 491;w=21600
But this isn't the case when running this on a fresh GCP (gcloud) instance, where the RateLimit headers are not returned. Any idea why this could be?
There are at least 2 possible explanations:
Google's infrastructure is (inadvertently) stripping the headers
Docker is not applying the limits (and thus not adding the headers) to requests from Google's IP blocks
I suspect the latter is more probable, because Docker may be concerned about unintentionally rate limiting by (shared) IPs. However, I also tried a test authenticated to Docker, which could have used my identity to rate limit me, but that did not include the headers in the response either.
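For reference, the authenticated variant of the check just passes Docker Hub credentials when requesting the token; a sketch (yourusername/yourpassword are placeholders, and a personal access token can be used instead of the password):
TOKEN=$(curl -s -u "yourusername:yourpassword" "https://auth.docker.io/token?service=registry.docker.io&scope=repository:ratelimitpreview/test:pull" | jq -r .token)
curl --head -H "Authorization: Bearer $TOKEN" https://registry-1.docker.io/v2/ratelimitpreview/test/manifests/latest 2>&1 | grep RateLimit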
If you suspect the former, you should submit a support ticket to Google and have a support engineer trace the request for you.
NOTE: I used a Cloud Shell VM.
I want to promote an image from the test to the prod environment. How do I use curl POST to tag and push an image through the Docker Registry API v2? (Docker API 1.22)
The equivalent docker commands are:
docker tag my_testrepo:6000/new_test_image:test_tag myprod_repo:5000/new_prod_image:tag
docker push myprod_repo:5000/new_prod_image:tag
How do I use a curl command to tag an image into a repo:
POST /images/test/tag?repo=myrepo&force=0&tag=v42 HTTP/1.1
Could not find any instructions. Tried many times, all failed.
While researching this issue I stumbled upon this question. The solution I found revolves around this blog post. Credit to wheleph for the solution.
Essentially there is no method to tag an existing image; instead, you download the manifest of the existing tag and re-upload it as a new tag:
curl /v2/mybusybox/manifests/latest -H 'accept: application/vnd.docker.distribution.manifest.v2+json' > manifest.json
Then upload that manifest file back up.
curl -XPUT '/v2/mybusybox/manifests/new_tag' -H 'content-type: application/vnd.docker.distribution.manifest.v2+json' -d '@manifest.json'
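Putting both steps together against a hypothetical registry (registry.example.com:5000 is a placeholder; --data-binary is used so the manifest body is uploaded byte-for-byte):
# Download the manifest of the existing tag
curl -s "https://registry.example.com:5000/v2/mybusybox/manifests/latest" -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' > manifest.json
# Re-upload the same manifest under the new tag
curl -X PUT "https://registry.example.com:5000/v2/mybusybox/manifests/new_tag" -H 'Content-Type: application/vnd.docker.distribution.manifest.v2+json' --data-binary @manifest.json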
The screenshot shows what I get from the HTTP API.
When I enter:
curl -X GET registry.yiqixie.com:5000/v2/
It returns something that I can't read:
For a remote registry, you should access it through HTTPS.
You can also add -v in order to see the encoding of the response.
curl -k -v -X GET https://registry.yiqixie.com:5000/v2/_catalog
Make sure your shell supports UTF-8.
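If the response is valid JSON but just hard to read, piping it through jq pretty-prints it (a sketch, assuming jq is installed):
curl -k -s https://registry.yiqixie.com:5000/v2/_catalog | jq .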
Forgive the complete newbie questions. I'm very new to the Salesforce API.
I'm attempting to connect to one specific account where I have the login/password info. This app will not be for public use. I've done a lot of research and it seems I do not need OAuth 2.0 and can instead use OAuth.
Well, there is a huge tangle of different identifiers needed to make this work including username, password, customer key, secret and token.
I created a test Connected App in order to obtain the customer key and secret and then attempted to curl directly from the shell like this (got example from https://www.salesforce.com/us/developer/docs/api_rest/):
curl https://login.salesforce.com/services/oauth2/token -d "grant_type=password" -d "customer_key" -d "client_secret=secret" -d "username=abc@def.com" -d "password=xxxxx"
but I get this error:
{"error_description":"authentication failure","error":"invalid_grant"}
Is the security token needed? I've seen some info saying that if an IP range is set and the connection comes from that range, then the token should not be appended to the password.
All I want to do is connect to this account via the API so I can pull in data that will be used elsewhere. This seems needlessly complex and error-prone. How can I easily connect?
Here is what I use:
curl https://login.salesforce.com/services/oauth2/token \
-d "grant_type=password" \
-d "client_id=YOUR_CLIENT_ID" \
-d "client_secret=YOUR_CLIENT_SECRET" \
-d "username=YOUR_USERNAME" \
-d "password=YOUR_PASSWORD_AND_SECURITY_TOKEN"
Maybe you are forgetting to append your security token to the end of your password?
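Once the token request succeeds, the JSON response contains access_token and instance_url, which you can use for subsequent REST calls; a sketch (the jq extraction and the /services/data/ version-listing call are illustrative):
RESPONSE=$(curl -s https://login.salesforce.com/services/oauth2/token \
  -d "grant_type=password" \
  -d "client_id=YOUR_CLIENT_ID" \
  -d "client_secret=YOUR_CLIENT_SECRET" \
  -d "username=YOUR_USERNAME" \
  -d "password=YOUR_PASSWORD_AND_SECURITY_TOKEN")
ACCESS_TOKEN=$(echo "$RESPONSE" | jq -r .access_token)
INSTANCE_URL=$(echo "$RESPONSE" | jq -r .instance_url)
# List available REST API versions as a quick sanity check
curl -H "Authorization: Bearer $ACCESS_TOKEN" "$INSTANCE_URL/services/data/"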