Docker registry 2.0 api v2 access using auth token - docker

I have a private docker repository on quay.io and I want to get a list of the repositories and tags in it.
So I use curl -IL https://quay.io/user/accountName/v2/_catalog
and it returns
HTTP/1.1 401 UNAUTHORIZED
Server: nginx/1.9.5
Date: Thu, 18 Feb 2016 11:02:19 GMT
Content-Type: application/json
Content-Length: 117
Connection: keep-alive
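The 401 means the registry wants a Bearer token even for read access; the Www-Authenticate header of that response names the token endpoint (realm) and service to use. A rough sketch of the standard Docker Registry v2 token flow, assuming quay.io's realm is https://quay.io/v2/auth and a repository called accountName/repoName (substitute whatever the header actually reports), with jq installed:
# Exchange registry credentials for a pull-scoped Bearer token.
TOKEN="$(curl -sSL -u "username:password" "https://quay.io/v2/auth?service=quay.io&scope=repository:accountName/repoName:pull" | jq -r .token)"
# The spec's v2 endpoints sit directly under /v2/, e.g. the tag list of a single repository:
curl -sSL -H "Authorization: Bearer $TOKEN" "https://quay.io/v2/accountName/repoName/tags/list"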

Related

docker registry: https instead of http

I've just deployed a docker registry.
I'm able to get access to it using:
$ curl -I chart-example.local/v2/
HTTP/1.1 200 OK
Content-Length: 2
Content-Type: application/json; charset=utf-8
Date: Tue, 28 Jan 2020 20:10:35 GMT
Docker-Distribution-Api-Version: registry/2.0
Vary: Accept-Encoding
X-Content-Type-Options: nosniff
However, when I try to push a local image to it, I get this message:
$ docker push chart-example.local/feedly:latest
The push refers to repository [chart-example.local/feedly]
Get https://chart-example.local/v2/: x509: certificate has expired or is not yet valid
Why is docker trying to access it using https instead of http?
Docker uses https by default for security. You can override this setting by modifying your daemon.json file with the following content. Do not use this setting in production.
{
"insecure-registries" : ["chart-example.local"]
}
See this link for more information: https://docs.docker.com/registry/insecure/
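After editing the file (commonly /etc/docker/daemon.json on Linux) the daemon must be restarted for the setting to take effect; a quick sketch assuming a systemd-based host:
sudo systemctl restart docker
# the registry should now show up under "Insecure Registries" in the daemon info
docker info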

Websocket request error on Cloud Run on GKE

I just deployed a Cloud Run on GKE (version 1.13.6-gke.6) service following this: https://cloud.google.com/run/docs/deploying
The code was similar to https://cloud.google.com/appengine/docs/flexible/python/using-websockets-and-session-affinity
and it worked locally,
but my requests (via curl and JavaScript) return a 503 error:
< HTTP/1.1 503 Service Unavailable
< content-length: 85
< content-type: text/plain
< date: Fri, 14 Jun 2019 05:07:37 GMT
< server: istio-envoy
< connection: close
Any idea what I'm missing?

cannot deploy docker image from AWS private registry

I am trying to push an app from a docker image hosted in the AWS Elastic Container Registry and am getting 500 error codes from the Cloud Foundry API when pushing. Am I doing something wrong, or is there just an issue with the API currently? Any help is appreciated.
The push command used (real route, app and image names replaced):
cf push dockerized-app --docker-image 300401118676.dkr.ecr.eu-central-1.amazonaws.com/my/image:latest --docker-username AWS --hostname my-dockerized-app -i 1 -m 1024M -k 1024M
cf-cli version:
cf version 6.34.1+bbdf81482.2018-01-17
This is the standard log output I get:
Using docker repository password from environment variable CF_DOCKER_PASSWORD.
Pushing app dockerized-app to org ORG / space SPACE as someone#somewhere.ch...
Getting app info...
Creating app with these attributes...
+ name: dockerized-app
+ docker image: 300401118676.dkr.ecr.eu-central-1.amazonaws.com/my/image:latest
+ docker username: AWS
+ disk quota: 1G
+ instances: 1
+ memory: 1G
routes:
+ my-dockerized-app.scapp.io
Creating app dockerized-app...
Unexpected Response
Response code: 500
CC code: 0
CC error code:
Request ID: f0789965-19b1-4178-5cce-e42ff671a99b::6eb55c40-70de-4011-ad30-ee60aab54d82
Description: {
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}
FAILED
Here is the relevant log output with the -v flag set:
Creating app with these attributes...
+ name: dockerized-app
+ docker image: 300401118676.dkr.ecr.eu-central-1.amazonaws.com/my/image:latest
+ docker username: AWS
+ disk quota: 1G
+ instances: 1
+ memory: 1G
routes:
+ my-dockerized-app.scapp.io
Creating app dockerized-app...
REQUEST: [2018-02-27T18:39:28+01:00]
POST /v2/apps HTTP/1.1
Host: api.lyra-836.appcloud.swisscom.com
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: cf/6.34.1+bbdf81482.2018-01-17 (go1.9.2; amd64 darwin)
{
"disk_quota": 1024,
"docker_credentials": {
"password": "[PRIVATE DATA HIDDEN]",
"username": "AWS"
},
"docker_image": "300401118676.dkr.ecr.eu-central-1.amazonaws.com/my/image:latest",
"instances": 1,
"memory": 1024,
"name": "dockerized-app",
"space_guid": "07cead83-7db5-477e-83ca-f7bbee10e557"
}
RESPONSE: [2018-02-27T18:39:28+01:00]
HTTP/1.1 500 Internal Server Error
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Length: 99
Content-Type: application/json;charset=utf-8
Date: Tue, 27 Feb 2018 17:39:28 GMT
Expires: 0
Pragma: no-cache
Server: nginx
Strict-Transport-Security: max-age=16070400; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Vcap-Request-Id: 6c6acb3a-4ead-4f88-5d2c-e7d7f846b2af::0e919224-e372-46f1-8d70-19bf30f85145
X-Xss-Protection: 1; mode=block
{
"code": 10001,
"description": "An unknown error occurred.",
"error_code": "UnknownError"
}
Unexpected Response
Response code: 500
CC code: 0
CC error code:
Request ID: 6c6acb3a-4ead-4f88-5d2c-e7d7f846b2af::0e919224-e372-46f1-8d70-19bf30f85145
Description: {
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}
Seems to me like the docker registry username and password get picked up just fine (and yes they work).
From an operator perspective, it looks like you're hitting Cloud Foundry's password limit of 1,000 characters by using the Amazon Elastic Container Registry signed tokens (which are around 2,000 characters):
/var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log.5.gz:
{"timestamp":1526311559.8367982,"message":"Request failed: 500:
{\"error_code\"=>\"UnknownError\", \"description\"=>\"An unknown
error occurred.\", \"code\"=>10001, \"test_mode_info\"=>
{\"description\"=>\"docker_password can be up to 1,000 characters\",
...
We filed the issue with the CC team: https://github.com/cloudfoundry/cloud_controller_ng/issues/1141
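For scale, the password that trips this limit is the temporary login token ECR issues; a quick sketch of how it is typically obtained and measured, assuming AWS CLI v2 (which provides get-login-password) and the eu-central-1 region from the image name:
# Fetch the temporary ECR token used as the docker password; it is roughly 2,000 characters,
# which exceeds the 1,000-character docker_password limit quoted in the log above.
CF_DOCKER_PASSWORD="$(aws ecr get-login-password --region eu-central-1)"
echo "${#CF_DOCKER_PASSWORD}"   # print its length
export CF_DOCKER_PASSWORD
cf push dockerized-app --docker-image 300401118676.dkr.ecr.eu-central-1.amazonaws.com/my/image:latest --docker-username AWS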
I'm not sure what version of Cloud Foundry your provider is running right now, but support for private docker registries (i.e. registries using HTTPS & basic auth) requires a fairly recent version of Cloud Foundry.
It definitely works in API versions 2.103 and later, as that's what we're running at Meshcloud right now and we have customers successfully using private registries ;-)
$ cf api
api endpoint: https://api.cf.eu-de-netde.msh.host
api version: 2.103.0
Disclaimer: I'm a co-founder at Meshcloud.

Bitbucket Pipelines. Cannot specify windowsservercore docker image

Bitbucket Pipelines allows you (via bitbucket-pipelines.yml) to specify a custom docker image from Docker Hub as your build environment. The following image is used as the default for .NET Core:
# You can specify a custom docker image from Dockerhub as your build environment
image: microsoft/dotnet:onbuild
But because I need a Windows Containers image, I am trying to change the image to "windowsservercore". Based on the information on the microsoft/dotnet Docker Hub page, I have tried
image: microsoft/dotnet:1.0.0-windowsservercore-core
and
image: microsoft/dotnet:1.0.0-preview2-windowsservercore-sdk
but the image could not be downloaded:
+ docker pull "microsoft/dotnet:1.0.0-windowsservercore-core"
1.0.0-windowsservercore-core: Pulling from microsoft/dotnet
1239394e5a8a: Pulling fs layer
d90a2ac79ff2: Pulling fs layer
cde3fa87b2c9: Pulling fs layer
9f60be4f8205: Pulling fs layer
c4f6347ed968: Pulling fs layer
9f60be4f8205: Waiting
c4f6347ed968: Waiting
1239394e5a8a: Retrying in 5 seconds
d90a2ac79ff2: Verifying Checksum
d90a2ac79ff2: Download complete
cde3fa87b2c9: Verifying Checksum
cde3fa87b2c9: Download complete
1239394e5a8a: Retrying in 4 seconds
c4f6347ed968: Verifying Checksum
c4f6347ed968: Download complete
...
1239394e5a8a: Retrying in 3 seconds
1239394e5a8a: Retrying in 2 seconds
1239394e5a8a: Retrying in 1 second
1239394e5a8a: Downloading
unknown blob
You might not be able to use that image at all if Bitbucket pipelines doesn't support running Windows images yet...
The error you are reporting is the error you get when a windowsservercore or nanoserver image is pulled from an unsupported host.
Additionally, my local docker does the same when running that pull.
$ docker pull microsoft/dotnet:1.0.0-windowsservercore-core
1.0.0-windowsservercore-core: Pulling from microsoft/dotnet
1239394e5a8a: Downloading
d90a2ac79ff2: Download complete
cde3fa87b2c9: Download complete
9f60be4f8205: Download complete
c4f6347ed968: Download complete
unknown blob
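The Bearer $TOKEN used in the Registry API calls below is an ordinary read-only pull token from Docker Hub's auth service; a minimal sketch for obtaining one (assuming jq is installed):
# Request a pull-scoped token for the microsoft/dotnet repository.
TOKEN="$(curl -sSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:microsoft/dotnet:pull" | jq -r .token)"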
You can have a detailed look at the 1.0.0-windowsservercore-core tag's manifest via the Registry API:
curl -i -H "Authorization: Bearer $TOKEN" https://index.docker.io/v2/microsoft/dotnet/manifests/1.0.0-windowsservercore-core
HTTP/1.1 200 OK
Content-Length: 4168
Content-Type: application/vnd.docker.distribution.manifest.v1+prettyjws
Docker-Content-Digest: sha256:190e1596bf49b844f6fc3361bbedcd50c917079e5f9f305a1fe807ae4b66a6a7
Docker-Distribution-Api-Version: registry/2.0
Etag: "sha256:190e1596bf49b844f6fc3361bbedcd50c917079e5f9f305a1fe807ae4b66a6a7"
Date: Sun, 07 Aug 2016 13:25:58 GMT
Strict-Transport-Security: max-age=31536000
{
"schemaVersion": 1,
"name": "microsoft/dotnet",
"tag": "1.0.0-windowsservercore-core",
"architecture": "amd64",
"fsLayers": [
{
"blobSum": "sha256:9f60be4f8205c0d384e6af06d61e253141395d4ef7000d8bb34032d1cbd8ee98"
},
{
"blobSum": "sha256:cde3fa87b2c91c895014a6c83481b27ede659f502538c6ed416574a3abe5a7a2"
},
{
"blobSum": "sha256:d90a2ac79ff2c769b497fabddbd14ae8a66f8034dda53fd5781402ec58416989"
},
{
"blobSum": "sha256:1239394e5a8ab79fbd3b751dc5d98decf5886f14339958fdf5c1f96c89da58a7"
}
],
The manifest includes the 1239394e5a8 blob that Docker is not able to retrieve. Running a GET for that blob returns a 404:
curl -i -H "Authorization: Bearer $TOKEN" https://index.docker.io/v2/microsoft/dotnet/blobs/sha256:1239394e5a8ab79fbd3b751dc5d98decf5886f14339958fdf5c1f96c89da58a7
HTTP/1.1 404 Not Found
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
Date: Sun, 07 Aug 2016 13:29:02 GMT
Content-Length: 157
Strict-Transport-Security: max-age=31536000
{"errors":[{"code":"BLOB_UNKNOWN","message":"blob unknown to registry","detail":"sha256:1239394e5a8ab79fbd3b751dc5d98decf5886f14339958fdf5c1f96c89da58a7"}]}
Whereas the other blobs return the usual redirect to the data download:
curl -i -H "Authorization: Bearer $TOKEN" https://index.docker.io/v2/microsoft/dotnet/blobs/sha256:d90a2ac79ff2c769b497fabddbd14ae8a66f8034dda53fd5781402ec58416989
HTTP/1.1 307 Temporary Redirect
Content-Type: application/octet-stream
Docker-Distribution-Api-Version: registry/2.0
Location: https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/d9/d90a2ac79ff2c769b497fabddbd14ae8a66f8034dda53fd5781402ec58416989/data?Expires=1470577758&Signature=B6n1cC~fNwgeYYbA2w6peZOWM5RyV79OrBW-9nN2NdxpB60FC1sUe7e9I4kcA7Meq1SAG7z4P4gQiLvNfokHr8u0p3LTUQgk4JpqZPxqSPNtDWoSyjzyTN0sK3iZPhgxcNBVfddHyxgkAw7xb47zUg76RjZ5-O8QNl2YeEKeX24_&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q
Date: Sun, 07 Aug 2016 13:29:18 GMT
Content-Length: 432
Strict-Transport-Security: max-age=31536000
Temporary Redirect.
It's probably a registry bug, and the MS dotnet team might need to rebuild that layer/image and publish it again to work around the issue. Once they've fixed that, you will find out whether Bitbucket Pipelines can run Windows images (I've found no evidence that they do yet).
I was able to work around this by using the mono docker image and xbuild.

What is the canonical way to see if an image exists in the docker public registry?

We want to check if an image exists in the public registry (Docker Hub) automatically before we start a deployment. With the v1 API, we would just query https://index.docker.io/v1/repositories/gliderlabs/alpine/tags/3.2 for example.
But now that the official registry API is v2, what is the official way of checking whether an image exists in the public registry?
v1
$ curl -i https://index.docker.io/v1/repositories/gliderlabs/alpine/tags/latest
HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Tue, 11 Aug 2015 10:02:09 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Vary: Cookie
X-Frame-Options: SAMEORIGIN
Strict-Transport-Security: max-age=31536000
[{"pk": 20307475, "id": "5bd56d81"}, {"pk": 20355979, "id": "511136ea"}]
v2:
$ curl -i https://index.docker.io/v2/repositories/gliderlabs/alpine/tags/latest
HTTP/1.1 301 MOVED PERMANENTLY
Server: nginx/1.6.2
Date: Tue, 11 Aug 2015 10:04:20 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
X-Frame-Options: SAMEORIGIN
Location: https://index.docker.io/v2/repositories/gliderlabs/alpine/tags/latest/
Strict-Transport-Security: max-age=31536000
$ curl -i https://index.docker.io/v2/repositories/gliderlabs/alpine/tags/latest/
HTTP/1.1 301 MOVED PERMANENTLY
Server: nginx/1.6.2
Date: Tue, 11 Aug 2015 10:04:26 GMT
Content-Type: text/html; charset=utf-8
Transfer-Encoding: chunked
X-Frame-Options: SAMEORIGIN
Location: https://registry.hub.docker.com/v2/repositories/gliderlabs/alpine/tags/latest/
Strict-Transport-Security: max-age=31536000
$ curl -i https://registry.hub.docker.com/v2/repositories/gliderlabs/alpine/tags/latest/
HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Tue, 11 Aug 2015 10:04:34 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Vary: Cookie
X-Frame-Options: SAMEORIGIN
Allow: GET, DELETE, HEAD, OPTIONS
Strict-Transport-Security: max-age=31536000
{"name": "latest", "full_size": 5250074, "id": 130839, "repository": 127805, "creator": 152141, "last_updater": 152141, "image_id": null, "v2": false}
Am I supposed to stick to the v1 URL even though it is now somewhat deprecated, or use the v2 URLs even though there is no documentation for them? If I use v2, should I use https://registry.hub.docker.com/v2/ directly, or still use https://index.docker.io/v1/ and follow the redirects?
Upstream's download-frozen-image-v2.sh script should be of some use as at least a decent API example here (https://github.com/docker/docker/blob/6bf8844f1179108b9fabd271a655bf9eaaf1ee8c/contrib/download-frozen-image-v2.sh#L47-L54).
The main key is that you'll need to be hitting registry-1.docker.io instead of index.docker.io, and that you need a "token" from auth.docker.io (https://auth.docker.io/token?service=registry.docker.io&scope=repository:gliderlabs/alpine:pull), even if you're just requesting read-only access to a public repository. Once you've got that token, you can hit https://registry-1.docker.io/v2/gliderlabs/alpine/manifests/latest with an Authorization header which will either return the JSON manifest of the image or error out with a 404.
token="$(curl -sSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:$image:pull" | jq --raw-output .token)"
curl -fsSL -H "Authorization: Bearer $token" "https://registry-1.docker.io/v2/$image/manifests/$digest"
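Since the -f flag makes curl exit non-zero on the 404, those two lines can be wrapped directly into an existence check; a small sketch using the image and tag from the question:
image="gliderlabs/alpine"
tag="3.2"
token="$(curl -sSL "https://auth.docker.io/token?service=registry.docker.io&scope=repository:$image:pull" | jq --raw-output .token)"
# -f: fail on HTTP errors (a 404 yields a non-zero exit); -o /dev/null: discard the manifest body
if curl -fsSL -o /dev/null -H "Authorization: Bearer $token" "https://registry-1.docker.io/v2/$image/manifests/$tag"; then
  echo "image exists"
else
  echo "image not found"
fi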
