I am trying to pull an image from a private registry, and I have the auth stored in /root/.docker/config.json on a Kubernetes cluster node.
I have also verified that the auth works as expected by fetching the image manifest directly:
curl -v \
-X GET \
-H "Authorization: Bearer $(cat /tmp/auth_bearer.txt)" repo-url/manifests/latest \
-H "Accept: application/vnd.docker.distribution.manifest.v2+json"
Response:
< HTTP/1.1 200 OK
< Date: Wed, 11 Mar 2020 23:27:09 GMT
< Content-Type: application/vnd.docker.distribution.manifest.v2+json
< Content-Length: 3455
< Connection: keep-alive
< Vary: Origin
< opc-request-id: 772f679098749bb474d59161
< Docker-Content-Digest: sha256:17dcbbf7c670d8894ddfefc2907c9f045bfc45e60954525635632abbf02910
<
{
  "schemaVersion": 2,
  "mediaType": "application/vnd.docker.distribution.manifest.v2+json",
  "config": {
    "mediaType": "application/vnd.docker.container.image.v1+json",
    "size": 9504,
    "digest": "sha256:d59db4a22d6ba8f1d3b5d7c8f8f410688dee569a947bf242e6c3e3b708f634829"
  },
  "layers": [
    { [...]
From the above response, it is clear that the image is present in the private repo and that the auth is correct. However, when I try to run docker pull <repo-url>/image-name:image-tag I get this error:
Trying to pull repository <repo-url>/image-name:image-tag ...
pull access denied for <repo-url>/image-name:image-tag, repository does not exist or may require 'docker login'
Can someone please tell me what I am missing here?
Why is the node ignoring docker credentials stored at /root/.docker/config.json?
Use a file-based config
According to the documentation, there is the following option: https://kubernetes.io/docs/concepts/containers/images/#configuring-nodes-to-authenticate-to-a-private-registry
You can put the Docker credentials in one of the files kubelet searches, listed here:
{--root-dir:-/var/lib/kubelet}/config.json
{cwd of kubelet}/config.json
${HOME}/.docker/config.json
/.docker/config.json
{--root-dir:-/var/lib/kubelet}/.dockercfg
{cwd of kubelet}/.dockercfg
${HOME}/.dockercfg
/.dockercfg
Note: You may have to set HOME=/root explicitly in your environment file for kubelet.
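For illustration, here is a sketch of assembling a config.json in the format kubelet reads. REGISTRY, USER and PASS are placeholder values, and the file is written to /tmp here; on a real node it would go to one of the search paths above, e.g. /var/lib/kubelet/config.json.

```shell
# Placeholder credentials for a hypothetical private registry.
REGISTRY="repo-url"
USER="myuser"
PASS="mypassword"
# Docker stores credentials as base64("user:password").
AUTH=$(printf '%s:%s' "$USER" "$PASS" | base64 | tr -d '\n')
# Write the config; on a real node, target e.g. /var/lib/kubelet/config.json.
cat > /tmp/config.json <<EOF
{
  "auths": {
    "$REGISTRY": {
      "auth": "$AUTH"
    }
  }
}
EOF
cat /tmp/config.json
```

After placing the file in one of the listed paths, restart kubelet so it picks up the credentials.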
I'm trying to learn the Docker Engine REST API, v1.41, but I've run into a problem. I managed to use cURL successfully to send all the HTTP requests concerning containers, images, volumes and networks, but I can't get it to work for exec instances.
I read in the official documentation that the docker exec command behaves the same way as calling POST /containers/:id/exec and then POST /exec/:id/start.
Now, in order to execute docker exec -it my_container bash, this is what I did and got:
[claudio#gulliver ~]$ curl -X POST --unix-socket /run/docker.sock \
    http/containers/18d7dee7470d/exec \
    -H "Content-Type: application/json" \
-d '{
"AttachStdin":true,
"AttachStdout": true,
"AttachStderr": true,
"Tty": true,
"Cmd":["bash"]
}'
{"Id": "853938b2621606f85c04409ec7e345b884d46b95985a4fca0e8ddf28e20e1f79"}
[claudio#gulliver ~]$ curl -X POST --unix-socket /run/docker.sock \
    http/exec/853938b2621606f85c04409ec7e345b884d46b95985a4fca0e8ddf28e20e1f79/start \
    -H "Content-Type: application/json" \
-d '{
"Detach":false,
"Tty":true
}' --verbose
Note: Unnecessary use of -X or --request, POST is already inferred.
* Trying /run/docker.sock:0...
* Connected to http () port 80 (#0)
> POST /exec/853938b2621606f85c04409ec7e345b884d46b95985a4fca0e8ddf28e20e1f79/start HTTP/1.1
> Host: http
> User-Agent: curl/7.85.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 28
>
* Mark bundle as not supporting multiuse
< HTTP/1.1 200 OK
< Content-Type: application/vnd.docker.raw-stream
< Api-Version: 1.41
< Docker-Experimental: false
< Ostype: linux
< Server: Docker/20.10.21 (linux)
* no chunk, no close, no size. Assume close to signal end
<
And from this point on I cannot do anything except interrupt the process.
Is there a way I don't know of to interact with this?
I have a local Jfrog Artifactory Pro.
I use "http://localhost:8081/artifactory/webapp/#/home" to go to my Artifactory.
I created a local Docker registry:
I configured a direct reverse proxy via the REST API:
$ curl -u admin:xxxxxx -i "http://127.0.0.1:8081/artifactory/api/system/configuration/webServer"
HTTP/1.1 200 OK
Server: Artifactory/6.2.0
X-Artifactory-Id: de89ec654198c960:3f9aa2d0:167a7c20d5e:-8000
Access-Control-Allow-Methods: GET, POST, DELETE, PUT
Access-Control-Allow-Headers: X-Requested-With, Content-Type, X-Codingpedia
Cache-Control: no-store
Content-Type: application/json
Transfer-Encoding: chunked
Date: Sun, 16 Dec 2018 13:04:52 GMT
{
"key" : "direct",
"webServerType" : "DIRECT",
"artifactoryAppContext" : "artifactory",
"publicAppContext" : "artifactory",
"serverName" : "127.0.0.1",
"serverNameExpression" : "*.localhost",
"artifactoryServerName" : "localhost",
"artifactoryPort" : 8081,
"dockerReverseProxyMethod" : "SUBDOMAIN",
"useHttps" : false,
"useHttp" : true,
"httpsPort" : 443,
"httpPort" : 8081,
"upStreamName" : "artifactory"
}
Configuration from Artifactory:
I want to log in to my registry "mylocaldocker" via the Docker client, but I get an error:
$ docker login mylocaldocker.localhost -u admin -p xxxxxx
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get https://mylocaldocker.localhost/v2/: dial tcp: lookup mylocaldocker.localhost on 192.168.65.1:53: no such host
How can I log in to the Artifactory Docker registry, and pull/push images to it?
First: for docker login, in Artifactory -> Configurations -> HTTP Settings I set the "Docker access method" to "Repository path":
docker login -u admin -p **** x.x.x.x:8081
Second: due to a limitation in Docker, you cannot docker login to localhost, so replace "localhost" or "127.0.0.1" with the machine's real (private) IP, and add private_ip:8081 (x.x.x.x:8081) to Docker's insecure registries.
See the answer in this link: Pull Artifactory Docker Images
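The insecure-registry change from the second step is a daemon-level setting. A sketch of what it might look like, assuming the private IP is 10.0.0.5, Docker is managed by systemd, and you have root access (merge with any existing daemon.json rather than overwriting it):

```shell
# Mark the registry as insecure (plain HTTP); 10.0.0.5:8081 is a
# placeholder for your private_ip:port.
cat > /etc/docker/daemon.json <<'EOF'
{
  "insecure-registries": ["10.0.0.5:8081"]
}
EOF
# Restart the daemon so the setting takes effect, then log in.
systemctl restart docker
docker login -u admin 10.0.0.5:8081
```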
I am trying to push an app from a Docker image hosted in the AWS Elastic Container Registry, and I am getting 500 error codes from the Cloud Foundry API when trying to push. Am I doing something wrong, or is there just an issue with the API currently? Any help is appreciated.
push command used (replaced real route, app and image name):
cf push dockerized-app --docker-image 300401118676.dkr.ecr.eu-central-1.amazonaws.com/my/image:latest --docker-username AWS --hostname my-dockerized-app -i 1 -m 1024M -k 1024M
cf-cli version:
cf version 6.34.1+bbdf81482.2018-01-17
This is the standard log output I get:
Using docker repository password from environment variable CF_DOCKER_PASSWORD.
Pushing app dockerized-app to org ORG / space SPACE as someone#somewhere.ch...
Getting app info...
Creating app with these attributes...
+ name: dockerized-app
+ docker image: 300401118676.dkr.ecr.eu-central-1.amazonaws.com/my/image:latest
+ docker username: AWS
+ disk quota: 1G
+ instances: 1
+ memory: 1G
routes:
+ my-dockerized-app.scapp.io
Creating app dockerized-app...
Unexpected Response
Response code: 500
CC code: 0
CC error code:
Request ID: f0789965-19b1-4178-5cce-e42ff671a99b::6eb55c40-70de-4011-ad30-ee60aab54d82
Description: {
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}
FAILED
Here is the relevant log output with the -v flag set:
Creating app with these attributes...
+ name: dockerized-app
+ docker image: 300401118676.dkr.ecr.eu-central-1.amazonaws.com/my/image:latest
+ docker username: AWS
+ disk quota: 1G
+ instances: 1
+ memory: 1G
routes:
+ my-dockerized-app.scapp.io
Creating app dockerized-app...
REQUEST: [2018-02-27T18:39:28+01:00]
POST /v2/apps HTTP/1.1
Host: api.lyra-836.appcloud.swisscom.com
Accept: application/json
Authorization: [PRIVATE DATA HIDDEN]
Content-Type: application/json
User-Agent: cf/6.34.1+bbdf81482.2018-01-17 (go1.9.2; amd64 darwin)
{
"disk_quota": 1024,
"docker_credentials": {
"password": "[PRIVATE DATA HIDDEN]",
"username": "AWS"
},
"docker_image": "300401118676.dkr.ecr.eu-central-1.amazonaws.com/my/image:latest",
"instances": 1,
"memory": 1024,
"name": "dockerized-app",
"space_guid": "07cead83-7db5-477e-83ca-f7bbee10e557"
}
RESPONSE: [2018-02-27T18:39:28+01:00]
HTTP/1.1 500 Internal Server Error
Cache-Control: no-cache, no-store, max-age=0, must-revalidate
Content-Length: 99
Content-Type: application/json;charset=utf-8
Date: Tue, 27 Feb 2018 17:39:28 GMT
Expires: 0
Pragma: no-cache
Server: nginx
Strict-Transport-Security: max-age=16070400; includeSubDomains
X-Content-Type-Options: nosniff
X-Frame-Options: DENY
X-Vcap-Request-Id: 6c6acb3a-4ead-4f88-5d2c-e7d7f846b2af::0e919224-e372-46f1-8d70-19bf30f85145
X-Xss-Protection: 1; mode=block
{
"code": 10001,
"description": "An unknown error occurred.",
"error_code": "UnknownError"
}
Unexpected Response
Response code: 500
CC code: 0
CC error code:
Request ID: 6c6acb3a-4ead-4f88-5d2c-e7d7f846b2af::0e919224-e372-46f1-8d70-19bf30f85145
Description: {
"error_code": "UnknownError",
"description": "An unknown error occurred.",
"code": 10001
}
Seems to me like the docker registry username and password get picked up just fine (and yes they work).
From an operator perspective, it looks like you're hitting Cloud Foundry's password limit of 1,000 characters, because the Amazon Elastic Container Registry signed tokens you are using are around 2,000 characters:
/var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log.5.gz:
{"timestamp":1526311559.8367982,"message":"Request failed: 500:
{\"error_code\"=>\"UnknownError\", \"description\"=>\"An unknown
error occurred.\", \"code\"=>10001, \"test_mode_info\"=>
{\"description\"=>\"docker_password can be up to 1,000 characters\",
...
We filed the issue with the CC team: https://github.com/cloudfoundry/cloud_controller_ng/issues/1141
I'm not sure what version of Cloud Foundry your provider is running right now, but support for private docker registries (i.e. registries using HTTPS & basic auth) requires a fairly recent version of Cloud Foundry.
It definitely works in API versions 2.103 and later, as that's what we're running at Meshcloud right now, and we have customers successfully using private registries ;-)
$ cf api
api endpoint: https://api.cf.eu-de-netde.msh.host
api version: 2.103.0
Disclaimer: I'm a co-founder at Meshcloud.
docker version: 1.11.2
curl version: 7.50.3 (x86_64-pc-linux-gnu) libcurl/7.50.3 OpenSSL/1.0.1e zlib/1.2.7
/usr/local/sbin/bin/curl --unix-socket /var/run/docker.sock http://images/json -v
* Trying /var/run/docker.sock...
* Connected to images (/var/run/docker.sock) port 80 (#0)
> GET /json HTTP/1.1
> Host: images
> User-Agent: curl/7.50.3
> Accept: */*
>
< HTTP/1.1 404 Not Found
< Content-Type: text/plain; charset=utf-8
< X-Content-Type-Options: nosniff
< Date: Thu, 22 Sep 2016 06:11:52 GMT
< Content-Length: 19
<
404 page not found
* Curl_http_done: called premature == 0
* Connection #0 to host images left intact
Is there anything wrong with my Docker daemon? How can I get the container info from the Docker Unix socket? The Docker daemon is definitely started.
I followed this page: https://docs.docker.com/engine/reference/api/docker_remote_api/#/v1-23-api-changes. It suggests using curl 7.40 or later with the command curl --unix-socket /var/run/docker.sock http:/containers/json. Note that this command contains an invalid URL, http:/containers/json.
I then downloaded the newest curl, 7.50.3. The key to this problem is using a recent curl (7.40 or later) with a well-formed URL; the command should be run like this:
curl --unix-socket /var/run/docker.sock http://localhost/images/json
For more detail, see this page: https://superuser.com/questions/834307/can-curl-send-requests-to-sockets. I hope this helps other people who were confused.
Bitbucket Pipelines allows you (via bitbucket-pipelines.yml) to specify a custom Docker image from Docker Hub as the build environment. The following image is used as the default for .NET Core:
# You can specify a custom docker image from Dockerhub as your build environment
image: microsoft/dotnet:onbuild
But because I need a Windows Containers image, I am trying to change the image to "windowsservercore". Based on the information on the microsoft/dotnet Docker Hub page, I have tried
image: microsoft/dotnet:1.0.0-windowsservercore-core
and
image: microsoft/dotnet:1.0.0-preview2-windowsservercore-sdk
but the image could not be downloaded:
+ docker pull "microsoft/dotnet:1.0.0-windowsservercore-core"
1.0.0-windowsservercore-core: Pulling from microsoft/dotnet
1239394e5a8a: Pulling fs layer
d90a2ac79ff2: Pulling fs layer
cde3fa87b2c9: Pulling fs layer
9f60be4f8205: Pulling fs layer
c4f6347ed968: Pulling fs layer
9f60be4f8205: Waiting
c4f6347ed968: Waiting
1239394e5a8a: Retrying in 5 seconds
d90a2ac79ff2: Verifying Checksum
d90a2ac79ff2: Download complete
cde3fa87b2c9: Verifying Checksum
cde3fa87b2c9: Download complete
1239394e5a8a: Retrying in 4 seconds
c4f6347ed968: Verifying Checksum
c4f6347ed968: Download complete
...
1239394e5a8a: Retrying in 3 seconds
1239394e5a8a: Retrying in 2 seconds
1239394e5a8a: Retrying in 1 second
1239394e5a8a: Downloading
unknown blob
You might not be able to use that image at all if Bitbucket Pipelines doesn't support running Windows images yet.
The error you are reporting is the error you get when a windowsservercore or nanoserver image is pulled on an unsupported host.
Additionally, my local Docker does the same when running that pull.
$ docker pull microsoft/dotnet:1.0.0-windowsservercore-core
1.0.0-windowsservercore-core: Pulling from microsoft/dotnet
1239394e5a8a: Downloading
d90a2ac79ff2: Download complete
cde3fa87b2c9: Download complete
9f60be4f8205: Download complete
c4f6347ed968: Download complete
unknown blob
You can take a detailed look, via the Registry API, at the 1.0.0-windowsservercore-core tag's manifest:
curl -i -H "Authorization: Bearer $TOKEN" https://index.docker.io/v2/microsoft/dotnet/manifests/1.0.0-windowsservercore-core
HTTP/1.1 200 OK
Content-Length: 4168
Content-Type: application/vnd.docker.distribution.manifest.v1+prettyjws
Docker-Content-Digest: sha256:190e1596bf49b844f6fc3361bbedcd50c917079e5f9f305a1fe807ae4b66a6a7
Docker-Distribution-Api-Version: registry/2.0
Etag: "sha256:190e1596bf49b844f6fc3361bbedcd50c917079e5f9f305a1fe807ae4b66a6a7"
Date: Sun, 07 Aug 2016 13:25:58 GMT
Strict-Transport-Security: max-age=31536000
{
"schemaVersion": 1,
"name": "microsoft/dotnet",
"tag": "1.0.0-windowsservercore-core",
"architecture": "amd64",
"fsLayers": [
{
"blobSum": "sha256:9f60be4f8205c0d384e6af06d61e253141395d4ef7000d8bb34032d1cbd8ee98"
},
{
"blobSum": "sha256:cde3fa87b2c91c895014a6c83481b27ede659f502538c6ed416574a3abe5a7a2"
},
{
"blobSum": "sha256:d90a2ac79ff2c769b497fabddbd14ae8a66f8034dda53fd5781402ec58416989"
},
{
"blobSum": "sha256:1239394e5a8ab79fbd3b751dc5d98decf5886f14339958fdf5c1f96c89da58a7"
}
],
The manifest includes the 1239394e5a8a blob that Docker is not able to retrieve; running a GET for that blob returns a 404:
curl -i -H "Authorization: Bearer $TOKEN" https://index.docker.io/v2/microsoft/dotnet/blobs/sha256:1239394e5a8ab79fbd3b751dc5d98decf5886f14339958fdf5c1f96c89da58a7
HTTP/1.1 404 Not Found
Content-Type: application/json; charset=utf-8
Docker-Distribution-Api-Version: registry/2.0
Date: Sun, 07 Aug 2016 13:29:02 GMT
Content-Length: 157
Strict-Transport-Security: max-age=31536000
{"errors":[{"code":"BLOB_UNKNOWN","message":"blob unknown to registry","detail":"sha256:1239394e5a8ab79fbd3b751dc5d98decf5886f14339958fdf5c1f96c89da58a7"}]}
Whereas the other blobs return the usual redirect to the data download:
curl -i -H "Authorization: Bearer $TOKEN" https://index.docker.io/v2/microsoft/dotnet/blobs/sha256:d90a2ac79ff2c769b497fabddbd14ae8a66f8034dda53fd5781402ec58416989
HTTP/1.1 307 Temporary Redirect
Content-Type: application/octet-stream
Docker-Distribution-Api-Version: registry/2.0
Location: https://dseasb33srnrn.cloudfront.net/registry-v2/docker/registry/v2/blobs/sha256/d9/d90a2ac79ff2c769b497fabddbd14ae8a66f8034dda53fd5781402ec58416989/data?Expires=1470577758&Signature=B6n1cC~fNwgeYYbA2w6peZOWM5RyV79OrBW-9nN2NdxpB60FC1sUe7e9I4kcA7Meq1SAG7z4P4gQiLvNfokHr8u0p3LTUQgk4JpqZPxqSPNtDWoSyjzyTN0sK3iZPhgxcNBVfddHyxgkAw7xb47zUg76RjZ5-O8QNl2YeEKeX24_&Key-Pair-Id=APKAJECH5M7VWIS5YZ6Q
Date: Sun, 07 Aug 2016 13:29:18 GMT
Content-Length: 432
Strict-Transport-Security: max-age=31536000
Temporary Redirect.
It's probably a registry bug, and the MS dotnet team might need to rebuild that layer/image and publish it again to work around the issue. Once they've fixed that, you will find out whether Bitbucket Pipelines can run Windows images (for which I've found no evidence yet).
I was able to work around this by using the mono docker image and xbuild.