I've been setting up my Kubernetes cluster and have been using Google Container Registry for storing images.
As part of my setup, I am building some tooling that needs to search the remote repository for images, including their tags.
So my question is:
How do I search Google Container Registry for images?
I've tried, without luck, to use the Docker CLI to do the search:
$ docker search eu.gcr.io/project-101
Error response from daemon: Unexpected status code 403
$ gcloud docker search eu.gcr.io/project-101
Error response from daemon: Unexpected status code 403
$ docker login -e not#val.id -u _token -p mytoken https://eu.gcr.io
WARNING: login credentials saved in /Users/drwho/.docker/config.json
Login Succeeded
$ docker search eu.gcr.io/project-101
Error response from daemon: Unexpected status code 403
$ docker search eu.gcr.io/not-known
Error response from daemon: Unexpected status code 404
As you can see, I've tried a good number of different approaches. The last option could be to use the Google Cloud Storage bucket API and search the underlying storage "manually".
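Something like this is what I have in mind for that manual route, assuming GCR keeps its data in the per-project Cloud Storage bucket eu.artifacts.<project-id>.appspot.com (I haven't verified the bucket name or its internal layout for this project):
$ # Hypothetical fallback: list the storage bucket that backs eu.gcr.io for the project.
$ gsutil ls -r gs://eu.artifacts.project-101.appspot.com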
Strangely, gcr.io doesn't support search in any obvious way; you can use https://console.cloud.google.com/gcr/images/google-containers/GLOBAL to search from the web console.
Docker clients through 1.7.0 only support unauthenticated search. For example, if you search for:
$ docker search gcr.io/google-containers/kube
NAME                                         DESCRIPTION    STARS     OFFICIAL   AUTOMATED
google-containers/hyperkube                                 0
google-containers/kube-apiserver                            0
google-containers/kube-controller-manager                   0
google-containers/kube-scheduler                            0
google-containers/kube-ui                                   0
google-containers/kube2sky                                  0
google-containers/kubectl                                   0
I submitted a PR for 1.8.0 that will authenticate searches, so they work similarly for private repositories.
Today, the 403 (Forbidden) occurs because the Docker client isn't actually passing the authentication that is set up via gcloud docker or docker login.
EDIT: As a workaround, it isn't too bad to just curl the endpoint:
$ curl -u _token:TOKEN "https://gcr.io/v1/search?q=QUERY" | \
python -mjson.tool
You can generate TOKEN with gcloud auth print-access-token.
We support queries of the form <project-id>[/image-substring], e.g. google-containers/ube would match google-containers/kube2sky.
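Putting those pieces together, a rough end-to-end example (the query string is just an illustration):
$ # Grab a short-lived access token and hit the v1 search endpoint with it.
$ TOKEN=$(gcloud auth print-access-token)
$ curl -u _token:$TOKEN "https://gcr.io/v1/search?q=google-containers/kube" | \
    python -mjson.tool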
Related
I'm having an issue where I can't push Docker images with the JFrog CLI:
This works:
docker login -u <username> --password <ACCESS_TOKEN> <URL>
docker tag my_img:latest <url>/<repository>/my_image:latest
docker push <url>/<repository>/my_image:latest
However, this:
jf config add --interactive=false --user <username> --url <URL> --access_token <ACCESS_TOKEN> myServer
docker tag my_img:latest <url>/<repository>/my_image:latest
jf use myServer
jf docker push <url>/<repository>/my_image:latest --server-id myServer
gives me an error:
[🚨Error] received invalid access-token
I would expect that those two are equivalent.
I also tried a couple of variations, to the same effect:
jf docker push image_name/latest <repository>/image_name:latest --server-id myServer
jf rt dp my_image/latest <repository>/my_image:latest
To make things even weirder, the same jf config works fine for Conan:
jf config use myServer
jf rt upload <path>/<artifact> <conan_repository>/<artifact>
What am I missing?
I suspect the CLI call is running into a similar issue to the one reported here.
Instead of using an identity token created from the user profile section, can you please try using an access token generated under the following path?
JFrog Platform UI -> Administration -> User management -> Access Token
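For example, something along these lines should recreate the server configuration with the newly generated token (a sketch only; the flag names are from the JFrog CLI docs, and <NEW_ACCESS_TOKEN> stands for the token generated in the UI):
# Re-create the server entry using an access token instead of the identity token.
jf config remove myServer
jf config add myServer --url <URL> --user <username> --access-token <NEW_ACCESS_TOKEN> --interactive=false
jf docker push <url>/<repository>/my_image:latest --server-id myServer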
I want to check whether a container image on GitLab is built properly, with the right content. As a first step, I'm trying to log in to the registry by running the following command:
sudo docker login -u "ci-registry-user" -p "some-token" "registry.gitlab.com/some-registry:container"
However, I run into Get "https://registry.gitlab.com/v2/": unauthorized: HTTP Basic: Access denied errors.
My question is twofold:
How do I access the hosted containers on GitLab? My goal is to access the container and run docker exec -it container_name bash && cat /some/path/to_container.py
Is there an alternative way to achieve this without logging in to the registry?
Check your GitLab PAT scope to make sure it is api or at least read_registry.
Read-only (pull) for Container Registry images if a project is private and authorization is required.
And make sure you have access to that project with that token, if thesekyi/paalup is a private project.
Avoid sudo, as it changes the execution environment from your logged-in user to root.
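A minimal sketch of what that looks like, assuming a PAT with read_registry (or api) scope stored in GITLAB_PAT and a placeholder image path:
# Log in with the PAT, then pull the image so it can be inspected.
echo "$GITLAB_PAT" | docker login registry.gitlab.com -u <gitlab-username> --password-stdin
docker pull registry.gitlab.com/<group>/<project>/<image>:<tag>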
I am able to get a token with:
(base) ➜ ~ curl artifactory.example.com/artifactory/api/docker/null/v2/token -u myusername:mypassword
{"token":"mytoken","expires_in":3600}
However when I try to login:
(base) ➜ ~ docker login artifactory.example.com -u myusername -p mypassword
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
Error response from daemon: Get http://artifactory.example.com/v2/: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE HTML PUBLIC \"-//IETF//DTD HTML 2.0//EN\">\n<html><head>\n<title>404 Not Found</title>\n</head><body>\n<h1>Not Found</h1>\n<p>The requested URL /artifactory/api/docker/null/v2/token was not found on this server.</p>\n</body></html>\n"
It's like it's trying to do http://artifactory.example.com/v2/artifactory/api/docker/null/v2/token when it should be doing http://artifactory.example.com/artifactory/api/docker/null/v2/token?
The Docker registry API must be served from the root of the URL (/v2/), so it will not work under a path-based reverse proxy without rewriting. You need to allocate either a port or a hostname to the Docker API so that a request for /v2 on that host reaches the registry. Artifactory handles this by providing a reverse proxy configuration generator.
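To illustrate the mismatch, a quick check (sketch only; <repo-key> is a placeholder for your Docker repository key):
# The Docker client always requests /v2/ at the root of the host...
curl -I artifactory.example.com/v2/
# ...while Artifactory serves the registry API under its own path, which is why a
# port- or hostname-based proxy (or a rewrite) is needed to bridge the two.
curl -I artifactory.example.com/artifactory/api/docker/<repo-key>/v2/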
How is your example.com set up? It looks like you are using the repository path as the Docker access method here. From the error, it seems the Docker client got an HTML response instead of a response from Artifactory, so the request likely never reached Artifactory at all. Could you share whether any reverse proxy sits in front of example.com?
If possible, test directly with the IP, e.g. docker login 12.23.34.45:8081, and let us know if that helps. If the login succeeds, the issue is with the reverse proxy being used here; share the reverse proxy logs and config so it can be looked into.
I'm trying to make Bitbucket build a Docker image using Bitbucket Pipelines. I could sign in a week ago, but now it doesn't work anymore.
I'm using the same username and password. Here is a list of the commands I have tried and their output.
docker login cloud.canister.io:5000 --username $CANISTER_USERNAME --password $CANISTER_PASSWORD:
Error response from daemon: Get https://cloud.canister.io:5000/v2/: authorization server did not include a token in the response
docker login --username $CANISTER_USERNAME --password $CANISTER_PASSWORD cloud.canister.io:5000
Error response from daemon: Get https://cloud.canister.io:5000/v2/: authorization server did not include a token in the response
docker login cloud.canister.io:5000 --username $CANISTER_USERNAME
Password: xxxxxxxxxxxxxxxxxxx
Error saving credentials: error storing credentials - err: exit status 1, out: Cannot autolaunch D-Bus without X11 $DISPLAY
echo "$CANISTER_PASSWORD" | docker login cloud.canister.io:5000 --username $CANISTER_USERNAME --password-stdin
Error response from daemon: Get https://cloud.canister.io:5000/v2/: authorization server did not include a token in the response
echo "$CANISTER_PASSWORD" | docker login --username $CANISTER_USERNAME --password-stdin cloud.canister.io:5000
Error response from daemon: Get https://cloud.canister.io:5000/v2/: authorization server did not include a token in the response
I've also tried on a local machine, tried without the environment variables, and tried signing out and signing in again, but nothing works.
For logging in, this worked for me:
docker login --username=USERNAME cloud.canister.io:5000
Unfortunately, I am still getting this error message whenever I try to push my image.
I had the same issue when trying to push a new image to cloud.canister.io. It turned out I had not created the repository through the web frontend yet.
After creating the repo on cloud.canister.io I could successfully push my image up.
First, you need to create an empty repo on the cloud.canister.io website with the same name as the image you are trying to push.
Then you will be able to push to that repo.
Make sure you have authenticated your canister account using:
sudo docker login --username=<username> cloud.canister.io:5000
It's been a while, but this helped me:
docker push $(registryFullUrl)/$(dockerId)/$(imageName):$(MAJOR).$(MINOR).$(PATCH)
where:
$(registryFullUrl) = cloud.canister.io:5000
$(dockerId) = your canister id
$(imageName) = repository name
$(MAJOR).$(MINOR).$(PATCH) = version
This worked for me. Hopefully this will be helpful to somebody.
I built an image from a custom Dockerfile. I am running Docker Desktop on Windows 11.
docker build -t <image>:<tag> .
I logged into canister.io.
docker login --username=<username> --password=<password> cloud.canister.io:5000
I tagged the build.
docker tag <image>:<tag> cloud.canister.io:5000/<canister-namespace>/<canister-repo>
I pushed the image to canister.
docker push cloud.canister.io:5000/<canister-namespace>/<canister-repo>
I deleted the image from my Docker Desktop and I tried to pull it from canister.
docker pull cloud.canister.io:5000/<canister-namespace>/<canister-repo>
Here's an example with some dummy values:
docker build -t tc5:tc5tag .
docker login --username=myusername --password=mypassword cloud.canister.io:5000
docker tag tc5:tc5tag cloud.canister.io:5000/mynamespace/testrepo
docker push cloud.canister.io:5000/mynamespace/testrepo
# pull test
docker pull cloud.canister.io:5000/mynamespace/testrepo
I am trying to push a Docker image to Google Container Registry from a CircleCI build, as per their instructions. However, pushing to GCR fails due to an apparent authentication error:
Using 'push eu.gcr.io/realtimemusic-147914/realtimemusic-test/realtimemusic-test' for DOCKER_ARGS.
The push refers to a repository [eu.gcr.io/realtimemusic-147914/realtimemusic-test/realtimemusic-test] (len: 1)
Post https://eu.gcr.io/v2/realtimemusic-147914/realtimemusic-test/realtimemusic-test/blobs/uploads/: token auth attempt for registry: https://eu.gcr.io/v2/token?account=oauth2accesstoken&scope=repository%3Arealtimemusic-147914%2Frealtimemusic-test%2Frealtimemusic-test%3Apush%2Cpull&service=eu.gcr.io request failed with status: 403 Forbidden
Prior to pushing the Docker image, I authenticated the service account against Google Cloud:
echo $GCLOUD_KEY | base64 --decode > ${HOME}/client-secret.json
gcloud auth activate-service-account --key-file ${HOME}/client-secret.json
gcloud config set project $GCLOUD_PROJECT_ID
Then I build the image and push it to GCR:
docker build -t $EXTERNAL_REGISTRY_ENDPOINT/realtimemusic-test -f docker/test/Dockerfile .
gcloud docker push -- $EXTERNAL_REGISTRY_ENDPOINT/realtimemusic-test
What am I doing wrong here?
Have you tried using the _json_key method for authenticating with Docker?
https://cloud.google.com/container-registry/docs/advanced-authentication
After that, please use naked 'docker' (without 'gcloud').
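A hedged sketch of that flow, reusing the client-secret.json and image name from the question (the flags are from the GCR advanced-authentication docs; on older Docker clients without --password-stdin, use -u _json_key -p "$(cat ${HOME}/client-secret.json)" instead):
# Authenticate docker directly with the service account key, then push with plain docker.
docker login -u _json_key --password-stdin https://eu.gcr.io < ${HOME}/client-secret.json
docker push $EXTERNAL_REGISTRY_ENDPOINT/realtimemusic-test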
If you are pushing a Docker image using the Google Cloud SDK, you can use temporary authorization with the following command:
gcloud docker --authorize-only
The above command gives you temporary authorization for pushing and pulling images using docker.
You can refer to this link for details: gcloud docker.
Hope this helps solve your issue.
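In this setup, that would look roughly like this (a sketch; the image name is taken from the question):
# Obtain short-lived registry credentials once, then push with plain docker.
gcloud docker --authorize-only
docker push $EXTERNAL_REGISTRY_ENDPOINT/realtimemusic-test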
After many retries... I solved it using an access token:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://[HOSTNAME]
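For the eu.gcr.io registry used in the question, that would be roughly (a sketch; <project-id> is a placeholder):
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io
docker push eu.gcr.io/<project-id>/realtimemusic-test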
The service account requires permission to write to the Cloud Storage bucket containing the container registry. Granting the service account either the project editor role or write access to the bucket (via ACL) solves the issue. The latter should be preferable since the account doesn't receive wider permissions than it needs.
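For the ACL route, a sketch with gsutil (the bucket name follows GCR's documented naming scheme, with an eu. prefix for eu.gcr.io; substitute your own project id and service account email):
# Grant the service account write access to the bucket backing eu.gcr.io.
gsutil acl ch -u <service-account-email>:WRITE gs://eu.artifacts.<project-id>.appspot.com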