I'm trying to push my Docker container to the Google Container Registry, following this tutorial, but when I run
gcloud docker push b.gcr.io/my-bucket/image-name
I get the error:
The push refers to a repository [b.gcr.io/my-bucket/my-image] (len: 1)
Sending image list
Error: Status 403 trying to push repository my-bucket/my-image: "Access denied."
I couldn't find any more detail (no -D, --debug, or --verbose arguments were recognized), and gcloud auth list and docker info tell me I'm connected to both services.
Anything I'm missing?
You need to make sure the VM instance has enough access rights. You can set these at the time of creating the instance, or if you have already created the instance, you can also edit it (but first, you'll need to stop the instance). There are two ways to manage this access:
Option 1
Under the Identity and API access section, select Allow full access to all Cloud APIs.
Option 2 (recommended)
Under the Identity and API access section, select Set access for each API and then choose Read Write for Storage.
Note that you can also change these settings even after you have already created the instance. To do this, you'll first need to stop the instance, and then edit the configuration as mentioned above.
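If you prefer the command line, the scopes of an existing instance can likely be changed with something like the following sketch (the instance name and zone are placeholders, and the instance must be stopped first):
$ gcloud compute instances stop my-instance --zone us-central1-a
$ gcloud compute instances set-service-account my-instance --zone us-central1-a --scopes storage-rw
$ gcloud compute instances start my-instance --zone us-central1-a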
Use gsutil to check the ACL to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You'll need to check which group the account you are using is in ('owners', 'editors', 'viewers', etc.).
EDIT: I have experienced a very similar problem to this myself recently and, as @lampis mentions in his post, it's because the correct permission scopes were not set when I created the VM I was trying to push the image from. Unfortunately there's currently no way of changing the scopes once a VM has been created, so you have to delete the VM (making sure the disks are set to auto-delete!) and recreate the VM with the correct scopes ('compute-rw' and 'storage-rw' seem sufficient). It doesn't take long though ;-).
See the --scopes section here: https://cloud.google.com/sdk/gcloud/reference/compute/instances/create
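For reference, a rough sketch of creating a new VM with those scopes (the instance name is a placeholder):
$ gcloud compute instances create my-build-vm --scopes compute-rw,storage-rw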
I am seeing this, but on an intermittent basis. For example, I may get the error denied: Permission denied for "latest" from request "/v2/....", but when trying again it will work.
Is anyone else experiencing this?
For me, I had forgotten to prepend gcloud to the command (and I was wondering how docker would authenticate):
$ gcloud docker push <image>
In your terminal, run the code below
$ sudo docker login -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://[HOSTNAME]
Where
[HOSTNAME] is your container registry location (it is either gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io). Check your tagged images to be sure by running $ sudo docker images.
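Once that login succeeds, pushing with plain docker should work; a minimal sketch assuming the gcr.io hostname and a hypothetical project and image name:
$ sudo docker tag my-image gcr.io/my-project/my-image
$ sudo docker push gcr.io/my-project/my-image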
If this doesn't fix it, try reviewing the VM's access scopes.
If you are using Docker 1.7.0, there was a breaking change to how they handle authentication, which affects users who are using a mix of gcloud docker and docker login.
Be sure you are using the latest version of gcloud via: gcloud components update.
So far this seems to affect gcloud docker, docker-compose and other tools that were reading/writing the Docker auth file.
Hopefully this helps.
Same problem here; the troubleshooting section at https://cloud.google.com/tools/container-registry/#access_denied wasn't very helpful. I have Docker and gcloud fully updated. I don't know what else to do.
BTW, I'm trying to push to "gcr.io".
Fixed. I was using a VM in Compute Engine as my development machine, and it looks like I hadn't given it enough rights for Storage.
I had the same access denied problem and resolved it by creating a new image with docker tag:
docker tag IMAGE_WITH_ACCESS_DENIED gcr.io/my-project/my-new-image:test
After that I could push it to the Container Registry:
gcloud docker -- push gcr.io/my-project/my-new-image:test
Today I also got this error inside Jenkins running on Google Kubernetes Engine when pushing the Docker container. The cause was a node pool version upgrade from 1.9.6-gke.1 to 1.9.7-gke.0 in GCP that I had done earlier. It worked again after downgrading.
You need to log in to gcloud from the machine you are working on:
gcloud auth login
When trying to add the GitHub Container Registry to Synology Docker, I always get a prompt saying "Registry returned bad result".
The URL I try to connect to is: https://ghcr.io
I'm trying to do the same (DS920+, DSM 7.1 latest). According to this Reddit thread:
https://www.reddit.com/r/portainer/comments/u1vf1s/how_to_add_ghcr_as_a_registry/
it used to work with 'docker.pkg.github.com' as the repo URL, but according to the current GitHub docs, that was the old namespace and the actual registry is now 'https://ghcr.io':
https://docs.github.com/en/packages/working-with-a-github-packages-registry/working-with-the-container-registry
The docs imply authentication in many places, so maybe it is not possible to use the registry without authentication (I tried with access tokens; it's not working).
I opened a Synology support ticket; let's see what they say.
2022-10-27 - Synology Support replied, and the official statement is that the token authentication currently used by the GitHub Container Registry is not supported in the DSM's Docker package GUI. It's possible to SSH into the DSM and use docker from the command line.
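For that command-line workaround, logging in to ghcr.io over SSH should look roughly like this (username, token, and image path are placeholders):
$ docker login ghcr.io -u YOUR_GITHUB_USERNAME -p YOUR_PERSONAL_ACCESS_TOKEN
$ docker pull ghcr.io/OWNER/IMAGE_NAME:latest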
So I followed a tutorial based on TensorFlow Serving and Kubernetes. All steps work fine except pushing the Docker image.
This is the tutorial that I have tried:
https://www.tensorflow.org/tfx/serving/serving_kubernetes
And when I try to push the Docker image, it gives an error like this:
I have also tried creating the cluster with scopes, but the result is the same as above.
The command I use to create a cluster with scopes:
gcloud container clusters create resnet-serving-cluster --num-nodes 5 --scopes=storage-rw
So what is wrong here? Have I done something wrong?
OK, found the answer. My project ID and the registry name were not the same. I re-tagged the Docker image with a new registry name containing my project ID and pushed it. It works.
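In other words, the image has to be re-tagged with the registry host plus the actual project ID before pushing; a sketch with placeholder names:
$ docker tag my-image gcr.io/my-actual-project-id/my-image:v1
$ gcloud docker -- push gcr.io/my-actual-project-id/my-image:v1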
There may be a variety of reasons.
1) I'd recommend starting by checking whether full API access has been granted.
2) Update the gcloud components: gcloud components update
3) Use gsutil to make sure you have permission to write to the bucket:
$ gsutil acl get gs://<my-bucket>
You are trying to push your image to your private registry on Google Cloud. Please verify that you can access your private registry:
gcloud container images list-tags gcr.io/"your-project"/"image"
All information about the gcloud private registry can be found here:
Additional helpful information can be found here.
Please note that:
By default, project Owners and Editors have push and pull permissions
for that project's Container Registry bucket.
Project Viewers have pull permission only.
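If another account needs push access, one hedged option is to grant it write permission on the registry's underlying storage bucket (this assumes the default gcr.io bucket name artifacts.[PROJECT-ID].appspot.com and a placeholder e-mail address):
$ gsutil acl ch -u someone@example.com:W gs://artifacts.my-project.appspot.com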
I followed the guide here (Grant AKS access to ACR), but am still getting "unauthorized: authentication required" when a Pod is attempting to pull an image from ACR.
The bash script executed without any errors. I have tried deleting my Deployment and creating it from scratch with kubectl apply -f ..., but no luck.
I would like to avoid using the 2nd approach of using a secret.
The link you posted in the question contains the correct steps to authenticate with Azure Container Registry from Azure Kubernetes Service. I tried it before and it works well.
So I suggest you check whether the service-principal-ID and service-principal-password are correct in the command kubectl create secret docker-registry acr-auth --docker-server <acr-login-server> --docker-username <service-principal-ID> --docker-password <service-principal-password> --docker-email <email-address>. You should also check that the secret referenced in the YAML file is the same as the secret you created.
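If you do end up needing the secret approach after all, one way to avoid referencing it in every Deployment is to attach it to the default service account; a sketch assuming the secret is named acr-auth as in the command above:
$ kubectl patch serviceaccount default -p '{"imagePullSecrets": [{"name": "acr-auth"}]}'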
Jeff & Charles - I also experienced this issue, but found that the actual cause was that AKS was trying to pull an image tag from the container registry that didn't exist (e.g. latest). When I updated this to a tag that was available (e.g. 9), the deployment script on Azure Kubernetes Service (AKS) worked successfully.
I've commented on the product feedback for the guide to request the error message context be improved to reflect this root cause.
Hope this helps! :)
In my case, I was having this problem because my clock was out of sync. I run on Windows Subsystem for Linux, so running sudo hwclock -s fixed my issue.
See this GitHub thread for longer discussion.
In my case, the Admin User was not enabled in the Azure Container Registry.
I had to enable it:
Go to "Container registries" page > Open your Registry > In the side pannel under Settings open Access keys and switch Admin user on. This generates a Username, a Password, and a Password2.
I'm having trouble pushing to GitLab Container Registry.
I can login successfully using my username and a personal access token but when I try to push the image to the registry, I get the following error:
$ docker push registry.gitlab.com/[groupname]/dockerfiles/nodemon
The push refers to a repository
[registry.gitlab.com/[groupname]/dockerfiles/nodemon]
15d2ea6e1aeb: Preparing
2260f979a949: Preparing
f8e848bb8c20: Preparing
740a5345706a: Preparing
5bef08742407: Preparing
denied: requested access to the resource is denied
I assume the issue is not with authentication because when I run a docker login registry.gitlab.com, I get a Login Succeeded message.
Where is the problem?
How should I push my images to GitLab Container Registry?
I got it working by including api scope to my personal access token.
The docs state that the minimal scope needed is read_registry, but that probably applies to read-only access.
Reference: https://gitlab.com/gitlab-com/support-forum/issues/2370#note_44796408
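With a token that carries the api scope, logging in and pushing should then look roughly like this (username, token, and image path are placeholders):
$ docker login registry.gitlab.com -u <username> -p <personal-access-token>
$ docker push registry.gitlab.com/<groupname>/dockerfiles/nodemon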
In my case it was really dumb, maybe even a GitLab bug:
I renamed the GitLab project after the container registry had been created, so the container registry URL still used the old name ...
The project name under GitLab had the typo corrected, but the registry link did not, and that led to this error.
Had a similar issue; it was because of the URL that was used for tagging and pushing the image.
It should be
docker push registry.gitlab.com/[account or group-name]/[reponame]/imagename
It was previously a correct answer to say that the personal access token needs to include the api permission, and several answers on this page say exactly that.
Recently, GitLab appear to have improved the granularity of their permission system. So if you want to push container images to the GitLab Docker registry, you can create a token merely with the read_registry and write_registry permissions. This is likely to be a lot safer than giving full permissions.
I have tested this successfully today.
Enable the personal access token by adding the api scope as per these guidelines. After creating the token and username, use these credentials for logging in to Docker or pushing images.
Deploy tokens created under the CI/CD setup are not sufficient for pushing the image to the Docker registry.
I had the same issue.
In my case, the issue was that I had AutoDevOps enabled before, which seems to generate a deploy token automatically.
Deploy tokens are basically just API keys for deployment.
But GitLab has a special handling for gitlab-deploy-token which you can then access via $CI_DEPLOY_USER and $CI_DEPLOY_PASSWORD as a predefined variable.
However, I did not double-check the default token.
In my case, it only had read_registry; of course, it also needs the write_registry permission.
If you do this, then you can follow the official documentation.
Alternatively, you can apparently also switch to $CI_REGISTRY_USER and $CI_REGISTRY_PASSWORD, which are ephemeral, however.
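For reference, a hedged sketch of how a CI job might log in and push using those predefined variables (this assumes a deploy token named gitlab-deploy-token exists so that $CI_DEPLOY_USER and $CI_DEPLOY_PASSWORD are populated):
$ docker login -u "$CI_DEPLOY_USER" -p "$CI_DEPLOY_PASSWORD" "$CI_REGISTRY"
$ docker build -t "$CI_REGISTRY_IMAGE:latest" .
$ docker push "$CI_REGISTRY_IMAGE:latest"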
I have been having a lot of problems pushing images with gcloud docker push over the past few weeks. I've read through the many stack overflow discussions and github issues and workarounds but I haven't come across a solution to the inconsistency yet.
Typically I will attempt to push a container image or two. The first push will almost always fail with the following retry-until-timeout output:
I can only get around it with gcloud auth login. At most 5 minutes later I will attempt to push a second image, and will again see the retry-until-timeout issue. I will see this on every attempt until I gcloud auth login again.
Often I will have to manually retry several more times immediately after authenticating before the image is actually pushed.
Am I actually being logged out (I can still access pods and instances, etc with kubectl and gcloud machines)? If so, why is being logged out inconsistent and what does building docker containers do that it would invalidate my local gcloud session?
If not, why can't I gcloud docker push until I authenticate again? After that, why is this still inconsistent (I suspect it may have little or nothing to do with the real issue).
Is there a way to make pushing images on OSX with docker-machine and gcloud docker push reliable? Is there another way to get images to the cloud repository (preferably from the command line)?
gcloud --version
alpha 2016.01.12
beta 2016.01.12
bq 2.0.18
bq-nix 2.0.18
core 2016.02.11
core-nix 2016.02.05
gcloud
gsutil 4.16
gsutil-nix 4.15
kubectl
kubectl-darwin-x86_64 1.1.7
docker --version
Docker version 1.10.1, build 9e83765
docker-machine --version
docker-machine version 0.6.0, build e27fb87
virtualbox version 5.0.14 r105127
I had the same or similar problem. After a few minutes of the retry loop depicted in the screenshot above, the command fails with net/http: TLS handshake timeout.
The solution that fixed it for me was editing the docker daemon configuration with
DOCKER_OPTS="--max-concurrent-uploads=1"
I had a feeling this issue was connected with Docker clogging up the network, as I noticed even browsing to Gmail could time out(!)
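On setups that read /etc/docker/daemon.json rather than DOCKER_OPTS, the equivalent setting might look like this sketch (adjust the path and restart command for your distribution):
# write the daemon config and restart Docker
cat <<'EOF' | sudo tee /etc/docker/daemon.json
{
  "max-concurrent-uploads": 1
}
EOF
sudo systemctl restart docker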
Switching to regular docker push doesn't help timeouts. This appears to be related to your ISP and uploading assets.
I was receiving the same error. After moving the Docker build process to the cloud (which has a much larger pipeline), gcloud docker builds and deploys the image just fine.
I never faced the problems you mentioned with gcloud docker, but regarding your last point,
Is there another way to get images to the cloud repository (preferably from the command line)?
it is indeed possible to push to the gcr.io repos without going through gcloud, e.g.:
docker login -e dummy@example.com -p $(gcloud auth print-access-token) -u _token https://gcr.io
docker push [your-image]
Credits to mattmoor, more info in original answer here:
Access google container registry without the gcloud client