I'm trying to push an image, which I previously built successfully, to my GitLab registry.
docker login registry.gitlab.com
I enter the credentials and it returns "Login Succeeded".
Then, as always, I run
docker push registry.gitlab.com/username/registry/base:latest
And it ends with
unauthorized: authentication required
I already tried
docker logout registry.gitlab.com
and logging in again.
The process is documented here; it's pretty simple:
link to github/gitlabhq
I'm used to doing it this way; it's the first time I've faced this issue, and I don't understand it.
Any help appreciated!
Ensure that your account has read/write access to the registry you are trying to access. You might need to create a new access token, since there is a difference between API/access tokens and your "normal" user password. Use this access token as described in the documentation (https://github.com/gitlabhq/gitlabhq/blob/master/doc/user/packages/container_registry/index.md#authenticate-with-the-container-registry):
docker login registry.example.com -u <username> -p <token>
The token can be created under Edit Profile -> Access Tokens -> Select Scopes, ticking 'Read registry' and 'Write registry'.
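If you want to keep the token out of your shell history, the same login can read it from stdin (username, token and registry name are placeholders):
# read the token from stdin instead of passing it as a -p argument
echo "<token>" | docker login registry.example.com -u <username> --password-stdin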
GitLab support told me this is due to a built-in limitation of the token duration.
You cannot customize this duration in SaaS mode.
So a push of a large image can outlive the token and results in an automatic logout.
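Based on that explanation, a hedged workaround sketch: re-authenticate immediately before the push so the token is as fresh as possible (username and token are placeholders).
docker logout registry.gitlab.com
echo "<token>" | docker login registry.gitlab.com -u <username> --password-stdin
docker push registry.gitlab.com/username/registry/base:latest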
It's possible to authenticate with a Docker registry automatically using RSA certificates, as described here.
However, this sets up the authentication for all users. This is a problem because I have personal certificates that I want to use for authentication from my account only. If I followed the steps above, then anyone who happened to be using the same VM would automatically authenticate with Docker as me, which I don't want.
So how can I configure docker so I get the same convenience of automatic authentication with my cert without risking someone else on the machine accidentally using the same certs to authenticate?
Podman can do this trick: https://docs.podman.io/en/latest/markdown/podman-login.1.html
--cert-dir=path
Use certificates at path (*.crt, *.cert, *.key) to connect to the registry. (Default: /etc/containers/certs.d) Please refer to containers-certs.d(5) for details. (This option is not available with the remote Podman client, including Mac and Windows (excluding WSL2) machines.)
$ podman login --cert-dir /home/myuser/certs.d/ -u foo -p bar localhost:5000
Login Succeeded!
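A minimal sketch of a per-user setup, mirroring the man-page example above; the certificate file names are illustrative, and the key point is that the directory lives under the user's home where other accounts have no read access:
# keep the client certificates in a private, per-user directory
mkdir -p ~/certs.d
cp client.cert client.key ca.crt ~/certs.d/
chmod -R 700 ~/certs.d
podman login --cert-dir ~/certs.d -u foo localhost:5000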
I have an on-prem instance of GitLab CE 13.0.5 running, using the official GitLab Docker image.
I've enabled the integrated container registry.
Testing the login and push to the registry using a personal access token works, both on the command line and within a CI script.
Using the CI job token in a CI script, the docker login passes but the docker push fails.
Using a group access token (with the read and write registry privileges), the login fails, and consequently so does the push. Testing the group access token manually on the command line, the login step fails as well.
I've checked the log file of the registry; I only see the access-denied message, no further hint as to what might be wrong.
I've made sure to tag the image with the correct hierarchy of group and project name.
Does anyone have an idea where I should continue searching?
Thanks and cheers
Wolfgang
Finally, I found it!
If there is a port number in the registry name in the login command, exactly the same name, including the port number, has to be used when tagging and pushing an image.
So, if the GitLab configuration sets gitlab_rails['registry_port'] = "443", the default port 443 appears in the variable $CI_REGISTRY, and you have to use it in the tag and the push command.
Setting gitlab_rails['registry_port'] = "" to an empty string lets the system still use port 443, since it is the default port; however, the port is then removed from the registry name.
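For illustration, a sketch with a hypothetical registry name and project path:
# when $CI_REGISTRY includes the port, the tag must match it exactly
docker login registry.example.com:443
docker tag base:latest registry.example.com:443/group/project/base:latest
docker push registry.example.com:443/group/project/base:latest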
To be honest, I was a bit surprised.
I'm trying to push my Docker image to Google Container Registry, but I get an error which says I do not have the needed permissions to perform this operation.
I have already tried gcloud auth configure-docker but it doesn't work for me.
I first build the image using:
docker build -t gcr.io/trynew/hello-world-image:v1 .
Then I'm trying to attach a tag and push it:
docker push gcr.io/trynew/hello-world-image:v1
This is my output:
The push refers to repository [gcr.io/trynew/hello-world-image]
e62774cdb1c2: Preparing
0f6265b750f3: Preparing
f82351274ce3: Preparing
31a16430afc8: Preparing
67298499a3ed: Preparing
62d5f39c8fe4: Waiting
9f8566ee5135: Waiting
unauthorized: You don't have the needed permissions to perform this
operation, and you may have invalid credentials.
To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication
Google Cloud has specific documentation on how to grant permissions for docker push; this is the first thing you should look at, I think: https://cloud.google.com/container-registry/docs/access-control
After checking that you have sufficient permissions, you should proceed with authentication with something like:
gcloud auth configure-docker
See more here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
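Putting it together, a minimal sketch using the image name from the question:
gcloud auth login
gcloud auth configure-docker
docker push gcr.io/trynew/hello-world-image:v1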
If you are running docker as root (i.e. with sudo docker), then make sure to configure the authentication as root. You can run for example:
sudo -s
gcloud auth login
gcloud auth configure-docker
...that will create (or update) a file under /root/.docker/config.json.
(Are there any security implications of gcloud auth login as root? Let me know in the comments.)
In order to be able to push images to the private registry, you need two things: sufficient API access scopes and authentication of your VM with the registry.
For the API access scopes (https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform), the official documentation says:
For GKE:
By default, new Google Kubernetes Engine clusters are created with
read-only permissions for Storage buckets. To set the read-write
storage scope when creating a Google Kubernetes Engine cluster, use
the --scopes option.
For GCE:
By default, a Compute Engine VM has the read-only access scope
configured for storage buckets. To push private Docker images, your
instance must have read-write storage access scope configured as
described in Access scopes.
So first, verify if your GKE cluster or GCE instance actually has the proper scopes set.
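If you are creating new resources, a hedged sketch of setting the scope at creation time (the cluster and instance names are placeholders; storage-rw is gcloud's alias for the read-write storage scope):
# GKE: create a cluster with the read-write storage scope
gcloud container clusters create my-cluster --scopes=storage-rw
# GCE: create a VM with the read-write storage scope
gcloud compute instances create my-vm --scopes=storage-rw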
The next step is to authenticate to the registry:
a) If you are using a Linux-based image, you need to use "gcloud auth configure-docker" (https://cloud.google.com/container-registry/docs/advanced-authentication).
b) For Container-Optimized OS (COS), the command is "docker-credential-gcr configure-docker" (https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_google_container_registry)
Windows / PowerShell
I got this error on Windows when I was trying to run docker push from a normal PowerShell window, after authenticating in the Google Cloud shell that had opened when I installed the SDK.
The solution was simple:
Start a new PowerShell window to run docker push after running the gcloud auth configure-docker command.
Make sure you've activated the registry too:
gcloud services enable containerregistry.googleapis.com
Also, Google has a tendency to jump to a default account (maybe your personal Gmail), which may or may not be the one you want (e.g. your business email). If you're opening any links in a browser, make sure you're in the correct Google account.
I'm not exactly sure what's going on yet because I'm brand new to Docker, but something got refreshed when starting a new PowerShell instance.
As noted in https://stackoverflow.com/a/59799035/26283371, there appears to be a bug in the Linux version of the Cloud SDK where authentication fails using the standard method (gcloud auth configure-docker). Instead, create a JSON key file per this, and that tends to work.
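A sketch of that key-file login, assuming a service-account key saved as keyfile.json (_json_key is the documented username for key-file logins to gcr.io):
cat keyfile.json | docker login -u _json_key --password-stdin https://gcr.io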
I still can't get the gcloud auth configure-docker helper to work. What did work was authenticating with an access token, like so:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io. (Be sure to include https://, otherwise it won't work).
You can view options for print-access-token here.
First, make sure you've covered all the points listed in the following official documentation:
https://cloud.google.com/container-registry/docs/advanced-authentication
This error mostly occurs due to a Docker config update, which you can check using the command cat ~/.docker/config.json
Now update the config for GCR with the following command:
gcloud auth configure-docker
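Afterwards, the config should map the gcr.io domains to the gcloud credential helper; roughly like this (the exact contents may differ):
cat ~/.docker/config.json
# expect entries along the lines of:
# { "credHelpers": { "gcr.io": "gcloud", "us.gcr.io": "gcloud",
#                    "eu.gcr.io": "gcloud", "asia.gcr.io": "gcloud" } }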
Just in case anyone else is banging their head against a wall: my PIA VPN caused this behavior.
"unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication"
Turn my VPN off and it works fine. Turn it back on and it breaks again.
This is the only way that worked for me. I found it in a kubernetes/kompose GitHub issue.
Remove the credsStore key in ~/.docker/config.json (a sketch follows after this answer).
This will force Docker to write the auth into the JSON when you use docker login. You can't untick "Securely store Docker logins in macOS keychain" in Docker Desktop any more, and the current credsStore is no longer the macOS keychain; it's "desktop".
Then auth with gcloud (just to be explicit):
gcloud auth login
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io
You should see this:
WARNING! Your password will be stored unencrypted in /Users/andrew/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Source: https://github.com/kubernetes/kompose/issues/1043#issuecomment-609019141
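A hedged sketch of the first step, assuming jq is installed (back up the file before editing it):
cp ~/.docker/config.json ~/.docker/config.json.bak
jq 'del(.credsStore)' ~/.docker/config.json.bak > ~/.docker/config.json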
The fix is as follows: run gcloud auth login (the browser will open and allow you to authenticate), then run gcloud auth configure-docker and select Y, then redo the push. It should work like a charm.
I also had the same issue in a Linux environment. I just set Docker to run as a non-root user (https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user), and it works.
In my case, the DOCKER_CONFIG environment variable was defined with an invalid value (not pointing to a Docker config JSON).
I had the same issue, but for me the problem was with different users on my Linux system. I had authenticated my personal Linux user with gcloud, but when pushing, I was running as root. So I had to authenticate my root user with gcloud as well:
sudo gcloud init
This issue happens to me when I switch between service accounts pointing to different GCP projects. Even though the service account has permission to push, it says it does not have permission. To resolve this, delete the config.json file in ~/.docker (see the command below).
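For example (note that this removes all saved Docker logins, not just the GCR ones):
rm ~/.docker/config.json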
Once this is done, run the commands below and you should be able to push the image.
gcloud auth configure-docker
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, asia.gcr.io, etc.
I'm finding different behavior from within and outside of a Docker image when authenticating a Google service account.
Outside. Succeeds.
C:\Users\Ben\AppData\Local\Google\Cloud SDK>gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com --key-file C:/Users/Ben/Dropbox/Google/MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370
Activated service account credentials for: [773889352370-compute@developer.gserviceaccount.com]
Run the Docker container, passing the .json key into the /tmp directory:
C:\Users\Ben\AppData\Local\Google\Cloud SDK>docker run -it -v C:/Users/Ben/Dropbox/Google/MeerkatReader-d77c0d6aa04f.json:/tmp/MeerkatReader-d77c0d6aa04f.json --rm -p "127.0.0.1:8080:8080" --entrypoint=/bin/bash gcr.io/cloud-datalab/datalab:local-20161227
From within the container, confirm the file is there:
root@4a4a9314f15c:/tmp# ls
MeerkatReader-d77c0d6aa04f.json npm-24-b7aa1bcf npm-45-fd13ef7c npm-7-22ec336e
Run the same command as before. Fails.
root@4a4a9314f15c:/tmp# gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com --key-file MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370
ERROR: (gcloud.auth.activate-service-account) Failed to activate the given service account. Please ensure provided key file is valid.
What might cause this error? More broadly, what is the suggested strategy for passing credentials? I've tried this and it fails as well. I'm using the Cloud ML API and Cloud Vision, and I'd like to avoid a manual gcloud init at the beginning of every run.
EDIT: To show gcloud info
root@7ff49b26484f:/# gcloud info --run-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic (1/1 checks) passed.
I confirmed the same behavior:
root@7ff49b26484f:/tmp# gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com --key-file MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370
ERROR: (gcloud.auth.activate-service-account) Failed to activate the given service account. Please ensure provided key file is valid.
This is probably due to clock skew in the Docker VM. I debugged the activate-service-account function of the Google SDK and got the following error message:
There was a problem refreshing your current auth tokens: invalid_grant:
Invalid JWT: Token must be a short-lived token and in a reasonable timeframe
Please run:
$ gcloud auth login
to obtain new credentials, or if you have already logged in with a different account:
$ gcloud config set account ACCOUNT
to select an already authenticated account to use.
After rebooting the VM, it worked like a charm.
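A quick hedged way to check for such skew before rebooting is to compare the host and container clocks:
date -u
docker run --rm alpine date -u
# if the two timestamps differ noticeably, short-lived JWTs will be rejected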
Have you attempted to put the credential in the image from the beginning? Does that produce a similar outcome?
On the other hand, have you tried using --key-file /tmp/MeerkatReader-d77c0d6aa04f.json? It appears you're putting the JSON file in /tmp.
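That is, something like the following, reusing the paths from the question:
gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com \
    --key-file /tmp/MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370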
You might also consider checking the network configuration inside the container and with docker from the outside.
In my case, I was using a workload identity provider, and I made a little mistake: I set the workload provider to the full name of the pool rather than of the provider.
How it should be: /projects/${project-number}/locations/global/workloadIdentityPools/my-pool/providers/${id-provider}
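In other words, the difference looks like this (project number and IDs are placeholders):
Wrong (the pool's full name, missing the provider segment): /projects/123456/locations/global/workloadIdentityPools/my-pool
Right (ends with the provider segment): /projects/123456/locations/global/workloadIdentityPools/my-pool/providers/my-provider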
And I also added the following command:
gcloud config set account ${{GCP_SERVICE_ACCOUNT}}
before my docker push, because it was required.
In addition, according to the docs https://github.com/google-github-actions/auth#usage, my service account was missing the required roles:
roles/iam.serviceAccountTokenCreator
roles/iam.workloadIdentityUser
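A hedged sketch of granting the token creator role at the project level (PROJECT_ID and SA_EMAIL are placeholders; the workloadIdentityUser binding on the service account itself is shown in the edit below):
gcloud projects add-iam-policy-binding PROJECT_ID \
    --member="serviceAccount:SA_EMAIL" \
    --role="roles/iam.serviceAccountTokenCreator"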
Edit: You may also need to grant your service account access to your Workload Identity Pool; you can do it via the command line or the UI:
gcloud iam service-accounts add-iam-policy-binding SERVICE_ACCOUNT_EMAIL \
--role=roles/iam.workloadIdentityUser \
--member="MEMBER_ID"
Docs: https://cloud.google.com/iam/docs/using-workload-identity-federation#gcloud