disable docker config.json credentials for google cloud registry

I use Google Cloud Registry, which adds "auths" and "credHelpers" keys to my ~/.docker/config.json.
The problem I have is that when I'm offline, or just building locally, it tries to connect to each hostname, which either fails (when offline) or is really slow (when online).
How can I tell docker-compose to not use these credentials/hosts when building?
My workaround for now is to delete those properties from ~/.docker/config.json and then run gcloud auth configure-docker each time, but I'd rather not have to keep re-authenticating whenever I do want to push to GCR. (A lighter-weight variant is sketched below, after the debug log.)
docker.api.build._set_auth_headers: Looking for auth config
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://asia.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://eu.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://marketplace.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://staging-k8s.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'https://us.gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'gcr.io'
docker.auth._resolve_authconfig_credstore: Looking for auth entry for 'us.gcr.io'
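A lighter-weight variant of that workaround, as a sketch: keep a second config directory with the GCR entries stripped, and point Docker at it via the standard DOCKER_CONFIG environment variable when building offline. The jq filter below drops all stored auths and credential helpers; adjust it if you need to keep other registries.
mkdir -p ~/.docker-offline
jq 'del(.auths, .credHelpers)' ~/.docker/config.json > ~/.docker-offline/config.json
# Build with the stripped config; pushes keep using the default ~/.docker:
DOCKER_CONFIG=~/.docker-offline docker-compose build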

Related

Issues using skopeo copy to gcr, unable to authenticate on google cloud

I'm trying to pull and push images in a GitLab pipeline while avoiding the docker-in-docker approach, so I'm trying to use skopeo for that.
But right now I'm having issues authenticating skopeo with GCR, because we use key authentication for Service Accounts, which doesn't seem to be supported by skopeo (at least I couldn't make it work), and we don't want to use a username and password for that.
The error message is this one:
unable to retrieve auth token: invalid username/password: unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
What are the possibilities I can explore to make authentication work?
I just found a way to solve that authentication issue: use the --dest-registry-token flag like this:
skopeo copy --dest-registry-token "$(gcloud auth print-access-token)" docker://nginx:1.23.1 docker://us.gcr.io/project/nginx:1.23.1
Before doing that, make sure you activate the Service Account:
gcloud auth activate-service-account --key-file $SOME_FILE
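GCR also documents accepting the literal username _json_key with the key file's contents as the password, which avoids activating the account first. A sketch combining that with skopeo's --dest-creds flag ($SOME_FILE is the same key file as above):
# Username is the literal string _json_key; password is the raw JSON key:
skopeo copy \
  --dest-creds "_json_key:$(cat "$SOME_FILE")" \
  docker://nginx:1.23.1 docker://us.gcr.io/project/nginx:1.23.1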

Berglas not finding my google cloud credentials

I am trying to read my Google Cloud default credentials with berglas, and it says:
failed to create berglas client: failed to create kms client: google: could not find default credentials. See https://developers.google.com/accounts/docs/application-default-credentials for more information.
I am passing the right path (the volume mount below), and I have tried many paths, but none of them work.
$HOME/.config/gcloud:/root/.config/gcloud
I'm unfamiliar with Berglas (please include references), but the error is clear. Google's client libraries attempt to find credentials automatically; the documentation describes the process by which credentials are sought.
Since the credentials aren't being found, you're evidently not running on a Google Cloud compute service (where credentials are found automatically). Have you set an environment variable called GOOGLE_APPLICATION_CREDENTIALS, and is it pointing to a valid Service Account key file?
The Berglas README suggests using the following command to register your user's credentials as Application Default Credentials. You may not have completed this step:
gcloud auth application-default login
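If Berglas is running inside a container (the volume mount in the question suggests it is), the ADC file that the command above creates also needs to be visible inside the container. A sketch, where the image name and subcommand follow the Berglas README and should be treated as assumptions:
docker run \
  -v "$HOME/.config/gcloud:/root/.config/gcloud" \
  -e GOOGLE_APPLICATION_CREDENTIALS=/root/.config/gcloud/application_default_credentials.json \
  gcr.io/berglas/berglas:latest list my-bucket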

GCP docker authentication: How is using gcloud more secure than just using a JSON keyfile?

Setting up authentication for Docker | Artifact Registry Documentation suggests that gcloud is more secure than using a JSON file with credentials. I disagree. In fact I'll argue the exact opposite is true. What am I misunderstanding?
Setting up authentication for Docker | Artifact Registry Documentation says:
gcloud as credential helper (Recommended)
Configure your Artifact Registry credentials for use with Docker directly in gcloud. Use this method when possible for secure, short-lived access to your project resources. This option only supports Docker versions 18.03 or above.
followed by:
JSON key file
A user-managed key-pair that you can use as a credential for a service account. Because the credential is long-lived, it is the least secure option of all the available authentication methods
The JSON key file contains a private key and other goodies, giving a hacker long-lived access. The keys to the kingdom. But only to the Artifact Registry in this instance, because the service account that the JSON file is for has only those specific rights.
Now gcloud has two auth options:
gcloud auth activate-service-account ACCOUNT --key-file=KEYFILE
gcloud auth login
Let's start with gcloud and a service account: here it stores the KEYFILE unencrypted in ~/.config/gcloud/credentials.db. Using the JSON file directly boils down to docker login -u _json_key --password-stdin https://some.server < KEYFILE, which stores the KEYFILE contents in ~/.docker/config.json. So using gcloud with a service account or just using the JSON file directly should be equivalent, security-wise. They both store the same KEYFILE unencrypted in a file.
gcloud auth login requires logging in with a browser, where I give consent to giving gcloud access to my user account in its entirety. It is not limited to the Artifact Registry like the service account is. Looking with sqlite3 ~/.config/gcloud/credentials.db .dump, I can see that it stores an access_token but also a refresh_token. If a hacker has access to ~/.config/gcloud/credentials.db with access and refresh tokens, doesn't he own the system just as much as if he had access to the JSON file? Actually, this is worse, because my user account is not limited to just accessing the Artifact Registry: now the attacker has access to everything my user has access to.
So all in all: gcloud auth login is at best security-wise equivalent to using the JSON file. But because the access is not limited to the Artifact Registry, it is in fact worse.
Do you disagree?
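For anyone who wants to verify the storage claims above on their own machine, both stores are plain files on disk (paths assume the default locations):
sqlite3 ~/.config/gcloud/credentials.db .dump | head   # gcloud's store: key material / tokens in plain text
cat ~/.docker/config.json                              # docker login's store: base64-encoded, not encrypted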

Permission issues while docker push

I'm trying to push my Docker image to the Google container image registry, but I get an error saying I do not have the needed permissions to perform this operation.
I have already tried gcloud auth configure-docker, but it doesn't work for me.
I first build the image using:
docker build -t gcr.io/trynew/hello-world-image:v1 .
Then I'm trying to attach a tag and push it:
docker push gcr.io/trynew/hello-world-image:v1
This is my output:
The push refers to repository [gcr.io/trynew/hello-world-image]
e62774cdb1c2: Preparing
0f6265b750f3: Preparing
f82351274ce3: Preparing
31a16430afc8: Preparing
67298499a3ed: Preparing
62d5f39c8fe4: Waiting
9f8566ee5135: Waiting
unauthorized: You don't have the needed permissions to perform this
operation, and you may have invalid credentials.
To authenticate your request, follow the steps in:
https://cloud.google.com/container-registry/docs/advanced-authentication
Google Cloud has specific documentation on how to grant permissions for docker push; this is the first thing you should look at, I think: https://cloud.google.com/container-registry/docs/access-control
After checking that you have sufficient permissions you should proceed with authentication with something like:
gcloud auth configure-docker
See more here: https://cloud.google.com/container-registry/docs/pushing-and-pulling
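To actually grant push rights, note that GCR permissions are Cloud Storage permissions on the registry's backing bucket. A sketch, assuming the gcr.io host and the trynew project from the question; the service account address is a placeholder:
# Grant the account admin access to the bucket that backs gcr.io/trynew:
gsutil iam ch \
  serviceAccount:SA_NAME@trynew.iam.gserviceaccount.com:roles/storage.admin \
  gs://artifacts.trynew.appspot.com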
If you are running docker as root (i.e. with sudo docker), then make sure to configure the authentication as root. You can run for example:
sudo -s
gcloud auth login
gcloud auth configure-docker
...that will create (or update) a file under /root/.docker/config.json.
(Are there any security implications of gcloud auth login as root? Let me know in the comments.)
In order to push images to the private registry you need two things: the right API access scopes, and to authenticate your VM with the registry.
For the API Access Scopes (https://cloud.google.com/container-registry/docs/using-with-google-cloud-platform) we can read in the official documentation:
For GKE:
By default, new Google Kubernetes Engine clusters are created with read-only permissions for Storage buckets. To set the read-write storage scope when creating a Google Kubernetes Engine cluster, use the --scopes option.
For GCE:
By default, a Compute Engine VM has the read-only access scope configured for storage buckets. To push private Docker images, your instance must have read-write storage access scope configured as described in Access scopes.
So first, verify if your GKE cluster or GCE instance actually has the proper scopes set.
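A sketch of how to check and set those scopes (instance, zone, and cluster names are placeholders):
# What scopes does an existing GCE instance have?
gcloud compute instances describe INSTANCE_NAME --zone ZONE --format="yaml(serviceAccounts)"
# Create a GKE cluster with read-write storage scope:
gcloud container clusters create CLUSTER_NAME --scopes=storage-rw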
The next step is to authenticate to the registry:
a) If you are using a Linux-based image, use "gcloud auth configure-docker" (https://cloud.google.com/container-registry/docs/advanced-authentication).
b) For Container-Optimized OS (COS), the command is "docker-credential-gcr configure-docker" (https://cloud.google.com/container-optimized-os/docs/how-to/run-container-instance#accessing_private_google_container_registry)
Windows / Powershell
I got this error on Windows when I was trying to run docker push from a normal PowerShell window after authenticating in the Google Cloud shell that had opened when I installed the SDK.
The solution was simple:
Start a new PowerShell window to run docker push after running the gcloud auth configure-docker command.
Make sure you've activated the registry too:
gcloud services enable containerregistry.googleapis.com
Also Google has a tendency to jump to a default account (maybe your personal gmail) which may or may not be the one you want (your business email). Make sure if you're opening any links in a browser that you're in the correct Google account.
I'm not exactly sure what's going on yet because I'm brand new to Docker, but something got refreshed by starting a new PowerShell instance.
As noted in https://stackoverflow.com/a/59799035/26283371, there appears to be a bug in the Linux version of the Cloud SDK where authentication fails using the standard authentication method (gcloud auth configure-docker). Instead, create a JSON key file per this, and that tends to work.
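For reference, the JSON key file method that answer refers to boils down to the following (the key file path is a placeholder):
docker login -u _json_key --password-stdin https://gcr.io < keyfile.json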
I still can't get the gcloud auth configure-docker helper to work. What did work was authenticating with an access token, like so:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, us.gcr.io, eu.gcr.io, or asia.gcr.io. (Be sure to include https://, otherwise it won't work).
You can view options for print-access-token here.
First things first: make sure you have covered all points listed in the following official documentation:
https://cloud.google.com/container-registry/docs/advanced-authentication
This error occurs mostly when the docker config is out of date; you can check it using cat ~/.docker/config.json.
Then refresh the GCR credentials with the following command:
gcloud auth configure-docker
Just in case anyone else is banging their head against a wall: my PIA VPN caused this behavior.
"unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication"
Turn my VPN off and it works fine. Turn it back on and it breaks again.
This is the only way that worked for me. I found it in a kubernetes/kompose GitHub issue.
Remove the credsStore key in ~/.docker/config.json (a jq one-liner for this is sketched after this answer).
This will force docker to write the auth into the JSON when you use docker login. You can't untick Securely store Docker logins in macOS keychain in Docker Desktop any more, and the current credsStore is no longer the macOS keychain; it's desktop.
Then gcloud auth login to auth with gcloud (just to be explicit), followed by:
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://eu.gcr.io
You should see this:
WARNING! Your password will be stored unencrypted in /Users/andrew/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
Source: https://github.com/kubernetes/kompose/issues/1043#issuecomment-609019141
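The jq one-liner mentioned above, as a sketch assuming jq is installed (editing the file by hand works just as well):
# Drop the credsStore key so docker login writes auths straight into the file:
jq 'del(.credsStore)' ~/.docker/config.json > /tmp/config.json && mv /tmp/config.json ~/.docker/config.json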
The fix is as follows: run gcloud auth login (the browser will open and allow you to authenticate), then run gcloud auth configure-docker and select Y, then redo the push. It should work like a charm.
I also had the same issue in a Linux environment. I just set Docker to run as a non-root user (https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user), and it works.
In my case, the DOCKER_CONFIG env variable was defined with an invalid value (not pointing to a docker config JSON).
I had the same issue, but for me the problem was with different users on my Linux system. I had authenticated gcloud with my personal Linux user, but I was pushing as root. So I had to authenticate my root user with gcloud as well:
sudo gcloud init
This issue happens to me when I switch between service accounts pointing to different GCP projects. Even though the service account has permission to push, it says it does not have the permission. I resolve this by deleting the config.json file in ~/.docker.
Once this is done, run the commands below and you should be able to push the image.
gcloud auth configure-docker
gcloud auth print-access-token | docker login -u oauth2accesstoken --password-stdin https://HOSTNAME
where HOSTNAME is gcr.io, asia.gcr.io, etc.

How to use gcloud auth list in python

We could run "gcloud auth list" to get our credentialed account, and now I want to do the same thing in my python code, that is checking the credential account by API in python. But I didn't fine it..... Any suggestion?
More information is:
I want to check my account name before I create credentials:
from oauth2client.client import GoogleCredentials  # GoogleCredentials comes from the oauth2client library
CREDENTIALS = GoogleCredentials.from_stream(ACCOUNT_FILE)
CREDENTIALS = GoogleCredentials.get_application_default()
gcloud stores credentials obtained via
gcloud auth login
gcloud auth activate-service-account
in its internal local database. There is no API besides the gcloud auth list command to query them (a subprocess-based sketch is shown at the end of this answer). Note that this list is different from (usually a subset of) the list of credentials in GCP.
Credentials used by gcloud are meant to be separate from what you use in your Python code.
Perhaps you want to use
https://cloud.google.com/sdk/gcloud/reference/iam/service-accounts/keys/list; there is also an API for that: https://cloud.google.com/iam/docs/creating-managing-service-accounts.
For Application Default Credentials you would download a JSON key file using the developer console https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=YOUR_PROJECT or use the gcloud iam service-accounts keys create command.
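For example (the service account address and output path are placeholders):
gcloud iam service-accounts keys create key.json --iam-account SA_NAME@PROJECT_ID.iam.gserviceaccount.com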
There is also the gcloud auth application-default login command, which will create an application default credentials file in a well-known location, but you should not use it for anything serious except perhaps development/testing. Note that credentials obtained via this command do not show up in gcloud auth list.
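If you just need the account name inside Python, a minimal sketch is to shell out to gcloud itself and parse its JSON output. The field names below ("account", "status") are what current gcloud versions emit, but treat them as an assumption and check your own output:
import json
import subprocess

def credentialed_accounts():
    # Equivalent of running `gcloud auth list`, parsed instead of printed.
    out = subprocess.check_output(["gcloud", "auth", "list", "--format=json"], text=True)
    return json.loads(out)

for entry in credentialed_accounts():
    print(entry.get("account"), entry.get("status"))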
