Why doesn't gsutil use the gcloud credentials as it should when running in a Docker container on Cloud Shell?
According to [1] gsutil should use gcloud credentials when they are available:
Once credentials have been configured via gcloud auth, those credentials will be used regardless of whether the user has any boto configuration files (which are located at ~/.boto unless a different path is specified in the BOTO_CONFIG environment variable). However, gsutil will still look for credentials in the boto config file if a type of non-GCS credential is needed that's not stored in the gcloud credential store (e.g., an HMAC credential for an S3 account).
This seems to work fine in gcloud installs but not in docker images. The process I used in Cloud Shell is:
docker run -ti --name gcloud-config google/cloud-sdk gcloud auth login
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gcloud compute instances list --project my_project
... (works ok)
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls gs://bucket/
ServiceException: 401 Anonymous caller does not have storage.objects.list access to bucket.
[1] https://cloud.google.com/storage/docs/gsutil/addlhelp/CredentialTypesSupportingVariousUseCases
You need to mount a volume with your credentials:
docker run -v ~/.config/gcloud:/root/.config/gcloud your_docker_image
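For the OP's use case, that would look something like this (reusing the bucket path from the question; the google/cloud-sdk image runs as root, so the credentials are mounted into root's home):
docker run --rm -ti -v ~/.config/gcloud:/root/.config/gcloud google/cloud-sdk gsutil ls gs://bucket/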
The following steps solved this problem for me:
Set gs_service_key_file in the [Credentials] section of the boto config file
Activate your service account with gcloud auth activate-service-account
Set your default project in gcloud config
Dockerfile snippet:
# Path inside the image where the service account key will live
ENV GOOGLE_APPLICATION_CREDENTIALS=/.gcp/your_service_account_key.json
ENV GOOGLE_PROJECT_ID=your-project-id

# Point gsutil's boto config at the service account key
# (printf instead of echo so the \n is expanded regardless of the shell)
RUN printf '[Credentials]\ngs_service_key_file = /.gcp/your_service_account_key.json\n' \
    > /etc/boto.cfg

# Copy the key into the image and activate the service account
RUN mkdir /.gcp
COPY your_service_account_key.json $GOOGLE_APPLICATION_CREDENTIALS
RUN gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS --project $GOOGLE_PROJECT_ID
RUN gcloud config set project $GOOGLE_PROJECT_ID
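To sanity-check an image built from this snippet, something like the following should work (the image tag and bucket name here are placeholders, not from the answer):
docker build -t my-gsutil-image .
docker run --rm my-gsutil-image gsutil ls gs://your-bucket/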
I found @Alexandre's answer basically worked for me, except for one problem: my credentials worked for bq, but not for gsutil (the subject of OP's question), which returned
ServiceException: 401 Anonymous caller does not have storage.objects.list access to bucket
How could the same credentials work for one but not the other!?
Eventually I tracked it down: ~/.config/gcloud/configurations/config_default looks like this:
[core]
account = xxx@xxxxxxx.xxx
project = xxxxxxxx
pass_credentials_to_gsutil = false
Why?! Why isn't this documented??
Anyway...change the flag to true, and you're all sorted.
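If you would rather not edit the file by hand, flipping the same flag from the command line should also work (pass_credentials_to_gsutil is a core gcloud property):
gcloud config set pass_credentials_to_gsutil true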
Related
I would like to pass my Google Cloud Platform's service account JSON credentials file to a docker container so that the container can access a cloud storage bucket. So far I tried to pass the file as an environment parameter on the run command like this:
Using the --env flag: docker run -p 8501:8501 --env GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name
Using the -e flag and even exporting the same env variable in the command line: docker run -p 8501:8501 -e GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name
But nothing worked, and I always get the following error when running the docker container:
W external/org_tensorflow/tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.".
How to pass the google credentials file to a container running locally on my personal laptop?
You cannot "pass" an external path; you have to make the JSON available inside the container.
Two ways to do it:
Volumes: https://docs.docker.com/storage/volumes/
Secrets: https://docs.docker.com/engine/swarm/secrets/
Secrets work with Docker swarm mode:
create a Docker secret with docker secret create
attach it to a service with --secret
The advantage is that secrets are stored encrypted; they are only decrypted when mounted into containers.
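A minimal sketch of that flow, assuming swarm mode is already initialized (the names gcp_key, my_service, and my_image are placeholders, not from the answer):
docker secret create gcp_key gcp_credentials.json
docker service create --name my_service \
  --secret gcp_key \
  -e GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp_key \
  my_image
Inside the service's containers the secret is mounted read-only at /run/secrets/gcp_key.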
I log into gcloud in my local environment, then share that JSON file as a volume in the same location in the container.
Here is a great post on how to do it, with the relevant extract below: Use Google Cloud user credentials when testing containers locally
Login locally
To get your default user credentials on your local environment, you have to use the gcloud SDK. You have 2 commands to get authentication:
gcloud auth login to get authenticated on all subsequent gcloud commands
gcloud auth application-default login to create your ADC locally, in a "well-known" location.
Note location of credentials
The Google auth library tries to get valid credentials by performing checks in this order:
Look at the environment variable GOOGLE_APPLICATION_CREDENTIALS value. If it exists, use it, else…
Look at the metadata server (only on Google Cloud Platform). If it returns correct HTTP codes, use it, else…
Look at the "well-known" location to see if a user credential JSON file exists.
The "well-known" locations are:
On Linux: ~/.config/gcloud/application_default_credentials.json
On Windows: %appdata%/gcloud/application_default_credentials.json
Share volume with container
Therefore, you have to run your local docker run command like this (the ADC variable is set on its own line so that your shell can expand it in the -v argument):

ADC=~/.config/gcloud/application_default_credentials.json
docker run \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v ${ADC}:/tmp/keys/FILE_NAME.json:ro \
  <IMAGE_URL>
NB: this is only for local development, on Google Cloud Platform the credentials for the service are automatically inserted for you.
I [believe I] have set up the tooling and configuration similarly between my local system and the build systems.
I can run this, successfully, locally:
$ cat key.json | docker login -u _json_key --password-stdin https://us.gcr.io
WARNING! Your password will be stored unencrypted in /home/local/MAGICLEAP/doprea/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
..but the same command fails from the job, which is running in GCE:
cat key.json | docker login -u _json_key --password-stdin https://us.gcr.io
error getting credentials - err: exit status 1, out: `docker-credential-gcr/helper: could not retrieve GCR's access token: metadata: GCE metadata "instance/service-accounts/default/token?scopes=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform" not defined`
It seems to really want to find the token in the instance metadata rather than just using the file that I'm giving it.
I could use some advice.
There may be an issue with your .docker/config.json file; you can compare the config.json between your local system and the build systems.
Run the following commands on the build system to verify Docker's status and config file, and please make sure that docker-credential-gcr is installed before troubleshooting.
$ docker -v
$ docker-credential-gcr configure-docker
$ cat ~/.docker/config.json
$ docker login -u oauth2accesstoken -p "$(gcloud auth print-access-token)" https://us.gcr.io
I recommend these topics “Pushing and pulling images” and “Authentication methods” for troubleshooting.
I am trying to push a Docker image to GCP, but I am still getting this error:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed https://cloud.google.com/container-registry/docs/quickstart step by step, and everything works fine until docker push.
It's a clean GCP project.
I've already tried:
use gcloud as a Docker credential helper:
gcloud auth configure-docker
reinstall Cloud SDK and gcloud init
add Storage Admin role to my account
What am I doing wrong?
Thanks for any suggestions
If it can help those in the same situation as me:
Docker 19.03
Google cloud SDK 288.0.0
Important: my user is not in the docker group, so I have to prepend sudo to every docker command.
When gcloud and docker are not using the same config.json
When I use the gcloud credential helper:
gcloud auth configure-docker
it updates the JSON config file in my $HOME: /home/{username}/.docker/config.json. However, when logging out and back in from the Docker CLI,
sudo docker login
The warning shows a different path, which makes sense as I sudo-ed:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
sudo everywhere
To fix it, I did the following steps:
# Clear everything
sudo docker logout
sudo rm /root/.docker/config.json
rm /home/{username}/.docker/config.json
# Re-login
sudo docker login
sudo gcloud auth login --no-launch-browser # --no-launch-browser is optional
# Check both Docker CLI and gcloud credential helper are here
sudo vim /root/.docker/config.json
# Just in case
sudo gcloud config set project {PROJECT_ID}
I can now push my Docker images to both GCR and Docker Hub.
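As an aside, not part of the fix above but a standard Docker setup step: adding your user to the docker group removes the need for sudo, so Docker and gcloud would share /home/{username}/.docker/config.json in the first place:
sudo usermod -aG docker $USER
# log out and back in for the group change to take effect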
I'm using the official Google Cloud SDK Docker image (https://hub.docker.com/r/google/cloud-sdk/) to use the gcloud CLI on my workstation, since I have some restrictions on directly installing things on this machine. One of my main issues is that whenever I SSH into my instance, the SSH key generation process is repeated. I followed the instructions listed in the info section of the Docker image. The command I'm using to log in is:
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gcloud compute --project "dummy" ssh --zone "asia-southeast1-a" "test"
How do I make the SSH login persist, as it would if I were using the gcloud CLI on my host machine?
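One possible sketch (not an answer from the thread; it assumes the key generated by gcloud compute ssh lives in /root/.ssh inside the container, and gcloud-ssh is a made-up volume name): persist that directory in a named volume so the key survives between runs:
docker run --rm -ti --volumes-from gcloud-config -v gcloud-ssh:/root/.ssh \
  google/cloud-sdk gcloud compute --project "dummy" ssh --zone "asia-southeast1-a" "test"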
I am a bit confused about how I can authenticate the gcloud sdk on a docker container. Right now, my docker file includes the following:
# Install the Google Cloud SDK
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud
RUN tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz
RUN /usr/local/gcloud/google-cloud-sdk/install.sh
# Note: gcloud init is interactive, which is the problem described below
RUN /usr/local/gcloud/google-cloud-sdk/bin/gcloud init
However, I am confused about how I would authenticate. When I run gcloud auth application-default login on my machine, it opens a new tab in Chrome which prompts me to log in. How would I input my credentials on the Docker container if it opens a new tab in Google Chrome in the container?
You might consider using deb packages when setting up your Docker container, as is done on Docker Hub.
That said, you should NOT run gcloud init, gcloud auth application-default login, or gcloud auth login... those are interactive commands which launch a browser. To provide credentials to the container, supply it with a service account key file.
You can download one from the Cloud Console (https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=YOUR_PROJECT) or create it with the gcloud command
gcloud iam service-accounts keys create
(see the reference guide).
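For example (SA_NAME and PROJECT_ID are placeholders for your service account and project):
gcloud iam service-accounts keys create MY_KEY_FILE.json \
  --iam-account=SA_NAME@PROJECT_ID.iam.gserviceaccount.com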
Either way, once you have the key file, ADD it to your container and run
gcloud auth activate-service-account --key-file=MY_KEY_FILE.json
You should now be set, but if you want to use it as Application Default Credentials (ADC), that is, in the context of other libraries and tools, you need to set the following environment variable to point to the key file:
export GOOGLE_APPLICATION_CREDENTIALS=/the/path/to/MY_KEY_FILE.json
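Putting those pieces together, a minimal Dockerfile sketch (the /credentials path is illustrative, not from the answer):
COPY MY_KEY_FILE.json /credentials/MY_KEY_FILE.json
ENV GOOGLE_APPLICATION_CREDENTIALS=/credentials/MY_KEY_FILE.json
RUN gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS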
One thing to point out here is that the gcloud tool does not use ADC, so if you later change your account to something else, for example via
gcloud config set core/account my_other_login@gmail.com
other tools and libraries will continue using the old account via the ADC key file, but gcloud will now use the different account.
You can map your local Google Cloud SDK credentials into the image.
Begin by signing in using:
$ gcloud auth application-default login
Then add the following to your docker-compose.yaml:
volumes:
  - ~/.config/gcloud:/root/.config/gcloud
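To verify the mount worked, you could then run something like this (your_service is a placeholder for your compose service name):
docker-compose run your_service gcloud auth list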