I am a bit confused about how I can authenticate the gcloud SDK in a Docker container. Right now, my Dockerfile includes the following:
# Install the Google Cloud SDK
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud
RUN tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz
RUN /usr/local/gcloud/google-cloud-sdk/install.sh
RUN /usr/local/gcloud/google-cloud-sdk/bin/gcloud init
However, I am confused about how I would authenticate. When I run gcloud auth application-default login on my machine, it opens a new tab in Chrome that prompts me to log in. How would I enter my credentials in the Docker container if it tries to open a new browser tab inside the container?
You might consider installing the SDK from deb packages when setting up your Docker container, as is done for the image on Docker Hub.
That said, you should NOT run gcloud init, gcloud auth application-default login, or gcloud auth login... those are interactive commands which launch a browser. To provide credentials to the container, supply it with a service account key file.
You can download one from the Cloud Console: https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=YOUR_PROJECT or create it with the gcloud command
gcloud iam service-accounts keys create
see the reference guide.
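For example, a complete invocation might look like the following (the key file name, service account name, and project below are placeholders):
gcloud iam service-accounts keys create my_key_file.json \
    --iam-account my-service-account@YOUR_PROJECT.iam.gserviceaccount.com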
Either way, once you have the key file, ADD it to your container and run
gcloud auth activate-service-account --key-file=MY_KEY_FILE.json
You should now be set, but if you want to use it as Application Default Credentials (ADC), that is, in the context of other libraries and tools, you need to set the following environment variable to point to the key file:
export GOOGLE_APPLICATION_CREDENTIALS=/the/path/to/MY_KEY_FILE.json
One thing to point out here is that the gcloud tool does not use ADC, so if you later change your account to something else, for example via
gcloud config set core/account my_other_login@gmail.com
other tools and libraries will continue using the old account via the ADC key file, but gcloud will now use the different account.
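Putting this together with the Dockerfile from the question, a rough sketch could look like this (the key file name and target path are placeholders, and the gcloud path follows the install location used above):
# Copy a service account key into the image and activate it (MY_KEY_FILE.json is a placeholder)
COPY MY_KEY_FILE.json /credentials/MY_KEY_FILE.json
RUN /usr/local/gcloud/google-cloud-sdk/bin/gcloud auth activate-service-account \
    --key-file=/credentials/MY_KEY_FILE.json
# Expose the key as Application Default Credentials for other libraries and tools
ENV GOOGLE_APPLICATION_CREDENTIALS=/credentials/MY_KEY_FILE.json
Bear in mind that baking a key file into an image means anyone with access to the image has the key; mounting it at run time is safer.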
You can map your local Google Cloud SDK credentials into the container.
Begin by signing in using:
$ gcloud auth application-default login
Then add the following to your docker-compose.yaml:
volumes:
- ~/.config/gcloud:/root/.config/gcloud
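In context, a minimal docker-compose.yaml might look like this sketch (the service name and image are placeholders; the mount target assumes the container runs as root):
version: "3"
services:
  app:
    image: my_image_name
    volumes:
      - ~/.config/gcloud:/root/.config/gcloud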
Related
I would like to pass my Google Cloud Platform's service account JSON credentials file to a docker container so that the container can access a cloud storage bucket. So far I tried to pass the file as an environment parameter on the run command like this:
Using the --env flag: docker run -p 8501:8501 --env GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name
Using the -e flag and even exporting the same env variable in the command line: docker run -p 8501:8501 -e GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name
But nothing worked, and I always get the following error when running the docker container:
W external/org_tensorflow/tensorflow/core/platform/cloud/google_auth_provider.cc:184] All attempts to get a Google authentication bearer token failed, returning an empty token. Retrieving token from files failed with "Not found: Could not locate the credentials file.".
How can I pass the Google credentials file to a container running locally on my personal laptop?
You cannot "pass" an external path, but have to add the JSON into the container.
Two ways to do it:
Volumes: https://docs.docker.com/storage/volumes/
Secrets: https://docs.docker.com/engine/swarm/secrets/
Secrets work with Docker swarm mode:
create a Docker secret
use the secret with a container via --secret
The advantage is that secrets are encrypted, and they are only decrypted when mounted into containers.
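A rough sketch of the secrets approach, reusing the key path and image name from the question and assuming swarm mode has been initialized (docker swarm init):
# Create the secret from the local key file
docker secret create gcp_key /Users/gcp_credentials.json
# Secrets are mounted at /run/secrets/<name> inside the service's containers
docker service create \
  --secret gcp_key \
  -e GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp_key \
  -p 8501:8501 image_name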
I log into gcloud in my local environment, then share that JSON file as a volume at the same location in the container.
Here is a great post on how to do it, with the relevant extract below: Use Google Cloud user credentials when testing containers locally
Login locally
To get your default user credentials on your local environment, you have to use the gcloud SDK. You have 2 commands to get authentication:
gcloud auth login to get authenticated on all subsequent gcloud commands
gcloud auth application-default login to create your ADC locally, in a “well-known” location.
Note location of credentials
The Google auth library tries to get valid credentials by performing checks in this order:
Look at the environment variable GOOGLE_APPLICATION_CREDENTIALS value. If it exists, use it, else…
Look at the metadata server (only on Google Cloud Platform). If it returns correct HTTP codes, use it, else…
Look at the “well-known” location to see if a user credential JSON file exists.
The “well-known” locations are:
On Linux: ~/.config/gcloud/application_default_credentials.json
On Windows: %appdata%/gcloud/application_default_credentials.json
Share volume with container
Therefore, you have to run your local docker run command like this:
ADC=~/.config/gcloud/application_default_credentials.json
docker run \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json \
  -v ${ADC}:/tmp/keys/FILE_NAME.json:ro \
  <IMAGE_URL>
NB: this is only for local development; on Google Cloud Platform the credentials for the service are automatically inserted for you.
I am trying to push a Docker image to GCP, but I am still getting this error:
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
I followed this https://cloud.google.com/container-registry/docs/quickstart step by step and everything works fine until docker push.
It's a clean GCP project.
I've already tried:
using gcloud as a Docker credential helper:
gcloud auth configure-docker
reinstalling the Cloud SDK and running gcloud init
adding the Storage Admin role to my account
What am I doing wrong?
Thanks for any suggestions
If it can help those in the same situation as me:
Docker 19.03
Google Cloud SDK 288.0.0
Important: my user is not in the docker group, so I have to prepend sudo to every docker command.
When gcloud and docker are not using the same config.json
When I use gcloud credential helper:
gcloud auth configure-docker
it updates the JSON config file in my $HOME: [/home/{username}/.docker/config.json]. However, when logging out and logging in again from the Docker CLI,
sudo docker login
The warning shows a different path, which makes sense as I sudo-ed:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
sudo everywhere
To fix it, I did the following steps:
# Clear everything
sudo docker logout
sudo rm /root/.docker/config.json
rm /home/{username}/.docker/config.json
# Re-login
sudo docker login
sudo gcloud auth login --no-launch-browser # --no-launch-browser is optional
# Check both Docker CLI and gcloud credential helper are here
sudo vim /root/.docker/config.json
# Just in case
sudo gcloud config set project {PROJECT_ID}
I can now push my Docker images to both GCR and Docker Hub.
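Alternatively, adding your user to the docker group removes the need for sudo entirely, so gcloud and docker end up sharing the same $HOME and the same config.json (you need to log out and back in for it to take effect):
sudo usermod -aG docker $USER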
I've discovered a flow that works through GCP console but not through the gcloud CLI.
Minimal Repro
The following bash snippet creates a fresh GCP project and attempts to push an image to gcr.io, but fails with "access denied" even though the user is project owner:
gcloud auth login
PROJECT_ID="example-project-20181120"
gcloud projects create "$PROJECT_ID" --set-as-default
gcloud services enable containerregistry.googleapis.com
gcloud auth configure-docker --quiet
mkdir ~/docker-source && cd ~/docker-source
git clone https://github.com/mtlynch/docker-flask-upload-demo.git .
LOCAL_IMAGE_NAME="flask-demo-app"
GCR_IMAGE_PATH="gcr.io/${PROJECT_ID}/flask-demo-app"
docker build --tag "$LOCAL_IMAGE_NAME" .
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH"
docker push "$GCR_IMAGE_PATH"
Result
The push refers to repository [gcr.io/example-project-20181120/flask-demo-app]
02205dbcdc63: Preparing
06ade19a43a0: Preparing
38d9ac54a7b9: Preparing
f83363c693c0: Preparing
b0d071df1063: Preparing
90d1009ce6fe: Waiting
denied: Token exchange failed for project 'example-project-20181120'. Access denied.
The system is Ubuntu 16.04 with the latest version of gcloud 225.0.0, as of this writing. The account I auth'ed with has role roles/owner.
Inconsistency with GCP Console
I notice that if I follow the same flow through GCP Console, I can docker push successfully:
Create a new GCP project via GCP Console
Create a service account with roles/owner via GCP Console
Download JSON key for service account
Enable container registry API via GCP Console
gcloud auth activate-service-account --key-file key.json
gcloud config set project $PROJECT_ID
gcloud auth configure-docker --quiet
docker tag "$LOCAL_IMAGE_NAME" "$GCR_IMAGE_PATH" && docker push "$GCR_IMAGE_PATH"
Result: Works as expected. Successfully pushes docker image to gcr.io.
Other attempts
I also tried using gcloud auth login as my @gmail.com account, then using that account to create a service account with gcloud, but that gets the same denied error:
SERVICE_ACCOUNT_NAME=test-service-account
gcloud iam service-accounts create "$SERVICE_ACCOUNT_NAME"
KEY_FILE="${HOME}/key.json"
gcloud iam service-accounts keys create "$KEY_FILE" \
--iam-account "${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
--member "serviceAccount:${SERVICE_ACCOUNT_NAME}#${PROJECT_ID}.iam.gserviceaccount.com" \
--role roles/owner
gcloud auth activate-service-account --key-file="${HOME}/key.json"
docker push "$GCR_IMAGE_PATH"
Result: denied: Token exchange failed for project 'example-project-20181120'. Access denied.
I tried to reproduce the same error using the bash snippet you provided; however, it successfully built the ‘flask-demo-app’ Container Registry image for me. I used the steps below to reproduce the issue:
Step 1: Used an account which has ‘role: roles/owner’ and ‘role: roles/editor’
Step 2: Created a bash script using your given snippet
Step 3: Added ‘gcloud auth activate-service-account --key-file skey.json’ to the script to authenticate the account
Step 4: Ran the bash script
Result: It created the ‘flask-demo-app’ Container Registry image
This leads me to believe that there might be an issue with your environment which is causing this error for you. To troubleshoot, you could try running your code on a different machine, on a different network, or even in Cloud Shell.
In my case, project IAM permissions were the issue. Make sure the proper permissions are granted and that the Cloud Resource Manager / Container Registry APIs are enabled.
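For example, granting the role and enabling the API from the command line could look like this (the project and account are placeholders):
gcloud projects add-iam-policy-binding YOUR_PROJECT \
    --member "user:your_account@gmail.com" --role roles/storage.admin
gcloud services enable containerregistry.googleapis.com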
GCP Access Control
Why doesn't gsutil use the Gcloud credentials as it should when running in a docker container on Cloud Shell?
According to [1] gsutil should use gcloud credentials when they are available:
Once credentials have been configured via gcloud auth, those credentials will be used regardless of whether the user has any boto configuration files (which are located at ~/.boto unless a different path is specified in the BOTO_CONFIG environment variable). However, gsutil will still look for credentials in the boto config file if a type of non-GCS credential is needed that's not stored in the gcloud credential store (e.g., an HMAC credential for an S3 account).
This seems to work fine in gcloud installs but not in docker images. The process I used in Cloud Shell is:
docker run -ti --name gcloud-config google/cloud-sdk gcloud auth login
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gcloud compute instances list --project my_project
... (works ok)
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls gs://bucket/
ServiceException: 401 Anonymous caller does not have storage.objects.list access to bucket.
[1] https://cloud.google.com/storage/docs/gsutil/addlhelp/CredentialTypesSupportingVariousUseCases
You need to mount a volume with your credentials:
docker run -v ~/.config/gcloud:/root/.config/gcloud your_docker_image
The following steps solved this problem for me:
Set the gs_service_key_file option in the [Credentials] section of the boto config file
Activate your service account with gcloud auth activate-service-account
Set your default project in gcloud config
Dockerfile snippet:
ENV GOOGLE_APPLICATION_CREDENTIALS=/.gcp/your_service_account_key.json
ENV GOOGLE_PROJECT_ID=your-project-id
RUN echo '[Credentials]\ngs_service_key_file = /.gcp/your_service_account_key.json' \
> /etc/boto.cfg
RUN mkdir /.gcp
COPY your_service_account_key.json $GOOGLE_APPLICATION_CREDENTIALS
RUN gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS --project $GOOGLE_PROJECT_ID
RUN gcloud config set project $GOOGLE_PROJECT_ID
I found @Alexandre's answer basically worked for me, except for one problem: my credentials worked for bq, but not for gsutil (the subject of the OP's question), which returned
ServiceException: 401 Anonymous caller does not have storage.objects.list access to bucket
How could the same credentials work for one but not the other!?
Eventually I tracked it down: ~/.config/gcloud/configurations/config_default looks like this:
[core]
account = xxx#xxxxxxx.xxx
project = xxxxxxxx
pass_credentials_to_gsutil = false
Why?! Why isn't this documented??
Anyway...change the flag to true, and you're all sorted.
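If you would rather not edit the file by hand, the same core property can be flipped with gcloud:
gcloud config set pass_credentials_to_gsutil true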
I have made a Dockerfile for deploying my Node.js application to Google Container Engine. It looks like this:
FROM node:0.12
COPY google-cloud-sdk /google-cloud-sdk
RUN /google-cloud-sdk/bin/gcloud init
COPY bpe /bpe
CMD cd /bpe;npm start
I should use gcloud init inside the Dockerfile because my Node.js application uses the gcloud-node module for creating buckets in GCS.
When I use the above Dockerfile and run docker build, it fails with the following errors:
sudo docker build -t gcr.io/[PROJECT_ID]/test-node:v1 .
Sending build context to Docker daemon 489.3 MB
Sending build context to Docker daemon
Step 0 : FROM node:0.12
---> 57ef47f6c658
Step 1 : COPY google-cloud-sdk /google-cloud-sdk
---> f102b82812f5
Removing intermediate container 4433b0f3627f
Step 2 : RUN /google-cloud-sdk/bin/gcloud init
---> Running in 21aead97cf65
Welcome! This command will take you through the configuration of gcloud.
Your current configuration has been set to: [default]
To continue, you must log in. Would you like to log in (Y/n)?
Go to the following link in your browser:
https://accounts.google.com/o/oauth2/auth?redirect_uri=urn%3Aietf%3Awg%3Aoauth%3A2.0%3Aoob&prompt=select_account&response_type=code&client_id=32555940559.apps.googleusercontent.com&scope=https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fuserinfo.email+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcloud-platform+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fappengine.admin+https%3A%2F%2Fwww.googleapis.com%2Fauth%2Fcompute&access_type=offline
ERROR: There was a problem with web authentication.
ERROR: (gcloud.auth.login) invalid_grant
ERROR: (gcloud.init) Failed command: [auth login --force --brief] with exit code [1]
I got it working by hard-coding the authentication key inside the google-cloud-sdk source code. Please let me know the proper way to solve this issue.
gcloud init is a wrapper command which runs
gcloud config configurations create MY_CONFIG
gcloud config configurations activate MY_CONFIG
gcloud auth login
gcloud config set project MY_PROJECT
which allows the user to choose a configuration, log in (via the browser), and choose a project.
For your use case you probably do not want to use gcloud init; instead, you should download a service account key file from https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=MY_PROJECT, make it accessible inside the docker container, and activate it via
gcloud auth activate-service-account --key-file my_service_account.json
gcloud config set project MY_PROJECT
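Applied to the Dockerfile from the question, a sketch could look like this (the key file name and project ID are placeholders):
FROM node:0.12
COPY google-cloud-sdk /google-cloud-sdk
# Placeholder key file downloaded from the service accounts page above
COPY my_service_account.json /my_service_account.json
RUN /google-cloud-sdk/bin/gcloud auth activate-service-account --key-file /my_service_account.json && \
    /google-cloud-sdk/bin/gcloud config set project MY_PROJECT
COPY bpe /bpe
CMD cd /bpe;npm start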
A better way to use GCS from Container Engine is to grant the permission to the cluster.
For example, if you had created your VM with the devstorage.read_only scope, trying to write to a bucket would fail, even if your service account has permission to write to the bucket. You would need devstorage.full_control or devstorage.read_write.
While creating the cluster, we can use the following command:
gcloud container clusters create catch-world \
--num-nodes 1 \
--machine-type n1-standard-1 \
--scopes https://www.googleapis.com/auth/devstorage.full_control
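To check which scopes an existing cluster's nodes were created with, something like this should work (you may need to pass --zone depending on your configuration):
gcloud container clusters describe catch-world --format="value(nodeConfig.oauthScopes)"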