I would like to pass my Google Cloud Platform service account JSON credentials file to a Docker container so that the container can access a Cloud Storage bucket. So far I have tried to pass the file as an environment variable on the run command, like this:
Using the --env flag: docker run -p 8501:8501 --env GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name
Using the -e flag, and even exporting the same env variable in the command line first: docker run -p 8501:8501 -e GOOGLE_APPLICATION_CREDENTIALS="/Users/gcp_credentials.json" -t -i image_name
But nothing worked, and I always get the following error when running the docker container:
W external/org_tensorflow/tensorflow/core/platform/cloud/google_auth_provider.cc:184]
All attempts to get a Google authentication bearer token failed, returning an empty token.
Retrieving token from files failed with "Not found: Could not locate the credentials file.".
How do I pass the Google credentials file to a container running locally on my personal laptop?
You cannot "pass" an external path, but have to add the JSON into the container.
Two ways to do it:
Volumes: https://docs.docker.com/storage/volumes/
Secrets: https://docs.docker.com/engine/swarm/secrets/
Secrets work with Docker swarm mode:
create a Docker secret with docker secret create
attach it to a service using --secret
The advantage is that secrets are encrypted at rest and are only decrypted when mounted into containers.
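For local development the volume route is the simplest; here is a rough sketch reusing the image name and key path from the question (adjust the host path to wherever your key file actually lives):
docker run -p 8501:8501 \
  -v /Users/gcp_credentials.json:/tmp/keys/gcp_credentials.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/gcp_credentials.json \
  -t -i image_name
With swarm secrets the flow is roughly the following (secrets only work for services, not plain docker run; the secret is mounted under /run/secrets/, and my_service is a placeholder name):
docker secret create gcp_key /Users/gcp_credentials.json
docker service create --name my_service --secret gcp_key \
  -e GOOGLE_APPLICATION_CREDENTIALS=/run/secrets/gcp_key image_name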
I log into gcloud in my local environment and then share that JSON file as a volume, at the same location, inside the container.
Here is a great post on how to do it, with the relevant extract below: Use Google Cloud user credentials when testing containers locally
Login locally
To get your default user credentials in your local environment, you have to use the gcloud SDK. You have two commands to get authenticated:
gcloud auth login, to get authenticated for all subsequent gcloud commands
gcloud auth application-default login, to create your ADC (Application Default Credentials) locally, in a "well-known" location.
Note location of credentials
The Google auth library tries to get valid credentials by performing checks in this order:
Look at the GOOGLE_APPLICATION_CREDENTIALS environment variable. If it is set, use it, else…
Look at the metadata server (only on Google Cloud Platform). If it returns the correct HTTP codes, use it, else…
Look at the "well-known" location for a user credentials JSON file.
The "well-known" locations are:
On Linux: ~/.config/gcloud/application_default_credentials.json
On Windows: %appdata%/gcloud/application_default_credentials.json
Share volume with container
Therefore, you have to run your local docker run command like this
ADC=~/.config/gcloud/application_default_credentials.json \ docker run
\
-e GOOGLE_APPLICATION_CREDENTIALS=/tmp/keys/FILE_NAME.json
-v ${ADC}:/tmp/keys/FILE_NAME.json:ro \ <IMAGE_URL>
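To sanity-check that the file is actually visible inside the container, you can override the entrypoint for a one-off listing (a quick check, assuming the image contains ls):
ADC=~/.config/gcloud/application_default_credentials.json
docker run --rm \
  -v ${ADC}:/tmp/keys/FILE_NAME.json:ro \
  --entrypoint=ls \
  <IMAGE_URL> -l /tmp/keys/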
NB: this is only for local development; on Google Cloud Platform the credentials for the service are provided automatically.
Related
I have Golang code in a Docker container which interacts with AWS resources. In the testing environment we use an IAM role, but how do I test locally? How do I use AWS credentials to run my Docker container locally? I am using a Dockerfile to build the image.
Just mount your credential directory as read-only using:
docker run -v ${HOME}/.aws/credentials:/root/.aws/credentials:ro ...
given that root is the user in the container and you have set up the credentials file on the host as described in this guide.
or pass them directly using environment variables as:
docker run -e AWS_ACCESS_KEY_ID=<ACCESS_KEY> -e AWS_SECRET_ACCESS_KEY=<SECRET_KEY> ...
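Putting it together, a full invocation with the env-var approach might look like this (the image name and region are placeholders; AWS_REGION only matters if your code relies on the environment for region configuration):
docker run --rm \
  -e AWS_ACCESS_KEY_ID=<ACCESS_KEY> \
  -e AWS_SECRET_ACCESS_KEY=<SECRET_KEY> \
  -e AWS_REGION=us-east-1 \
  my-golang-image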
Why doesn't gsutil use the Gcloud credentials as it should when running in a docker container on Cloud Shell?
According to [1] gsutil should use gcloud credentials when they are available:
Once credentials have been configured via gcloud auth, those credentials will be used regardless of whether the user has any boto configuration files (which are located at ~/.boto unless a different path is specified in the BOTO_CONFIG environment variable). However, gsutil will still look for credentials in the boto config file if a type of non-GCS credential is needed that's not stored in the gcloud credential store (e.g., an HMAC credential for an S3 account).
This seems to work fine in gcloud installs but not in docker images. The process I used in Cloud Shell is:
docker run -ti --name gcloud-config google/cloud-sdk gcloud auth login
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gcloud compute instances list --project my_project
... (works ok)
docker run --rm -ti --volumes-from gcloud-config google/cloud-sdk gsutil ls gs://bucket/
ServiceException: 401 Anonymous caller does not have storage.objects.list access to bucket.
[1] https://cloud.google.com/storage/docs/gsutil/addlhelp/CredentialTypesSupportingVariousUseCases
You need to mount a volume with your credentials:
docker run -v ~/.config/gcloud:/root/.config/gcloud your_docker_image
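Applied to the gsutil example from the question above, that would be roughly:
docker run --rm -ti -v ~/.config/gcloud:/root/.config/gcloud google/cloud-sdk gsutil ls gs://bucket/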
The following steps solved this problem for me:
Set the gs_service_key_file in the [Credentials] section of the boto config file (see here)
Activate your service account with gcloud auth activate-service-account
Set your default project in gcloud config
Dockerfile snippet:
ENV GOOGLE_APPLICATION_CREDENTIALS=/.gcp/your_service_account_key.json
ENV GOOGLE_PROJECT_ID=your-project-id
RUN echo '[Credentials]\ngs_service_key_file = /.gcp/your_service_account_key.json' \
> /etc/boto.cfg
RUN mkdir /.gcp
COPY your_service_account_key.json $GOOGLE_APPLICATION_CREDENTIALS
RUN gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS --project $GOOGLE_PROJECT_ID
RUN gcloud config set project $GOOGLE_PROJECT_ID
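With that in the Dockerfile, the build-and-run flow is the usual one (the image and bucket names here are placeholders):
docker build -t my-gsutil-image .
docker run --rm my-gsutil-image gsutil ls gs://your-bucket/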
I found @Alexandre's answer basically worked for me, except for one problem: my credentials worked for bq, but not for gsutil (the subject of the OP's question), which returned
ServiceException: 401 Anonymous caller does not have storage.objects.list access to bucket
How could the same credentials work for one but not the other!?
Eventually I tracked it down: ~/.config/gcloud/configurations/config_default looks like this:
[core]
account = xxx@xxxxxxx.xxx
project = xxxxxxxx
pass_credentials_to_gsutil = false
Why?! Why isn't this documented??
Anyway...change the flag to true, and you're all sorted.
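You can either edit config_default by hand or flip the property via gcloud (the property name is the one shown in the file above; this assumes your gcloud version exposes it through gcloud config set):
gcloud config set pass_credentials_to_gsutil true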
I am a bit confused about how I can authenticate the gcloud SDK in a Docker container. Right now, my Dockerfile includes the following:
#Install the google SDK
RUN curl https://dl.google.com/dl/cloudsdk/release/google-cloud-sdk.tar.gz > /tmp/google-cloud-sdk.tar.gz
RUN mkdir -p /usr/local/gcloud
RUN tar -C /usr/local/gcloud -xvf /tmp/google-cloud-sdk.tar.gz
RUN /usr/local/gcloud/google-cloud-sdk/install.sh
RUN /usr/local/gcloud/google-cloud-sdk/bin/gcloud init
However, I am confused about how I would authenticate. When I run gcloud auth application-default login on my machine, it opens a new tab in Chrome which prompts me to log in. How would I input my credentials in the Docker container if it opens a new tab in Google Chrome inside the container?
You might consider using the deb packages when setting up your Docker container, as is done for the image on Docker Hub.
That said, you should NOT run gcloud init, gcloud auth application-default login, or gcloud auth login... those are interactive commands which launch a browser. To provide credentials to the container, supply it with a service account key file instead.
You can download one from cloud console: https://console.cloud.google.com/iam-admin/serviceaccounts/project?project=YOUR_PROJECT or create it with gcloud command
gcloud iam service-accounts keys create
see reference guide.
Either way, once you have the key file, ADD it to your container and run
gcloud auth activate-service-account --key-file=MY_KEY_FILE.json
You should now be set, but if you want to use the key as Application Default Credentials (ADC), that is, in the context of other libraries and tools, you need to set the following environment variable to point to the key file:
export GOOGLE_APPLICATION_CREDENTIALS=/the/path/to/MY_KEY_FILE.json
One thing to point out here is that the gcloud tool does not use ADC, so if you later change your account to something else, for example via
gcloud config set core/account my_other_login@gmail.com
other tools and libraries will continue using the old account via the ADC key file, while gcloud will now use the different account.
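A minimal Dockerfile sketch tying those steps together, assuming gcloud is already installed in the base image and MY_KEY_FILE.json sits next to the Dockerfile:
# copy the key into the image and point ADC at it
COPY MY_KEY_FILE.json /the/path/to/MY_KEY_FILE.json
ENV GOOGLE_APPLICATION_CREDENTIALS=/the/path/to/MY_KEY_FILE.json
# make gcloud itself use the service account as well
RUN gcloud auth activate-service-account --key-file=$GOOGLE_APPLICATION_CREDENTIALS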
You can map your local Google Cloud SDK credentials into the container. [Source]
Begin by signing in using:
$ gcloud auth application-default login
Then add the following to your docker-compose.yaml:
volumes:
- ~/.config/gcloud:/root/.config/gcloud
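In context, a fuller compose sketch might look like this (the service and image names are placeholders):
services:
  my-service:
    image: your-image
    volumes:
      - ~/.config/gcloud:/root/.config/gcloud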
I am setting up Hashicorp Vault in my development environment in -dev mode and trying to use an access token created from a policy to access the secret that the policy was created for, but I get "*permission denied" when I try to access the secret from the CLI or API. Based on the Vault documentation it should work.
The following is what I have done to set up:
Set up the Docker container using docker run --cap-add=IPC_LOCK -p 8200:8200 -e 'VAULT_DEV_ROOT_TOKEN_ID=roottoken' -v //c/config:/config vault
Connect to the Docker container using docker exec -it {docker name} ash. (I know it should be bash, but bash doesn't work in this image and ash does!)
Inside the container, export VAULT_ADDR='http://127.0.0.1:8200'
Set the root token in an environment variable: export VAULT_TOKEN='roottoken'
Create a secret: vault write secret/foo/bar value=secret
Create a policy file called secret.hcl with the content:
path "secret/foo/*" {
  policy = "read"
}
Create a policy for the secret: vault policy-write secret /config/secret.hcl, and make sure the policy is created
Create a token for the policy just created: vault token-create -policy="secret"
Try to access the secret using the API at http://127.0.0.1:8200/v1/secret/foo, passing X-Vault-Token='<token created in step 8>' in the header (see the curl sketch below)
Getting "*permission denied" error
Would be great if someone could shed some light on this.
I have been working with google's machine learning platform, cloudML.
Big picture:
I'm trying to figure out the cleanest way to get their Docker environment up and running on Google Compute Engine instances, with access to the cloudML API and my storage bucket.
Starting locally, I have my service account configured:
C:\Program Files (x86)\Google\Cloud SDK>gcloud config list
Your active configuration is: [service]
[compute]
region = us-central1
zone = us-central1-a
[core]
account = 773889352370-compute@developer.gserviceaccount.com
disable_usage_reporting = False
project = api-project-773889352370
I boot a compute instance with the google container image family
gcloud compute instances create gci --image-family gci-stable --image-project google-containers --scopes 773889352370-compute@developer.gserviceaccount.com="https://www.googleapis.com/auth/cloud-platform"
EDIT: Need to explicitly set scope for communicating with cloudML.
I can then ssh into that instance (for debugging)
gcloud compute ssh benweinstein2010@gci
On the compute instance, I can pull the cloudML docker from GCR and run it
docker pull gcr.io/cloud-datalab/datalab:local
docker run -it --rm -p "127.0.0.1:8080:8080" \
--entrypoint=/bin/bash \
gcr.io/cloud-datalab/datalab:local
I can confirm I have access to my desired bucket. No credential problems there
root@cd6cc28a1c8a:/# gsutil ls gs://api-project-773889352370-ml
gs://api-project-773889352370-ml/Ben/
gs://api-project-773889352370-ml/Cameras/
gs://api-project-773889352370-ml/MeerkatReader/
gs://api-project-773889352370-ml/Prediction/
gs://api-project-773889352370-ml/TrainingData/
gs://api-project-773889352370-ml/cloudmldist/
But when I try to mount the bucket
root@139e775fcf6b:~# gcsfuse api-project-773889352370-ml /mnt/gcs-bucket
Using mount point: /mnt/gcs-bucket
Opening GCS connection...
Opening bucket...
Mounting file system...
daemonize.Run: readFromProcess: sub-process: mountWithArgs: mountWithConn: Mount: mount: running fusermount: exit status 1
stderr:
fusermount: failed to open /dev/fuse: Operation not permitted
It must be that I am required to activate my service account from within the Docker container? I have had similar (unsolved) issues elsewhere:
gcloud auth activate-service-account
I could pass Docker a credentials .json file, but I'm not sure where/if gcloud compute ssh passes those files to my instance.
I have access to the cloud platform more broadly; for example, I can post a request to the cloudML API:
gcloud beta ml predict --model ${MODEL_NAME} --json-instances images/request.json > images/${outfile}
which succeeds, so some credentials are being passed. I guess I could pass them to the compute instance, and then from the compute instance to the Docker container? It feels like I'm not using the tools as intended. I thought gcloud would handle this once I authenticated locally.
This was a docker issue, not a gcloud permissions issue. Docker needs to be run as --privileged to allow fuse to mount.
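Based on the docker run command from the question, that looks roughly like this (--privileged gives the container broad access to host devices, including /dev/fuse, so use it only for local debugging):
docker run -it --rm --privileged -p "127.0.0.1:8080:8080" \
  --entrypoint=/bin/bash \
  gcr.io/cloud-datalab/datalab:local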