I create a Google Compute Engine instance with a service account:
gcloud --project my-proj compute instances create test1 \
--image-family "debian-9" --image-project "debian-cloud" \
--machine-type "g1-small" --network "default" --maintenance-policy "MIGRATE" \
--service-account "gke-build-robot#myproj-184015.iam.gserviceaccount.com" \
--scopes "https://www.googleapis.com/auth/cloud-platform" \
--tags "gitlab-runner" \
--boot-disk-size "10" --boot-disk-type "pd-standard" --boot-disk-device-name "$RESOURCE_NAME" \
--metadata register_token=mytoken,config_bucket=gitlab_config,runner_name=test1,gitlab_uri=myuri,runner_tags=backend \
--metadata-from-file "startup-script=startup-scripts/prepare-runner.sh"
Then I log in to the instance through SSH:
gcloud compute --project "myproj" ssh --zone "europe-west1-b" "gitlab-shared-runner-pool"
After installing and configuring docker-machine, I try to create an instance:
docker-machine create --driver google --google-project myproj test2
Running pre-create checks...
(test2) Check that the project exists
(test2) Check if the instance already exists
Creating machine...
(test2) Generating SSH Key
(test2) Creating host...
(test2) Opening firewall ports
(test2) Creating instance
(test2) Waiting for Instance
Error creating machine: Error in driver during machine creation: Operation error: {EXTERNAL_RESOURCE_NOT_FOUND The resource '1045904521672-compute@developer.gserviceaccount.com' of type 'serviceAccount' was not found. []}
1045904521672-compute@developer.gserviceaccount.com is my default account.
I don't understand why it is used, because the activated account is gke-build-robot@myproj-184015.iam.gserviceaccount.com:
gcloud config list
[core]
account = gke-build-robot@myproj-184015.iam.gserviceaccount.com
disable_usage_reporting = True
project = novaposhta-184015
Your active configuration is: [default]
gcloud auth list
Credentialed Accounts
ACTIVE ACCOUNT
* gke-build-robot@myproj-184015.iam.gserviceaccount.com
Can someone explain what I'm doing wrong?
There was a double problem.
First of all, docker-machine can't work with a specific service account, at least in versions 0.12 and 0.13.
The docker-machine Google driver only has a scopes parameter and can't be given a specific service account.
So the instance where docker-machine is installed works fine with the specified SA, but any instance created by docker-machine must use the default service account.
During debugging I had disabled that default account, and this error was the result.
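Note that you can still pass scopes to the instance docker-machine creates; a minimal sketch, assuming the Google driver's --google-scopes flag in docker-machine 0.12/0.13:
docker-machine create --driver google \
--google-project myproj \
--google-scopes "https://www.googleapis.com/auth/cloud-platform" \
test2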
A similar issue (bosh-google-cpi-release issue 144) suggests the error message itself is misleading:
This error message is unclear, particularly because the credentials which also need to be specified in the manifest may be associated with another account altogether.
The default service_account for the bosh-google-cpi-release is set to "default" if it is not proactively set by the bosh manifest, so this will happen anytime you use service_scopes instead of a service_account.
While you are not using bosh-google-cpi-release, the last sentence made me double-check the gcloud reference page, in particular gcloud compute instances create:
A service account is an identity attached to the instance. Its access tokens can be accessed through the instance metadata server and are used to authenticate applications on the instance.
The account can be either an email address or an alias corresponding to a service account. You can explicitly specify the Compute Engine default service account using the 'default' alias.
If not provided, the instance will get the project's default service account.
It is as if your service account is either ignored or incorrect (and gcloud falls back to the project's default one).
See "Creating and Enabling Service Accounts for Instances" to double-check its value:
Usually, the service account's email is derived from the service account ID, in the format:
[SERVICE-ACCOUNT-NAME]@[PROJECT_ID].iam.gserviceaccount.com
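You can list the project's service accounts to double-check the exact email, for example:
gcloud iam service-accounts list --project myproj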
Or try setting first the service scope and account.
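For instance, a sketch that reuses the flags from your original command:
gcloud compute instances create test1 \
--scopes "https://www.googleapis.com/auth/cloud-platform" \
--service-account "gke-build-robot@myproj-184015.iam.gserviceaccount.com"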
Related
I've assigned ownership of one of my google sheets to a service account from a gcloud project I am working on (not a smart thing to do, I know...). How can I re-assign ownership of this sheet to my main user account?
If you have permissions on the service account (e.g. you're owner of the GCP project), you can use the command line tools to authenticate as the service account and modify the permissions there.
Step-by-step process (you might have already done some of these steps):
Download and install the GCP SDK:
curl https://sdk.cloud.google.com | bash
exec -l $SHELL
gcloud init
During the initialization, follow the steps to authenticate with the account owner of the GCP project, and select the project in question. You can ignore the rest of the steps.
Create and download a key for the service account that is the current owner of the file (change the service account in this command):
gcloud iam service-accounts keys create key --iam-account service_account_id@project_id.iam.gserviceaccount.com
Hack the SDK to include the Drive scope:
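# prepend the Drive scope to the CLOUDSDK_SCOPES tuple in the SDK's config.py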
sed -i 's/\(^CLOUDSDK_SCOPES = (\)/\1"https:\/\/www.googleapis.com\/auth\/drive",/' $(gcloud info --format 'value(installation.sdk_root)')/lib/googlecloudsdk/core/config.py
Activate the service account (change the service account in this command):
gcloud auth activate-service-account service_account_id@project_id.iam.gserviceaccount.com --key-file key
Make a call to the Drive API giving back the ownership (change the drive file ID and the new owner email address in this command):
curl -H"Authorization: Bearer $(gcloud auth print-access-token)" https://www.googleapis.com/drive/v3/files/DRIVE_FILE_ID/permissions?transferOwnership=true -d '{"role":"owner","type":"user","emailAddress":"YOUR_EMAIL#example.com"}' -H'content-type:application/json'
After these steps, your regular email account should be the new owner.
This is a pretty bad solution (hacking the SDK, etc.), but it's barely 7 bash commands, so I think it's likely the fastest/simplest one, at least for a one-off situation.
If this happens often (I guess not), it's likely that a real script would be more useful.
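If you later want to undo the scope hack, reinstalling the SDK components restores the modified file; a sketch, assuming a components-based install:
gcloud components reinstall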
I have created a custom service account travisci-deployer@PROJECT_ID.iam.gserviceaccount.com on my project and granted it the Cloud Run Admin role:
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
--member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" \
--role="roles/run.admin"
Then I set this service account as the identity for my gcloud commands:
gcloud auth activate-service-account --key-file=google-key.json
But when I ran the gcloud beta run deploy command, I got an error about the Compute Engine default service account not having the iam.serviceAccounts.actAs permission:
gcloud beta run deploy -q "${SERVICE_NAME}" \
--image="${CONTAINER_IMAGE}" \
--allow-unauthenticated
Deploying container to Cloud Run service [$APP_NAME] in project [$PROJECT_ID] region [us-central1]
Deploying...
Deployment failed
ERROR: (gcloud.beta.run.deploy) PERMISSION_DENIED: Permission 'iam.serviceaccounts.actAs'
denied on service account 1075231960084-compute@developer.gserviceaccount.com
This seems weird to me, because I'm not using the GCE default service account identity (although Cloud Run uses it as the app's runtime identity once deployed).
So the 1075231960084-compute@developer.gserviceaccount.com account is being used for the API call, and not my travisci-deployer@PROJECT_ID.iam.gserviceaccount.com service account configured in gcloud?
How can I address this?
TLDR: Add Cloud Run Admin and Service Account User roles to your service account.
If we read the IAM reference page for Cloud Run (found here) in detail, we find the following text:
A user needs the following permissions to deploy new Cloud Run
services or revisions:
run.services.create and run.services.update on the project level.
Typically assigned through the roles/run.admin role. It can be changed
in the project permissions admin page.
iam.serviceAccounts.actAs for
the Cloud Run runtime service account. By default, this is
PROJECT_NUMBER-compute@developer.gserviceaccount.com. The permission
is typically assigned through the roles/iam.serviceAccountUser role.
I think these extra steps explain the story as you see it.
Adding Cloud Run Admin and Service Account User roles to my own service account fixed this for me. See step 2 in the docs here:
https://cloud.google.com/run/docs/continuous-deployment#continuous
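A minimal sketch of those two grants, assuming placeholder variables PROJECT_ID, PROJECT_NUMBER, and SERVICE_ACCOUNT_EMAIL:
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
--member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" \
--role="roles/run.admin"
gcloud iam service-accounts add-iam-policy-binding \
"${PROJECT_NUMBER}-compute@developer.gserviceaccount.com" \
--member="serviceAccount:${SERVICE_ACCOUNT_EMAIL}" \
--role="roles/iam.serviceAccountUser"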
As a best practice, you should grant only the specific permissions the Cloud Run instance needs.
Reference: https://cloud.google.com/run/docs/reference/iam/roles#additional-configuration
Assume you have two service accounts in your GCP project.
One is the Cloud Run identity (runtime) service account.
Call it identity-cloudrun@project-id.iam.gserviceaccount.com. This service account doesn't need any roles assigned to it, because it only serves as the identity of the Cloud Run service. If the Cloud Run instance needs to access other GCP resources, you can grant permissions to this service account.
The other is the deployment service account, which is used to deploy your Cloud Run service.
Call it deploy-cloudrun@project-id.iam.gserviceaccount.com.
For the deployment service account, grant it Cloud Run Admin on your specific Cloud Run instance, so it cannot access other Cloud Run instances:
gcloud run services add-iam-policy-binding your-cloudrun-instance \
--member="serviceAccount:deploy-cloudrun#project-id.iam.gserviceaccount.com" \
--role="roles/run.admin" \
--region=europe-west1
Also, you need to grant the iam.serviceAccounts.actAs permission on the identity service account to your deployment service account, as mentioned in the documentation:
gcloud iam service-accounts add-iam-policy-binding \
identity-cloudrun@project-id.iam.gserviceaccount.com \
--member="serviceAccount:deploy-cloudrun@project-id.iam.gserviceaccount.com" \
--role="roles/iam.serviceAccountUser"
So you can deploy your Cloud Run service as shown below, using your deployment service account.
Note: in practice, you should use workload identity federation instead of using your deployment service account directly.
gcloud run deploy your-cloudrun-instance \
--image="us-docker.pkg.dev/cloudrun/container/hello" \
--service-account="identity-cloudrun#project-id.iam.gserviceaccount.com"
Though you can resolve this particular error by granting the account permission to act as the Compute Engine default service account, it goes against the "best practices" advice:
By default, Cloud Run services run as the default Compute Engine service account. However, Google recommends using a user-managed service account with the most minimal set of permissions. Learn how to deploy Cloud Run services with user-managed service accounts in the Cloud Run service identity documentation.
You can indicate which service account identity the Cloud Run deployment will assume like so:
gcloud run deploy -q "${SERVICE_NAME}" \
--image="${CONTAINER_IMAGE}" \
--allow-unauthenticated \
--service-account "${SERVICE_ACCOUNT_EMAIL}"
Currently, in beta, all Cloud Run services run as the default compute account (The same as the Google Compute Engine default service account).
The ability to run services as a different service account will be available in a future release.
I have a local OpenShift instance where I'm trying to install Sentry using Helm:
helm install --name sentry --wait stable/sentry
All pods deploy fine except the PostgreSQL pod, which is deployed as a dependency of Sentry.
This pod's initialization fails with a CrashLoopBackOff, and the logs show the following:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted
I'm not sure where to start fixing this issue so I can get Sentry deployed successfully with all its dependencies.
The issue was resolved by adding permissions to the service account that was being used to run commands on the pod.
In my case the default service account on OpenShift was being used.
I added the appropriate permissions to this service account using the CLI:
oc adm policy add-scc-to-user anyuid -z default --as system:admin
Also see: https://blog.openshift.com/understanding-service-accounts-sccs/
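A less permissive variant is to grant anyuid to a dedicated service account rather than to default; a sketch, where sentry-sa is a hypothetical name:
oc create serviceaccount sentry-sa
oc adm policy add-scc-to-user anyuid -z sentry-sa --as system:admin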
I'm running tests and pushing my docker images from CircleCi to Google Container Registry. At least I'm trying to.
Which roles does my service account require to be able to pull and push images to GCR?
Even as an account with the role "Project Owner", I get this error:
gcloud --quiet container clusters get-credentials $PROJECT_ID
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials)
ResponseError: code=403,
message=Required "container.clusters.get" permission(s)
for "projects/$PROJECT_ID/locations/europe-west1/clusters/$CLUSTER".
According to this doc, you will need the storage.admin role to push (read & write) and the storage.objectViewer role to pull (read only) from Google Container Registry.
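For example, to grant push access, a sketch with placeholder variables PROJECT_ID and SA_EMAIL:
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
--member="serviceAccount:${SA_EMAIL}" \
--role="roles/storage.admin"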
On the topic of not being able to get credentials as owner, you are likely using the service account of the machine instead of your owner account. Check which account you are using with the command:
gcloud auth list
You can change the service account the machine is using through the UI by first stopping the instance, then editing the service account. You can also use your Google credentials using the command:
gcloud auth login
Hope this helps
When you get a Required "___ANYTHING____" permission message:
go to Console -> IAM -> Roles -> Create new custom role [ROLE_NAME]
add container.clusters.get and/or whatever other permissions you need in order to get the whole thing going (I needed some rights for kubectl, for example)
assign that role (Console -> IAM -> Add+) to your service account; a gcloud equivalent is sketched after this list
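The same can be done from the command line; a sketch, where my_custom_role is a hypothetical role ID and PROJECT_ID/SA_EMAIL are placeholders:
gcloud iam roles create my_custom_role --project "${PROJECT_ID}" \
--permissions container.clusters.get
gcloud projects add-iam-policy-binding "${PROJECT_ID}" \
--member="serviceAccount:${SA_EMAIL}" \
--role="projects/${PROJECT_ID}/roles/my_custom_role"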
I'm seeing different behavior when authenticating a Google service account from inside versus outside a Docker image.
Outside. Succeeds.
C:\Users\Ben\AppData\Local\Google\Cloud SDK>gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com --key-file C:/Users/Ben/Dropbox/Google/MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370
Activated service account credentials for: [773889352370-compute@developer.gserviceaccount.com]
Run the Docker container, passing the .json key into the /tmp directory:
C:\Users\Ben\AppData\Local\Google\Cloud SDK>docker run -it -v C:/Users/Ben/Dropbox/Google/MeerkatReader-d77c0d6aa04f.json:/tmp/MeerkatReader-d77c0d6aa04f.json --rm -p "127.0.0.1:8080:8080" --entrypoint=/bin/bash gcr.io/cloud-datalab/datalab:local-20161227
From within Docker, confirm the file is there:
root@4a4a9314f15c:/tmp# ls
MeerkatReader-d77c0d6aa04f.json npm-24-b7aa1bcf npm-45-fd13ef7c npm-7-22ec336e
Run the same command as before. Fails.
root@4a4a9314f15c:/tmp# gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com --key-file MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370
ERROR: (gcloud.auth.activate-service-account) Failed to activate the given service account. Please ensure provided key file is valid.
What might cause this error? More broadly, what is the suggested strategy for passing credentials? I've tried this and it fails as well. I'm using the Cloud ML API and Cloud Vision, and I'd like to avoid a manual gcloud init at the beginning of every run.
EDIT: output of gcloud info:
root@7ff49b26484f:/# gcloud info --run-diagnostics
Network diagnostic detects and fixes local network connection issues.
Checking network connection...done.
Reachability Check passed.
Network diagnostic (1/1 checks) passed.
Confirmed the same behavior:
root@7ff49b26484f:/tmp# gcloud auth activate-service-account 773889352370-compute@developer.gserviceaccount.com --key-file MeerkatReader-d77c0d6aa04f.json --project api-project-773889352370
ERROR: (gcloud.auth.activate-service-account) Failed to activate the given service account. Please ensure provided key file is valid.
This is probably due to clock skew in the Docker VM. I debugged the activate-service-account function of the Google SDK and got the following error message:
There was a problem refreshing your current auth tokens: invalid_grant:
Invalid JWT: Token must be a short-lived token and in a reasonable timeframe
Please run:
$ gcloud auth login
to obtain new credentials, or if you have already logged in with a different account:
$ gcloud config set account ACCOUNT
to select an already authenticated account to use.
After rebooting the VM, it worked like a charm.
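To check for skew, compare the clock inside a container with the host's; a minimal sketch:
date -u                            # on the host
docker run --rm busybox date -u    # inside the Docker VM
If the two differ by more than a few minutes, the short-lived JWT will be rejected.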
Have you attempted to put the credential in the image from the beginning? Does that produce a similar outcome?
On the other hand, have you tried using --key-file /tmp/MeerkatReader-d77c0d6aa04f.json, since it appears you're putting the json file in /tmp?
You might also consider checking the network configuration inside the container and with docker from the outside.
In my case, I was using a workload identity provider, and I made a little mistake: I set the workload provider to the full name of the pool.
How it should be: projects/${project-number}/locations/global/workloadIdentityPools/my-pool/providers/${id-provider}
I also added the following command before my docker push, because it was required:
gcloud config set account ${{GCP_SERVICE_ACCOUNT}}
In addition, according to the docs (https://github.com/google-github-actions/auth#usage), my service account was missing the required roles:
roles/iam.serviceAccountTokenCreator
roles/iam.workloadIdentityUser
Edit: You may also need to grant your service account access to your Workload Identity Pool; you can do it by command or through the interface:
gcloud iam service-accounts add-iam-policy-binding SERVICE_ACCOUNT_EMAIL \
--role=roles/iam.workloadIdentityUser \
--member="MEMBER_ID"
Docs: https://cloud.google.com/iam/docs/using-workload-identity-federation#gcloud
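To verify you are pointing at the provider's full resource name rather than the pool's, you can print it; a sketch whose placeholders mirror the ones above:
gcloud iam workload-identity-pools providers describe "${PROVIDER_ID}" \
--project="${PROJECT_NUMBER}" \
--location=global \
--workload-identity-pool=my-pool \
--format="value(name)"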