I'm quite new to GitHub Actions and gcloud. I'm having trouble getting my GitHub CI/CD pipeline running because I can't push any Docker image to Google Container Registry due to access restrictions.
What I have done so far:
I have a Quarkus app hosted on GitHub
I used GitHub Actions to build the Maven project and the Docker image
I created a project in Google Cloud and added a service account which I use for the GitHub action. The login seems to work:
Run google-github-actions/setup-gcloud@master
/usr/bin/tar xz --warning=no-unknown-keyword -C /home/runner/work/_temp/ac85f67a-89fa-4eb4-8d30-3f6379124ec2 -f /home/runner/work/_temp/de491940-a4b1-4a15-bf0a-95d563e68362
/opt/hostedtoolcache/gcloud/342.0.0/x64/bin/gcloud --quiet config set project ***
Updated property [core/project].
Successfully set default project
/opt/hostedtoolcache/gcloud/342.0.0/x64/bin/gcloud --quiet auth activate-service-account github-actions@***.iam.gserviceaccount.com --key-file -
Activated service account credentials for: [github-actions@***.iam.gserviceaccount.com]
If I now try to push the Docker image, I get the following (expected) error message:
Run docker push "$GCR_HOSTNAME/$PROJECT_ID/$IMAGE:$IMAGE_TAG"
The push refers to repository [eu.gcr.io/***/***]
715ac1ae8693: Preparing
435cfe5f5775: Preparing
313d03d71d4d: Preparing
c5c8d86ccee1: Preparing
1b0f2238925b: Preparing
144a43b910e8: Preparing
4a2bc86056a8: Preparing
144a43b910e8: Waiting
4a2bc86056a8: Waiting
denied: Token exchange failed for project '***'. Caller does not have permission 'storage.buckets.get'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
Error: Process completed with exit code 1.
Next, I opened the Google Cloud Console and created a custom role (IAM & Admin -> Roles -> Create Role) with the necessary permissions.
Then I had trouble assigning my new custom role to the service account (IAM & Admin -> Service Accounts -> Manage Access -> Add member). I used the email address of the service account as "New members", but I could not choose the custom role I had just created. What am I missing here?
I read somewhere that I can also add service accounts as members (IAM & Admin -> IAM -> Add). Again I used the email address of the service account as "New members". This time I could choose my custom role. What's the difference from the first approach?
Anyway, if I try to run the GitHub action again, I now get the following error:
Run docker push "$GCR_HOSTNAME/$PROJECT_ID/$IMAGE:$IMAGE_TAG"
The push refers to repository [eu.gcr.io/***/***]
c4f14c9d3b6e: Preparing
fe78d438e8e2: Preparing
843fcae4a8f4: Preparing
dcf8cc80cedb: Preparing
45e8815b101d: Preparing
144a43b910e8: Preparing
4a2bc86056a8: Preparing
144a43b910e8: Waiting
4a2bc86056a8: Waiting
denied: Access denied.
Error: Process completed with exit code 1.
The error message is different, so I guess the permission for the service account somehow worked, but I still can't succeed. What steps did I miss?
Any help is highly appreciated. Thanks a lot!
One way to debug this is to create a key for the service account, configure your script or gcloud on your local host to use the service account as its credentials, and then try the push manually.
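A minimal sketch of that local check, assuming the service-account email from your log and a key file named key.json:

# create and download a key for the service account
gcloud iam service-accounts keys create key.json \
  --iam-account "github-actions@${PROJECT_ID}.iam.gserviceaccount.com"
# make it the active gcloud credential
gcloud auth activate-service-account --key-file key.json
# register gcloud as Docker's credential helper, then push
gcloud auth configure-docker --quiet
docker push "eu.gcr.io/${PROJECT_ID}/${IMAGE}:${IMAGE_TAG}"

If the push fails the same way locally, the problem is purely IAM rather than the GitHub Actions setup.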
One immediate problem may be that you're not authenticating against Google Container Registry (GCR). GCR implements Docker's registry API, and you'll need to use one of the supported mechanisms to authenticate before you can interact with the registry.
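For example, in the workflow you could add one of these before the docker push step (both are sketches; $GCP_SA_KEY is a hypothetical secret holding the JSON key):

# option A: after setup-gcloud, register gcloud as Docker's credential helper
gcloud auth configure-docker --quiet

# option B: log in to the registry host directly with the JSON key
echo "$GCP_SA_KEY" | docker login -u _json_key --password-stdin https://eu.gcr.io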
Notes:
I think you don't need to create a custom role. Create a service account specifically for the CI/CD job and grant it the minimum set of roles needed, including one that contains storage.buckets.get; I think you can start with roles/storage.admin and perhaps refine later. You then have 2 options for where to grant the role:
You can grant a role such as roles/storage.admin on the Project, in which case the permission applies to all Cloud Storage resources in it, or on a specific Bucket, in which case the permission applies only to that bucket and its objects (a sketch of both follows these notes).
Service accounts have a dual role in GCP: they are both an identity and a resource (that can be used by other identities). It can be confusing.
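A sketch of the two grant scopes, assuming the service-account email from the question; eu.gcr.io is backed by the bucket eu.artifacts.PROJECT_ID.appspot.com:

# project-level: applies to every bucket in the project
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:github-actions@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role roles/storage.admin

# bucket-level: applies only to GCR's backing bucket
gsutil iam ch \
  "serviceAccount:github-actions@${PROJECT_ID}.iam.gserviceaccount.com:roles/storage.admin" \
  "gs://eu.artifacts.${PROJECT_ID}.appspot.com"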
Related
I'm writing because I have been stuck for 2 days trying to push my work to Google Cloud Run.
When I enter the command: docker push $MULTI_REGION/$PROJECT/$IMAGE
The result is:
Using default tag: latest
The push refers to repository [eu.gcr.io/le-wagon-ds/image-name]
b028f6321f0b: Preparing
f0049b6a2a38: Preparing
e4105b8f307b: Preparing
292a2a280e93: Preparing
bacec458bca2: Preparing
ddd6db8a3bdc: Waiting
75b1484c54ef: Waiting
b2fa61b044ea: Waiting
aee88de63e6f: Waiting
13e8b36cbbec: Waiting
821bf6371720: Waiting
8d3476a52917: Waiting
40572195ea84: Waiting
024595e1462f: Waiting
denied: Token exchange failed for project 'le-wagon-ds'. Caller does not have permission 'storage.buckets.create'. To configure permissions, follow instructions at: https://cloud.google.com/container-registry/docs/access-control
And unfortunately, the push failed... So I went to the 'Service Accounts' section, then 'Permissions', and I granted almost all the roles to my principal 'demange.louis@hotmail.fr', that is to say:
Cloud Composer v2 API Service Agent Extension
Editor
Owner
Security Admin
Security Reviewer
Service Account Token Creator
Velostrata Manager
Viewer
But it is not possible to grant 'Storage Admin', which is what I need.
I checked in my terminal with gcloud auth list that 'demange.louis@hotmail.fr' was indeed the principal being used when trying to push the work.
I should mention that I didn't create a bucket, because I don't think one is necessary to push my work. If you have an idea of the problem, please write me! :) I should also mention that I am new to GCP, so my knowledge is really limited at the moment.
Thanks in advance.
Best regards,
Louis Demange
I have a parent project that has an Artifact Registry repository configured for Docker.
A child project has a Cloud Run service that needs to pull its image from the parent.
The child project also has a service account that is authorized to access the repository via the IAM role roles/artifactregistry.writer.
When I try to start my service I get an error message:
Google Cloud Run Service Agent must have permission to read the image,
europe-west1-docker.pkg.dev/test-parent-project/docker-webank-private/node:custom-1.
Ensure that the provided container image URL is correct and that the
above account has permission to access the image. If you just enabled
the Cloud Run API, the permissions might take a few minutes to
propagate. Note that the image is from project [test-parent-project], which
is not the same as this project [test-child-project]. Permission must be
granted to the Google Cloud Run Service Agent from this project.
I have tested connecting manually with docker login using the service account's private key, and the docker pull command works perfectly from my PC.
cat $GOOGLE_APPLICATION_CREDENTIALS | docker login -u _json_key --password-stdin https://europe-west1-docker.pkg.dev
> Login succeeded
docker pull europe-west1-docker.pkg.dev/bfb-cicd-inno0/docker-webank-private/node:custom-1
> OK
The service account is also attached to the Cloud Run service.
You have 2 types of service account used in Cloud Run:
The Google Cloud Run API service account
The Runtime service account.
In your explanation and your screenshot, you talk about the runtime service account: the identity that the service will use when it runs and calls Google Cloud APIs.
BUT before running, the service must be deployed. This time, it's a Google Cloud Run internal process that runs to pull the container, create a revision and do all the required internal stuff. To do that job, a service account also exists; it's named the "service agent".
In the IAM console, you can find it; the format is the following:
service-<PROJECT_NUMBER>@serverless-robot-prod.iam.gserviceaccount.com
Don't forget to tick the checkbox in the upper right corner to include Google-managed service accounts.
If you want this deployment service account to be able to pull images from another project, grant the correct permission to it, not to the runtime service account.
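As a sketch, with the child project's number as a placeholder (roles/artifactregistry.reader is enough for pulling):

# grant on the whole parent project...
gcloud projects add-iam-policy-binding test-parent-project \
  --member "serviceAccount:service-CHILD_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com" \
  --role roles/artifactregistry.reader

# ...or scoped to the one repository from the error message
gcloud artifacts repositories add-iam-policy-binding docker-webank-private \
  --project test-parent-project --location europe-west1 \
  --member "serviceAccount:service-CHILD_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com" \
  --role roles/artifactregistry.reader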
I have the following roles in Google Cloud:
BigQuery Admin
Cloud Functions Admin
Cloud Scheduler Admin
Compute Admin
Editor
Source Repository Administrator
Storage Admin
I am creating a Cloud Run container from a Cloud Source Repository, but I'm getting the following error.
ERROR: build step 2 "gcr.io/google.com/cloudsdktool/cloud-sdk:slim"
failed: step exited with non-zero status: 1 ERROR Finished Step #2 -
"Deploy" Step #2 - "Deploy": ERROR: (gcloud.run.services.update)
PERMISSION_DENIED: Permission 'run.services.get' denied on resource
'namespaces/buypower-mobile-app/services/test-repo' (or resource may
not exist).
If you're using Cloud Build to deploy the Cloud Run service, then the error you’re getting is because the Service Account used by Cloud Build does not have sufficient permissions to update the Cloud Run service, according to the official documentation.
The specific error is that permission run.services.get is denied. This permission is part of both roles/run.admin and roles/run.developer. Both roles also include run.services.update, which the deployment will need.
To get things working, you will need to add one of those roles to the service account that is being used by Cloud Build.
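A sketch of that grant; the default Cloud Build service account is PROJECT_NUMBER@cloudbuild.gserviceaccount.com, and PROJECT_ID/PROJECT_NUMBER are placeholders here:

gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role roles/run.admin

Deploying from Cloud Build usually also requires letting it act as the service's runtime service account (RUNTIME_SA_EMAIL is a placeholder):

gcloud iam service-accounts add-iam-policy-binding "$RUNTIME_SA_EMAIL" \
  --member "serviceAccount:${PROJECT_NUMBER}@cloudbuild.gserviceaccount.com" \
  --role roles/iam.serviceAccountUser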
Along with @DazWilkin's answer, I was having difficulty determining whether the service account actually had the run.services.update permission, especially since I was using GitHub Actions with Workload Identity Pools for auth and impersonating a service account.
I'd recommend doing the following:
Check if your auth is working correctly:
- id: 'auth'
  name: 'Authenticate to Google Cloud'
  uses: 'google-github-actions/auth@v0'
  with:
    workload_identity_provider: 'projects/xxxxx/locations/global/workloadIdentityPools/my_pool/providers/my_provider'
    service_account: 'my-service-account@{PROJECT_ID}.iam.gserviceaccount.com'
    # this will make this step fail if auth fails
    token_format: 'access_token'
Check if the service account used above (and in google-github-actions/deploy-cloudrun@v0) has the run.services.update permission. There are 2 places you can check:
a) Policy Troubleshooter - use the service account, select your project, and enter the permission. This will immediately tell you whether you have the permission or not.
b) Policy Analyzer - create a custom query, use Permission as the parameter, click Continue and Run Query. This will show you all the principals that have the permission you're looking for.
If your service account doesn't have the correct permission, you need to add it from the IAM & Admin -> IAM page.
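If you prefer the CLI, I believe the same check can be done with gcloud's policy troubleshooter (the service-account name is a placeholder):

gcloud policy-troubleshoot iam \
  "//cloudresourcemanager.googleapis.com/projects/${PROJECT_ID}" \
  --principal-email "my-service-account@${PROJECT_ID}.iam.gserviceaccount.com" \
  --permission run.services.update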
I'm running tests and pushing my Docker images from CircleCI to Google Container Registry. At least I'm trying to.
Which roles does my service account require to be able to pull and push images to GCR?
Even as an account with the role "Project Owner", I get this error:
gcloud --quiet container clusters get-credentials $PROJECT_ID
Fetching cluster endpoint and auth data.
ERROR: (gcloud.container.clusters.get-credentials)
ResponseError: code=403,
message=Required "container.clusters.get" permission(s)
for "projects/$PROJECT_ID/locations/europe-west1/clusters/$CLUSTER".
According to this doc, you will need the roles/storage.admin role to push (read & write) and roles/storage.objectViewer to pull (read only) from Google Container Registry.
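A sketch of those two grants on GCR's backing bucket; the bucket for plain gcr.io follows the convention artifacts.PROJECT_ID.appspot.com, and the service-account email is a placeholder:

# pull only
gsutil iam ch \
  "serviceAccount:circleci@${PROJECT_ID}.iam.gserviceaccount.com:roles/storage.objectViewer" \
  "gs://artifacts.${PROJECT_ID}.appspot.com"

# push and pull
gsutil iam ch \
  "serviceAccount:circleci@${PROJECT_ID}.iam.gserviceaccount.com:roles/storage.admin" \
  "gs://artifacts.${PROJECT_ID}.appspot.com"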
On the topic of not being able to get credentials as owner, you are likely using the service account of the machine instead of your owner account. Check which account you are using with the command:
gcloud auth list
You can change the service account the machine is using through the UI by first stopping the instance, then editing the service account. You can also use your Google credentials using the command:
gcloud auth login
Hope this helps
When you get a Required "___ANYTHING____" permission message:
go to Console -> IAM -> Roles -> Create new custom role [ROLE_NAME]
add container.clusters.get and/or whatever other permissions you need to get the whole thing going (I needed some rights for kubectl, for example)
assign that role (Console -> IAM -> Add+) to your service account
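The same steps from the CLI might look like this (role ID and service-account email are placeholders):

# create the custom role with the permissions you collected
gcloud iam roles create ci_minimal --project "$PROJECT_ID" \
  --title "CI minimal" \
  --permissions container.clusters.get

# bind it to the service account
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member "serviceAccount:ci@${PROJECT_ID}.iam.gserviceaccount.com" \
  --role "projects/${PROJECT_ID}/roles/ci_minimal"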
I'm trying to push an image to the container registry via Jenkins. It was working at first, but now I get "access denied":
docker -- push gcr.io/xxxxxxx-yyyyy-138623/myApp:master.1
The push refers to a repository [gcr.io/xxxxxxx-yyyyy-138623/myApp]
bdc3ba7fdb96: Preparing
5632c278a6dc: Waiting
denied: Access denied.
The Jenkinsfile looks like:
sh("gcloud docker --authorize-only")
sh("docker -- push gcr.io/xxxxxxx-yyyyy-138623/hotelpro4u:master.1")
Remarks:
Jenkins is running in Google Cloud
If I try from Google Cloud Shell or from my computer, it works
I followed this tutorial: https://github.com/GoogleCloudPlatform/continuous-deployment-on-kubernetes
I've been stuck for 12 hours... I need help.
That error means that the GKE node is not authorized to push to the GCS bucket that is backing your repository.
This could be because:
The cluster does not have the correct scopes to authenticate to GCS. Did you create the cluster with --scopes storage-rw?
The service account that the cluster is running as does not have permissions on the bucket. Check the IAM & Admin section of your project to make sure that the service account has the necessary role. You can check both of these from the CLI; see the sketch below.
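A sketch of that check, assuming a zonal cluster (zone is a placeholder):

# print the scopes and service account of the cluster's node config
gcloud container clusters describe "$CLUSTER" --zone "$ZONE" \
  --format "value(nodeConfig.oauthScopes, nodeConfig.serviceAccount)"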
Building on @cj-cullen's answer above, you have two options:
Destroy the node pool and then, from the CLI, recreate it with the missing https://www.googleapis.com/auth/projecthosting,storage-rw scopes (see the sketch after this list). The GKE console does not let you change the default scopes when creating a node pool.
Stop each instance in your cluster. In the console, click the edit button for the instance. You should now be able to add the appropriate https://www.googleapis.com/auth/projecthosting,storage-rw scopes.
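A sketch of the first option, with pool names and zone as placeholders:

gcloud container node-pools delete default-pool --cluster "$CLUSTER" --zone "$ZONE"
gcloud container node-pools create push-pool --cluster "$CLUSTER" --zone "$ZONE" \
  --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"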