How can my Cloud Run service accept traffic only from Cloud Scheduler and the Google Directory API - google-cloud-run

How do I set up Cloud Run to only accept requests from Cloud Scheduler and the Directory API?
I am using Flask on Cloud Run with two primary functions:
Send a watch (HTTP) request (triggered by Cloud Scheduler) to the Google Directory API
Receive updates (HTTP requests) from the Google Directory API
Currently my application on Cloud Run works without issue when "allUsers" is given the permission "Invoke Cloud Run."
To make it more secure:
I removed "allUsers" from the Cloud Run permissions
Gave Cloud Scheduler a service account that has the ability to invoke the Cloud Run service (sketched below)
Added the Directory API service account to the Cloud Run service and gave it the ability to invoke it
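For reference, granting that invoker permission can be done with a command along these lines (service, region, project, and service account names are placeholders):
gcloud run services add-iam-policy-binding <my-cloud-run-service> \
  --region=<region> \
  --member="serviceAccount:<scheduler-sa>@<project-id>.iam.gserviceaccount.com" \
  --role="roles/run.invoker"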
Cloud Scheduler invokes the Cloud Run service without issue, but when an update is received from the Directory API, I keep getting: The request was not authenticated. Either allow unauthenticated invocations or set the proper Authorization header. Read more at https://cloud.google.com/run/docs/securing/authenticating
Documentation for the Directory API requests I am using: https://developers.google.com/admin-sdk/directory/v1/reference/users/watch

Related

GCP Cloud Run Cannot Pull Image from Artifact Registry in Other Project

I have a parent project that has an Artifact Registry repository configured for Docker.
A child project has a Cloud Run service that needs to pull its image from the parent.
The child project also has a service account that is authorized to access the repository via the IAM role roles/artifactregistry.writer.
When I try to start my service I get an error message:
Google Cloud Run Service Agent must have permission to read the image,
europe-west1-docker.pkg.dev/test-parent-project/docker-webank-private/node:custom-1.
Ensure that the provided container image URL is correct and that the
above account has permission to access the image. If you just enabled
the Cloud Run API, the permissions might take a few minutes to
propagate. Note that the image is from project [test-parent-project], which
is not the same as this project [test-child-project]. Permission must be
granted to the Google Cloud Run Service Agent from this project.
I have tested connecting manually with docker login using the service account's private key, and the docker pull command works perfectly from my PC:
cat $GOOGLE_APPLICATION_CREDENTIALS | docker login -u _json_key --password-stdin https://europe-west1-docker.pkg.dev
> Login succeeded
docker pull europe-west1-docker.pkg.dev/bfb-cicd-inno0/docker-webank-private/node:custom-1
> OK
The service account is also attached to the Cloud Run service.
There are two types of service accounts used in Cloud Run:
The Google Cloud Run API service account
The runtime service account.
In your explanation and your screenshot, you talk about the runtime service account: the identity that the service uses when it runs and calls Google Cloud APIs.
BUT before running, the service must be deployed. At that point, an internal Google Cloud Run process runs to pull the container, create a revision, and do all the required internal work. A separate service account exists for that job; it's called the "service agent".
You can find it in the IAM console; the format is the following:
service-<PROJECT_NUMBER>@serverless-robot-prod.iam.gserviceaccount.com
Don't forget to tick the checkbox in the upper right corner to include Google-managed service accounts.
If you want this deployment service account to be able to pull images from another project, grant the correct permission to it, not to the runtime service account.
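For example, granting pull access on the parent repository to the child project's service agent might look like the following sketch (the child project number is a placeholder, and roles/artifactregistry.reader is assumed as the read role):
gcloud artifacts repositories add-iam-policy-binding docker-webank-private \
  --project=test-parent-project \
  --location=europe-west1 \
  --member="serviceAccount:service-<CHILD_PROJECT_NUMBER>@serverless-robot-prod.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"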

Programmatically check if Cloud Run domain mapping is done

I'm developing a service which will have a subdomain for each customer. So far I've set a DNS rule on Google Domains as
* | CNAME | 3600 | ghs.googlehosted.com.
and then I add the mapping for each subdomain in the Cloud Run console. I want to do all this programmatically every time a new user registers.
The DNS rule will handle automatically any new subdomain, and to map it to the service I'll use the gcloud command:
gcloud beta run domain-mappings create --service frontend --domain sub.domain.com
Now, how can I check when the Cloud Run provisioning is done, so that I can notify the customer that the platform is ready to use? I could cron the command gcloud beta run domain-mappings describe --domain sub.domain.com every minute, parse the JSON output, and check whether the status is done. It's expensive, but it should work.
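A polling loop along those lines might look like this (a rough sketch, assuming jq is available and that the mapping's JSON exposes a Ready condition under status.conditions):
# Poll until the domain mapping reports a Ready condition with status True.
until gcloud beta run domain-mappings describe --domain sub.domain.com --format=json \
  | jq -e '.status.conditions[]? | select(.type == "Ready" and .status == "True")' > /dev/null; do
  sleep 60
done
echo "Domain mapping provisioned"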
The problem is that even when the gcloud CLI or the web console marks the provisioning as done, the platform isn't reachable for another 5-10 minutes, resulting in an ERR_CONNECTION_REFUSED error. The service logs show that a request to the subdomain is being made, but somehow it won't serve it.
I ended up using a load balancer as suggested. I followed the doc "Setting up a load balancer with Cloud Run, App Engine, or Cloud Functions"; the only thing I did differently is that I provided my own wildcard certificate (thanks to Let's Encrypt and certbot).
Now I can just use the Google Domains' API to instantly create a subdomain.

Cloud Scheduler has Permission Denied when attempting to run a Cloud Run job

I have created a simple Cloud Run job. I am able to trigger this code via a curl command:
curl -H "Authorization: Bearer $(gcloud auth print-identity-token)" https://sync-<magic>.a.run.app
(Obviously <magic> is actually something else)
Cloud Run is configured with ingress set to Allow All Traffic and with authentication required.
I followed this documentation: https://cloud.google.com/run/docs/triggering/using-scheduler
And created a service account, granted it the Cloud Run Invoker role, and then set up an HTTP scheduled job to GET the same URL I tested with curl. I have Add OIDC Token selected, and I provide the service account created above and the Audience, which is the same URL I used with curl.
When I attempt to trigger this job (or when it triggers based on its cron schedule), it fails with:
{ "status": "PERMISSION_DENIED", "#type": "type.googleapis.com/google.cloud.scheduler.logging.AttemptFinished", "targetType": "HTTP", "jobName": "projects/<project>/locations/<region>/jobs/sync", "url": "https://sync-<magic>.a.run.app/" }
Again <project>, <region> and <magic> have real values.
I tried using service-YOUR_PROJECT_NUMBER@gcp-sa-cloudscheduler.iam.gserviceaccount.com, with YOUR_PROJECT_NUMBER updated appropriately, as the service account that runs the scheduled job. It gives the same error.
Any advice on how to debug this would be greatly appreciated!
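For reference, the service's current IAM policy and the job's configuration can be inspected with commands like these (same placeholders as above):
gcloud run services get-iam-policy sync --region=<region>
gcloud scheduler jobs describe sync --location=<region>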
Here is what I did, which solved the issue altogether; now I get the success flag when running a secured Cloud Run service via a Cloud Scheduler job:
Create your service on Cloud Run - let's call it "hello" - and secure it by removing the "allUsers" permission from the list of principals. You should now get an error when going to the endpoint: Error: Forbidden
Your client does not have permission to get URL / from this server.
Create an IAM service account for Cloud Scheduler - let's call it "cloud-scheduler" - which gives you: cloud-scheduler@project-id.iam.gserviceaccount.com. Now comes the important part:
Give your SA the ability to run scheduler jobs by adding the
Cloud Run Invoker and Cloud Scheduler Job Runner roles
Create your Cloud Scheduler job and add the new SA to it according to Google's procedure:
Auth header: Add OIDC token
Service account: cloud-scheduler@project-id.iam.gserviceaccount.com
Audience: https://Service.url.from.cloud.run.service/
Add an additional principal to your Cloud Run service that grants your SA the Cloud Run Invoker role (the gcloud equivalents of these steps are sketched below)
Run your scheduler and voilà - all green!
Enjoy
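The gcloud equivalents of the service-account and job steps could look roughly like this (project ID, region, schedule, and service URL are placeholders):
# Create the service account for Cloud Scheduler
gcloud iam service-accounts create cloud-scheduler --project=project-id

# Allow it to invoke the "hello" service
gcloud run services add-iam-policy-binding hello \
  --region=us-central1 \
  --member="serviceAccount:cloud-scheduler@project-id.iam.gserviceaccount.com" \
  --role="roles/run.invoker"

# Create the scheduler job with an OIDC token for that service account
gcloud scheduler jobs create http hello-job \
  --location=us-central1 \
  --schedule="*/10 * * * *" \
  --uri="https://Service.url.from.cloud.run.service/" \
  --http-method=GET \
  --oidc-service-account-email="cloud-scheduler@project-id.iam.gserviceaccount.com" \
  --oidc-token-audience="https://Service.url.from.cloud.run.service/"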
I tried creating a new service account and gave it the Cloud Run Invoker role, and I disabled the Cloud Scheduler API and re-enabled it.
The only thing that worked for me was changing the Auth header from Add OIDC token to None.
For some reason, Cloud Scheduler changed None back to Add OIDC token and then triggered Cloud Run normally.

Enabling Scheduler for Spring Cloud Data Flow server in PCF

We are using PCF to run our applications. To build data pipelines, we thought of leveraging the Spring Cloud Data Flow server, which is provided as a service inside PCF.
We created a Data Flow server by supplying SQL Server and Maven repo details; for the scheduler, we didn't provide any extra parameters while creating the service, so it is disabled by default.
I got some info from here on how to enable the scheduler: https://docs.spring.io/spring-cloud-dataflow/docs/current/reference/htmlsingle/#_enabling_scheduling
So I tried updating the existing Data Flow service with the below command:
cf update-service my-service -c '{"spring.cloud.dataflow.features.schedules-enabled":true}'
The Data Flow server restarted, but the scheduler is still not enabled to schedule jobs.
When I check the GET /about endpoint on the Data Flow server, I am still getting
"schedulesEnabled": false
in the response body.
I am not sure why the SCDF service isn't updated with the schedules-enabled property even after you update the service (it is expected to be enabled after that).
Irrespective of that, you can try setting the following as an environment property on the SCDF service instance as well:
SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED: true
Once the scheduler is enabled, you need to make sure that you have the following properties set correctly as well:
SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES: <all-the-services-for-tasks-along-with-the-scheduler-service-instance>
SPRING_CLOUD_SCHEDULER_CLOUDFOUNDRY_SCHEDULER_URL: <scheduler-url>
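Assuming the SCDF service broker accepts these keys as update parameters (the task service names and scheduler URL below are placeholders), that could look like:
cf update-service my-service -c '{
  "SPRING_CLOUD_DATAFLOW_FEATURES_SCHEDULES_ENABLED": true,
  "SPRING_CLOUD_DEPLOYER_CLOUDFOUNDRY_TASK_SERVICES": "my-sql-service,my-scheduler",
  "SPRING_CLOUD_SCHEDULER_CLOUDFOUNDRY_SCHEDULER_URL": "https://scheduler.sys.example.com"
}'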

How to invoke a Cloud Run app without having to add the Authorization Token

I have a Cloud Run app deployed that is for internal use only.
Therefore only users of our cluster should have access to it.
I added a permission for allAuthenticatedUsers, giving them the Cloud Run Invoker role.
The problem is that those users (including me) now have to add an Authorization bearer header every time they want to access the app.
This is what Cloud Run suggests to do (not very useful when you simply want to visit a frontend app):
curl -H \
"Authorization: Bearer $(gcloud auth print-identity-token)" \
https://importer-controlroom-frontend-xl23p3zuiq-ew.a.run.app
I wonder why it is not possible for GCP to recognize me as an authorized member automatically. I can access the cluster, but I still have to add the Authorization header to access the Cloud Run app as an authorized member? I find this very inconvenient.
Is there any way to make accessing the deployed Cloud Run app more convenient?
PS: I do not want to place the app in our cluster - so only fully managed Cloud Run is an option here.
You currently can't do that without the Authorization header on Cloud Run.
The allAuthenticatedUsers member means any Google account (user or service account), so you need to add the identity token to prove you're one.
If you want to make your application public, read this doc.
But this is a timely request! I am currently running an experiment that lets you make requests to http://hello and automatically get routed to the full domain + automatically get the Authorization header injected! (This is for communication between Cloud Run applications.)
GCP now offers a proxy tool that makes this easier, although it's in beta as of this writing.
It's part of the gcloud suite, you can run:
gcloud beta run services proxy $servicename --project $project --region $region
It launches a web server on localhost:8080 that forwards all requests to the targeted service, injecting the user's GCP identity token into every request.
Of course, this can only be used locally.
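For the service above, that might look like this (project and region are placeholders):
gcloud beta run services proxy importer-controlroom-frontend \
  --project my-project --region europe-west1

# then, in another terminal:
curl http://localhost:8080/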
