Unable to deploy a Cloud Run Service using an image from another project - google-cloud-run

I'm getting this error when trying to deploy a Cloud Run service using an image from another project in the same organization:
"
Google Cloud Run Service Agent must have permission to read the image, gcr.io/my-builds/consultoriaweb@sha256:8c655b2bab..... Ensure that the provided container image URL is correct and that the above account has permission to access the image. If you just enabled the Cloud Run API, the permissions might take a few minutes to propagate. Note that the image is from project [my-builds], which is not the same as this project [my-webapp]. Permission must be granted to the Google Cloud Run Service Agent from this project.
"
I am selecting the image to deploy from the Container Registry of the my-builds project using the Google Cloud Console web interface.
I already added IAM permissions on the [my-builds] project; I tried both:
[my-webapp-project-number]-compute@developer.gserviceaccount.com => role Compute Image User
[my-webapp-project-number]@cloudservices.gserviceaccount.com => role Compute Image User
Google's documentation says that I should just grant the roles/compute.imageUser role to
[my-webapp-project-number]@cloudservices.gserviceaccount.com on the my-builds project, but I can't get it to work.
That documentation, Using Images from Other Projects, is linked below, but I don't know if it applies to Cloud Run.
https://cloud.google.com/deployment-manager/docs/configuration/using-images-from-other-projects-for-vm-instances#granting_access_to_images
Thanks in advance for any help with this.

You are mixing up different things: a container image isn't a Compute Engine boot disk image.
So you need to grant the Cloud Run service agent's service account access to the image in your other project. You can find the documentation on granting access to GCR images here.
Then you need your Cloud Run service agent service account, which has this pattern:
service-<projectNumber>@serverless-robot-prod.iam.gserviceaccount.com
Putting both together: go to the console of the project hosting the container image, open the IAM page, and click Add.
Add the Cloud Run service agent service account as a member.
Grant it the role Storage Object Viewer.
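If you prefer the CLI over the console, a minimal sketch of the same grant (the project ID and project number below are placeholders you need to replace):
# Grant the Cloud Run service agent of my-webapp read access to GCR images in my-builds.
# GCR images live in a Cloud Storage bucket, so Storage Object Viewer is enough.
gcloud projects add-iam-policy-binding my-builds \
  --member="serviceAccount:service-MY_WEBAPP_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"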

Thank you. Got it to work!
I found many different resources/docs about giving Cloud Run permission to pull container images from other projects, so I tested them to find out what is actually needed (a rough CLI equivalent is sketched below):
For Artifact Registry:
members: serviceAccount:service-<projectNumber>@serverless-robot-prod.iam.gserviceaccount.com
role: roles/artifactregistry.reader
For Container Registry:
members: serviceAccount:service-<projectNumber>@serverless-robot-prod.iam.gserviceaccount.com
role: roles/storage.objectViewer
Thank you again.
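For completeness, here is a rough CLI sketch of those two bindings (project IDs, repository name, location, and project number are placeholders):
# Artifact Registry: grant reader on the repository in the project hosting the images
gcloud artifacts repositories add-iam-policy-binding MY_REPO \
  --project=my-builds --location=us-central1 \
  --member="serviceAccount:service-PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com" \
  --role="roles/artifactregistry.reader"

# Container Registry: grant object viewer on the project hosting the images
gcloud projects add-iam-policy-binding my-builds \
  --member="serviceAccount:service-PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"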

I got the same error today for one of my projects, found the official docs, and wanted to share the steps here.
In the console, open the IAM page of the project that contains your Cloud Run service.
Check the checkbox labelled Include Google-provided role grants.
Copy the email of the Cloud Run service agent. It has the suffix @serverless-robot-prod.iam.gserviceaccount.com (you can just press Ctrl+F and search for @serverless-robot-prod.iam.gserviceaccount.com).
(You should find the account from this page, because the project number is not the same as the project ID; see the lookup sketch below.)
Open the IAM page of the project that owns the container registry you want to use.
Click Add to add a new principal.
In the New principals text box, paste the email of the service account that you copied earlier.
In the Select a role dropdown list: if you are using Container Registry, select the role Storage -> Storage Object Viewer; if you are using Artifact Registry, select the role Artifact Registry -> Artifact Registry Reader.
Deploy the container image to the project that contains your Cloud Run service.
You can follow the official docs from HERE
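Since the service agent email uses the project number rather than the project ID (as noted above), you can also look it up from the command line; a small sketch, assuming my-webapp is the project running Cloud Run:
# Print the project number of the Cloud Run project
gcloud projects describe my-webapp --format='value(projectNumber)'
# The service agent is then service-<that number>@serverless-robot-prod.iam.gserviceaccount.com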

To answer @mzafer, here is the Terraform code I use to do it:
resource "google_project_iam_member" "run_gcr" {
  # the project hosting the container images
  project = local.build_project
  role    = "roles/storage.objectViewer"
  # Cloud Run service agent of the project that runs the service
  member  = "serviceAccount:service-${google_project.main.number}@serverless-robot-prod.iam.gserviceaccount.com"
}

Related

Cloud Build docker image unable to write files locally - fail to open file... permission denied

Using Service Account credentials, I can successfully run Cloud Build to spin up gsutil, move files from GCS into the instance, and then copy them back out. All is good.
One of the Cloud Build steps successfully loads a Docker image from an outside source; it loads fine and reports its own help info successfully. But when run, it fails with the error message:
"fail to open file "..intermediary_work_product_file." permission denied.
For the app I'm running in this step, this error is typically produced when the file cannot be written to its default location. I've set dir = "/workspace" to confirm the default.
So how do I grant read/write permissions to the app running inside a Cloud Build step so it can write its own intermediary work product to the local folders? The Cloud Build itself runs fine using Service Account credentials. I have tried adding more permissions, including Storage, Cloud Run, Compute Engine, and App Engine admin roles, but I get the same error.
I assumed that the credentials used to create the instance are passed to the runtime. I have dug deep into the GCP Cloud Build documentation and examples, but found no answers.
There must be something fundamental I'm overlooking.
This problem was resolved by changing the Dockerfile USER, as suggested by @PRAJINPRAKASH in this helpful answer: https://stackoverflow.com/a/62218160/4882696
I had tried to solve this by systematically testing GCP services and role permissions. All Service Account credentials tested were able to create container instances and run gcloud or gsutil fine. However, the custom apps created containers but failed when doing a local write, even to the default shared /workspace.
When using GCP Cloud Build, local read/write permissions do not "pass through" from the default service account to the runtime instance. The documentation is not clear on this.
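If it helps, before changing the Dockerfile USER you can check which user a given image runs as by default; a quick sketch (the image name here is a placeholder):
# Show the USER configured in the image metadata (empty output means root)
docker inspect --format '{{.Config.User}}' gcr.io/my-project/my-tool
# Or override the entrypoint with `id` to see the effective uid/gid at runtime
docker run --rm --entrypoint id gcr.io/my-project/my-tool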
I encountered this problem while building my React app with Cloud Build; I wasn't able to install node-sass globally...
So I tried to chown the /usr directory recursively to nobody:nogroup, and it worked. I have no idea whether there is a better solution, but the important thing is that it fixed my issue.
I had a similar problem; the snippet I was looking for in my cloudbuild manifest was:
- id: perms
  name: "gcr.io/cloud-builders/git"
  entrypoint: "chmod"
  args: ["-v", "-R", "a+rw", "."]   # make the working directory writable by any user
  dir: "path/to/some/dir"

When you assign service account to a Cloud Run service, what does exactly happen?

I am trying to understand what assigning a service account to a Cloud Run service actually does, in order to improve the security of the containers. I have multiple processes running within a Cloud Run service, and not all of them need access to the project's resources.
A more specific question I have in mind is:
Would I be able to create multiple users and run some processes as a user that does not have access to the service account, or does every user have access to the service account?
I ran a small experiment on a VM instance (I guess this will be a similar case with Cloud Run) where I created a new user; after creation, it wasn't authorized to use the service account of the instance. However, I am not sure whether there is a way to authorize it, which would make my approach insecure.
Thank you.
EDIT
To perform the test I created a new OS user and ran "gcloud auth list" from the new user account. However, I should have made a curl request instead; then I would have been able to retrieve credentials, as pointed out by an answer below.
Your question is not very clear, but I will try to provide several inputs.
When you run a service on Cloud Run, you have 2 choices for defining its identity:
Either it's the Compute Engine default service account that is used (the default, if you specify nothing),
Or it's the service account that you specify at deployment.
This service account applies to the whole Cloud Run service (you can have up to 1000 different services per project).
Now, when you run your container, the service account is not really loaded into the container (it's the same with Compute Engine), but there is an API available for requesting the authentication data of this service account. It's called the metadata server.
It's not restricted to particular OS users (I don't know how you performed your test on Compute Engine!); a simple curl is enough to get the data.
This metadata server is what your libraries use, for example, when you rely on the "default credentials". The gcloud SDK also uses it.
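For example, from inside a Cloud Run container (or a Compute Engine VM), something like the following queries the metadata server; these are the standard endpoints, but verify the exact paths against the docs:
# Ask the metadata server for the identity and an access token of the runtime service account
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/email"
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"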
I hope you have a better view now. If not, add details in your question or in the comments.
The keyword that's missing from guillaume’s answer is "permissions".
Specifically, if you don't assign a service account, Cloud Run will use the Compute Engine default service account.
This default account has the Editor role on your project (in other words, it can do nearly anything in your GCP project, short of creating new service accounts and granting them access, and maybe deleting the GCP project). If you use the default service account and your container is compromised, you're probably in trouble. ⚠️
However, if you specify a new --service-account, by default it has no permissions. You have to bind to it the roles or permissions (e.g. GCS Object Reader, Pub/Sub Publisher...) that your application needs.
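A minimal sketch of that approach with gcloud (the service, account, project, and role names are placeholders; grant only the roles your app really needs):
# Create a dedicated, initially permission-less service account
gcloud iam service-accounts create my-run-sa --display-name="Cloud Run runtime SA"

# Grant it only what the service needs, e.g. read access to GCS objects
gcloud projects add-iam-policy-binding MY_PROJECT \
  --member="serviceAccount:my-run-sa@MY_PROJECT.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"

# Deploy the service with that identity instead of the default compute SA
gcloud run deploy my-service --image=gcr.io/MY_PROJECT/my-image \
  --service-account=my-run-sa@MY_PROJECT.iam.gserviceaccount.com --region=us-central1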
Just to add to the previous answers: if you are using something like Cloud Build, here is how you can implement it:
steps:
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: bash
    args:
      - '-c'
      - "gcloud secrets versions access latest --secret=[SECRET_NAME] --format='get(payload.data)' | tr '_-' '/+' | base64 -d > Dockerfile"
  - name: gcr.io/cloud-builders/gcloud
    entrypoint: /bin/bash
    args:
      - '-c'
      - gcloud run deploy [SERVICE_NAME] --source . --region=[REGION_NAME] --service-account=[SERVICE_ACCOUNT]@[PROJECT_ID].iam.gserviceaccount.com --max-instances=[SPECIFY_REQUIRED_VALUE]
options:
  logging: CLOUD_LOGGING_ONLY
I am using this in a personal project, but I will explain what is happening here. The first step pulls data from Secret Manager, where I am storing a Dockerfile with the secret environment variables. This is optional; if you are not storing any API keys or secrets, you can skip it, although it can also help if you have a different folder structure (i.e. one that isn't flat).
The second step deploys the Cloud Run service from the source code. The documentation for that can be found here:
https://cloud.google.com/run/docs/deploying-source-code

IoT Edge: device can't download my module from Azure Container Registry but it can from Docker Hub

I followed this Azure example to develop my module connectedbarmodule in Python for Azure IoT Edge. Then I followed this link to deploy my module to my device (a Raspberry Pi 3). However, my module can't be downloaded. I then executed the following command on my device:
sudo docker logs -f edgeAgent
I have the following error:
Error calling Create module ConnectedBarModule:
Get https://iotedgeregistery.azurecr.io/v2/connectedbarmodule/manifests/0.0.1-amd64:
unauthorized: authentication required)
This is a URL for my Azure Container Registry, where the image of my module is stored. I don't know how to give IoT Edge the credentials to download my module.
I tested putting the image not in the Azure Container Registry but in my Docker Hub account, and that works; my device can download the module.
If someone has an idea, that would be very kind.
Thank you in advance.
Your Azure Container Registry is private. Hence, you need to add the credentials for it so that the edgeAgent can download images from private registries:
Through the Azure Portal: in the first step of "Set Modules".
When done through deployments in Visual Studio Code:
"In the VS Code explorer, open the .env file. Update the fields with
the username and password values that you copied from your Azure
container registry." (https://learn.microsoft.com/en-us/azure/iot-edge/tutorial-c-module#add-your-registry-credentials)
For your issue, you can use the command docker login -u <ACR username> -p <ACR password> <ACR login server>, which is shown in the example you posted. Regarding authentication with Azure Container Registry, there are two ways you can choose.
One is to use the username and password shown for your ACR in the Azure portal.
The other is to use an Azure Service Principal, for which you can set the permissions yourself. Follow the document Azure Container Registry authentication with service principals. I would recommend this way over the first because it's safer.
This is just advice. I hope it helps; if you need more information, let me know.
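A rough sketch of the service principal route with the Azure CLI (the names are placeholders; double-check the flags against the linked document):
# Look up the resource ID of the registry
ACR_ID=$(az acr show --name iotedgeregistery --query id --output tsv)

# Create a service principal that can only pull from that registry
az ad sp create-for-rbac --name iotedge-pull --scopes "$ACR_ID" --role acrpull

# Use the returned appId/password as the registry credentials in your deployment,
# or test them locally first:
docker login -u <appId> -p <password> iotedgeregistery.azurecr.io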

How to use private quay.io images with fleet and CoreOS

I've been trying to deploy containers with fleet on a CoreOS cluster. However, some of the Docker images are stored privately on quay.io and require a login.
Now I could add a docker login as a precondition to every relevant unit file, but that doesn't seem right. I'm sure there must be a way to store the respective registry credentials somewhere Docker can find them when trying to download the image.
Any ideas?
The best way to do this is with a Quay "robot account", which is a separate set of credentials from your regular account. This is helpful for two reasons:
they can be revoked if needed
they can be limited to a subset of your repositories
When you make a new robot account, if you click "view credentials", you will get the credentials pre-formatted for common use-cases, such as Docker and Kubernetes.
In this case, you want "Docker Configuration", which is placed at ~/.docker/config.json on the server(s). Docker will automatically use this to authenticate with Quay.io.
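In practice that boils down to a docker login with the robot credentials on each host, which writes config.json for you; a sketch with a placeholder robot name and token:
# Log in once with the robot account; Docker stores the auth in ~/.docker/config.json
docker login -u 'myorg+myrobot' -p 'ROBOT_TOKEN' quay.io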

Sharing docker registry images among gcloud projects

We're hoping to use a Google project to share Docker images containing microservices across projects.
I was thinking I could do it using the kubectl run command and pull an image from a project other than the current one:
kubectl run gdrive-service --image=us.gcr.io/foo/gdrive-service
My user credentials have access to both projects. However, it seems like the run command can only pull images from the current project.
Is there an approach for doing this? It seems like an obvious use case.
There are a few options here.
Use _json_key auth as described here, with Kubernetes pull secrets.
This also describes how to add robots across projects, still without needing pull secrets.
In my answer here I describe a way to do this by granting the GKE service account the Storage Object Viewer permission on the project that contains the registry.
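Two hedged sketches of those options (project ID, service account email, key file, and secret name are placeholders):
# Option A: grant the GKE cluster's service account read access to the registry
# project, then reference the image directly in `kubectl run`
gcloud projects add-iam-policy-binding foo \
  --member="serviceAccount:GKE_NODE_SA_EMAIL" \
  --role="roles/storage.objectViewer"

# Option B: use a service account key as an image pull secret
kubectl create secret docker-registry gcr-pull \
  --docker-server=https://us.gcr.io \
  --docker-username=_json_key \
  --docker-password="$(cat key.json)" \
  --docker-email=ignored@example.com
# then reference gcr-pull under imagePullSecrets in the pod spec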
