Google Cloud Run - create service task is loading forever - docker

I'm trying to deploy a Node.js application to Google Cloud Run.
I pushed a Docker image to Container Registry - that seems to be successful.
But when I try to deploy it to Google Cloud Run - to make it public and accessible over the internet - it fails for unknown reasons.
While loading:
this step can take 10-15 minutes...
When it fails:
Resource readiness deadline exceeded.

The solution is provided in the GCP documentation:
https://cloud.google.com/run/docs/troubleshooting#service-agent
It suggests that you verify that the service agent has the Cloud Run Service Agent role, and grant it the role if it does not have it.
Additionally, check the logs of the Cloud Run service; you might find clues about the cause there.
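For example, a minimal sketch of granting that role with gcloud; the project ID below is a placeholder, and the service agent's address is derived from the project number:

# Placeholder project ID; adjust before running.
PROJECT_ID=my-project
PROJECT_NUMBER=$(gcloud projects describe "$PROJECT_ID" --format='value(projectNumber)')
gcloud projects add-iam-policy-binding "$PROJECT_ID" \
  --member="serviceAccount:service-${PROJECT_NUMBER}@serverless-robot-prod.iam.gserviceaccount.com" \
  --role="roles/run.serviceAgent"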

Related

Trigger deployment of Docker container on demand

I have a web application that helps my clients launch an API with a "Launch my API" button.
Under the hood, I have a Docker image that runs on two Google Cloud Run services (one for a debug environment and one for production).
My challenge is the following: how can I trigger the deployment of new Docker containers on demand?
Naively, I would like this button to call an API that triggers the launch of these services based on my Docker image (which is already in Google Cloud, or available to download at a certain URL).
Ultimately, I'll need Kubernetes to manage all of my clients' containers. Maybe I should look into that for triggering new container deployments?
I tried to glue together (I'm very new to the cloud) a Google Cloud Function that triggers a new service on Google Cloud Run based on my Docker image, but with no success.
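As a hedged sketch of what such a backend could invoke (all names below are placeholders): deploying, or re-deploying, a Cloud Run service from an image that is already pushed is a single gcloud command, which any server-side process with the gcloud SDK and sufficient IAM permissions could run.

# Placeholder service, image and region for illustration.
gcloud run deploy client-api-debug \
  --image gcr.io/my-project/my-api:latest \
  --region europe-west1 \
  --platform managed \
  --allow-unauthenticated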

Airflow on Google Cloud Composer vs Docker

I can't find much information on what the differences are in running Airflow on Google Cloud Composer vs Docker. I am trying to switch our data pipelines that are currently on Google Cloud Composer onto Docker to just run locally but am trying to conceptualize what the difference is.
Cloud Composer is a GCP managed service for Airflow. Composer runs in something known as a Composer environment, which runs on a Google Kubernetes Engine cluster. It also makes use of various other GCP services, such as:
Cloud SQL - stores the metadata associated with Airflow,
App Engine Flex - the Airflow web server runs as an App Engine Flex application, which is protected using an Identity-Aware Proxy,
GCS bucket - in order to submit a pipeline to be scheduled and run on Composer, all we need to do is copy our Python code into the environment's GCS bucket. Within that bucket there is a folder called dags; any Python code uploaded into that folder is automatically picked up and processed by Composer (see the example right after this list).
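For instance, a hedged sketch of that upload step with gcloud; the environment name and location are placeholders:

# Import a local DAG file into the environment's dags/ folder (names are assumptions).
gcloud composer environments storage dags import \
  --environment my-composer-env \
  --location us-central1 \
  --source my_dag.py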
How does Cloud Composer benefit you?
Focus on your workflows, and let Composer manage the infrastructure (creating the workers, setting up the web server, the message brokers),
One click to create a new Airflow environment,
Easy and controlled access to the Airflow web UI,
Logging and monitoring metrics, and alerts when your workflow is not running,
Integration with all Google Cloud services: Big Data, Machine Learning and so on. You can also run jobs elsewhere, i.e. on another cloud provider (Amazon).
Of course you have to pay for the hosting service, but the cost is low compared to hosting a production Airflow server on your own.
Airflow on-premise
DevOps work that needs to be done: create a new server, manage the Airflow installation, take care of dependency and package management, check server health, scaling and security,
pull an Airflow image from a registry and create the container,
create a volume that maps the directory on the local machine where DAGs are held to the location where Airflow reads them in the container,
whenever you want to submit a DAG that needs to access a GCP service, you need to take care of setting up credentials. The application's service account should be created and its key downloaded as a JSON file that contains the credentials. This JSON file must be mounted into your Docker container, and the GOOGLE_APPLICATION_CREDENTIALS environment variable must contain the path to the JSON file inside the container (see the sketch after this list).
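A minimal sketch of that local setup, assuming the official apache/airflow image; the paths, image tag and port are placeholders:

# Map local DAGs and the service account key into the container (all paths are assumptions).
docker run -d -p 8080:8080 \
  -v /path/to/local/dags:/opt/airflow/dags \
  -v /path/to/sa-key.json:/opt/airflow/sa-key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/opt/airflow/sa-key.json \
  apache/airflow:2.8.1 standalone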
To sum up, if you don't want to deal with all of those DevOps problems, and instead just want to focus on your workflows, then Google Cloud Composer is a great solution for you.
Additionally, I would like to share with you tutorials that set up Airflow with Docker and on GCP Cloud Composer.

When you assign service account to a Cloud Run service, what does exactly happen?

I am trying to understand what assigning a service account to a Cloud Run service actually does, in order to improve the security of the containers. I have multiple processes running within a Cloud Run service and not all of them need to access project resources.
A more specific question I have in mind is:
Would I be able to create multiple users and run some processes as a user that does not have access to the service account, or does every user have access to the service account?
I ran a small experiment on a VM instance (I guess this will be a similar case as with Cloud Run) where I created a new user; after creation, it wasn't authorized to use the service account of the instance. However, I am not sure whether there is a way to authorize it, which would make my method insecure.
Thank you.
EDIT
To perform the test I created a new OS user and ran "gcloud auth list" from the new user account. However, I should have made a curl request instead; I would then have been able to retrieve the credentials, as pointed out by an answer below.
Your question is not very clear, but I will try to provide several inputs.
When you run a service on Cloud Run, you have 2 choices for defining its identity:
Either it's the Compute Engine default service account that is used (the default, if you specify nothing),
Or it's the service account that you specify at deployment time.
This service account is valid for the whole Cloud Run service (you can have up to 1000 different services per project).
Now, when you run your container, the service account is not really loaded into the container (it's the same thing with Compute Engine), but there is an API available for requesting the authentication data of this service account. It's called the metadata server.
It's not restricted to particular users (I don't know how you performed your test on Compute Engine!); a simple curl is enough to get the data.
This metadata server is used by the client libraries when you rely on the "default credentials", for example. The gcloud SDK also uses it.
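Concretely, any process inside the container (whatever OS user it runs as) can fetch an access token for the attached service account from the metadata server:

# Works from inside Cloud Run containers and Compute Engine VMs.
curl -H "Metadata-Flavor: Google" \
  "http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token"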
I hope you have a better view now. If not, add details in your question or in the comments.
The keyword that's missing from guillaume’s answer is "permissions".
Specifically, if you don't assign a service account, Cloud Run will use the Compute Engine default service account.
This default account has the Editor role on your project (in other words, it can do nearly anything in your GCP project, short of creating new service accounts and granting them access, and maybe deleting the GCP project). If you use the default service account and your container is compromised, you're probably in trouble. ⚠️
However, if you specify a new --service-account, by default it has no permissions. You have to bind to it the roles or permissions (e.g. GCS Object Reader, PubSub Publisher...) that your application needs.
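A hedged sketch of that pattern (project, service account and role names below are all placeholders): create a dedicated service account, grant it only the role the application needs, and deploy the service with it.

# Placeholder names for illustration only.
gcloud iam service-accounts create my-run-sa
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:my-run-sa@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --service-account my-run-sa@my-project.iam.gserviceaccount.com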
Just to add to the previous answers: if you are using something like Cloud Build, here is how you can implement it:
steps:
- name: gcr.io/cloud-builders/gcloud
  args:
  - '-c'
  - "gcloud secrets versions access latest --secret=[SECRET_NAME] --format='get(payload.data)' | tr '_-' '/+' | base64 -d > Dockerfile"
  entrypoint: bash
- name: gcr.io/cloud-builders/gcloud
  args:
  - '-c'
  - gcloud run deploy [SERVICE_NAME] --source . --region=[REGION_NAME] --service-account=[SERVICE_ACCOUNT]@[PROJECT_ID].iam.gserviceaccount.com --max-instances=[SPECIFY_REQUIRED_VALUE]
  entrypoint: /bin/bash
options:
  logging: CLOUD_LOGGING_ONLY
I am using this in a personal project, but I will explain what is happening here. The first step pulls data from Secret Manager, where I am storing a Dockerfile with the secret environment variables. This is optional; if you are not storing any API keys or secrets, you can skip it. But if you have a different folder structure (i.e. one that isn't flat)
The second step deploys Cloud Run from source code. The documentation for that can be found here:
https://cloud.google.com/run/docs/deploying-source-code

Create service or container from another container, on Google Cloud Run or Cloud Run on GKE

Can I create a service or container from another container, on Google Cloud Run or Cloud Run on GKE?
I basically want to manage my containers/services dynamically from another container and am not sure how to go about this.
Adding more details:
One of my microservices needs to create new isolated containers that will run some user-land code. I would like to have full life-cycle control of these containers: run the code, and then destroy them as needed.
I also looked at the Cloud Run APIs, but I'm not sure how to run something like 'kubectl create ...' through them. Is that the right approach?
Yes, you should be able to deploy Cloud Run services from Cloud Run services.
on Cloud Run (hosted): services by default run with Editor permissions, so this should be possible without any extra configuration
note that if you deploy apps with --allow-unauthenticated, which requires setting IAM permissions, the Editor role will not be enough, as you need the Owner role on the GCP project for that.
on Cloud Run on GKE: services by default run with limited scopes (as they by default inherit GKE node's permissions/scopes). You should add a service account to the Kubernetes Pod and use it to authenticate.
From there, you have several options:
Use the REST API directly: since run.googleapis.com behaves like a Kubernetes API server, you can directly apply JSON objects of Knative Services. (You can use gcloud ... --log-http, as sketched after this list, to learn how deployments are made using REST API requests.)
Use gcloud: you can ship the gcloud CLI inside your container image and invoke it from your process.
Use the Google Cloud Client Libraries: you can use the client libraries that are available for Cloud Run (for example, this Go library) to construct in-memory Service objects and send them to the API using a higher-level client library (recommended approach).
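For example, one hedged way to discover those REST calls (service, image and region names below are placeholders) is to run an ordinary deployment with the global --log-http flag and inspect the requests gcloud sends to run.googleapis.com:

# Placeholder names; --log-http prints the underlying HTTP requests and responses.
gcloud run deploy my-service \
  --image gcr.io/my-project/my-image \
  --region us-central1 \
  --log-http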

cloud run deploy fails with permission error

Running gcloud beta run deploy --image gcr.io/mynippets-dev/web:latest when gcloud project is set to 'mysnippets-dev' returns the following:
ERROR: (gcloud.beta.run.deploy) Google Cloud Run Service Agent must have permission to read the image, gcr.io/mynippets-dev/web:latest. Ensure that the provided container image URL is correct and that the above account has permission to access the image. If you just enabled the Cloud Run API, the permissions might take a few minutes to propagate. Note that [mynippets-dev/web] is not in project [mysnippets-dev]. Permission must be granted to the Google Cloud Run Service Agent from this project
It should be noted that both the GCR image and the Cloud Run service live in project 'mysnippets-dev'. But for some reason, it thinks this is a cross-project deployment, perhaps interpreting 'mynippets-dev/web' (with /web being the GCR repository) as belonging to another project.
I can also repro the same issue in Cloud Run UI.
Deployment should succeed.
This looks like it is most likely a typo: mynippets-dev vs mysnippets-dev (the image path is missing an 's').
Cloud Run interprets this as a cross-project deployment, which is allowed, but requires sufficient permissions.
If this isn't intended to be a cross-project deployment, it should succeed with this command:
gcloud beta run deploy --image gcr.io/mysnippets-dev/web:latest
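If a cross-project deployment really were intended, a hedged sketch of the extra grant (the project ID and project number below are placeholders) is to give the deploying project's Cloud Run service agent read access to the registry project, e.g. the Storage Object Viewer role for Container Registry images:

# Placeholder project ID and project number for illustration.
gcloud projects add-iam-policy-binding image-host-project \
  --member="serviceAccount:service-DEPLOYING_PROJECT_NUMBER@serverless-robot-prod.iam.gserviceaccount.com" \
  --role="roles/storage.objectViewer"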
