When my service is running on Google Cloud, I would like to do some things differently than when I test it locally - namely, use a different logger. I could pack some special file into the Docker image at build time and check for that, but perhaps there is a simpler way? I have used Google App Engine before, and there was a simple API I could call to check this (see Determine AppEngine for Java environment programmatically). Is there something like that available for Cloud Run?
Note: I use Cloud Run and JVM at the moment, i.e. Docker containers, but if the answer is applicable for Kubernetes or other Google Cloud environments, it might help other users in a similar situation.
You can use one of these variables to check whether your service is running in Cloud Run: https://cloud.google.com/run/docs/container-contract#services-env-vars
I suggest using K_SERVICE - if it has a value, your service is running in Cloud Run.
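For example, on the JVM the check can be as small as the sketch below. Only the K_SERVICE lookup comes from the container contract; the class name and the logger choices are illustrative placeholders.

public class LoggerSelector {

    // K_SERVICE (along with K_REVISION and K_CONFIGURATION) is set by Cloud Run
    // for every service instance, so its presence is a reasonable signal.
    public static boolean runningOnCloudRun() {
        return System.getenv("K_SERVICE") != null;
    }

    public static void main(String[] args) {
        if (runningOnCloudRun()) {
            System.out.println("On Cloud Run as service " + System.getenv("K_SERVICE"));
            // configure the structured/JSON logger suited to Cloud Logging here
        } else {
            System.out.println("Running locally");
            // configure the plain console logger used for local testing here
        }
    }
}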
I know this may seem like an opinion-based question, but I can't find any answers anywhere. I'm having trouble figuring out how to deploy my Flask backend and React frontend on Google Cloud. I am using docker-compose on my local machine, but I can't find a way to deploy that setup on Google Cloud.
My question is: is there a way to deploy them from a docker-compose file using Cloud Build and Cloud Run? Or do I have to create two separate Cloud Run services for the frontend and backend? Or is it better to create a VM instance and run docker-compose on it (and how would one even do that)? I am very new to deployment, so any help is appreciated.
For reference, I saw this but it didn't exactly answer my question. Thanks in advance!
You use docker-compose for running multi-container applications; in your case it wouldn't make much sense.
You have a Python backend. You can containerize it and deploy it to Cloud Run, Cloud Functions, App Engine, Google Kubernetes Engine, or even a Compute Engine VM. In my opinion the most convenient option is Cloud Run.
If your React frontend is a Single Page App, it communicates with your Python backend over HTTP requests. You build the HTML/CSS/JS files and host them somewhere, like a Cloud Storage bucket, possibly behind Cloud CDN.
The containers that result from the standard Cloud Functions build/deploy process sometimes contain security vulnerabilities, and I'm not sure how to resolve these, since Cloud Functions doesn't (as far as I know) offer much control over the execution environment by design. What's the best practice for resolving security vulnerabilities in Google Cloud Functions?
If I can figure out how to extend the build process I think I'll be in good shape, but I'm not sure how to do that for Cloud Functions in particular.
Situation:
I'm building my functions using the standard gcloud functions deploy command (docs). The deployment succeeds and I can run the function - it creates a container in the Container Registry (process overview - it sounds like it's built off the base Ubuntu Docker image).
I'm using Google's container vulnerability scanning, and it detects security issues in these containers, presumably because some of the packages in the Ubuntu base image have released security updates. In other container environments it's straightforward enough to update these packages via apt or similar, but I don't know how to do the equivalent in a Cloud Functions environment, since you don't really customize the environment (Dockerfile, etc.).
Short answer: you can't. Cloud Functions seeks to be as easy to use as possible by being opinionated about how to build the container. You just provide the code.
If you want control over a serverless container, you should switch to Cloud Run, which lets you deploy the full container image. It also gives you a greater degree of control over the number of concurrent requests each instance can handle, potentially saving you money by utilizing the underlying instances more fully.
We use the method in the first code block in Java, but I don't see a corresponding method in the Rails documentation - only the second code block:
Storage storage = StorageOptions.getDefaultInstance().getService();
storage = Google::Cloud::Storage.new(
project: "my-todo-project",
keyfile: "/path/to/keyfile.json"
)
If we use an application-specific service account in the Kubernetes cluster, how do we configure the Rails application to work both in the local developer environment and in the k8s cluster?
Also, I would prefer not to initialize with a project_id and a keyfile, since I would then have to manage multiple such JSON files across dev, QA, staging, and production environments.
I would recommend initializing without arguments and using the default discovery of credentials as discussed in the Authentication guide.
When running on Google Cloud Platform (GCP), including Google Compute Engine (GCE), Google Kubernetes Engine (GKE), Google App Engine (GAE), Google Cloud Functions (GCF) and Cloud Run, the credentials will be discovered automatically.
For the local developer environment, we still initialize without arguments and rely on the default discovery, supplying the project and credentials through environment variables.
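A minimal sketch of that zero-argument initialization, assuming the google-cloud-storage gem: locally it picks up GOOGLE_CLOUD_PROJECT and GOOGLE_APPLICATION_CREDENTIALS from the environment, while on GKE (or any other GCP runtime) it uses the attached service account automatically.

require "google/cloud/storage"

# No project or keyfile passed in: the library falls back to the
# GOOGLE_CLOUD_PROJECT / GOOGLE_APPLICATION_CREDENTIALS environment variables
# locally, and to the instance's service account when running on GCP.
storage = Google::Cloud::Storage.new

# Quick sanity check that the discovered credentials actually work.
storage.buckets.each do |bucket|
  puts bucket.name
end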
Before moving your app to multiple environments, you should set up a deployment pipeline that handles how your app is configured for each environment, including the configuration of service accounts.
Below are two official Google Cloud guides on how to do this, plus one example on GitLab, so you can follow whichever suits you best.
Continuous deployment to Google Kubernetes Engine using Jenkins
Continuous Delivery Pipelines with Spinnaker and Google Kubernetes Engine
Git Lab - continuous-deployment-on-kubernetes
Also, regarding the parameters used to instantiate the Cloud Storage object: as you can see in the documentation you linked in your question, the project parameter identifies the project that holds your storage, so if it is not set your app will not be able to find it. The keyfile is what allows your service account to authenticate, so the client cannot work without credentials either.
I hope this information helps you.
We deployed a Rails app on Google Cloud Run using the managed platform. The app is working fine and is able to serve requests.
Now we want to get access to the rails console of the deployed app. Can anyone suggest a way to achieve this?
I'm aware that, currently, Cloud Run supports only HTTP requests. If there is no other way, I'll have to consider something like the Rails web console.
I think you cannot.
I'm familiar with Cloud Run but not with Rails.
I assume you'd need to be able to shell into a container in order to run IRB. Generally, you'd do this by asking the runtime (Docker Engine, Kubernetes, Cloud Run) to attach you to a running container.
Cloud Run does not appear to permit this. I think it's a potentially useful feature request for the service. For containers that include a shell, it would be the equivalent of GCE's gcloud compute ssh.
Importantly, your app may be served by multiple, load-balanced containers, so you'd want to be able to console into any of them.
However, you may wish to consider alternative mechanisms for managing your app: monitoring, logging, tracing, etc. These mechanisms should provide you with sufficient insight into your app's state. Errant container instances should be terminated.
This follows the concept of "pets vs. cattle": instead of nurturing individual containers (is this one failing?), you nurture the containers holistically (is the service comprising many containers failing?).
For completeness, if you think that there's an issue with a container image that you're unable to resolve through other means, you could run the image elsewhere (e.g. locally) where you can use IRB. Since the same container image will behave consistently wherever it's run, you should be able to observe the issue using IRB locally too.
I have a microservice with about 6 separate components.
I am looking to sell instances of this microservice to people who need dedicated versions of it for their projects.
Docker seems to be the solution to doing this as easily as possible.
What is still very unclear to me is this: is it possible to use Docker to deploy whole instances of microservices within a cloud service like GCP or AWS?
Or is this something more specific to the cloud provider itself?
Basically, in the end I'd like to be able to start up, via code, a whole new instance of my microservice within its own network, with each component able to speak to the others.
One big problem I see is assigning IPs to the containers so that they can find each other, independent of which network they are in. Is this even possible, or is it not yet feasible with current cloud technology?
Thanks a lot in advance, I know this is a big one...
This is definitely feasible and is nowadays one of the most popular ways to ship and deploy applications. However, the procedure for deploying varies slightly based on the cloud provider you choose.
The good news is that packaging your microservices with Docker is independent of the cloud provider you use. You basically need to package each component in a Docker image and deploy these images to a cloud platform.
All popular cloud platforms nowadays support deploying Docker containers. In addition, you can use orchestration frameworks such as Docker Swarm or Kubernetes on these platforms to orchestrate the deployment of your microservices.