How to use GCP Profiler on Google Cloud Run? - google-cloud-run

I tried using the following code, but it doesn't seem to be sending data to GCP Profiler:
import * as profiler from "@google-cloud/profiler";

if (process.env.NODE_ENV === "production") {
  profiler.start();
}
...
I just see this screen when I visit the profiler:
I read here that it supports:
Supported environments:
Compute Engine
Google Kubernetes Engine (GKE)
App Engine flexible environment
App Engine standard environment
Outside of Google Cloud (For information on the additional configuration requirements, see Profiling applications running outside of Google Cloud.)
Do I need to do additional configuration to use Google Cloud Profiler on Google Cloud Run?
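For reference, I understand profiler.start() also accepts explicit configuration; here is a minimal sketch of what that might look like (the project ID, service name, and version below are placeholders, and the project ID is normally auto-detected on GCP):

import * as profiler from "@google-cloud/profiler";

if (process.env.NODE_ENV === "production") {
  // serviceContext determines how profiles are grouped in the Profiler UI.
  profiler.start({
    projectId: "my-project-id",          // placeholder; usually inferred on GCP
    serviceContext: {
      service: "my-cloud-run-service",   // placeholder service name
      version: "1.0.0",                  // placeholder version
    },
  });
}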

Related

How do we deploy a project in Nimbella to any of the public cloud platforms?

As far as I know, Nimbella is a serverless cloud platform that allows a developer to deploy their application on any public cloud platform, since it is cloud-agnostic by nature and thereby avoids vendor lock-in.
"Nimbella is cloud-agnostic and can run on public and private clouds thus naturally supporting a hybrid or multi-cloud strategy. As a developer, you can code once and run on all clouds or your local machine, because you can deploy the Nimbella platform anywhere." (from the official Nimbella documentation)
So my question is: I didn't see any place that connects an application in Nimbella to any of the public cloud services. How can we deploy a Nimbella application to any of the public cloud services (AWS, Firebase)?
Nimbella's serverless functions are powered by Apache OpenWhisk (github, web), which makes it possible to run your serverless code on any cloud that is powered by OpenWhisk. In addition to Nimbella, other cloud providers offer OpenWhisk as a service: IBM and Adobe I/O Runtime. There is also a "Nimbella lite" DigitalOcean droplet. Nimbella can be deployed on-prem or on a cloud of your choice, but this is currently offered as an enterprise feature.
Serverless functions for OpenWhisk have one of the purest function signatures: JSON dictionary -> JSON dictionary, and the programming model is consistent across the languages supported by the project. Additionally, one can run a container as a function, so a developer rarely needs to distinguish between function and container. The programming model for serverless functions developed for OpenWhisk also makes them highly portable.
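To illustrate the dictionary-in, dictionary-out signature, here is a minimal sketch of an OpenWhisk action written in TypeScript (the parameter and greeting are illustrative; a TypeScript action would be compiled to JavaScript for OpenWhisk's Node.js runtime):

// A minimal OpenWhisk action: it receives a JSON dictionary of parameters
// and returns a JSON dictionary as its result.
export function main(params: { name?: string }): { greeting: string } {
  const name = params.name ?? "world"; // default when no parameter is supplied
  return { greeting: `Hello, ${name}!` };
}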

How to authenticate with Google Cloud from a Rails application deployed in k8s

We use the method in the first code block in Java, but I don't see a corresponding method in the Rails documentation, only the second code block:
Storage storage = StorageOptions.getDefaultInstance().getService();

storage = Google::Cloud::Storage.new(
  project: "my-todo-project",
  keyfile: "/path/to/keyfile.json"
)
If we use an application-specific service account in the Kubernetes cluster, how do we configure the Rails application so that it works both in the local developer environment and in the k8s cluster?
Also, I would prefer not to use a project_id and a keyfile to initialize, since I would have to manage multiple such JSON files across the dev, qa, staging, and production environments.
I would recommend initializing without arguments and using the default discovery of credentials as discussed in the Authentication guide.
When running on Google Cloud Platform (GCP), including Google Compute Engine (GCE), Google Kubernetes Engine (GKE), Google App Engine (GAE), Google Cloud Functions (GCF) and Cloud Run, the credentials will be discovered automatically.
For the local developer environment, we still initialize without arguments and rely on the default discovery, typically by pointing the GOOGLE_APPLICATION_CREDENTIALS environment variable at a local keyfile.
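For illustration, here is the same no-argument pattern sketched with the Node.js Storage client (the Ruby client discovers credentials the same way, per the Authentication guide):

import { Storage } from "@google-cloud/storage";

// No project ID or keyfile passed: Application Default Credentials are
// discovered automatically, from GOOGLE_APPLICATION_CREDENTIALS locally
// or from the attached service account when running on GCP.
const storage = new Storage();

async function listBuckets(): Promise<void> {
  const [buckets] = await storage.getBuckets();
  buckets.forEach((bucket) => console.log(bucket.name));
}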
Before moving your app to multiple environments, you should set up your deployment pipeline, which will handle how your app is configured for each environment, including the configuration of service accounts.
Below you can find two official Google Cloud guides on how to do it, plus one example on GitLab, so you can follow whichever suits you better.
Continuous deployment to Google Kubernetes Engine using Jenkins
Continuous Delivery Pipelines with Spinnaker and Google Kubernetes Engine
GitLab - continuous-deployment-on-kubernetes
Also, regarding the parameters for instantiating the Cloud Storage object: as you can see in the documentation you linked in your question, the project parameter identifies your storage project in the cloud, so if you do not set it, your app will not be able to find it. As for the keyfile, it is what allows your service account to authenticate, so you can't make it work without one either.
I hope this information helps you.

How to use storage FUSE in Google Cloud Run?

How to use storage FUSE in Google Cloud Run? I saw the examples with Google App Engine, etc. How can I use it in Google Cloud Run?
It is not yet possible to use FUSE on Google Cloud Run. I recommend using the Google Cloud Storage client libraries to read and write files to Cloud Storage instead.
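For example, a minimal sketch with the Node.js client library (the bucket and object names are placeholders):

import { Storage } from "@google-cloud/storage";

// Credentials are discovered automatically on Cloud Run.
const storage = new Storage();

async function readAndWrite(): Promise<void> {
  const file = storage.bucket("my-bucket").file("output/result.txt");

  // Write an object instead of writing to a FUSE-mounted path.
  await file.save("hello from Cloud Run");

  // Read the object back into memory.
  const [contents] = await file.download();
  console.log(contents.toString());
}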

I have set up the sample app on Google Cloud Platform successfully. After running a test, I am now wondering where the assets that I create are stored.

I have created a Hyperledger installation on Google Cloud Platform.
I then installed the Hyperledger sample network. All of this went fine, and asset creation also worked once I created the static IP on the VM. I am now wondering where my "hello world" asset ended up.
I saw that a verification peer should have a /var/hyperledger...
With the default Google Cloud Platform installation, what are my peers? This all seems to be hidden. Does that mean that the data is just "out there"?
I am now looking into how to tweak the Google Cloud Platform installation to get private data storage.
When you are using Google Cloud Platform with a VM to run everything, all your information is stored on the persistent disk you selected while installing the platform.
Regarding assets, you cannot physically see the assets in Fabric; they are stored in LevelDB or CouchDB, and the default configuration of Fabric is LevelDB.
If you configure CouchDB, you can browse the state data through its HTTP interface.
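For illustration, a hypothetical sketch of querying Fabric's CouchDB state database over its REST API (the host, port, and the <channel>_<chaincode> database name are placeholders for your setup; this also assumes a runtime with a global fetch, such as Node 18+):

async function dumpStateDatabase(): Promise<void> {
  // Fabric names each CouchDB state database <channel>_<chaincode>;
  // "mychannel_mycc" and localhost:5984 are placeholders here.
  const res = await fetch("http://localhost:5984/mychannel_mycc/_all_docs?include_docs=true");
  const body = await res.json();
  console.log(JSON.stringify(body, null, 2)); // each row's doc is one world-state entry
}

Hope this helps.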

Is there a commercially supported option for a standalone Spring Cloud Data Flow?

We're looking at using Spring Cloud Task / Spring Cloud Data Flow for our batch processing needs as we're modernising from a legacy system. We don't want or need the whole microservices offering ... we want to be able to deploy jobs/tasks, kick off batch processes, have them log to a log file, and share a database connection pool and message queue. We don't need the whole PaaS that's provided by Spring Cloud Foundry, and we don't want to pay for that, but we do want the Data Flow / Task framework to be commercially supported. Is such an option available?
Spring Cloud Data Flow (SCDF) builds upon the spring-cloud-deployer abstraction to deploy stream/task workloads to a variety of runtimes, including Cloud Foundry, Kubernetes, Mesos, and YARN; see this visual.
You'd need a runtime for SCDF to orchestrate these workloads in a production setting. If there's no scope for cloud infrastructure, the YARN-based deployment could be a viable option for a standalone bare-metal installation. Please review the reference guide and the Apache Ambari provisioning tools for more details. There is a separate commercial support option available for this type of installation.
