How to use Cloud Storage FUSE in Google Cloud Run?

How do I use Cloud Storage FUSE in Google Cloud Run? I saw the examples with Google App Engine, etc. How do I use it in Cloud Run?

It is not yet possible to use FUSE on Google Cloud Run. I recommend using the Google Cloud Storage client libraries to read and write files to Cloud Storage.
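For example, a minimal sketch of reading and writing Cloud Storage objects from a Node.js service with the @google-cloud/storage client library (the bucket and object names below are placeholders):

import { Storage } from "@google-cloud/storage";

// On Cloud Run, the client picks up the service's credentials automatically.
const storage = new Storage();
const bucket = storage.bucket("my-bucket"); // placeholder bucket name

async function readAndWrite(): Promise<void> {
  // Write an object to the bucket.
  await bucket.file("output/hello.txt").save("Hello from Cloud Run");

  // Read it back and print the contents.
  const [contents] = await bucket.file("output/hello.txt").download();
  console.log(contents.toString());
}

readAndWrite().catch(console.error);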

Related

How to use GCP Profiler on Google Cloud Run?

I tried using the following code, but it doesn't seem to be sending data to GCP Profiler:
import * as profiler from "@google-cloud/profiler";

// Only start the profiler agent in production builds.
if (process.env.NODE_ENV === "production") {
  profiler.start();
}
...
I just see this screen when I visit the profiler:
I read here that it supports:
Supported environments:
Compute Engine
Google Kubernetes Engine (GKE)
App Engine flexible environment
App Engine standard environment
Outside of Google Cloud (For information on the additional configuration requirements, see Profiling applications running outside of Google Cloud.)
Do I need to do additional configuration to use Google Cloud Profiler on Google Cloud Run?
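For reference, the explicit configuration described for environments the agent cannot auto-detect looks roughly like this (the service name and version are placeholders; whether this is enough on Cloud Run is exactly what I am asking):

import * as profiler from "@google-cloud/profiler";

// Explicit configuration, as documented for environments the agent
// cannot auto-detect; projectId can usually be inferred on Google Cloud.
profiler.start({
  serviceContext: {
    service: "my-service", // placeholder service name
    version: "1.0.0",      // placeholder version
  },
});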

Does Google have on-premise Docker containers for their OCR/Read API similar to MS Azure Cognitive Services?

Does Google Cloud Platform provide on-premise Docker containers for their OCR/handwriting API/solution, similar to the way MS Azure has provided access to their cognitive-services-read container? If so, is the Google container GPU-aware on the on-premise hardware?
Thanks,
Simple answer: No, there is no Docker image available for download that contains the model(s) Google uses for their OCR service.

I have set up the sample app on Google Cloud Platform successfully. After running a test, I am now wondering where the assets that I create are stored.

I have created a Hyperledger installation on Google Cloud Platform.
I then installed the Hyperledger sample network. All this went fine. The asset creation also went fine after I created the static IP on the VM. I am now wondering where my "hello world" asset ended up.
I saw that a verification peer should have a /var/hyperledger...
With the default Google Cloud Platform installation, what are my peers? This all seems to be hidden. Does that mean that the data is just "out there"?
I am now checking how to tweak the Google Cloud Platform installation to have private data storage.
When you are using Google Cloud Platform and running everything on a VM, all your information is stored on the persistent disk you selected while installing the platform.
Regarding assets: you cannot physically see the assets in Fabric; they are stored in LevelDB or CouchDB. The default configuration of Fabric is LevelDB.
If you configure CouchDB, you can browse the data through its HTTP URL. Hope this helps.
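For example, a rough sketch of peeking at the state data over CouchDB's HTTP API (host, port, database, and document key below are placeholders, and assume CouchDB is exposed without authentication):

// List the databases CouchDB is serving, then fetch one document by key.
// Assumes Node 18+ (global fetch) and CouchDB reachable on localhost:5984.
const base = "http://localhost:5984";

async function inspect(): Promise<void> {
  const dbs = await (await fetch(`${base}/_all_dbs`)).json();
  console.log("databases:", dbs);

  // Placeholder database and document names.
  const doc = await (await fetch(`${base}/my_state_db/helloworld`)).json();
  console.log("asset:", doc);
}

inspect().catch(console.error);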

Is it possible to use the logmet service for a Cloud Foundry app?

For VMs and containers (Docker), we can use the logmet service (logging and metrics) as described in the Bluemix documentation. I wonder whether or not we can use this service for a Cloud Foundry app using a log drain (https://docs.cloudfoundry.org/devguide/services/log-management.html).
Ref: https://developer.ibm.com/bluemix/2015/12/11/sending-logs-to-bluemix-using-logstash-forwarder/
For Cloud Foundry applications, the Monitoring & Analytics service in the catalog provides similar functionality.

Within a Pipeline, is it possible to access a Google Cloud Storage bucket in another project?

Within a pipeline, is it possible to do TextIO from/to a cloud storage file in another cloud project?
Accessing a BigQuery table in another project seems possible with "my-project:output.output_table" and setting up service accounts properly.
However, with TextIO, I have not been able to find a way to specify the project ID in conjunction with my file pattern "gs://some/inputData.txt".
Yes, this is possible. You will want to make sure that the appropriate access is present (the Compute Engine service account and the cloudservices account, detailed below).
To change bucket permissions, you can use gsutil. You will want to add these accounts:
[project-number]@cloudservices.gserviceaccount.com
Google Compute Engine service account
You can use this command:
gsutil acl ch -r -u <email address of service account>:FC gs://<BUCKET>
To check bucket permissions:
gsutil getacl gs://<your bucket>
Note that Cloud Storage buckets exist in a global namespace: https://cloud.google.com/storage/docs/bucket-naming#requirements
Permission Details:
When you run Cloud Dataflow locally (using a DirectPipelineRunner) your pipeline runs as the Google Cloud account that you configured with the gcloud executable (using gcloud auth login). Hence, locally-run Cloud Dataflow SDK operations have access to the files and resources that your Google Cloud account has access to.
When a Cloud Dataflow pipeline runs in the cloud (using a DataflowPipelineRunner or BlockingDataflowPipelineRunner), it runs as a cloudservices account ([project-number]@cloudservices.gserviceaccount.com). This account is automatically created when a Cloud Dataflow project is created, and it defaults to having read/write access to the project's resources. The cloudservices account performs “metadata” operations: those that don’t run on your local client or on Google Compute Engine workers, such as determining input sizes, accessing Cloud Storage files, and starting Compute Engine workers. For example, if your project is the owner of a Cloud Storage bucket (has read/write access to the bucket), then the cloudservices account associated with your project also has owner (read/write) access to the bucket.
Google Compute Engine (GCE) instances (or workers) perform the work of executing Dataflow SDK operations in the cloud. These workers use your project’s Google Compute Engine service account to access your pipeline’s files and other resources. A GCE service account ([project-number]-compute@developer.gserviceaccount.com) is automatically created when you enable the Google Compute Engine API for your project (from the Google Developers Console APIs and auth page for your project).