I have set up the sample app on Google Cloud Platform successfully. After running a test, I am now wondering where the assets that I create are stored.

I have created a Hyperledger installation on Google Cloud Platform.
Then I installed the Hyperledger sample network. All of this went fine. The asset creation also went fine after I created a static IP on the VM. I am now wondering where my "hello world" asset ended up.
I saw that a verification peer should have a /var/hyperledger...
With the default Google Cloud Platform installation, what are my peers? This all seems to be hidden. Does that mean the data is just "out there"?
I am now looking into how to tweak the Google Cloud Platform installation to use private data storage.

When you are using Google Cloud Platform and running everything on a VM, all your information is stored on the persistent disk you selected while installing the platform.
Regarding assets, you cannot physically see the assets in Fabric; they are stored in LevelDB or CouchDB. The default configuration of Fabric is LevelDB.
If you configure CouchDB, then you can see the data through a URL (see the sketch below). Hope this helps.
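For example, when CouchDB is the state database, its HTTP API (and the built-in Fauxton UI at http://<host>:5984/_utils) lets you browse the world state. A minimal Python sketch, where the host, port, and the database name are assumptions that depend on how your network is configured (a secured CouchDB will also require credentials):

import requests

couch = "http://localhost:5984"  # assumed CouchDB host and port

# List the state databases Fabric has created
print(requests.get(f"{couch}/_all_dbs").json())

# Dump the world-state documents of one (hypothetical) state database
db = "mychannel_mychaincode"
rows = requests.get(f"{couch}/{db}/_all_docs", params={"include_docs": "true"}).json()
for row in rows.get("rows", []):
    print(row["id"], row["doc"])

With the default LevelDB there is no such HTTP interface; the data lives in the peer's file system (under /var/hyperledger/... on the peer).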

Related

Google Cloud Vision Python client times out when the request comes from a Cloud Run service

I have a Python application (using Flask) which uses the Google Cloud Vision API client (via the pip package google-cloud-vision) to analyze images for text using OCR (via the TEXT_DETECTION feature in the API). This works fine when run locally, providing Google credentials on the command line via the GOOGLE_APPLICATION_CREDENTIALS environment variable pointing to the JSON file I got from a service account in my project with access to the Vision API. It also works fine locally in a Docker container, when the same JSON file is injected via a volume (following the recommendations in the Cloud Run docs).
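For reference, the client call in question looks roughly like this; the image path and variable names below are illustrative, not the actual application code:

from google.cloud import vision

# Locally, credentials come from GOOGLE_APPLICATION_CREDENTIALS;
# on Cloud Run they should come from the attached service account via the metadata server.
client = vision.ImageAnnotatorClient()

with open("sample.jpg", "rb") as f:  # placeholder image
    image = vision.Image(content=f.read())

response = client.text_detection(image=image)  # TEXT_DETECTION feature
if response.text_annotations:
    print(response.text_annotations[0].description)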
However, when I deploy my application to Cloud Run, the same code fails to make a request to the Cloud Vision API in a timely manner and eventually times out (the Flask app returns an HTTP 504). Then the container seems to become unhealthy: all subsequent requests (even those not interacting with the Vision API) also time out.
In the Cloud Run logs, the last thing logged appears to be related to Google Cloud authentication:
DEBUG:google.auth.transport.requests:Making request: GET http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/?recursive=true
I believe my project is configured correctly to access this API: as already stated, I can use the API locally via the environment variable. And the service is running in Cloud Run using this same service account (at least I believe it is: the serviceAccountName field in the YAML tab matches, and I'm deploying it via gcloud run deploy --service-account ...).
Furthermore, the application can access the same Vision API without using the official Python client (both locally and in Cloud Run), when accessed using an API key and a plain HTTP POST via the requests package. So the Cloud Run deployment is able to make this API call and the project has access, but something is wrong with the project in the context of the combination of Cloud Run and the official Python client.
I admit this is basically a duplicate of this 4-year-old question. But aside from being old, that question has no resolution, and I hope I have provided more details that might help get a good answer. I would much prefer to use the official client.
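For reference, the API-key fallback mentioned above, which does work in Cloud Run, is roughly the following; the key and image path are placeholders:

import base64
import requests

API_KEY = "..."  # placeholder API key with Vision API access

with open("sample.jpg", "rb") as f:  # placeholder image
    content = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [
        {"image": {"content": content}, "features": [{"type": "TEXT_DETECTION"}]}
    ]
}
resp = requests.post(
    "https://vision.googleapis.com/v1/images:annotate",
    params={"key": API_KEY},
    json=body,
    timeout=30,
)
print(resp.json())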

Finding deployed Google Tag Manager server-side version in GCP

I've recently joined a new company which already has a version of Google Tag Manager server-side up and running. I am new to Google Cloud Platform (GCP), and I have not been able to find the supposed Docker image in the image repository for our account. Or, at least, I am trying to figure out how to check whether there is one and how to correlate its digest with the deployed image, which is located at gcr.io/cloud-tagging-10302018/gtm-cloud-image.
I've tried deploying it in my own cloud account, both automatically provisioned and via the manual steps, and got it working. But I can't for the life of me figure out how to check which version we have deployed at our company, as it is already live.
I suspect it is quite an old version (unless it auto-updates?), seeing as the GTM server-side Docker repository has had frequent updates.
Being new to container imaging with Docker, I figured I could use Cloud Shell to check it that way, but it seems that setting up the specific App Engine instance with the shell script provided (located here) doesn't really "load" a Docker image as if you'd deployed it yourself. At least I don't think so, because I can't find any info using docker commands in the Cloud Shell of said GCP project running the flexible App Engine environment.
Any guidance on how to find out which version of GTM server-side is running in our App Engine instance?
To check which Docker images your App Engine Flex service uses, SSH into the instance. You can SSH into an App Engine instance by going to the Instances tab, choosing the correct service and version, and clicking the SSH button, or by using this gcloud command in your terminal or Cloud Shell:
gcloud app instances ssh "INSTANCE_ID" --service "SERVICE_NAME" --version "VERSION_ID" --project "PROJECT_ID"
Once you have successfully SSHed into your instance, run the docker images command to list your Docker images, for example:
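Once inside, something along these lines should show the digest to compare against the published gtm-cloud-image; the exact image names present on the instance may differ, so treat the second command's argument as a placeholder:

docker images --digests
docker inspect --format '{{index .RepoDigests 0}}' <image-id-or-name>

If the registry allows listing, you can compare that digest with the published tags, e.g. via gcloud container images list-tags gcr.io/cloud-tagging-10302018/gtm-cloud-image from Cloud Shell.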

Airflow on Google Cloud Composer vs Docker

I can't find much information on what the differences are between running Airflow on Google Cloud Composer and on Docker. I am trying to switch our data pipelines, which are currently on Google Cloud Composer, to Docker so they just run locally, but I am trying to conceptualize what the difference is.
Cloud Composer is a GCP managed service for Airflow. Composer runs in something known as a Composer environment, which runs on a Google Kubernetes Engine cluster. It also makes use of various other GCP services, such as:
Cloud SQL - stores the metadata associated with Airflow,
App Engine Flex - Airflow web server runs as an App Engine Flex application, which is protected using an Identity-Aware Proxy,
GCS bucket - in order to submit a pipeline to be scheduled and run on Composer, all we need to do is copy our Python code into a GCS bucket. Within it, there is a folder called dags. Any Python code uploaded into that folder is automatically picked up and processed by Composer (see the example below).
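For example, submitting a DAG is just a copy into that folder; the DAG file name and the Composer bucket name here are placeholders:

gsutil cp my_pipeline_dag.py gs://us-central1-my-composer-env-bucket/dags/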
How does Cloud Composer benefit you?
Focus on your workflows, and let Composer manage the infrastructure (creating the workers, setting up the web server, the message brokers),
One-click to create a new Airflow environment,
Easy and controlled access to the Airflow Web UI,
Provides logging and monitoring metrics, and alerts when your workflow is not running,
Integrates with all Google Cloud services: Big Data, Machine Learning, and so on. You can also run jobs elsewhere, e.g. on another cloud provider (Amazon).
Of course you have to pay for the hosting service, but the cost is low compared to hosting a production Airflow server on your own.
Airflow on-premise
DevOps work that needs to be done: create a new server, manage the Airflow installation, take care of dependency and package management, check server health, and handle scaling and security.
pull an Airflow image from a registry and create the container,
create a volume that maps the directory on the local machine where DAGs are held to the location where Airflow reads them in the container,
whenever you want to submit a DAG that needs to access a GCP service, you need to take care of setting up credentials yourself. The application's service account should be created and its key downloaded as a JSON file that contains the credentials. This JSON file must be mounted into your Docker container, and the GOOGLE_APPLICATION_CREDENTIALS environment variable must contain the path to the JSON file inside the container (see the sketch below).
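A minimal sketch of such a local container, assuming the official apache/airflow image (unpinned here; pin a version in practice); all host paths and names below are placeholders:

docker run -d --name airflow-local \
  -p 8080:8080 \
  -v /home/me/airflow/dags:/opt/airflow/dags \
  -v /home/me/keys/sa-key.json:/opt/airflow/sa-key.json:ro \
  -e GOOGLE_APPLICATION_CREDENTIALS=/opt/airflow/sa-key.json \
  apache/airflow standalone

This is exactly the kind of plumbing (ports, volumes, credentials) that Composer handles for you.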
To sum up, if you don't want to deal with all of those DevOps problems, and instead just want to focus on your workflow, then Google Cloud Composer is a great solution for you.
Additionally, I would like to share with you tutorials that set up Airflow with Docker and on GCP Cloud Composer.

How to authenticate with Google Cloud from a Rails application deployed in k8s

We use the method in the first code block in Java, but I don't see a corresponding method in the Rails documentation, only the second code block:
Storage storage = StorageOptions.getDefaultInstance().getService();
storage = Google::Cloud::Storage.new(
  project: "my-todo-project",
  keyfile: "/path/to/keyfile.json"
)
If we use an application-specific service account in the Kubernetes cluster, how do we configure the Rails application to work in the local developer environment and also run in a k8s cluster?
Also, I would prefer not to initialize with a project_id and a keyfile, since I would have to manage multiple such JSON files during the initialization process across dev, QA, staging, and production environments.
I would recommend initializing without arguments and using the default discovery of credentials as discussed in the Authentication guide.
When running on Google Cloud Platform (GCP), including Google Compute Engine (GCE), Google Kubernetes Engine (GKE), Google App Engine (GAE), Google Cloud Functions (GCF) and Cloud Run, the credentials will be discovered automatically.
For the local developer environment, we always use environment variables together with no-argument initialization and the default discovery, for example:
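A minimal sketch of that local setup (the project ID and key file path are placeholders); on GKE the same no-argument call falls back to whatever credentials the cluster provides (e.g. Workload Identity or the node service account):

# In the shell, before starting Rails (placeholders):
#   export GOOGLE_CLOUD_PROJECT=my-todo-project
#   export GOOGLE_APPLICATION_CREDENTIALS=/path/to/dev-keyfile.json

require "google/cloud/storage"

storage = Google::Cloud::Storage.new  # project and credentials discovered from the environment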
Before moving your app to multiple environments, you should set up your deployment pipeline which will handle how your app is configured for different environments, including configuration of service accounts.
Below you can find two official Google Cloud documentation pages on how to do it, plus one example on GitLab, so you can follow whichever better suits you.
Continuous deployment to Google Kubernetes Engine using Jenkins
Continuous Delivery Pipelines with Spinnaker and Google Kubernetes Engine
Git Lab - continuous-deployment-on-kubernetes
Also, regarding the parameters for instantiating the Cloud Storage object: as you can see in the same documentation you provided in your question, the project parameter is the identifier of your storage in the cloud, so if you do not set it your app will not be able to find it. As for the keyfile, it is what allows your service account to authenticate, so you cannot make it work without it either.
I hope this information helps you.

Created Assets are not persisted between Fabric server reboots

I followed the Hyperledger Composer tutorial to install all components and everything went well.
I was able to create assets and participants and could interact with them via the REST API services.
However, after I rebooted the Fabric network, all the assets and participants were gone and I had to recreate them.
Did I miss some settings in docker-compose.yaml or elsewhere for data persistence?
I did follow the instruction on page 16, "A Note on Data Persistence", to mount a directory into the container.
If you shut down all Fabric nodes, then you will lose all data. AFAIK Fabric does not yet have tools to replay the transaction logs from disk on a restart of the entire Fabric network. You should use the Fabric support channels to confirm this, however.

Resources