Connect external workers to Cloud Composer airflow - google-cloud-composer

Is it possible to connect an external worker that is not part of the Cloud Composer Kubernetes cluster? Use case would be connecting a box in a non-cloud data center to a Composer cluster.

Hybrid clusters are not currently supported in Cloud Composer. If you attempt to roll your own solution on top of Composer, I'd be very interested in hearing what did or didn't work for you.

Related

serverless framework: local kafka as event source

I'm trying to build a local development environment so I can run my tests locally.
I need to use Kafka as the event source, and I've deployed a self-managed cluster in my local environment using Docker.
What's bothering me is that, according to the documentation, I need to provide authentication. That in itself is no problem; the issue is the kind of values the documentation asks me to provide: AWS secrets.
What do AWS secrets have to do with my self-managed, self-deployed Kafka cluster?
How can I use my local Kafka cluster as an event source?
I thought I only needed to provide the bootstrap servers, consumer group and topic, similar to what the Knative serverless documentation describes.
Any ideas on how to connect to my local Kafka?
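For what it's worth, connecting to a local cluster outside the Serverless Framework really does only take those three settings. Here is a minimal consumer sketch using kafka-python; the broker address, topic name and group id are placeholders for whatever your Docker setup exposes:

```python
# Minimal consumer against a self-managed local Kafka cluster (no AWS secrets involved).
# Assumes kafka-python is installed and a broker is reachable on localhost:9092.
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "my-topic",                            # placeholder topic name
    bootstrap_servers=["localhost:9092"],  # placeholder broker address
    group_id="local-dev-group",            # placeholder consumer group
    auto_offset_reset="earliest",
)

for message in consumer:
    print(message.topic, message.partition, message.offset, message.value)
```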

Integrate Cloud Run with existing service mesh

We have an existing service mesh built using Envoy and an internal service control and discovery stack. We want to offer Cloud Run to our developers. How can we integrate Cloud Run into the mesh network so that:
1. The Cloud Run containers can talk to the mesh services.
2. The services built using Cloud Run can be discovered and used by other mesh services (each of which has an Envoy sidecar)
The GCP docs cover this in the Cloud Run for Anthos services using Istio doc.
In a nutshell, you will need to:
Create a GKE cluster with Cloud Run enabled.
Deploy a sample service to Cloud Run for Anthos on Google Cloud.
Create an IAM service account.
Create an Istio authentication policy.
Create an Istio authorization policy.
Test the solution.
But things change depending on how your existing service mesh is configured; elaborating on that portion will help the community better assist you.
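To illustrate the authorization-policy step from the list above, here is a minimal sketch that applies an Istio AuthorizationPolicy through the kubernetes Python client. The policy name, namespace and service-account principal are placeholders, and it assumes your kubeconfig already points at the Istio-enabled GKE cluster:

```python
# Sketch: create an Istio AuthorizationPolicy via the Kubernetes custom objects API.
# Assumes the `kubernetes` package is installed and kubeconfig targets the cluster.
from kubernetes import client, config

config.load_kube_config()

policy = {
    "apiVersion": "security.istio.io/v1beta1",
    "kind": "AuthorizationPolicy",
    "metadata": {"name": "allow-mesh-callers", "namespace": "default"},  # placeholder names
    "spec": {
        "rules": [
            {
                "from": [
                    {
                        "source": {
                            # Placeholder: the service account your mesh services run as.
                            "principals": ["cluster.local/ns/default/sa/mesh-caller"]
                        }
                    }
                ]
            }
        ]
    },
}

client.CustomObjectsApi().create_namespaced_custom_object(
    group="security.istio.io",
    version="v1beta1",
    namespace="default",
    plural="authorizationpolicies",
    body=policy,
)
```

The Istio authentication policy from the earlier step can be applied through the same CustomObjectsApi call, just with a different kind and spec.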

Connect 2 separate laptops using docker swarm?

I want to build a small blockchain network between 2 laptops (as an initial step). I am using Hyperledger Fabric and Hyperledger Composer on each laptop.
Can I use Docker Swarm to connect these 2 laptops and then use Hyperledger Fabric and Hyperledger Composer to build my blockchain network?
If the answer to question 1 is yes, can I do this without any cloud account (like Amazon, etc.) and without paying any money?
If the answer to questions 1 and 2 is no, how can I achieve my goal?
Yes, you can use Docker Swarm to connect 2 laptops and run the Hyperledger frameworks on them; see, for example, guides on setting up Hyperledger Fabric on multiple hosts using Docker Swarm.
Since you are doing this locally on your laptops, you don't need to pay anything to anyone.
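As a rough sketch of the Swarm side, assuming Docker is running on both laptops, the machines can reach each other over the LAN on the Swarm ports (2377, 7946, 4789), and the Docker SDK for Python (the docker package) is installed; the IP addresses are placeholders:

```python
# Sketch: form a two-node swarm with the Docker SDK for Python (docker-py).
import docker

# --- Run this part on laptop 1 (the manager) ---
manager = docker.from_env()
manager.swarm.init(advertise_addr="192.168.1.10")  # placeholder: laptop 1's LAN IP
manager.swarm.reload()
worker_token = manager.swarm.attrs["JoinTokens"]["Worker"]  # copy this token to laptop 2

# --- Run this part on laptop 2 (the worker) ---
worker = docker.from_env()
worker.swarm.join(
    remote_addrs=["192.168.1.10:2377"],  # manager's address and Swarm port
    join_token=worker_token,             # the token copied from laptop 1
)
```

Once the swarm is formed, the Fabric and Composer containers can be attached to an overlay network that spans both laptops.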

Does Cloud Composer have failover?

I've read the Cloud Composer overview (https://cloud.google.com/composer/) and documentation (https://cloud.google.com/composer/docs/).
It doesn't seem to mention failover.
I'm guessing it does, since it runs on a Kubernetes cluster. Does it?
By failover I mean if the airflow webserver or scheduler stops for some reason, does it get started automatically again?
Yes, since Cloud Composer is built on Google Kubernetes Engine, it benefits from all the fault tolerance of any other service running on Kubernetes Engine. Pod and machine failures are automatically healed.
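If you want to observe this self-healing yourself, here is a small sketch with the kubernetes Python client that lists Airflow pods and their restart counts; the namespace and label selector are assumptions and will likely differ in your Composer environment:

```python
# Sketch: inspect restart counts of Airflow pods to observe Kubernetes self-healing.
# Assumes kubeconfig targets the Composer environment's GKE cluster; the namespace
# and label selector below are assumptions and may differ in your environment.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod(namespace="default", label_selector="run=airflow-scheduler")
for pod in pods.items:
    for status in pod.status.container_statuses or []:
        print(pod.metadata.name, status.name, "restarts:", status.restart_count)
```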

Hyperledger Fabric v1.0 setup on multiple machines

I am working with the balance-transfer example. I set it up on a single machine and now want to run the example on two machines. I am following this link: https://github.com/hyperledger/fabric-samples/tree/release/balance-transfer. Can anyone tell me what steps I have to follow to implement that example across multiple machines?
I was able to host a Hyperledger Fabric network using Docker swarm mode. Swarm mode provides a network across multiple hosts/machines for the communication between the Fabric network components.
This post explains the deployment process: https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
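The key Swarm piece that approach relies on is an attachable overlay network spanning the hosts, which the Fabric peers, orderers and CAs join. A minimal sketch with the Docker SDK for Python, where the network name is just an example:

```python
# Sketch: create an attachable overlay network for Fabric containers to share
# across swarm nodes. Assumes an initialized swarm and the docker package installed.
import docker

client = docker.from_env()
fabric_net = client.networks.create(
    "fabric-overlay",   # example network name
    driver="overlay",
    attachable=True,    # lets standalone containers (peers, orderers, CAs) attach
)
print("Created network:", fabric_net.name, fabric_net.id)
```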
