The network configuration that is provided with the e2e_cli example has only one "orderer" container and a set of Kafka/ZooKeeper containers.
My questions are:
Q1: Is the single "orderer" some kind of architectural restriction of HLFv1.0 when a single channel needs to be created?
Q2: Is it possible to run multiple "orderers" for HA purposes when only one channel is used?
The documentation suggests that multiple orderers can be used, but my understanding is that each "orderer" provides the ordering service for a single channel - is that right?
Q1: Is the single "orderer" some kind of architectural restriction of HLFv1.0 when a single channel needs to be created?
No, you can have more than one ordering service node.
Q2: Is it possible to run multiple "orderers" for HA purposes when only one channel is used?
Yes. That was the intention of the Kafka-based orderer: to have multiple ordering service nodes all connect to a single fault-tolerant service (Kafka) that does the actual ordering, with the nodes acting as mediators to that service.
The documentation suggests that multiple orderers can be used, but my understanding is that each "orderer" provides the ordering service for a single channel - is that right?
You can submit a transaction to, or pull a block from, any of the orderers. They all go to the same Kafka broker that is the leader for that channel's partition at that time.
Also, orderers are multi-tenant with regard to channels: a single orderer can service multiple channels.
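For reference, a multi-orderer, Kafka-backed setup like the one described above is declared in the network's configtx.yaml. A minimal sketch (the hostnames and ports are placeholders, not taken from the e2e_cli example):

```yaml
Orderer:
    OrdererType: kafka            # hand ordering off to the Kafka cluster
    Addresses:                    # every ordering service node clients may use
        - orderer0.example.com:7050
        - orderer1.example.com:7050
        - orderer2.example.com:7050
    Kafka:
        Brokers:                  # the shared, fault-tolerant Kafka service
            - kafka0:9092
            - kafka1:9092
            - kafka2:9092
```

Clients can then target any address in the Addresses list; all of the listed orderers front the same Kafka cluster.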
Related
In a distributed setup using consistent hashing, e.g. a distributed cache implemented with consistent hashing, how can we manage the multiple nodes? By managing I mean monitoring health and adjusting load when one of the servers dies or a new one is added, and similar tasks.
Here we don't have any master, as all peer nodes are equal, so a gossip protocol is one way. But I want to understand: can we use ZooKeeper to manage the nodes here, or can ZooKeeper only be used where we need master-slave coordination?
I think in your case ZooKeeper can be used for leader election and for assigning the right token range to a node when a new node joins. In a very similar case, earlier versions of Facebook's Cassandra used ZooKeeper for the same reason; however, the community later got rid of it. Read the Replication and Bootstrapping sections of this.
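To make the token-range idea concrete, here is a minimal consistent-hash ring sketch in Python (the class name and node names are illustrative, not from any particular cache implementation). ZooKeeper would then only need to hold node membership and fire watches on join/leave; the ring math itself can live in every client:

```python
import bisect
import hashlib

class ConsistentHashRing:
    """Minimal consistent-hash ring with virtual nodes.

    When a node joins or leaves, only the keys in the affected
    token ranges move; keys owned by other nodes stay put.
    """

    def __init__(self, replicas=100):
        self.replicas = replicas  # virtual nodes per physical node
        self._tokens = []         # sorted token values (for bisect)
        self._ring = []           # sorted (token, node) pairs

    def _token(self, key):
        # Stable hash -> position on the ring
        return int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16)

    def add_node(self, node):
        for i in range(self.replicas):
            t = self._token(f"{node}:{i}")
            idx = bisect.bisect(self._tokens, t)
            self._tokens.insert(idx, t)
            self._ring.insert(idx, (t, node))

    def remove_node(self, node):
        pairs = [(t, n) for (t, n) in self._ring if n != node]
        self._ring = pairs
        self._tokens = [t for (t, _) in pairs]

    def get_node(self, key):
        if not self._ring:
            raise KeyError("empty ring")
        # First token clockwise from the key's position owns the key
        idx = bisect.bisect(self._tokens, self._token(key)) % len(self._ring)
        return self._ring[idx][1]
```

Removing a node with `remove_node` reassigns only the keys whose nearest clockwise token belonged to that node, which is exactly the property that makes rebalancing cheap when membership changes.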
I was exploring the channels feature of Hyperledger Fabric. I understand that separate channels exist only within the same blockchain (DLT) network. I am exploring the possibility, using Docker containers, of building 2 PoCs.
If a node/peer is configured for an organisation in one (multi-org) DLT network, can it be re-used for a different organisation in another (multi-org) DLT network?
E.g. let's say a peer is configured for a Supplier in one network (1st use case); can it be re-used as a peer for a Logistics Company in another network (2nd use case)?
Please note that both networks should be up and running at the same time.
I haven't tried it in practice, but theoretically it is possible. Why?
Whether a peer can join a channel in another network depends on whether its root CA certificates, along with its MSP ID, are in that channel's configuration block.
The network itself is nothing but the information about the organizations, and that is what decides whether a peer can join a channel or not. If that information is in the configuration block and it matches the peer, it is possible.
To my understanding, the network you keep referring to is actually a consortium of organizations.
First things first: a peer is always associated with a particular organization, and the crypto files and organization certificates are bootstrapped into that peer's Docker container.
However, the policies governing the network can vary across the consortiums and/or the channels in each consortium. An organization may have read/write access in one consortium and separate permissions in others.
The peer cannot be decoupled from its organization, but the organization itself can be decoupled from the consortium.
I just started my customized Hyperledger Composer network with 3 organizations. Each organization includes 1 peer (3 peers in total). My questions are:
What is the use of more than one peer in a single organization?
What is the best practice for the number of organizations and the number of peers at production level?
Please correct me if I am wrong.
Fabric : 1.1.0
Composer: 0.19.16
Node: 8.11.3
OS: Ubuntu 16.04
Multiple peers per org provide increased resilience (via redundancy) as well as improved throughput under load. You can start your network with a single peer and add more later. However, in a production system, you would typically want some resilience, and therefore at least 2 endorsing peers per org.
The same goes for the orderer: you would typically use a Kafka-based ordering service in production for improved throughput and resilience.
Also, unless you have a strong reason for using Fabric 1.1, you should be using at least 1.2, which is supported by Composer and has several new features.
Is it possible to have a centralized storage/volume that can be shared between two pods/instances of an application that exist in different worker nodes in Kubernetes?
So to explain my case:
I have a Kubernetes cluster with 2 worker nodes, and on each one I have 1 instance of app X running. This means I have 2 instances of app X running in total at the same time.
Both instances subscribe to the topic topicX, which has 2 partitions, and are part of a consumer group in Apache Kafka called groupX.
As I understand it, the message load will be split among the partitions, and likewise among the consumers in the consumer group. So far so good, right?
So to my problem:
In my solution I have a hierarchical division with a unique constraint on country and ID. Each combination of country and ID has a pickled model (a Python machine-learning model), which is stored in a directory accessed by the application. For each combination of country and ID I receive one message per minute.
At the moment I have 2 countries, so to be able to scale properly I wanted to split the load between the two instances of app X, each one handling its own country.
The problem is that with Kafka the messages can be balanced between the different instances, and to access the pickle files in an instance without knowing which country a message belongs to, I have to store the pickle files in both instances.
Is there a way to solve this? I would rather keep the setup as simple as possible so it is easy to scale and add a third, fourth and fifth country later.
Keep in mind that this is an overly simplified way of explaining the problem. The number of instances is much higher in reality etc.
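A side note on the partition/consumer split described above: if each message is keyed by country, Kafka's partitioner will always route a given country to the same partition, and therefore to the same consumer in the group. A simplified sketch of that routing (Kafka's default partitioner actually uses murmur2; crc32 stands in here just to show the idea):

```python
import zlib

NUM_PARTITIONS = 2  # topicX has 2 partitions

def partition_for(key: str, num_partitions: int = NUM_PARTITIONS) -> int:
    """Deterministically map a message key to a partition.

    The same key always maps to the same partition, so all messages
    for one country end up on one consumer in the group.
    """
    return zlib.crc32(key.encode("utf-8")) % num_partitions

# Keying messages by country pins each country's traffic to one partition:
for country in ("SE", "DE", "SE", "DE"):
    p = partition_for(country)  # stable per country, in range [0, NUM_PARTITIONS)
```

With that keying in place, each instance would only ever see the countries whose partitions it was assigned, which sidesteps the "store the pickle files everywhere" problem as long as the partition count scales with the number of countries.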
Yes, it's possible. If you look at this table, any PV (Persistent Volume) that supports ReadWriteMany will help you accomplish having the same data store for your Kafka consumers. In summary, these:
AzureFile
CephFS
Glusterfs
Quobyte
NFS
VsphereVolume - (works when pods are collocated)
PortworxVolume
In my opinion, NFS is the easiest to implement. Note that AzureFile, Quobyte, and Portworx are paid solutions.
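As a sketch of what that looks like with the NFS option (the claim name, size, and storage class are placeholders; this assumes an NFS provisioner is already installed in the cluster):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pickle-models
spec:
  accessModes:
    - ReadWriteMany          # mountable read-write by pods on different nodes
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs      # placeholder; depends on the installed provisioner
```

Both instances of app X would then mount this one claim at the directory where the pickled models live, so neither pod needs its own copy.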
I am looking for solutions for building highly available AMQ topic subscribers.
Scenario: given that I have an AMQ broker with failover nodes, and I have two services subscribed to Topic 1 and Topic 2 respectively, is there an easy way to make each subscriber redundant? Something like active/passive subscribers, so that when an instance fails, the system continues to work, though at reduced throughput?
I have looked at whether Docker Swarm can do this, but it doesn't seem to fit, as it is best suited for microservices receiving web requests.
Solutions I have considered:
Set up vSphere VMs with HA, where two nodes each host 1 TopicSubscriber. This is a ridiculously expensive setup IMO.
Deploy as a container in Docker Swarm with replicas = 3. Each message would need to be checked by each instance to see whether the event has already been processed by the other nodes.
Find a leader-election mechanism like ZooKeeper. Sounds like a lot of work and plumbing.
Appreciate your input. Is this possible with Docker Swarm or Kubernetes?
TIA/RD.