I just started my customized Hyperledger Composer network with 3 organizations. Each organization includes 1 peer (3 peers in total). My questions are:
What is the use of more than one peer in a single organization?
What is the best practice for the number of organizations and the number of peers at production level?
Please correct me if I am wrong.
Fabric: 1.1.0
Composer: 0.19.16
Node: 8.11.3
OS: Ubuntu 16.04
Multiple peers per org provide increased resilience (via redundancy) as well as improved throughput under load. You can start your network with a single peer and add more later. However, in a production system, you would typically want some resilience, and therefore at least 2 endorsing peers per org.
The same goes for the orderer. You would typically use Kafka for production to provide improved throughput and resilience.
Also, unless you have a strong reason for using Fabric 1.1, you should be using at least 1.2, which is supported by Composer and has several new features.
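To make "endorsing peers" concrete, here is a minimal sketch of a 2-of-3 endorsement policy in the JSON format used by the Fabric Node SDK (and which Composer can take as an endorsement policy file); the MSP IDs are placeholders for your three organizations, and Python is used here only to write the JSON file out:

```python
import json

# Placeholder MSP IDs - replace with the MSP IDs of your three organizations.
endorsement_policy = {
    "identities": [
        {"role": {"name": "member", "mspId": "Org1MSP"}},
        {"role": {"name": "member", "mspId": "Org2MSP"}},
        {"role": {"name": "member", "mspId": "Org3MSP"}},
    ],
    # Any 2 of the 3 organizations must endorse a transaction for it to be valid.
    "policy": {
        "2-of": [
            {"signed-by": 0},
            {"signed-by": 1},
            {"signed-by": 2},
        ]
    },
}

with open("endorsement-policy.json", "w") as f:
    json.dump(endorsement_policy, f, indent=2)
```

With a policy like this, running at least 2 endorsing peers per org means an organization can still endorse transactions if one of its peers is down.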
Has anyone ever had the following error when installing a Neo4j cluster with 3 nodes? I followed the instructions on the site https://neo4j.com/docs/operations-manual/current/clustering/setup/deploy/: Caused by: com.neo4j.causalclustering.seeding.FailedValidationException: The seed validation failed with response [RemoteSeedValidationResponse{status=FAILURE, remote=XXXX:6000
Clustering is a tricky subject. Neo4j uses several ports for clustering; port 6000, specifically, is for cluster transactions, and in this case it is failing the seed validation process. I can see a couple of potential reasons for this:
The network's control plane doesn't allow you to establish a connection (check the reachability of the ports; see the sketch after this list).
Your cluster configuration is not compatible with the environment you are running in (check which interface the server listens on, check that the cluster DNS resolves your addresses, and check that your cluster is not using a loopback address as the interface; setting it to 0.0.0.0 listens on all interfaces).
You are trying to start a cluster with pre-existing data that is out of sync (term). (Drain your nodes, remove all instances and volumes, and, if you can, delete the namespace and try again.)
Your scheduler does not have enough cluster resources to start all cluster servers with the given constraints, so the minimum number of nodes required to run clustering is not available (there should not be more than one DB instance per node). Scale out your cluster, or check your settings for taints, tolerations and topology configuration.
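To help rule out the first point (reachability), here is a minimal Python sketch, assuming the default causal clustering ports 5000 (discovery), 6000 (transaction) and 7000 (Raft) and placeholder host names; run it from each cluster member against the others:

```python
import socket

# Hypothetical cluster members and the default Neo4j clustering ports;
# replace with your own hosts and the ports from your neo4j.conf.
HOSTS = ["core1.example.com", "core2.example.com", "core3.example.com"]
PORTS = [5000, 6000, 7000]  # discovery, transaction, raft

def check(host, port, timeout=3):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    for port in PORTS:
        status = "open" if check(host, port) else "UNREACHABLE"
        print(f"{host}:{port} -> {status}")
```

If port 6000 shows as unreachable from any member, fix the network/firewall first before looking at the cluster configuration itself.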
Hope this helps you troubleshoot your issue.
I have created a PoC application using Hyperledger Fabric for three organizations.
Most of the content on the internet instructs you to use the cloud for deploying the solution.
https://juarezjunior.medium.com/how-to-deploy-your-first-hyperledger-fabric-components-with-azure-kubernetes-service-aks-760563d9d543
https://medium.com/google-cloud/fabric-on-google-cloud-platform-97525323457c
https://www.youtube.com/watch?v=ubrA3W1JMk0
If the peers, orderers, and CA servers of all organizations are deployed in one cloud, then how is Hyperledger Fabric distributed?
Can this setup be made distributed, on "on-premise" infrastructure?
Is there any source for reference/links to do this sort of setup?
Any suggestions/references will be very helpful.
If the peers, orderers, and CA servers of all organizations are deployed in one cloud, then how is Hyperledger Fabric distributed?
Each service (orderer, peer, etc.) would be run on a different (virtual) machine within the same environment/cloud provider. Those machines could be distributed across various data centres globally, or across different cloud providers, or even on many different organisations' hardware. It makes relatively little difference: so long as they're not all on the same box under the control of one organisation/person, it will be distributed.
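To illustrate the point, here is a sketch of part of a Fabric connection profile, written as a Python dict for brevity; the host names are placeholders and the exact schema depends on your SDK version. What makes the network distributed is simply that each organization's peer resolves to an endpoint on a machine the other organizations do not control:

```python
# Illustrative fragment of a connection profile - host names are placeholders
# and the exact keys depend on the Fabric SDK version you use.
connection_profile = {
    "name": "three-org-network",
    "organizations": {
        "Org1": {"mspid": "Org1MSP", "peers": ["peer0.org1.example.com"]},
        "Org2": {"mspid": "Org2MSP", "peers": ["peer0.org2.example.com"]},
        "Org3": {"mspid": "Org3MSP", "peers": ["peer0.org3.example.com"]},
    },
    "peers": {
        # Each peer lives on a machine owned/operated by its own organization,
        # whether that machine is in a cloud, a data centre, or on premise.
        "peer0.org1.example.com": {"url": "grpcs://peer0.org1.example.com:7051"},
        "peer0.org2.example.com": {"url": "grpcs://peer0.org2.example.com:7051"},
        "peer0.org3.example.com": {"url": "grpcs://peer0.org3.example.com:7051"},
    },
    "orderers": {
        "orderer.example.com": {"url": "grpcs://orderer.example.com:7050"},
    },
}
```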
Can this setup be made distributed, on "on-premise" infrastructure?
Yes - it can be set up to run however you like, although you may run into access issues from firewalls, etc. Perhaps you'd have different departments run their own peers within a single organisation across many offices.
Is there any source for reference/links to do this sort of setup?
I believe most people start with the scripts from the Hyperledger Fabric codebase. There's documentation here https://hyperledger-fabric.readthedocs.io/en/release-1.4/build_network.html about spinning up the "build your first network" script, byfn.sh (it looks like this may have been removed).
https://github.com/hyperledger/fabric-samples has a ci folder. Within it there are some steps to build/run/test the codebase examples. E.g. https://github.com/hyperledger/fabric-samples/blob/main/ci/scripts/run-test-network-basic.sh
calls network.sh which is here: https://github.com/hyperledger/fabric-samples/blob/main/test-network/network.sh
If you really want to understand the necessary steps, you'll need to work through it all. There's also some good documentation on what the various parts of the hyperledger-fabric system do here: https://github.com/hyperledger/fabric/tree/345a08b5cd30ef937a8270d1722462e4cad1caa9/docs/source - you'll need to navigate through the directories to the parts you're interested in and locate the .md files which contain the documentation, e.g.:
Peers - docs/source/peers/peers.md
Orderer - docs/source/orderer/ordering_service.md
Smart Contracts (aka Chaincode) - docs/source/smartcontract/smartcontract.md
Channels - docs/source/create_channel/channel_policies.md
You may also find some Dockerfiles in various repositories with different setups.
In a distributed setup using consistent hashing, e.g. a distributed cache implemented with consistent hashing, how can we manage the multiple nodes? By managing I mean monitoring health and adjusting load when one of the servers dies or a new one is added, and similar tasks.
Here we don't have any master, as all peer nodes are the same, so a gossip protocol is one way. But I want to understand: can we use ZooKeeper to manage the nodes here? Or can ZooKeeper only be used where we need master-slave coordination?
I think in your case ZooKeeper can be used for leader election and for assigning the right token range to a node when a new node joins. In a very similar case, an earlier version of Facebook's Cassandra used ZooKeeper for the same reason; the community later got rid of it. Read the Replication and Bootstrapping section of this.
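To make the node-management part concrete, here is a minimal Python sketch, assuming the kazoo ZooKeeper client and a hypothetical /cache/nodes registry path: each cache server registers itself as an ephemeral znode (which disappears automatically if the server dies), and clients rebuild their consistent-hash ring whenever the membership changes:

```python
import bisect
import hashlib

from kazoo.client import KazooClient

ZK_HOSTS = "zk1:2181,zk2:2181,zk3:2181"  # placeholder ensemble
NODES_PATH = "/cache/nodes"              # hypothetical registry path
VNODES = 100                             # virtual nodes per server

def _hash(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

class HashRing:
    """Consistent-hash ring rebuilt from the current ZooKeeper membership."""

    def __init__(self, members):
        self._ring = sorted(
            (_hash(f"{m}#{i}"), m) for m in members for i in range(VNODES)
        )
        self._keys = [h for h, _ in self._ring]

    def node_for(self, key):
        if not self._ring:
            raise RuntimeError("no cache nodes registered")
        idx = bisect.bisect(self._keys, _hash(key)) % len(self._ring)
        return self._ring[idx][1]

zk = KazooClient(hosts=ZK_HOSTS)
zk.start()
zk.ensure_path(NODES_PATH)

# On a cache server: register an ephemeral znode that disappears if we die.
zk.create(f"{NODES_PATH}/cache-1:11211", ephemeral=True)

# On a client: rebuild the ring whenever membership changes.
ring = HashRing(zk.get_children(NODES_PATH))

@zk.ChildrenWatch(NODES_PATH)
def on_membership_change(children):
    global ring
    ring = HashRing(children)

print(ring.node_for("user:42"))  # which cache node owns this key
```

This only uses ZooKeeper for membership (and, if you want, leader election); there is no master-slave data path, so it fits a masterless cache just as well as gossip does.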
Could anyone kindly help me with this:
I understand that there is no "compiling" and "deploying" of contracts in Hyperledger Sawtooth as such. I tried working with the tuna-chain supply chain, and there it seemed like one command, docker-compose up, did it all. But how exactly does that work?
I mean, say I was making my own network on Sawtooth and had written all the business logic (i.e. the transaction processors): what Docker files do I need to make, and how do I make them?
The tuna supply chain code can be found here: https://github.com/hyperledger/education/tree/master/LFS171x/sawtooth-material
THANKS!
The analogue of contracts in Sawtooth is called a Transaction Processor (TP). Sawtooth can be deployed to a native machine as executables or interpreted code, or deployed as Docker images. Docker Compose files allow for bringing up a network of Docker containers that may or may not interact with each other.
There are a number of language development kits for Sawtooth. For information on developing Sawtooth TPs, you should read through and understand the architecture, components, API, etc.: https://sawtooth.hyperledger.org/docs/core/releases/latest/
There is also the GitHub repo, which is chock full of example TPs: https://github.com/hyperledger/sawtooth-core
As Frank rightly said, Sawtooth provides interfaces to write your smart contract in any language. You can define how transactions are structured and how they will be processed to change the state of the blockchain.
These smart contracts are executed by transaction processors, which correspond to transaction families. All the validation is handled by validators; once validation is done, the validator forwards the transactions for a particular transaction family to its transaction processor.
All serialization is done with Protobuf to save space and increase speed.
There is a great example which will clarify these concepts:
https://github.com/askmish/sawtooth-simplewallet
In order to understand the above repo, you need to have a clear understanding of transaction families, transaction processors, and the apply method.
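For orientation, here is a minimal Python sketch of a transaction processor built with the sawtooth_sdk package; the family name, namespace and payload format are illustrative placeholders rather than the simplewallet code itself:

```python
import hashlib

from sawtooth_sdk.processor.core import TransactionProcessor
from sawtooth_sdk.processor.exceptions import InvalidTransaction
from sawtooth_sdk.processor.handler import TransactionHandler

FAMILY_NAME = "demo-wallet"  # placeholder transaction family
NAMESPACE = hashlib.sha512(FAMILY_NAME.encode()).hexdigest()[:6]

class DemoWalletHandler(TransactionHandler):
    """Applies payloads of the form b'<account>,<amount>' to global state."""

    @property
    def family_name(self):
        return FAMILY_NAME

    @property
    def family_versions(self):
        return ["1.0"]

    @property
    def namespaces(self):
        return [NAMESPACE]

    def apply(self, transaction, context):
        try:
            account, amount = transaction.payload.decode().split(",")
        except ValueError:
            raise InvalidTransaction("payload must be '<account>,<amount>'")
        # Deterministic 70-character state address under the family namespace.
        address = NAMESPACE + hashlib.sha512(account.encode()).hexdigest()[:64]
        context.set_state({address: amount.encode()})

if __name__ == "__main__":
    # Connect to the validator's component endpoint and serve the handler.
    processor = TransactionProcessor(url="tcp://localhost:4004")
    processor.add_handler(DemoWalletHandler())
    processor.start()
```

The validator's component endpoint (tcp://localhost:4004 in most of the examples) is what the TP connects to; in examples like the tuna chain, docker-compose up simply starts the validator, the REST API and the transaction processor(s) as separate containers wired to that endpoint.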
The post is old; however, if you have discovered a better solution, please share it. Otherwise, here is what I discovered.
You need to run the transaction processor and connect it to the validator in order to submit your transactions.
In reality it will be rare to have all the validators on the same machine, which is the case in most of the examples in the Hyperledger Sawtooth documentation for Docker, Ubuntu and AWS. In a real-life scenario, companies on a business network will be running their own systems within their own networks, with a couple of validators, settings-tp, the REST API and custom transaction processors. These validators connect to other validators on the business network, which is why only the validator's port is advised to be open to the world.
I wish there were an easy way to register a custom transaction processor on a running network, possibly something like a CLI similar to Azure's or AWS's: a native Sawtooth CLI that could connect to any Sawtooth validator, upload a transaction processor using a certificate, and make that transaction family available for all future transactions.
Our company is developing an application which runs in 3 separate Kubernetes clusters in different versions (production, staging, testing).
We need to monitor our clusters and the applications over time (metrics and logs). We also need to run a mail server.
So basically we have 3 different environments with different versions of our application, and we have some shared services that just need to run and that we do not care much about:
Monitoring: We need to install InfluxDB and Grafana. In every cluster there's a pre-installed Heapster that needs to send data to our tools.
Logging: We didn't decide yet.
Mailserver (https://github.com/tomav/docker-mailserver)
Independent services: Sentry, GitLab
I am not sure where to run these external shared services. I found these options:
1. Inside each cluster
We need to install the tools 3 times for the 3 environments.
Cons:
We don't have one central point to analyze our systems.
If the whole cluster is down, we cannot look at anything.
Installing the same tools multiple times does not feel right.
2. Create an additional cluster
We install the shared tools in an additional kubernetes-cluster.
Cons:
Cost for an additional cluster
It's probably harder to send ongoing data to an external cluster (networking, security, firewalls, etc.).
3. Use an additional root server
We run Docker containers on an old-school root server.
Cons:
It feels contradictory to use a root server instead of cutting-edge Kubernetes.
Single point of failure.
We need to manage the Docker containers manually (or attach the machine to Rancher).
I tried to google the problem but couldn't find anything on the topic. Can anyone give me a hint or some links on this topic?
Or is it just not a relevant problem that a cluster might go down?
To me, the second option sounds less evil, but I cannot yet estimate whether it's hard to transfer data from one cluster to another.
The important questions are:
Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
Is it (easily) possible to send metrics and logs from one Kubernetes cluster to another (we are running Kubernetes in OpenTelekomCloud, which is basically OpenStack)?
Thanks for your hints,
Marius
That is a very complex and philosophical topic, but I will give you my view on it and some facts to support it.
I think the best way is the second one, creating an additional cluster, and here's why:
You need a point which should be accessible from any of your environments. With a separate cluster, you can set the same firewall rules, routes, etc. in all your environments and it doesn't affect your current workload.
Yes, you need to pay a bit more. However, you need resources to run your shared applications anyway, and the overhead of a Kubernetes infrastructure is not high in comparison with the applications themselves.
With a separate cluster, you can set up a real HA solution, which you might not need for the staging and development clusters, so you will not pay for that multiple times.
Technically, it is also fine. You can use Heapster to collect data from multiple clusters, and almost any logging solution can also work with multiple clusters. All other applications can just be run on the separate cluster, and that's all you need to do with them.
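To show that shipping metrics between clusters is just HTTP to a reachable endpoint, here is a minimal Python sketch using the influxdb client library; the host name, database and measurement are placeholders for an InfluxDB exposed by the shared cluster (e.g. via a LoadBalancer service or ingress):

```python
from influxdb import InfluxDBClient

# Placeholder endpoint of the InfluxDB running in the shared cluster;
# credentials and database name are illustrative.
client = InfluxDBClient(host="influxdb.shared.example.com", port=8086,
                        username="metrics", password="secret", ssl=True)
client.create_database("k8s_metrics")
client.switch_database("k8s_metrics")

# One data point tagged with the source cluster, as any agent in the
# workload clusters could push it over the network boundary.
point = {
    "measurement": "node_cpu_usage",
    "tags": {"cluster": "staging", "node": "node-1"},
    "fields": {"millicores": 230},
}
client.write_points([point])
```

Heapster's InfluxDB sink can be pointed at the same endpoint, so each workload cluster pushes into the shared cluster without anything having to flow the other way.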
Now, about your questions:
Is it a problem to have monitoring-data in a cluster because one cannot see the monitoring-data if the cluster is offline?
No, it is not a problem with a separate cluster.
Is it common practice to have an additional cluster for shared services that should not have an impact on other parts of the application?
I think so, yes. At least I have done it several times, and I know some other projects with a similar architecture.
Is it (easily) possible to send metrics and logs from one Kubernetes cluster to another (we are running Kubernetes in OpenTelekomCloud, which is basically OpenStack)?
Yes, nothing complex there. Usually, it does not depend on the platform.