I am working with the balance-transfer example. I set it up on a single machine and now want to run it across two machines. I am following this link: https://github.com/hyperledger/fabric-samples/tree/release/balance-transfer Can anyone tell me what steps I need to follow to implement this example on multiple machines?
I was able to host a Hyperledger Fabric network using Docker Swarm mode, which provides a network across multiple hosts/machines for communication between the Fabric network components.
This post explains the deployment process: https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
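The swarm setup itself comes down to a few commands. A minimal sketch, assuming two hosts on the same LAN; the IP address, token placeholder, and the network name `fabric-net` are illustrative, not from the post:

```shell
# On host 1 (the swarm manager): advertise an IP reachable by the other host
docker swarm init --advertise-addr 192.168.1.10

# The init command prints a join command with a token; run it on host 2, e.g.:
docker swarm join --token <WORKER-TOKEN> 192.168.1.10:2377

# Back on the manager: create an attachable overlay network so standalone
# Fabric containers (peers, orderers, CAs) can join it with --network
docker network create --driver overlay --attachable fabric-net
```

Containers started on either host with `--network fabric-net` can then resolve each other by container name, which is what lets the Fabric components find one another across machines.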
I have been struggling to build a Fabric 2.0 network with organizations spread across multiple hosts. The official documentation explains how to deploy two organizations (org1 and org2) using Docker, and how to use the configtxlator tool to add new orgs and peers.
The issue is that in all the documentation examples, the organizations run on the same Docker-engine host, which misses the whole point of distributed systems. Recently I found this blog post that echoes everything I am struggling with:
https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
In this post, the author recommends using docker-swarm to create an overlay network that creates a distributed network among multiple Docker daemon hosts.
However, this post is from 2018, and I am wondering if this is still the best solution available, or if Kubernetes would nowadays be the go-to choice for creating this overlay network?
PS: the network I am building is for academic and research purposes only, related to my PhD studies.
Yes, you can use Docker Swarm to deploy the network. Docker Swarm is quite easy compared to Kubernetes, and since you mentioned that this is for academic and research purposes only, Docker Swarm is fine.
Or, if you want to deploy production-grade Hyperledger Fabric, you can use the open-source Blockchain Automation Framework (BAF), an automation framework for rapidly and consistently deploying production-ready DLT platforms to cloud infrastructure.
I want to build a small blockchain network between 2 laptops (as an initial step), using Hyperledger Fabric and Hyperledger Composer on each laptop.
Can I use Docker Swarm to connect these 2 laptops and then use Hyperledger Fabric and Hyperledger Composer for my blockchain network?
If the answer to question 1 is yes, can I do this without any cloud account (AWS, etc.) and without paying money?
If the answer to questions 1 and 2 is no, how can I achieve this?
Yes, you can use Docker Swarm to connect the 2 laptops and use the Hyperledger frameworks on them. See: Setup hyperledger fabric on multiple hosts using docker swarm
Since you are doing this on your laptop locally, you don't need to pay anything to anyone.
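Once both laptops are on the same LAN and joined to the swarm, you can sanity-check the setup before deploying Fabric. A rough sketch; the network and container names are placeholders, and the second probe container is assumed to be running on the other laptop:

```shell
# On the manager laptop: both nodes should appear with STATUS "Ready"
docker node ls

# Connectivity check across the overlay: start a throwaway container
# attached to the overlay network on this laptop...
docker run -d --name probe1 --network fabric-net alpine sleep 3600

# ...and, assuming a container named "probe2" was started the same way
# on the other laptop, ping it by name through the overlay
docker exec probe1 ping -c 3 probe2
```

If the ping succeeds, the overlay network is working and the Fabric containers should be able to reach each other the same way.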
I have a single business network to be deployed on multiple machines, and it should be able to interact across the different machines.
I am interested in deploying the network and accessing it from another machine. I am trying to do this by changing the connection profile (with the IP and ports) of my desired host machine, but I am facing this issue:
Unable to find the response from the peers.
Is there any standard method for the same?
You can review these articles:
https://discourse.skcript.com/t/running-hyperledger-composer-with-multiple-organization-on-different-host-machines/635
and
Hyperledger fabric deployment (real network)
How should the Composer setup be done when deploying a Hyperledger Fabric network with multiple orgs on multiple physical machines?
You'll need to ensure that your dockerized nodes (on the different physical machines) can 'see' each other on the IP network (standard networking stuff) and that the connection profiles are configured accordingly. See also the multi-org tutorial: replace the localhosts with real hosts and consider which crypto material (certs/keys) is required 'where'. At the least it gives you an indication of the sequence from a Composer standpoint: https://hyperledger.github.io/composer/tutorials/deploy-to-fabric-multi-org.html (if using Composer v0.16.x). If using Composer v0.17.x (and Fabric v1.1.x), you need these docs instead: https://hyperledger.github.io/composer/next/tutorials/deploy-to-fabric-multi-org.html (the connection profiles are in a different format).
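To illustrate the "replace localhosts with real hosts" step, here is a minimal, hypothetical connection-profile fragment pointing at a peer and an orderer on other machines. The hostnames, IPs, and ports are placeholders for your own network, and a real profile contains many more fields:

```shell
# Write an illustrative connection-profile fragment; 192.168.1.10 and
# 192.168.1.20 stand in for the machines actually hosting the components
cat > connection-fragment.json <<'EOF'
{
  "orderers": {
    "orderer.example.com": {
      "url": "grpc://192.168.1.10:7050"
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "url": "grpc://192.168.1.20:7051"
    }
  }
}
EOF

# Sanity-check that the fragment is valid JSON before using it
python3 -m json.tool connection-fragment.json > /dev/null && echo "valid JSON"
```

The key point is simply that every `url` must use an address the *other* machines can reach, not `localhost`.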
How to setup hyperledger fabric v1 network on physical peers instead of docker peers?
You can take a look at https://github.com/yacovm/fabricDeployment
It deploys automatically to linux virtual machines / physical hosts:
A few peers, according to your configuration
A solo orderer
Everything with TLS
Channel creation, plus installing and invoking the example02 chaincode for sanity testing
The Docker containers provide a mechanism that takes care of a lot of configuration behind the curtain, and that is the preferred way. If you choose to run Fabric directly on a server without Docker, one way would be to build the binaries yourself via the make command and then study (1) the shell script in Getting Started and (2) the docker-compose file (in http://hyperledger-fabric.readthedocs.io/en/latest/build_network.html) to deconstruct the steps and configuration, but this will be a pretty involved process.
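For orientation, the native (no-Docker) route looks roughly like this. A sketch only: the paths, addresses, and MSP directory are placeholders, the output directory for the binaries varies between Fabric releases, and the environment variables shown are the common ones from core.yaml, not an exhaustive list:

```shell
# Build the native binaries from a checkout of the fabric source tree
# (a Go toolchain is required); output lands under build/bin
# (or .build/bin on older v1.x releases)
cd $GOPATH/src/github.com/hyperledger/fabric
make peer orderer configtxgen cryptogen

# Point the peer at a directory containing core.yaml, then start it
# natively; the MSP path and addresses below are placeholders
export FABRIC_CFG_PATH=/etc/hyperledger/fabric
export CORE_PEER_ID=peer0.org1.example.com
export CORE_PEER_ADDRESS=192.168.1.20:7051
export CORE_PEER_LOCALMSPID=Org1MSP
export CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer0
./build/bin/peer node start
```

Each setting above replaces something the docker-compose file would otherwise supply as an `environment:` entry, which is why reading the compose file is the best way to deconstruct what a native deployment needs.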
From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they are different? Is Docker Swarm analogous to the POSIX API in the Docker world while Kubernetes/Mesosphere are different implementations? Or are they different layers?
Disclosure: I'm a lead engineer on Kubernetes
Kubernetes is a cluster orchestration system inspired by the container orchestration system that runs at Google, built by many of the same engineers who built that system. It was designed from the ground up to be an environment for building distributed applications from containers. It includes replication and service discovery as core primitives, whereas such things are added via frameworks in Mesos. The primary goal of Kubernetes is to be a system for building, running and managing distributed systems.
Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781
Swarm is a very simple add-on to Docker. It currently does not provide all the features of Kubernetes. It is hard to predict how the ecosystem of these tools will play out; it's possible that Kubernetes will make use of Swarm.