Deploying Hyperledger Fabric organizations in multiple hosts - docker

I have been struggling to build a Fabric 2.0 network with organizations spread across multiple hosts. The official documentation explains how to deploy two organizations (org1 and org2) using Docker, and how to use the configtxlator tool to add new orgs and peers.
The issue is that in all the documentation examples the organizations run on the same Docker Engine host, which misses the whole point of distributed systems. Recently I found this blog post that addresses exactly what I am struggling with:
https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
In the post, the author recommends using Docker Swarm to create an overlay network that spans multiple Docker daemon hosts.
However, the post is from 2018, and I am wondering whether this is still the best available solution, or whether Kubernetes would nowadays be the go-to choice for creating this overlay network.
PS: the network I am building is for academic and research purposes only, related to my PhD studies.

Yes, you can use Docker Swarm to deploy the network. Docker Swarm is quite easy compared to Kubernetes, and since you mentioned that the network is for academic and research purposes only, it is a fine choice.
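If you go that route, a minimal sketch of the swarm setup looks roughly like this (the IP address, join token, and network name are placeholders):

# on the first host (becomes a swarm manager)
docker swarm init --advertise-addr <HOST1-IP>

# "docker swarm init" prints a join command with a token; run it on every other host
docker swarm join --token <TOKEN> <HOST1-IP>:2377

# on the manager: create an attachable overlay network that spans all swarm hosts
docker network create --driver overlay --attachable fabric_net

# start each Fabric container (peers, orderers, CAs) attached to fabric_net,
# e.g. by pointing the compose files' network at it as an external network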

Or, if you want to deploy production-grade Hyperledger Fabric, you can use the open-source tool BAF (Blockchain Automation Framework), which is an automation framework for rapidly and consistently deploying production-ready DLT platforms to cloud infrastructure.

Related

Kubernetes Architecture / Design?

I’m trying to figure out and learn the patterns and best practices for moving a bunch of Docker containers I have for an application into Kubernetes: things like pod design, services, deployments, etc. For example, I could create a single Pod with both the web and application containers in it, but that would not be a good design.
Searching for things like architecture and design with Kubernetes just seems to yield topics on the product's architecture or how to set up a Kubernetes cluster, and not the higher-level design of the pods, services, etc.
What does the community generally call this application-layer design in the Kubernetes world, and can anyone point me to a 101 on this topic, please?
Thanks.
Kubernetes is a complex system, and learning it step by step is the best way to gain expertise. I recommend starting with the Kubernetes documentation, where you can learn about each of the components.
Another good option is to review the 70 best K8s tutorials, which are categorized in many ways.
Designing and running applications with scalability, portability, and robustness in mind can be challenging. Here are great resources about it:
Architecting applications for Kubernetes
Using Kubernetes in production, lessons learned
Kubernetes Design Principles from Google
Well, there's no Kubernetes-specific approach but rather a Cloud Native one: I would suggest Designing Distributed Systems: Patterns and Paradigms by Brendan Burns.
It's really good because it provides several scenarios along with the pattern approaches and related code.
Most of the examples are obviously based on Kubernetes, but I think the implementation is not so important: what matters is understanding why and when to use an Ambassador pattern or FaaS according to the application's needs.
The answer to this can be quite complex and that's why it is important that software/platform architects understand K8s well.
Mostly you will find answers that tell you to "put each application component in its own pod". That is basically correct, since the main reasons for K8s are high availability, fault tolerance of the infrastructure, and the like: if you put each component in its own pod and run it with a replica count higher than 2, it will achieve better availability.
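For illustration, a minimal sketch of that pattern (the names and image are made up): one component per Deployment, with more than two replicas so the scheduler can spread the pods across nodes.

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # more than 2 replicas for availability
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # placeholder image for the web component
        ports:
        - containerPort: 80
EOF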
But you also need to know why you want to go to K8s. At the moment it is a trending topic, but if you don't want to operate a cluster and don't actually need HA, why not run on something like AWS ECS, DigitalOcean droplets, and the like?
The best answers you will currently find are all about how to design and cut microservices, as each microservice can be represented by a pod. A good starting point is Red Hat's Principles of Container-Based Application Design, or InfoQ.
A Kubernetes cluster is composed of:
A master server, called the control plane
Nodes: the servers that execute the applications (containers/pods)
By design, according to the Kubernetes documentation, a production Kubernetes cluster must have at least one master server and two nodes.
Here is a summary of the components of a kubernetes cluster:
Master = control plane:
kube-apiserver: exposes the Kubernetes API
etcd: key-value store for the cluster
kube-scheduler: distributes the pods onto the nodes
kube-controller-manager: runs the controllers for nodes, pods, and other cluster components.
Nodes = Servers that run applications
kubelet: runs on each node; it makes sure that the containers of a pod are running.
kube-proxy: allows the pods to communicate within the cluster and with the outside
Container runtime: the software that runs the containers/pods
Complementary modules = addons
DNS: DNS server that serves DNS records for Kubernetes services.
Webui: Graphical dashboard for the cluster
Container Resource Monitoring: Records metrics on containers in a central DB, provides UI to browse them
Cluster-level Logging: Records container logs in a central log with a search / browse interface.
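If you already have a cluster and a configured kubectl, you can see most of these components for yourself (on a kubeadm-style cluster the control plane runs as pods in kube-system; on managed clusters it may be hidden):

kubectl get nodes -o wide          # control plane and worker nodes
kubectl get pods -n kube-system    # kube-apiserver, etcd, kube-scheduler, kube-controller-manager, kube-proxy, DNS, ...
kubectl cluster-info               # addresses of the control plane and core services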

Does it make sense to manage Docker containers on a single host (or a few hosts) with Kubernetes?

I'm using docker on a bare metal server. I'm pretty happy with docker-compose to configure and setup applications.
Still, some features are missing, like configuration management and monitoring. Maybe there are other solutions to these issues, but I'm a bit overwhelmed by the feature set of Kubernetes and can't judge whether it would help me here.
I'm also open for recommendations to solve the requirements separately:
Configuration / Secret management
Monitoring of my Docker-hosted applications (e.g. having some kind of dashboard)
Remote container control (SSH is okay with only one server)
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in the future; I'm already thinking about networking/service discovery issues with a pure docker-compose setup
I'm sure Kubernetes covers some of these features, but I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare-metal servers).
I hope the question's scope is not too broad; otherwise please use the comment section and help me narrow it down.
Thanks.
I think Kubernetes absolutely matches your requirements and is what you need.
Let's go through them one by one.
I have the feeling that it's too focused on cloud platforms where machines are created on the fly (since I only have at most a few bare-metal servers)
No, it is not focused on clouds. Kubernetes can be installed on almost any bare-metal platform (including ARM), and there are many tools and instructions that can help you do it. Also, it is easy to deploy on your local PC using Minikube, which will prepare a local cluster for you inside a VM or directly in your OS (Linux only).
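For example (assuming Minikube and a VM or container driver are installed):

minikube start        # creates a local single-node cluster
kubectl get nodes     # Minikube configures the kubectl context for you
minikube dashboard    # opens the web dashboard mentioned below
minikube delete       # throws the cluster away when you are done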
Configuration / Secret management
Kubernetes has powerful configuration and secret management based on special objects (ConfigMaps and Secrets) which can be attached to your containers. You can read more about configuration management in this article.
Moreover, tools like Helm can provide more automation and a range of preconfigured applications, which you can install with a single command. You can also prepare your own charts for it.
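A small sketch of both (all names and values here are placeholders, and the chart is just an example):

# configuration and secrets as first-class cluster objects
kubectl create configmap app-config --from-literal=LOG_LEVEL=debug
kubectl create secret generic app-secret --from-literal=DB_PASSWORD=changeme
# both can then be mounted as files or injected as environment variables into your pods

# a preconfigured application installed with a single Helm command
helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-db bitnami/postgresql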
Monitoring of my docker hostes applications (e.g. having some kind of dashboard)
Kubernetes has its own dashboard where you can get many kinds of information: current application status, configuration, statistics, and more. Kubernetes also integrates well with Heapster, which can be used with Grafana for powerful visualization of almost anything.
Remot container control (SSH is okay with only one Server)
The Kubernetes control tool kubectl can fetch logs and connect to containers in the cluster without any problems. For example, to connect to a container in the pod "myapp" you just call kubectl exec -it myapp sh, and you get an sh session inside the container. You can also reach any application inside your cluster using the kubectl proxy or kubectl port-forward commands, which forward the port you need to your PC.
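For instance (the pod name "myapp" is just a placeholder, as above):

kubectl logs myapp                        # container logs, no SSH needed
kubectl exec -it myapp -- sh              # interactive shell inside the container
kubectl port-forward pod/myapp 8080:80    # reach the app on localhost:8080
kubectl proxy                             # expose the cluster API on localhost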
Being ready to scale my environment (based on multiple different Dockerized applications) to more than one server in future - already thinking about networking/service discovery issues with a pure docker-compose setup
Kubernetes can be scaled up to thousands of nodes, or it can have only one; it is your choice. Independent of the cluster size, you get production-grade networking, service discovery, and load balancing.
So, do not be afraid; just try it locally with Minikube. It will make many operational tasks simpler, not more complex.
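A rough illustration (the deployment name "web" is hypothetical):

kubectl scale deployment web --replicas=5   # scale a component out or in
kubectl expose deployment web --port=80     # stable virtual IP and DNS name (web.<namespace>.svc.cluster.local)
kubectl get endpoints web                   # the Service load-balances across all ready pods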

Hyperledger Fabric v1.0 setup on multiple machines

I am working with the balance-transfer example. I have set it up on a single machine and now want to run it on two machines. I am following this link: https://github.com/hyperledger/fabric-samples/tree/release/balance-transfer. Can anyone tell me the steps I have to follow to implement that example on multiple machines?
I was able to host a Hyperledger Fabric network using Docker Swarm mode. Swarm mode provides a network across multiple hosts/machines for the communication of the Fabric network components.
This post explains the deployment process: https://medium.com/@wahabjawed/hyperledger-fabric-on-multiple-hosts-a33b08ef24f
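As a rough sketch of the key idea (the network name and images are placeholders; the real setup attaches the Fabric peers, orderers, and CAs instead): once the swarm and an attachable overlay network exist, containers started on different machines and attached to that network can reach each other by name.

# once, on a swarm manager (after "docker swarm init" / "docker swarm join" on each machine)
docker network create --driver overlay --attachable fabric_net

# machine 1: a container standing in for a Fabric component
docker run -d --name peer0 --network fabric_net nginx:alpine

# machine 2: any container attached to the same overlay resolves it by name across hosts
docker run --rm --network fabric_net busybox ping -c 1 peer0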

Azure Service Fabric Cluster on a single machine

As an ISV we have an enterprise solution that extends our existing software for our big customers; they must install and configure an Azure Service Fabric cluster on-premises or even in Azure. Our software works mostly with stateless services and only a couple of stateful ones. It is also multi-tenant, so we can run the software ourselves in a cloud environment.
But we also have a third way of using it: we need to ship our software to non-enterprise customers that have our other software on-premises. This is an issue, since Service Fabric requires multiple machines that those small customers do not have and certainly do not want to have. Sometimes there is a single user of the software, running it all on a single laptop.
I see several solutions / options:
1. Rewrite the software.
Maintaining the same code base somehow, but hosting it as a Windows service or something similar, e.g. with Topshelf, which makes it relatively easy to host OWIN/Katana-based programs.
Pros
No Service Fabric cluster
Easier installation, for example a windows service
Cons
No stateful services
Multiple visual studio solutions
Developers have to think about way of hosting and Service Fabric being available or not
No reliability and scalability
2. Host on a Single node cluster
Install a single-node cluster on one machine as the production environment, knowing that reliability and scalability are lost; but that is also the case with option 1.
Pros
One visual studio solution
Only one codebase, requiring no modifications to the code, which is easy for developers
Cons
Not supported by Azure Service Fabric for production
No reliability and scalability
3. Ship a cluster inside a single docker container
I don't know much about Docker, but perhaps it is easy to ship a pre-configured Service Fabric cluster?
What do you guys (and girls) think? I would prefer option two or three, but some of our developers even think option 1 is the better one, which I doubt.
Some related links I found:
Option 2: Azure Service Fabric Single Virtual Machine
Option 3: https://github.com/Azure/service-fabric-issues/issues/409
You could investigate using a single server to run 3 to 5 virtual machines, and run your cluster on those. You won't have ultimate high availability, but you can still enjoy many SF features (stateful services, rolling upgrades, replication). No need to rewrite any software.

What is the difference between Docker Swarm and Kubernetes/Mesosphere?

From what I understand, Kubernetes/Mesosphere is a cluster manager and Docker Swarm is an orchestration tool. I am trying to understand how they are different. Is Docker Swarm analogous to the POSIX API in the Docker world, while Kubernetes/Mesosphere are different implementations? Or are they different layers?
Disclosure: I'm a lead engineer on Kubernetes
Kubernetes is a cluster orchestration system inspired by the container orchestration that runs at Google, built by many of the same engineers who built that system. It was designed from the ground up to be an environment for building distributed applications from containers. It includes replication and service discovery as core primitives, whereas such things are added via frameworks in Mesos. The primary goal of Kubernetes is to be a system for building, running, and managing distributed systems.
Swarm is an effort by Docker to extend the existing Docker API to make a cluster of machines look like a single Docker API. Fundamentally, our experience at Google and elsewhere indicates that the node API is insufficient for a cluster API. You can see a bunch of discussion on this here: https://github.com/docker/docker/pull/8859 and here: https://github.com/docker/docker/issues/8781
Swarm is a very simple add-on to Docker. It currently does not provide all the features of Kubernetes. It is hard to predict how the ecosystem of these tools will play out; it's possible that Kubernetes will make use of Swarm.
