What are the hardware requirements to run a Hyperledger Fabric peer?

What are the minimum hardware requirements to run a Hyperledger Fabric v1 peer?

It can run on a Raspberry Pi, so technically it does not need much if you aren't planning on doing much with it. However, to achieve the performance results you might expect, you'll need to strike the right balance of network, processor and disk speeds. Additionally, as the peer is essentially managing a database, you'll need to take into consideration the data storage needs over time.
You'll also need to consider such factors as the number of chaincode smart contracts, the number of expected channels and the size of the network. In other words, the hardware requirements will really depend on many factors other than simply what the peer (or orderer) process requires to minimally function.
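As a rough illustration of the storage point, a back-of-envelope estimate like the sketch below can help with capacity planning; every number in it is a hypothetical placeholder, not a Fabric recommendation.

    # Back-of-envelope ledger growth estimate for capacity planning.
    # All figures are hypothetical placeholders -- substitute your own
    # expected transaction rate, transaction size and channel count.
    TX_PER_SECOND = 50          # sustained transaction rate per channel
    AVG_TX_BYTES = 4 * 1024     # average endorsed transaction size (4 KiB)
    CHANNELS = 5                # each channel keeps its own ledger on the peer
    SECONDS_PER_DAY = 86_400

    daily_bytes = TX_PER_SECOND * AVG_TX_BYTES * CHANNELS * SECONDS_PER_DAY
    print(f"~{daily_bytes / 2**30:.1f} GiB of ledger growth per day")
    print(f"~{daily_bytes * 365 / 2**40:.1f} TiB per year")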
If you are merely interested in running a development/test cluster of 4 peer nodes, an orderer and a CA, keep in mind that this can all be easily handled on a MacBook Pro with 16 GB of memory, and with slightly less ease at 8 GB. You can use that as a yardstick for cloud instances to run a development/test cluster.
Finally, there's a LOT of crypto processing, so you will want to consider hardware crypto acceleration to yield the optimal performance.
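On the crypto point, a quick microbenchmark of raw ECDSA P-256 signing (Fabric's default signature scheme) gives a rough yardstick for how much signing a given box can sustain. The sketch below uses the Python cryptography package rather than Fabric's own Go crypto, so treat the numbers as indicative only.

    import time
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import ec

    # Rough ECDSA P-256 signing throughput on this machine.
    key = ec.generate_private_key(ec.SECP256R1())
    payload = b"x" * 1024          # stand-in transaction payload
    N = 2000

    start = time.perf_counter()
    for _ in range(N):
        key.sign(payload, ec.ECDSA(hashes.SHA256()))
    elapsed = time.perf_counter() - start
    print(f"~{N / elapsed:.0f} ECDSA signatures/second")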

Related

Spark in standalone mode on a single computer: is it worth splitting it into a master and workers through Docker containers (or another way)?

I currently own only one computer, and I won't have another.
I run Spark on its CPU cores: master=local[5], using it directly. I set spark-core and spark-sql as dependencies, do almost no other configuration, and my programs start immediately. It's comfortable, of course.
But should I attempt to create an architecture with a master and some workers by means of Docker containers or minikube (Kubernetes) on my computer?
Will solution #2, with all the settings it requires, reward me with better performance, because Spark is truly designed to work that way, even on a single computer,
or will I lose some time, because the mode I'm currently running in, without network usage and without the need for data locality, will always give me better performance, and solution #1 will always be the best on a single computer?
My hypothesis is that #1 is fine. But I have no real measurement for that, and no source of comparison. Who has experienced both ways of doing things on a single computer?
It really depends on your goals: if you will always run your Spark code on a single node with the local master, then just use it. But if you intend to run your resulting code in distributed mode on multiple machines, then emulating a cluster with Docker could be useful, as you'll get your code running in a truly distributed manner, and you'll be able to find problems that aren't always found when you run your code with the local master.
Instead of direct Docker usage (which could be tricky to set up, although it's still possible), maybe you can consider using Spark on Kubernetes, for example via minikube; there are plenty of articles on this topic.
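For what it's worth, the application code itself barely changes between the two setups; mostly the master URL and a few submit-time settings differ. A minimal PySpark sketch, where the Kubernetes master URL, container image and executor count are placeholders you'd adjust for your own minikube setup:

    from pyspark.sql import SparkSession

    # Option 1: plain local mode, 5 cores, no cluster machinery at all.
    spark = (SparkSession.builder
             .appName("demo")
             .master("local[5]")
             .getOrCreate())

    # Option 2: the same job against a minikube-hosted cluster.
    # Illustrative only; adjust the API server URL and image name.
    # spark = (SparkSession.builder
    #          .appName("demo")
    #          .master("k8s://https://192.168.49.2:8443")
    #          .config("spark.kubernetes.container.image", "my-spark:3.5.0")
    #          .config("spark.executor.instances", "4")
    #          .getOrCreate())

    df = spark.range(1_000_000).selectExpr("sum(id) as total")
    df.show()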
Having done testing on this with executor sizes, the cutover point at which it makes sense to use multiple executors is # CPUs > 32. The AWS EMR Spark runtime defaults to at least 4 CPUs per executor, and Databricks always uses fat executors, which means > 32 CPUs on the 8xl instances. Your greatest limitation tends to be the JVM's garbage collection, which caps the size of the heap. Local mode has a couple of performance advantages compared to cluster mode:
Whole-stage code generation has to be run on both the driver and every single executor. For short queries this can add several hundred milliseconds per stage.
Driver <-> executor communication has latency.
The driver and executors share memory, which reduces the chance of OOM and reduces the amount of spilling to disk.
People end up choosing to go with multiple executors/instances not because it would be faster than a single instance, but because it is the only way to scale up in terms of data volume and parallelization (and also for failure recovery).
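As a concrete illustration of the fat-executor versus many-small-executors trade-off, here is roughly how the two layouts would be expressed as Spark configuration (the core and memory numbers are examples only, not recommendations; you would call .getOrCreate() against a real cluster master):

    from pyspark.sql import SparkSession

    # Layout A: a few "fat" executors -- fewer JVMs, bigger heaps, more GC pressure.
    fat = (SparkSession.builder
           .appName("fat-executors")
           .config("spark.executor.instances", "4")
           .config("spark.executor.cores", "16")
           .config("spark.executor.memory", "48g"))

    # Layout B: many small executors -- more JVMs, smaller heaps, more
    # driver<->executor traffic and per-executor codegen overhead.
    small = (SparkSession.builder
             .appName("small-executors")
             .config("spark.executor.instances", "16")
             .config("spark.executor.cores", "4")
             .config("spark.executor.memory", "12g"))

In local mode none of this applies: the driver and executor run in a single JVM and share one heap.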
If you're feeling ambitious, there's a performance benchmark called TPC-DS that runs a set of data-processing queries against a standardized dataset:
https://github.com/databricks/spark-sql-perf
https://github.com/maropu/spark-tpcds-datagen
Also, if you're feeling adventurous, the Spark codebase has a script to fire up a mini cluster on minikube if you want a quick and easy way to test this.

Does containerization always lead to CPU, RAM and storage cost savings compared to VMs?

Being new to the world of containers, and after reading a lot of literature online, I was wondering if someone could offer some guidance.
I wanted to know if containers always lead to cost savings in terms of CPU, memory and storage when compared with the same application running inside a VM.
I can think of a scenario where they won't: when the scale set of VMs backing an orchestrator like Kubernetes is set to a high number, leading to more compute consumption.
I was wondering what the general understanding here is.
Containerization is not about cost savings in terms of CPU/RAM/storage, but about a lot more.
When an app gets deployed on a VM, you need specific tools like Ansible/Chef/Puppet to optimize deployments, additional tools to monitor the load and increase/decrease the number of VMs running, further tools to provide wide-IP support across the running services in the case of a REST API, and the list goes on.
With containers running on Kubernetes, you have all these features built in to some extent, and when you deploy a service mesh framework like Istio, you get additional features which add a lot of value with minimal effort, including circuit breakers, retries, authentication, etc.
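As a small illustration of the "built in to some extent" point: scaling a containerized service up or down is a single API call or manifest change rather than a VM-provisioning workflow. A hedged sketch with the official Kubernetes Python client, where the deployment name and namespace are placeholders:

    from kubernetes import client, config

    # Assumes a reachable cluster and a deployment named "my-api" in "default".
    config.load_kube_config()
    apps = client.AppsV1Api()

    # Scale the deployment to 5 replicas with one patch call.
    apps.patch_namespaced_deployment_scale(
        name="my-api",
        namespace="default",
        body={"spec": {"replicas": 5}},
    )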

What is the impact of having more replicas in Docker Swarm mode?

I understand the use of replicas in Docker Swarm mode. It is mainly to eliminate points of failure and reduce the amount of downtime. It is well explained in this post.
Since having more replicas is more useful for a system as a whole, why don't companies just initialise as many replicas as possible, e.g. 1,000 replicas for a Docker service? I can imagine a large corporation running a back-end system may face multiple points of failure at any given time, and it would benefit from having more instances of the particular service.
I would like to know how many replicas are considered TOO MANY and what the factors affecting the performance of a Docker Swarm are.
I can think of hardware overhead being a limiting factor.
Let's say you're running a Rails app. Each instance requires 128 MB of RAM and 10% CPU usage. Nine instances is a touch over 1 GB of memory and nearly an entire CPU.
While that doesn't sound like a lot, imagine an organization with 100+ teams, each with 3-5 applications. The hardware requirements to operate each application at acceptable levels quickly ramp up, as the sketch below shows.
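Worked through as a quick sketch, using the hypothetical per-instance figures above:

    # Rough organization-wide footprint for the hypothetical Rails example.
    RAM_PER_INSTANCE_MB = 128
    CPU_PER_INSTANCE = 0.10        # 10% of one core
    REPLICAS_PER_APP = 9
    APPS_PER_TEAM = 4
    TEAMS = 100

    instances = TEAMS * APPS_PER_TEAM * REPLICAS_PER_APP
    print(f"{instances} instances")
    print(f"~{instances * RAM_PER_INSTANCE_MB / 1024:.0f} GB of RAM")
    print(f"~{instances * CPU_PER_INSTANCE:.0f} CPU cores")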
Then there is network chatter. 10 MB/s is typical in big org/corporate settings. While a heartbeat check for a couple of instances is barely noticeable, heartbeats on hundreds of instances could jam up the network.
At the end of the day it comes down to the constraints. What are the boundaries of the software, hardware, environment, budget, and support systems? It is often hard to imagine the pressures present when (technical) decisions are made.

Why would one choose many smaller machine types instead of fewer big machine types?

In a clustered high-performance computing framework such as Google Cloud Dataflow (or, for that matter, Apache Spark or Kubernetes clusters, etc.), I would think that it's far more performant to have fewer really BIG machine types rather than many small machine types, right? As in, it's more performant to have 10 n1-highcpu-96 rather than, say, 120 n1-highcpu-8 machine types, because
the CPUs can use shared memory, which is way faster than network communication
if a single thread needs access to lots of memory for a single-threaded operation (e.g. sort), it has access to that greater memory in a BIG machine rather than a smaller one
And since the price is the same (e.g. 10 n1-highcpu-96 machines cost the same as 120 n1-highcpu-8 machines), why would anyone opt for the smaller machine types?
As well, I have a hunch that with the n1-highcpu-96 machine type we'd occupy the whole host, so we wouldn't need to worry about competing demands on the host from another VM belonging to another Google Cloud customer (e.g. contention in the CPU caches or motherboard bandwidth, etc.), right?
Finally, although I don't think the Google Compute Engine VMs correctly report the "true" CPU topology of the host system, if we do choose the n1-highcpu-96 machine type, the reported CPU topology may be a touch closer to the truth, because presumably the VM is using up the whole host, so any programs (e.g. the NUMA-aware option in Java?) running on that VM that attempt to take advantage of the topology have a better chance of making the "right decisions".
Whether to choose many instances with a smaller machine type or a few instances with big machine types depends on many factors.
The VM sizes differ not only in number of cores and RAM, but also in network I/O performance.
Instances with small machine types are limited in CPU and I/O power and are inadequate for heavy workloads.
Also, if you are planning to grow and scale, it is better to design and develop your application to run across several instances. Having small VMs gives you a better chance of having them distributed across the physical servers in the datacenter that have the best resource situation at the time the machines are provisioned.
Having a larger number of smaller instances also helps to isolate fault domains. If one of your small nodes crashes, that only affects a small number of processes. If a large node crashes, multiple processes go down.
It also depends on the application you are running on your cluster and the workload. I would also recommend going through this link to see the sizing recommendations for an instance.
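To make the fault-domain point concrete with the machine counts from the question (illustrative only):

    # Share of cluster capacity lost when a single node fails,
    # using the machine counts from the question above.
    layouts = {
        "10 x n1-highcpu-96": 10,
        "120 x n1-highcpu-8": 120,
    }
    for name, nodes in layouts.items():
        print(f"{name}: one node failure removes {100 / nodes:.1f}% of capacity")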

Mirrored queue performance factors

We operate two dual-node brokers, each broker having quite different queues and workloads. Each box has 24 cores (H/T) worth of Xeon E5645 @ 2.4GHz with 48GB RAM, connected by Gigabit LAN with ~150μs latency, running RHEL 5.6, RabbitMQ 3.1, Erlang R16B with HiPE off. We've tried with HiPE on but it made no noticeable performance impact, and was very crashy.
We appear to have hit a ceiling for our message rates of between 1,000/s and 1,400/s both in and out. This is broker-wide, not per-queue. Adding more consumers doesn't improve throughput overall, just gives that particular queue a bigger slice of this apparent "pool" of resource.
Every queue is mirrored across the two nodes that make up the broker. Our publishers and consumers connect equally to both nodes in a persistent way. We notice an ADSL-like asymmetry in the rates too; if we manage to publish at a high rate of messages, the deliver rate drops to high double digits. Testing with an un-mirrored queue gives much higher throughput, as expected. Queues and exchanges are durable; messages are not persistent.
We'd like to know what we can do to improve the situation. The CPU on the box is fine, beam takes a core and a half for 1 process, then another 80% each of two cores for another couple of processes. The rest of the box is essentially idle. We are using ~20GB of RAM in userland with system cache filling the rest. IO rates are fine. Network is fine.
Is there any Erlang/OTP tuning we can do? delegate_count is the default 16, could someone explain what this does in a bit more detail please?
This is difficult to answer without knowing more about how your producers and consumers are configured, which client library you're using, and so on. As discussed on IRC (http://dev.rabbitmq.com/irclog/index.php?date=2013-05-22) a minute ago, I'd suggest you attempt to reproduce the topology using the MulticastMain Java load-test tool that ships with the RabbitMQ Java client. You can configure multiple producers/consumers, message sizes and so on. I can certainly get 5 kHz out of a two-node cluster with HA on my desktop, so this may be a client (or application-code) related issue.
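If you'd rather script a quick reproduction than use MulticastMain, a minimal publish-rate probe with the Python pika client would look something like the sketch below; the host, queue name and message count are placeholders, and it publishes non-persistent messages to a durable queue to match the setup described above.

    import time
    import pika

    # Minimal publish-rate probe against one broker node.
    connection = pika.BlockingConnection(pika.ConnectionParameters(host="rabbit-node-1"))
    channel = connection.channel()
    channel.queue_declare(queue="perf-test", durable=True)

    N = 10_000
    body = b"x" * 1024
    start = time.perf_counter()
    for _ in range(N):
        channel.basic_publish(
            exchange="",
            routing_key="perf-test",
            body=body,
            properties=pika.BasicProperties(delivery_mode=1),  # non-persistent
        )
    elapsed = time.perf_counter() - start
    connection.close()
    print(f"published ~{N / elapsed:.0f} msg/s")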
