In a distributed system, does the Coordinator communicate with all Participants at the same time or tell one of them, who has to inform the others? - communication

I'm reading up on communication in a distributed system but just can't find an answer to that specific question. All the diagrams I'm given are vague about it.

The coordinator communicates with all participants directly. This is a common design pattern for distributed sequencing, for example in a distributed chat system.
The leader itself is usually elected by a consensus algorithm.
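As a minimal illustration of that star-shaped pattern, here is a hypothetical Erlang sketch (the module, message shapes, and the always-yes vote are made up, loosely modeled on the prepare phase of a two-phase commit) in which the coordinator messages every participant itself and gathers the replies:

-module(coordinator_sketch).
-export([collect_votes/1, participant/0]).

%% The coordinator fans a prepare request out to every participant itself,
%% then gathers one vote per participant (or times out).
collect_votes(Participants) ->
    [P ! {prepare, self()} || P <- Participants],
    [receive
         {vote, P, Vote} -> Vote
     after 5000 -> timeout
     end || P <- Participants].

%% A trivial participant that always votes yes.
participant() ->
    receive
        {prepare, Coordinator} ->
            Coordinator ! {vote, self(), yes},
            participant()
    end.

Spawning three participants with Ps = [spawn(fun coordinator_sketch:participant/0) || _ <- lists:seq(1, 3)] and calling coordinator_sketch:collect_votes(Ps) returns [yes, yes, yes]; the point is only that the coordinator talks to every participant directly rather than relaying through one of them.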

Related

What is the maximum (practical) number of nodes in an Erlang system

I wish to create a platform as a service in the financial markets using Erlang/Elixir. I will provide AWS lambda-style functions in financial markets, but rather than being accessible via web/rest/http, I plan to distribute my own ARM-based hardware terminals to clients (Nvidia Jetson TX2-based or similar, so decent hardware). They will access the functions from these terminals. I want said terminals to be full nodes in the system. So they will use the actor model to message pass to my central servers, and indeed, the terminals might message pass amongst each other if terminal users decide to put their own functions online.
Is this a viable model? Could I run 1000 terminals like this? 100,000? What kinds of limitations might I start bumping into? Is Erlang message routing scalable enough to imagine such a network still being performant if we had soft real-time financial market data streaming around? (Mostly from central servers to terminals, but possibly a good proportion moving directly from terminal to terminal.) We could have a system where up to 100k or more different "subscription" data channel processes were available, many of them taking input and producing output every second.
Basically I'd like a canonical guide to the scalability capabilities of an Erlang system like the one described above. Ideally I'd also like some guide to the security implications of such a system, i.e. would global routing tables or any other part of the system be compromisable by a rogue terminal user, or can edge nodes be partly "sealed off" from sensitive parts of the rest of the Erlang network?
Note that I'd want to make heavy use of ports/NIFs for high-compute processes.
I would not pursue this avenue for various reasons, all of which hark back to the sort of systems that Erlang's distribution mechanism was developed for - a set of boards on a passive backplane: "free" local bandwidth and the whole machine sits in the same security domain. The Erlang distribution protocol is probably too chatty to work well on widely spread and large networks, and it is certainly too insecure. Unless you want nodes to be able to execute :os.cmd("rm -rf /") on each other, of course.
Use the Erlang distribution protocol in your central system to your heart's content, and have these terminals talk something that's data-only-over-SSL to that system and each other. On top of that, you can quite simply build a sort of overlay network to do whatever you want.
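A minimal sketch of the terminal side of such a data-only-over-SSL link, assuming a hypothetical central endpoint (central.example.com:9999, the CA file, and the message shape are all made up), could look like this:

%% Sketch of a terminal-side client that speaks a data-only protocol over TLS
%% instead of joining the Erlang distribution.
-module(terminal_link_sketch).
-export([send_quote/2]).

send_quote(Symbol, Price) ->
    {ok, _} = application:ensure_all_started(ssl),
    {ok, Sock} = ssl:connect("central.example.com", 9999,
                             [binary, {active, false},
                              {verify, verify_peer}, {cacertfile, "ca.pem"}]),
    %% Send plain data only; the receiving side should decode with
    %% binary_to_term(Bin, [safe]) so a rogue terminal cannot inject
    %% new atoms or funs, let alone execute code.
    ok = ssl:send(Sock, term_to_binary({quote, Symbol, Price})),
    ssl:close(Sock).

The overlay-network part is then just ordinary application code: the central system reads these terms, applies its own authorization rules, and forwards data to other terminals over the same kind of TLS connections.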
I recommend reading this carefully, and I also recommend dividing your service into small micro-services.
Another benchmark is Investigating the Scalability Limits of Distributed Erlang.
In Joe Armstrong's book Programming Erlang, he says:
"A few years ago, when I had my research hat on, I was working with PlanetLab. I had access to the PlanetLab network, so I installed empty Erlang servers on all the PlanetLab machines (about 450 of them).
I didn’t really know what I would do with the machines, so I just set up the server infrastructure to do something later."
Do not use external ports; use linked-in drivers written in C or C++ instead.
You will find a lot of information regarding Erlang architectures in this answer: How scalable is distributed Erlang?
The short answer is that there is a practical limit on the number of nodes in a cluster, but this limit can be overcome fairly easily with federations.
EDIT 1: Furthermore, I would recommend reading this book: Designing for Scalability with Erlang/OTP

How to align the BPMN models with the Technology Architecture?

I am stuck on how to proceed further and need some new ideas for aligning these BPMN models, which I have drawn for Customer Relationship Management (CRM) and Human Resources (HR).
As far as the BPMN model is concerned, it is mainly used for the Business Architecture (BA); for the Technology Architecture (TA) I could possibly use the Rational Unified Process (RUP), but when I researched it I could only find the IBM Rational Rose software, which is not free...
My questions:
Are there open-source RUP tools that I can use? I looked up OpenUP but could not make it work (which is a different issue).
Is this the right approach: BA -> BPMN and TA -> RUP?
The scope section of the BPMN specification (BPMN specification, 1. Scope) states:
The primary goal of BPMN is to provide a notation that is readily understandable by all business users, from the business analysts that create the initial drafts of the processes, to the technical developers responsible for implementing the technology that will perform those processes, and finally, to the business people who will manage and monitor those processes. Thus, BPMN creates a standardized bridge for the gap between the business process design and process implementation.
There is Business Process Management (BPM) software that provides both process modeling and process execution conformance, effectively making the models executable (at least to a certain depth).
In the free/open-source world you can find jBPM, Activiti, etc.
I have tried out jBPM; it is fairly mature and compliant with the standard notation. It also supports modeling, execution, and operational functionality.

Couchbase or VoltDB for billion monitoring data storage and analysis?

I have a distributed monitoring system that collects and gathers monitoring data such as CPU utilization, database performance metrics, and network performance into a backend store. Other applications need to consume this data, for example for real-time calculation (for a resource scheduler), for system monitoring (for system administrators using a monitoring dashboard), and for historical analytics (for operations and analysis programs that model resource-usage patterns for future capacity planning and business-activity analysis).
The dataset is about 1.2 billion entries in the data store over 9 months (all in an OpenTSDB-like format).
Previously I used an Elasticsearch cluster as the backend data store and have decided to find a better one.
I am looking at a Couchbase or VoltDB cluster, but I am still in the investigation stage, so I need some input from people here who have similar experience.
Major questions are as below:
Which backend store solution is good for my scenario? (Couchbase or VoltDB)?
I have to rewrite my data aggregator code (which is in Golang). Couchbase provides a good Golang SDK client, but VoltDB's Go driver is only community-level with limited functionality. Are there any better implementations for communicating with VoltDB in Golang?
Any suggestions or best practices?
There isn't too much in the way of usage patterns here, but it sounds like the kind of app people use VoltDB for.
As for the Golang client, we'd love some feedback as to how to make it more suitable if it's specifically missing something you need. You can also use the HTTP/JSON query interface from any language, including Golang. More info on that here:
http://docs.voltdb.com/UsingVoltDB/ProgLangJson.php
If you would like to leverage your existing model, take a look at the Axibase Time-Series Database. It supports both the tcollector network protocol and HTTP. A rule engine and visualization are built in.
The fact that ATSD is based on HBase may be an asset or a liability depending on your prior experience with it :)
URL for the tcollector integration: http://axibase.com/products/axibase-time-series-database/writing-data/tcollector/
Disclosure: I work for the company developing ATSD.

LoopBack Vs Breeze.JS to use with Node.JS

Do LoopBack and Breeze.JS both do the same thing, i.e. persist data from Node.JS client applications to databases like MSSQL, Oracle, etc.?
LoopBack is an exciting company with smart people.
I don't know much about LoopBack's data management offerings and, as one of the principals behind breeze, I don't feel comfortable making comparisons anyway.
I do think it's wise to consider every technology choice in the context of your broad business and application needs.
Let me suggest some areas to investigate:
What database(s) do you care about? How does the product actually work with those databases?
Do you need/want support on the server for
transactions
validation during save
batch save requests (mix of entity types and insert/update/delete)
batch query requests
Do you need/want support on the client for
client-side caching and events
tracking entity change state
property change notification
property and entity validation
automated entity graphs (e.g. just-in-time population and maintenance of entity navigation properties)
typed representation of sub-documents ("complex types")
original value tracking (ability to cancel pending changes)
serialization of entity graphs with circular references.
Will you need to consider integration with non-Node client or server technologies? With legacy applications and services?
Is there both developer and API documentation? How much? Does it provide the kind and quality of guidance you'll need?
How many questions have been asked on StackOverflow? What questions are people asking about the technology on StackOverflow? Are they the kinds of questions you'll be asking as you build your app? What do you think of the answers?
What do you require of customer support for the specific technologies you seek?
How established and stable is the technology you seek? Is the vendor well-versed in the issues and concerns of someone trying to solve these problems? Does this matter to you?
What are people actually doing with this technology? Look beyond the customer list. Find out whether these customers are doing what you want to do.
What is the vendor's experience designing and supporting the kinds of applications you intend to build with its technology?

Where is Erlang used and why? [closed]

I would like to know a list of the most common applications/websites/solutions where Erlang is used, successfully or not.
Explaining why it is used in a specific solution instead of other programming languages would be very much appreciated, too.
Listing bad Erlang case studies (cases in which Erlang is misused) would be interesting as well.
From Programming Erlang:
Many companies are using Erlang in their production systems:
• Amazon uses Erlang to implement SimpleDB, providing database services as a part of the Amazon Elastic Compute Cloud (EC2).
• Yahoo! uses it in its social bookmarking service, Delicious, which has more than 5 million users and 150 million bookmarked URLs.
• Facebook uses Erlang to power the backend of its chat service, handling more than 100 million active users.
• WhatsApp uses Erlang to run messaging servers, achieving up to 2 million connected users per server.
• T-Mobile uses Erlang in its SMS and authentication systems.
• Motorola is using Erlang in call processing products in the public-safety industry.
• Ericsson uses Erlang in its support nodes, used in GPRS and 3G mobile networks worldwide.
The most popular open source Erlang applications include the following:
• The 3D subdivision modeler Wings 3D, used to model and texture polygon meshes.
• The Ejabberd system, which provides an Extensible Messaging and Presence Protocol (XMPP) based instant messaging (IM) application server.
• The CouchDB “schema-less” document-oriented database, providing scalability across multicore and multiserver clusters.
• The MochiWeb library that provides support for building lightweight HTTP servers. It is used to power services such as MochiBot and MochiAds, which serve dynamically generated content to millions of viewers daily.
• RabbitMQ, an AMQP messaging protocol implementation. AMQP is an emerging standard for high-performance enterprise messaging.
ejabberd is one of the most well-known Erlang applications and the one I learnt Erlang with.
I think it's one of the most interesting projects for learning Erlang, because it really builds on Erlang's strengths. (However, some will argue that it's not OTP, but don't worry, there's still a trove of great code inside...)
Why?
An XMPP server (like ejabberd) can be seen as a high-level router, routing messages between end users. Of course there are other features, but this is the most important aspect of an instant messaging server. It has to route many messages simultaneously and handle a lot of TCP/IP connections.
So we have 2 features:
handle many connections
route messages given some aspects of the message
These are areas where Erlang shines.
handle many connections
It is very easy to build scalable, non-blocking TCP/IP servers with Erlang. In fact, it was designed to solve this problem.
And given that it can spawn hundreds of thousands of processes (not threads; it's a share-nothing approach, which is simpler to design), ejabberd is designed as a set of Erlang processes (which can be distributed over several servers; a minimal sketch follows the list below):
client connection process
router process
chatroom process
server to server processes
All of them exchanging messages.
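As a rough illustration of the process-per-connection idea (this is not ejabberd's actual code; the module name, port handling, and the echo handler are made up), a minimal acceptor looks like this:

-module(acceptor_sketch).
-export([start/1]).

%% Listen on Port and spawn one lightweight process per client connection.
start(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {active, false}, {reuseaddr, true}]),
    accept_loop(Listen).

accept_loop(Listen) ->
    {ok, Sock} = gen_tcp:accept(Listen),
    %% Each connection gets its own process; a crash in one handler
    %% never affects the acceptor or the other connections.
    Pid = spawn(fun() -> handle(Sock) end),
    ok = gen_tcp:controlling_process(Sock, Pid),
    accept_loop(Listen).

handle(Sock) ->
    case gen_tcp:recv(Sock, 0) of
        {ok, Data} ->
            gen_tcp:send(Sock, Data),   %% echo back, as a placeholder for real routing
            handle(Sock);
        {error, closed} ->
            ok
    end.

ejabberd's real connection processes of course parse XMPP and hand messages to the router process, but the shape is the same: one cheap process per connection.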
route messages given some aspects of the message
Another very lovable feature of Erlang is pattern matching.
It is used throughout the language.
For instance, in the following:
%% Access rights by role and room configuration; clauses are tried top to bottom.
access(moderator, _Config) -> rw;
access(participant, _Config) -> rw;
access(visitor, #config{type = "public"}) -> r;
access(visitor, #config{type = "public_rw"}) -> rw;
access(_User, _Config) -> none.
Those are 5 different clauses of the access function.
Erlang selects the first clause whose patterns match the arguments received. (Config is a record of type #config, which has a type field.)
That makes expressing business rules much easier and clearer than chaining if/else or switch/case statements.
To wrap up
Writing scalable servers is the whole point of Erlang. Everything is designed to make this easy. To the two previous features, I'd add:
hot code upgrade
Mnesia, a distributed database included in the base distribution
MochiWeb, on which many Erlang HTTP servers are built
binary support (the bit syntax makes decoding and encoding binary protocols easier than ever; see the sketch at the end of this answer)
a great community with great open-source projects (ejabberd, CouchDB, but also Webmachine, Riak, and a slew of libraries that are very easy to embed)
Fewer LOCs
There is also this article from Richard Jones. He rewrote an application from C++ to Erlang: 75% fewer lines in Erlang.
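As a small illustration of the binary support mentioned above (a generic sketch, not tied to any real protocol), decoding a made-up packet with a 1-byte version, a 2-byte big-endian length, and a payload of that length takes one pattern:

-module(bin_sketch).
-export([parse/1]).

%% Parse a hypothetical packet: version, length, payload, trailing bytes.
parse(<<Version:8, Len:16/big, Payload:Len/binary, Rest/binary>>) ->
    {ok, {Version, Payload}, Rest};
parse(_Other) ->
    {error, malformed}.

For example, bin_sketch:parse(<<1, 0, 3, "abc", "tail">>) returns {ok, {1, <<"abc">>}, <<"tail">>}.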
The list of the most common applications for Erlang has been covered (CouchDB, ejabberd, RabbitMQ, etc.), but I would like to contribute the following.
The reason why it is used in these applications comes from the core strength of Erlang: managing application availability.
Erlang was built from the ground up for the telco environment, which requires that systems meet at least five-nines availability (99.999% yearly uptime). That figure doesn't leave much room for downtime during a year! For this reason primarily, Erlang comes loaded with the following features (non-exhaustive):
Horizontal scalability (ability to distribute jobs across machine boundaries easily through seamless intra & inter machine communications). The built-in database (Mnesia) is also distributed by nature.
Vertical scalability (ability to distribute jobs across processing resources on the same machine): SMP is handled natively.
Code Hot-Swapping: the ability to update/upgrade code live during operations
Asynchronous: the real world is async so Erlang was built to account for this basic nature. One feature that contributes to this requirement: Erlang's "free" processes (>32000 can run concurrently).
Supervision: many different strategies for process supervision with restart strategies, thresholds, etc. This helps recover from corner cases and overloading more easily whilst still maintaining traces of the problems for later troubleshooting, post-mortem analysis, etc. (a minimal supervisor sketch follows this list).
Resource Management: scheduling strategies, resource monitoring etc. Note that the default process scheduler operates with O(1) scaling.
Live debugging: the ability to "log" into live nodes at will helps trouble-shooting activities. Debugging can be undertaken live with full access to any process' running state. Also the built-in error reporting tools are very useful (but sometimes somewhat awkward to use).
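For the supervision point above, here is a minimal sketch (the worker_sketch child module is hypothetical) of a supervisor that restarts its worker whenever it crashes:

-module(sup_sketch).
-behaviour(supervisor).
-export([start_link/0, init/1]).

start_link() ->
    supervisor:start_link({local, ?MODULE}, ?MODULE, []).

init([]) ->
    %% Restart a crashed child up to 5 times within 10 seconds,
    %% then give up and let the failure propagate upwards.
    SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
    Children = [#{id => worker_sketch,
                  start => {worker_sketch, start_link, []},   %% hypothetical worker module
                  restart => permanent}],
    {ok, {SupFlags, Children}}.

Different strategies (one_for_all, rest_for_one, simple_one_for_one) and different thresholds are just different values in this specification.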
Of course I could talk about its functional roots but this aspect is somewhat orthogonal to the main goal (high availability). The main component of the functional nature which contributes generously to the target goal is, IMO: "share nothing". This characteristic helps contain "side effects" and reduce the need for costly synchronization mechanisms.
I guess all these characteristics help make the case for using Erlang in business-critical applications.
One thing Erlang isn't really good at: processing big blocks of data.
We built a betting exchange (aka prediction market) using Erlang. We chose Erlang over some of the more traditional financial languages (C++, Java etc) because of the built-in concurrency. Markets function very similarly to telephony exchanges. Our CTO gave a talk on our use of Erlang at CTO talk.
We also use CouchDB and RabbitMQ as part of our stack.
Erlang comes from Ericsson, and is used within some of their telecoms systems.
Outside telecoms, CouchDb (a document-oriented database) is possibly the best known Erlang application so far.
Why Erlang? From the overview (worth reading in full):
The document, view, security and replication models, the special purpose query language, the efficient and robust disk layout and the concurrent and reliable nature of the Erlang platform are all carefully integrated for a reliable and efficient system.
I came across this in the process of writing up a report: Erlang in Acoustic Ray Tracing.
It's an experience report on a research group's attempt to use Erlang for acoustic ray tracing. They found that while the program was easier to write, less buggy, etc., it scaled worse and performed 10x slower than a comparable C program. So one spot where it may not be well suited is CPU-intensive scenarios.
Do note, though, that the people who wrote the paper were first learning Erlang at the time and may not have known the proper development procedures for CPU-intensive Erlang.
Apparently, Yahoo used Erlang to make something it calls Harvester. Article about it here: http://www.ddj.com/architect/220600332
What is erlang good for?
http://beebole.com/en/blog/erlang/why-erlang/
http://www.aquabu.com/2008/2/15/erlang-pragmatic-studio-day-3-notes
http://www.reddit.com/r/programming/comments/9q0lr/erlang_and_highfrequency_trading/
(jerf's answer)
It's important to realize that Erlang's four parts (the language itself, the VMs (BEAM, HiPE), the standard libraries (plus modules on GitHub, CEAN, etc.), and the development environment) are being steadily updated, expanded, and improved. For example, I remember reading that floating-point performance improved when Wings 3D's author realized it needed to improve (I can't find a source for this). And this guy just wrote about it:
http://marian-dan.com/wordpress/?p=324
A couple of years ago, Tim Bray's Wide Finder publicity and all the folks starting to write web app frameworks and HTTP servers led (at least in part) to improved regex and binary handling. And there's all the work integrating HiPE and SMP, the Dialyzer project, multiple unit-testing and build libraries springing up, ...
So its sweet spot is expanding. The difficult thing is that the official docs can't keep up very well, and the mailing list and Erlang blogosphere volume are growing quickly.
We are using Erlang to provide the back-end muscle power for our really real-time browser-based multi-player game Pixza. We don't use Flash or any other third-party plugins, though the game is real-time multi-player. We use pure JS and COMET techniques instead. And Erlang supports the "really realtimeliness" of Pixza.
I'm working for Wooga, a social games company, and we use Erlang for some of our game backends (basically HTTP APIs for millions of daily users) and auxiliary services like the iOS push notification provider, payments, etc.
I think it really shines in network-related tasks, and it makes it fairly straightforward to structure and implement both simple and complex network services. Distribution, fault tolerance, and performance are easy to achieve because Erlang already has some of the key ingredients built in, and they have been used for a long time in critical production infrastructure. So it's not like "the new hip technology thing 0.0.2 alpha".
I know that other game companies use Erlang as well. You should be able to find presentations on slideshare about that.
Erlang draws its strength from being a functional language with no shared memory. Hence, IMO, Erlang won't be suitable for applications that require in-place memory manipulation, image editing for example.

Resources