How scalable is ZeroMQ? I'm especially interested in understanding its potential for running on a large number (10,000 - 15,000) of cores.
We've tried to make it as scalable as possible, but I have personally tested it only on boxes with up to 16 cores. Up to that limit we've seen almost linear scaling.
You don't mention whether your 10k or 15k cores are on the same box or not.
Let's assume they are. Every two years the number of cores on a box can, theoretically, double. So if we have 16-core boxes today, it'll be 16K cores in 20 years.
So now, your question is maybe, "will ZeroMQ help my application scale to such huge numbers of cores, so that it will scale over the next 20+ years?" The answer is "yes, but only if you use it properly". That means designing your application around inproc sockets and patterns that properly divide the work and the flow of data. You will need to adjust the architecture over time.
If your question is, "can I profitably use that many cores between multiple applications", the answer lies with your O/S more than ZeroMQ. Can your I/O layer handle the load? Probably, yes.
And if your question is, "can I use ZeroMQ across a cloud of 10K-16K boxes", then the answer is "yes, this has already been proven in practice".
Note that although ZeroMQ is multithreaded internally, it may not be wise to rely solely on that to scale it up to large numbers of cores. However, because ZeroMQ uses the same API for inter-machine, inter-process and inter-thread communication, it is easy to write applications with ZeroMQ that can move seamlessly into a one-process-per-core scenario or into a grid fabric of many, many machines.
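To make the "same API at every scale" point concrete, here is a minimal sketch using the pyzmq binding (assumed installed; the endpoint names are illustrative, not part of any real system). The only thing that changes when moving from inter-thread to inter-process or inter-machine communication is the endpoint string passed to bind/connect:

import threading
import zmq

# One shared context per process (required for inproc:// endpoints).
ctx = zmq.Context.instance()

def worker(endpoint):
    pull = ctx.socket(zmq.PULL)
    pull.connect(endpoint)
    print("worker received:", pull.recv_string())
    pull.close()

# Swapping the endpoint string is the only change needed to go from
# inter-thread (inproc) to inter-process/inter-machine (tcp) transport.
for endpoint in ("inproc://tasks", "tcp://127.0.0.1:5555"):
    push = ctx.socket(zmq.PUSH)
    push.bind(endpoint)          # bind before the worker connects
    t = threading.Thread(target=worker, args=(endpoint,))
    t.start()
    push.send_string("hello over " + endpoint)
    t.join()
    push.close()

ctx.term()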
ZeroMQ already has a reputation for being the fastest structured messaging system around, so if you are going to run benchmarks to choose a technology, ZeroMQ should definitely be one of the candidates.
The two big reasons for using ZeroMQ are its easy-to-use, cross-language API (see all the examples on the ZeroMQ Guide site) and its low overhead, both in terms of bytes on the wire and in terms of latency. For instance, ZeroMQ can leverage UDP multicast to run faster than any TCP protocol, but the application programmer doesn't need to learn a new API. It is all included.
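As a hedged illustration of that last point, here is a small pyzmq publish/subscribe sketch (pyzmq assumed installed; the address and topic name are made up). Switching from TCP to PGM/UDP multicast would only change the endpoint string, not the surrounding code:

import time
import zmq

ctx = zmq.Context.instance()

pub = ctx.socket(zmq.PUB)
pub.bind("tcp://127.0.0.1:5556")
# With a multicast-capable libzmq build, the endpoint could instead be
# something like "epgm://eth0;239.192.1.1:5556" -- the rest stays the same.

sub = ctx.socket(zmq.SUB)
sub.connect("tcp://127.0.0.1:5556")
sub.setsockopt_string(zmq.SUBSCRIBE, "quotes")   # topic prefix filter

time.sleep(0.2)                  # give the subscription time to propagate
pub.send_string("quotes AAPL 187.32")
print(sub.recv_string())         # prints: quotes AAPL 187.32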
Related
I wish to create a platform as a service in the financial markets using Erlang/Elixir. I will provide AWS lambda-style functions in financial markets, but rather than being accessible via web/rest/http, I plan to distribute my own ARM-based hardware terminals to clients (Nvidia Jetson TX2-based or similar, so decent hardware). They will access the functions from these terminals. I want said terminals to be full nodes in the system. So they will use the actor model to message pass to my central servers, and indeed, the terminals might message pass amongst each other if terminal users decide to put their own functions online.
Is this a viable model? Could I run 1,000 terminals like this? 100,000? What kinds of limitations might I start bumping into? Is Erlang message routing scalable enough to imagine such a network still being performant with soft real-time financial market data streaming around (mostly from central servers to terminals, but with a good proportion possibly moving directly from terminal to terminal)? We could have a system where up to 100k or more different "subscription" data channel processes were available, many of them taking input and producing output every second.
Basically I'd like a canonical guide to the scalability capabilities of an Erlang system something like the above. Ideally I'd also like some guide to the security implications of such a system, i.e. would global routing tables or any other part of the system be compromisable by a rogue terminal user, or can edge nodes be partly "sealed off" from sensitive parts of the rest of the Erlang network?
Note that I'd want to make heavy use of ports/NIFs for high-compute processes.
I would not pursue this avenue for various reasons, all of which hark back to the sort of systems that Erlang's distribution mechanism was developed for - a set of boards on a passive backplane: "free" local bandwidth and the whole machine sits in the same security domain. The Erlang distribution protocol is probably too chatty to work well on widely spread and large networks, and it is certainly too insecure. Unless you want nodes to be able to execute :os.cmd("rm -rf /") on each other, of course.
Use the Erlang distribution protocol in your central system to your heart's content, and have these terminals talk something that's data-only-over-SSL to that system and each other. On top of that, you can quite simply build a sort of overlay network to do whatever you want.
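To make "data-only over SSL" a bit more concrete, here is a minimal, hedged sketch of what the terminal side could look like, written in Python purely for illustration: length-prefixed JSON frames over TLS, so terminals exchange data, never code. The host, port, and payload fields are hypothetical.

import json
import socket
import ssl
import struct

def send_request(host, port, payload):
    """Send one length-prefixed JSON request over TLS and read one reply."""
    context = ssl.create_default_context()          # verifies the server certificate
    with socket.create_connection((host, port)) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            body = json.dumps(payload).encode("utf-8")
            tls.sendall(struct.pack("!I", len(body)) + body)
            (length,) = struct.unpack("!I", _read_exact(tls, 4))
            return json.loads(_read_exact(tls, length))

def _read_exact(sock, n):
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return buf

# Hypothetical usage from a terminal:
# reply = send_request("central.example.com", 9443,
#                      {"op": "subscribe", "channel": "EURUSD"})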
I recommend reading this carefully, and I also recommend dividing your service into small microservices.
Another benchmark is Investigating the Scalability Limits of Distributed Erlang.
In Joe Armstrong's book Programming Erlang, he said:
"A few years ago, when I had my research hat on, I was working with PlanetLab. I had access to the PlanetLab network, so I installed empty Erlang servers on all the PlanetLab machines (about 450 of them).
I didn’t really know what I would do with the machines, so I just set up the server infrastructure to do something later."
Do not use external ports; use internal drivers written in C or C++ instead.
You will find a lot of information regarding Erlang architectures in this answer: How scalable is distributed Erlang?
The short answer is that there is a practical limit on the number of nodes in a cluster, but this limit can be worked around fairly easily with federation.
EDIT 1: Furthermore, I would recommend reading this book: Designing for Scalability with Erlang/OTP
I am a bit confused about this. If you're building a distributed application, which in some cases may perform parallel operations (although not necessarily mathematical), should you use ASIO or something like MPI? I take it MPI sits at a higher level than ASIO, but it's not clear where in the stack one would begin.
I know nothing about ASIO but from a quick Google it looks to me to be a lot lower level than MPI. For me the whole point of MPI is so that I can program against a higher level of abstraction from the messaging than, it seems, ASIO provides. Where you begin depends on your needs. For mine, parallelising scientific codes for high-performance, the obvious answer is MPI. I'm not sure I'd use it, or at least not sure it would be my default choice, if I were writing more general-purpose distributed, as opposed to parallel, applications. Well, actually, it probably would be my default choice to avoid learning another approach (most of which are less portable and less long-lived than MPI) but I'll admit it might not be the best choice if starting from an equal footing.
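For a sense of the abstraction level MPI gives you, here is a small sketch using the mpi4py binding (assumed installed and launched under mpiexec; the workload is made up). It scatters work to the ranks and reduces the partial results, with no sockets, endpoints, or serialisation code in sight:

# Run with e.g.: mpiexec -n 4 python scatter_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Rank 0 splits the work; MPI handles addressing, buffering and delivery.
if rank == 0:
    chunks = [list(range(i, 1000, size)) for i in range(size)]
else:
    chunks = None

mine = comm.scatter(chunks, root=0)           # each rank gets one chunk
partial = sum(mine)
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print("sum of 0..999 =", total)           # 499500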
As far as I know, MPI is currently incapable of handling the situation where new distributed nodes want to join an already-started group. Problems may also occur if one of the nodes goes offline.
MPI does not expose any of the network-related machinery underneath. Thus, if you ever need something at a lower level, you're in trouble. If, on the other hand, you do not anticipate such a need, then you'll save yourself a lot of time by using MPI.
I've heard people say that they've made a scalable web application...
What really is scaling?
What can be done by developers to make their application scalable?
What factors do developers consider when scaling?
Any tips and tricks about scaling web applications with ASP.NET and SQL Server...
What really is scaling?
Scaling is increasing the capacity and/or usage of your application.
What do developers do to make their application scalable?
Either allow their applications to scale vertically or horizontally.
Horizontal scaling is about doing things in parallel.
Vertical scaling is about doing things faster. This typically means more powerful hardware.
Often when people talk about horizontal scalability the ideal is to have (near-)linear scalability. This means that if one $5k production box can handle 2,000 concurrent users then adding 4 more should handle 10,000 concurrent users. The closer it is to that figure the better.
The ideal for highly scalable apps is to have near-limitless near-linear horizontal scalability such that you can just plug in another box and your capacity increases by an expected amount with little or no diminishing returns.
Ideally redundancy is part of the equation too but that's typically a separate issue.
The poster child for this kind of scalability is, of course, Google.
What factors do developers consider when scaling?
How much scale should be planned for? There's no point spending time and money on a problem you'll never have;
Is it possible and/or economical to scale vertically? This is the preferred option as it is typically much, much cheaper (in the short term);
Is it worth the (often significant) cost to enable your application to scale horizontally? Distributed/multithreaded apps are significantly more difficult and expensive to write.
Any tips and tricks about scaling web applications...
Yes:
1. Don't worry about problems you'll never have;
2. Don't worry much about problems you're unlikely to have. Chances are things will have changed long before you have them;
3. Don't be afraid to throw away code and start again. Having automated tests makes this far easier; and
4. Think in terms of developer time being expensive.
(4) is a key point. You might have a poorly written app that will require $20,000 of hardware to essentially fix. Nowadays $20,000 buys a lot of power (64+GB of RAM, 4 quad core CPUs, etc), probably more than 99% of people will ever need. Is it cheaper just to do that or spend 6 months rewriting and debugging a new app to make it faster?
It's easily the first option.
So I'll add another item to my list: be pragmatic.
My 2c definition of "scalable" is a system whose throughput grows linearly (or at least predictably) with resources. Add a machine and get 2x throughput. Add another machine and get 3x throughput. Or, move from a 2p machine to a 4p machine, and get 2x throughput.
It rarely works linearly, but a well-designed system can approach linear scalability. Add $1 of HW and get 1 unit worth of additional performance.
This is important in web apps because the potential user base is ~1b people.
Contention for resources within the app, when it is subjected to many concurrent requests, is what causes scalability to suffer. The end result of such a system is that no matter how much hardware you use, you cannot get it to deliver more throughput. It "tops out". The HW-cost versus performance curve goes asymptotic.
For example, if there's a single app-wide in-memory structure that needs to be updated for each web transaction or interaction, that structure will become a bottleneck, and will limit scalability of the app. Adding more CPUs or more memory or (maybe) more machines won't help increase throughput - you will still have requests lining up to lock that structure.
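Here is a toy sketch of that effect (all names are illustrative): one app-wide structure behind one lock, so every request serialises on it and adding workers stops buying any throughput.

import threading
import time

# One app-wide structure guarded by one lock: every request serialises here,
# so adding worker threads (or cores) stops increasing throughput.
hit_counts = {}
hit_counts_lock = threading.Lock()

def handle_request(page):
    with hit_counts_lock:                   # all workers queue up on this lock
        hit_counts[page] = hit_counts.get(page, 0) + 1
        time.sleep(0.001)                   # pretend the update is expensive

def run(workers, requests_per_worker=200):
    def drive():
        for _ in range(requests_per_worker):
            handle_request("/home")
    threads = [threading.Thread(target=drive) for _ in range(workers)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return workers * requests_per_worker / (time.perf_counter() - start)

# Throughput barely moves as workers are added -- the curve "tops out".
for n in (1, 2, 4, 8):
    print(n, "workers ->", round(run(n)), "requests/sec")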
Often in a transactional app, the bottleneck is the database, or a particular table in the database.
What really is scaling?
Scaling means accommodating increases in usage and data volume, and ideally the implementation should be maintainable.
What do developers do to make their application scalable?
Use a database, but cache as much as possible while accommodating user experience (possibly in the session).
Any tips and tricks about scaling web applications...
There are lots, but it depends on the implementation. What programming language(s), what database, etc. The question needs to be refined.
Scalable means that your app is prepared for (and capable of handling) future growth. It can handle higher traffic, more activity, etc. Making your site more scalable can entail various things. You may work on storing more in cache, rather than querying the database(s) unnecessarily. It may entail writing better queries, to keep connections to a minimum, and resources freed up.
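As a hedged sketch of the "cache instead of hitting the database" idea, here is a small time-based cache in front of a hypothetical query helper (query_database and the SQL are made up purely for illustration):

import time

_cache = {}                       # category -> (timestamp, rows)
CACHE_TTL_SECONDS = 60            # tune to how stale the page can afford to be

def get_product_listing(category):
    """Return the listing from cache when fresh; hit the database otherwise."""
    now = time.monotonic()
    cached = _cache.get(category)
    if cached is not None and now - cached[0] < CACHE_TTL_SECONDS:
        return cached[1]                          # served with no DB round trip
    rows = query_database(                        # hypothetical data-access helper
        "SELECT * FROM products WHERE category = %s", (category,))
    _cache[category] = (now, rows)
    return rows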
Resources:
Seattle Conference on Scalability (Video)
Improving .NET Application Performance and Scalability (Video)
Writing Scalable Applications with PHP
Scalability, on Wikipedia
Books have been written on this topic. An excellent one, which targets internet applications but describes principles and practices that can be applied in any development effort, is Scalable Internet Architectures.
May I suggest a "user-centric" definition:
Scalable applications provide a consistent level of experience to each user irrespective of the number of users.
For web applications this means 24/7 anywhere in the world. However, given the diversity of the available bandwidth around the world and developer's lack of control over its performance and availability, we may re-define it as follows:
Scalable web applications provide a consistent response time, measured at the server TCP port in use, irrespective of the number of requests.
To achieve this, the developer must avoid or remove all performance bottlenecks. Currently the most challenging issue is the scalability of distributed RDBMSs.
I would like to know a list of the most common application/websites/solutions where Erlang is used, successfully or not.
Explaining why it is used into a specific solution instead of others programming languages would be very much appreciated, too.
Listing bad Erlang case studies (cases in which Erlang is misused) would be interesting as well.
From Programming Erlang:
Many companies are using Erlang in their production systems:
• Amazon uses Erlang to implement SimpleDB, providing database services as a part of the Amazon Elastic Compute Cloud (EC2).
• Yahoo! uses it in its social bookmarking service, Delicious, which has more than 5 million users and 150 million bookmarked URLs.
• Facebook uses Erlang to power the backend of its chat service, handling more than 100 million active users.
• WhatsApp uses Erlang to run messaging servers, achieving up to 2 million connected users per server.
• T-Mobile uses Erlang in its SMS and authentication systems.
• Motorola is using Erlang in call processing products in the public-safety industry.
• Ericsson uses Erlang in its support nodes, used in GPRS and 3G mobile networks worldwide.
The most popular open source Erlang applications include the following:
• The 3D subdivision modeler Wings 3D, used to model and texture polygon meshes.
• The Ejabberd system, which provides an Extensible Messaging and Presence Protocol (XMPP) based instant messaging (IM) application server.
• The CouchDB “schema-less” document-oriented database, providing scalability across multicore and multiserver clusters.
• The MochiWeb library that provides support for building lightweight HTTP servers. It is used to power services such as MochiBot and MochiAds, which serve dynamically generated content to millions of viewers daily.
• RabbitMQ, an AMQP messaging protocol implementation. AMQP is an emerging standard for high-performance enterprise messaging.
ejabberd is one of the most well-known Erlang applications, and the one I learned Erlang with.
I think it's one of the most interesting projects for learning Erlang, because it really builds on Erlang's strengths. (However, some will argue that it's not OTP; don't worry, there's still a trove of great code inside...)
Why?
An XMPP server (like ejabberd) can be seen as a high-level router, routing messages between end users. Of course there are other features, but this is the most important aspect of an instant messaging server. It has to route many messages simultaneously and handle a lot of TCP/IP connections.
So we have 2 features:
handle many connections
route messages given some aspects of the message
These are two areas where Erlang shines.
handle many connections
It is very easy to build scalable, non-blocking TCP/IP servers with Erlang. In fact, it was designed to solve exactly this problem.
And given that it can spawn hundreds of thousands of processes (not threads; it's a share-nothing approach, which is simpler to design for), ejabberd is structured as a set of Erlang processes (which can be distributed over several servers):
client connection process
router process
chatroom process
server to server processes
All of them exchanging messages.
route messages given some aspects of the message
Another very lovable feature of Erlang is pattern matching.
It is used throughout the language.
For instance, in the following:
%% Assuming a record definition such as: -record(config, {type}).
access(moderator, _Config) -> rw;                 % moderators can always read and write
access(participant, _Config) -> rw;               % so can participants
access(visitor, #config{type="public"}) -> r;     % visitors may only read public rooms
access(visitor, #config{type="public_rw"}) -> rw; % ...unless the room is public_rw
access(_User, _Config) -> none.                   % anyone else gets no access
Those are five clauses of the same access function.
Erlang selects the first clause whose patterns match the arguments it receives. (Config is a record of type #config, which has a type field.)
This makes business rules far easier to write, and much clearer to read, than chains of if/else or switch/case.
To wrap up
Writing scalable servers is the whole point of Erlang; everything is designed to make this easy. To the two features above, I'd add:
hot code upgrade
Mnesia, a distributed database (included in the base distribution)
MochiWeb, on which most Erlang HTTP servers are built
binary support (decoding and encoding binary protocols is remarkably easy)
a great community with great open source projects (ejabberd and CouchDB, but also Webmachine, Riak, and a slew of libraries that are very easy to embed)
fewer lines of code
There is also this article from Richard Jones, who rewrote an application from C++ to Erlang: 75% fewer lines in Erlang.
The list of the most common applications for Erlang has been covered (CouchDB, ejabberd, RabbitMQ, etc.), but I would like to contribute the following.
The reason why it is used in these applications comes from the core strength of Erlang: managing application availability.
Erlang was built from the ground up for the telco environment, which requires systems to meet at least five-nines availability (99.999% yearly uptime). That figure leaves only about five minutes of downtime per year! For this reason primarily, Erlang comes loaded with the following features (non-exhaustive):
Horizontal scalability (ability to distribute jobs across machine boundaries easily through seamless intra & inter machine communications). The built-in database (Mnesia) is also distributed by nature.
Vertical scalability (ability to distribute jobs across processing resources on the same machine): SMP is handled natively.
Code Hot-Swapping: the ability to update/upgrade code live during operations
Asynchronous: the real world is async so Erlang was built to account for this basic nature. One feature that contributes to this requirement: Erlang's "free" processes (>32000 can run concurrently).
Supervision: many different strategies for process supervision with restart strategies, thresholds etc. Helps recover from corner-cases/overloading more easily whilst still maintaining traces of the problems for later trouble-shooting, post-mortem analysis etc.
Resource Management: scheduling strategies, resource monitoring etc. Note that the default process scheduler operates with O(1) scaling.
Live debugging: the ability to "log" into live nodes at will helps trouble-shooting activities. Debugging can be undertaken live with full access to any process' running state. Also the built-in error reporting tools are very useful (but sometimes somewhat awkward to use).
Of course I could talk about its functional roots but this aspect is somewhat orthogonal to the main goal (high availability). The main component of the functional nature which contributes generously to the target goal is, IMO: "share nothing". This characteristic helps contain "side effects" and reduce the need for costly synchronization mechanisms.
I guess all these characteristics help build the case for using Erlang in business-critical applications.
One thing Erlang isn't really good at: processing big blocks of data.
We built a betting exchange (aka prediction market) using Erlang. We chose Erlang over some of the more traditional financial languages (C++, Java etc) because of the built-in concurrency. Markets function very similarly to telephony exchanges. Our CTO gave a talk on our use of Erlang at CTO talk.
We also use CouchDB and RabbitMQ as part of our stack.
Erlang comes from Ericsson, and is used within some of their telecoms systems.
Outside telecoms, CouchDb (a document-oriented database) is possibly the best known Erlang application so far.
Why Erlang? From the overview (worth reading in full):
The document, view, security and replication models, the special purpose query language, the efficient and robust disk layout and the concurrent and reliable nature of the Erlang platform are all carefully integrated for a reliable and efficient system.
I came across this is in the process of writing up a report: Erlang in Acoustic Ray Tracing.
It's an experience report on a research group's attempt to use Erlang for acoustic ray tracing. They found that while the program was easier to write and less buggy, it scaled worse and performed 10x slower than a comparable C program. So one area where Erlang may not be well suited is CPU-intensive work.
Do note, though, that the people who wrote the paper were still in the early stages of learning Erlang and may not have known the proper development practices for CPU-intensive Erlang.
Apparently, Yahoo used Erlang to make something it calls Harvester. Article about it here: http://www.ddj.com/architect/220600332
What is Erlang good for?
http://beebole.com/en/blog/erlang/why-erlang/
http://www.aquabu.com/2008/2/15/erlang-pragmatic-studio-day-3-notes
http://www.reddit.com/r/programming/comments/9q0lr/erlang_and_highfrequency_trading/
(jerf's answer)
It's important to realize that Erlang's four parts (the language itself; the VMs, BEAM and HiPE; the standard libraries, plus modules on GitHub, CEAN, etc.; and the development environment) are being steadily updated, expanded, and improved. For example, I remember reading that floating-point performance improved when Wings 3D's author realized it needed to improve (I can't find a source for this). And this guy just wrote about it:
http://marian-dan.com/wordpress/?p=324
A couple of years ago, Tim Bray's Wide Finder publicity, and all the folks starting to build web app frameworks and HTTP servers, led (at least in part) to improved regex and binary handling. And there's all the work integrating HiPE and SMP, the Dialyzer project, multiple unit-testing and build libraries springing up, and so on.
So its sweet spot is expanding. The difficult thing is that the official docs can't keep up very well, and the volume of the mailing list and Erlang blogosphere is growing quickly.
We are using Erlang to provide the back-end muscle power for our really real-time browser-based multi-player game Pixza. We don't use Flash or any other third-party plugins, though the game is real-time multi-player. We use pure JS and COMET techniques instead. And Erlang supports the "really realtimeliness" of Pixza.
I'm working for wooga, a social games company, and we use Erlang for some of our game backends (basically HTTP APIs for millions of daily users) and for auxiliary services like the iOS push notification provider, payments, etc.
I think it really shines in network-related tasks, and it makes it fairly straightforward to structure and implement both simple and complex network services in it. Distribution, fault tolerance and performance are easy to achieve because Erlang already has some of the key ingredients built in, and they have been used for a long time in critical production infrastructure. So it's not like "the new hip technology thing 0.0.2 alpha".
I know that other game companies use Erlang as well. You should be able to find presentations on slideshare about that.
Erlang draws its strength from being a functional language with no shared memory. Hence, IMO, Erlang won't be suitable for applications that require in-place memory manipulation, image editing for example.
Erlang is getting a reputation for being untouchable at handling a large volume of messages and requests. I haven't had time to download and try to get inside Mr. Erlang's understanding of switching theory... so I'm wondering if someone can teach me (or point to a good instructional site.)
Say as a thought-experiment I wanted to port the Erlang ejabberd to a combination of Python and C, in a way that gave me the same speed and scalability. What structures or patterns would I have to understand and implement? (Does Python's Twisted already do this?)
How/why do functional languages (specifically Erlang) scale well? (for discussion of why)
http://erlang.org/course/course.html (for a tutorial chain)
As far as porting to other languages goes, a message-passing system would be easy to build in most modern languages. Getting the functional style can be done in Python easily enough, although you wouldn't get the internal dispatching features of Erlang "for free". Stackless Python can replicate much of Erlang's concurrency features, although I can't speak to the details as I haven't used it much. It does appear to be much more "explicit" (in that it requires you to define the concurrency in code in places where Erlang's design allows concurrency to happen internally).
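As a rough sketch of the "message passing is easy to do in most modern languages" point, here is a minimal Erlang-style mailbox in plain Python (threads plus a queue, no Stackless or Twisted required; the counter example is made up and none of this comes from ejabberd itself):

import queue
import threading

class Actor:
    """A tiny Erlang-style process: private state, a mailbox, no shared memory."""

    def __init__(self, handler):
        self.handler = handler
        self.mailbox = queue.Queue()
        self._thread = threading.Thread(target=self._loop)
        self._thread.start()

    def send(self, message):
        self.mailbox.put(message)            # asynchronous: never blocks the sender

    def _loop(self):
        state = {}
        while True:
            message = self.mailbox.get()     # the "receive"
            if message == "stop":
                break
            state = self.handler(state, message)

# Usage: a counter actor that only ever touches its own private state.
def count(state, message):
    state[message] = state.get(message, 0) + 1
    print(message, "->", state[message])
    return state

counter = Actor(count)
counter.send("login")
counter.send("login")
counter.send("stop")                         # the actor drains its mailbox and exits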
Erlang is not only about scalability but mostly about
reliability
soft real-time characteristics (enabled by soft real-time garbage collection, which is possible thanks to immutability [no cycles], share-nothing design, and so on)
performance in concurrent tasks (cheap task switching, cheap process spawning, the actor model, ...)
scalability: debatable in its current state, but evolving rapidly (it handles about 32 cores well, which is better than most competitors, but it should get better in the near future).
Another Erlang feature that has an impact on scalability is its lightweight, cheap processes. Since processes have so little overhead, Erlang can spawn far more of them than most other languages. You get more bang for your buck with Erlang processes than many other languages give you.
I think the best fit for Erlang is network-bound applications: it makes communication between nodes much simpler, and things like heartbeat monitoring and automatic restart using supervisors are built into OTP.