How is Erlang fault tolerant, or how does it help in that regard?

How is Erlang fault tolerant, or how does it help in that regard?

I think I covered part of the answer in this reply to another thread.

Erlang is fault tolerant with the following things in mind:
Erlang knows that errors WILL happen and things will break, so instead of trying to guard against every error, it gives you strong tools to minimize the impact of errors and to recover from them as they happen.
Erlang encourages you to program for the success case and to crash if anything goes wrong, rather than trying to recover partially broken data. The idea is that partially incorrect data may propagate further into your system and may get written to the database, which presents a risk to the whole system. Better to get rid of it early and keep only fully correct data.
Process isolation in Erlang minimizes the impact of partially wrong data when it appears and leads to a process crash. The system cleans up the crashed process and its memory but keeps working as a whole.
Supervision and restart strategies keep your system functional if parts of it crash, by restarting the vital parts and bringing them back into service. If something goes so wrong that restarts happen too often, the system is considered broken beyond repair and is shut down.
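To make the isolation point concrete, here is a small sketch (module and names are mine, not from the answer above): one process crashes, a monitoring process is told about it, and everything else keeps running.

    -module(isolation_demo).
    -export([run/0]).

    %% Spawn a worker that crashes immediately, monitor it, and show that
    %% the calling process survives and just receives a 'DOWN' message.
    run() ->
        {Pid, Ref} = spawn_monitor(fun() -> exit(boom) end),
        receive
            {'DOWN', Ref, process, Pid, Reason} ->
                io:format("worker ~p died with reason ~p; still running~n",
                          [Pid, Reason])
        end,
        still_alive.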

Caveat: I am an Erlang noob.
Daniel's answer is essentially correct. I strongly suggest that you take the time to read Erlang creator Joe Armstrong's thesis, "Making reliable distributed systems in the presence of software errors". The thesis provides a good explanation of the need for, and the solution to, developing robust distributed systems. I believe it will answer your question satisfactorily.

Erlang makes it easy to create many, small processes, and to monitor those processes. When one of those processes crashes, it may be possible to restart that part of the system without needing to bring the whole thing down.
You may have seen something like this in modern versions of Windows: the system can restart the graphics driver if it crashes; it doesn't kill the whole system.
To make it easier to write fault-tolerant applications, Erlang provides the concept of supervisor processes. These processes monitor a number of child processes, and know how to respond if a child dies. You might create a whole supervision tree, so that you have fine control about how different parts of the application behave. You can read more in the Erlang documentation.
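A minimal sketch of such a supervisor (module and child names are invented for illustration): it watches a single gen_server child and restarts it if it dies.

    -module(demo_sup).
    -behaviour(supervisor).
    -export([start_link/0, init/1]).

    start_link() ->
        supervisor:start_link({local, ?MODULE}, ?MODULE, []).

    %% One child, restarted on its own if it crashes (one_for_one).
    init([]) ->
        SupFlags = #{strategy => one_for_one, intensity => 5, period => 10},
        Child = #{id       => demo_worker,
                  start    => {demo_worker, start_link, []},
                  restart  => permanent,
                  shutdown => 5000,
                  type     => worker},
        {ok, {SupFlags, [Child]}}.

In a real application the children would be other supervisors as well as workers, which is how the supervision tree mentioned above is built up.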

What makes Erlang scalable?

I am working on an article describing the fundamentals of technologies used by scalable systems. I have worked with Erlang before as a self-learning exercise. I have gone through several articles but have not been able to answer the following questions:
What is in the implementation of Erlang that makes it scalable? What makes it able to run concurrent processes more efficiently than technologies like Java?
What is the relation between functional programming and parallelization? With the declarative syntax of Erlang, do we achieve run-time efficiency?
Does process state not make it heavy? If we have thousands of concurrent users and spawn an equal number of processes as gen_server or any other equivalent pattern, each process maintains its own state. With so many processes, will that not be a drain on RAM?
If a process has to make DB operations and we spawn multiple instances of that process, eventually the DB will become a bottleneck. This happens even if we use traditional models like Apache-PHP. Almost every business application needs DB access. What then do we gain from using Erlang?
How does process restart help? A process crashes when something is wrong in its logic or in the data. OTP allows you to restart a process. If the logic or data does not change, why would the process not crash again and keep crashing always?
Most articles sing Erlang's praises, citing its use at Facebook and WhatsApp. I salute Erlang for being scalable, but I also want to justify its scalability technically.
Even if I find answers to these queries on an existing link, that will help.
Regards,
Yash
In short:
It's immutable. You have no mutable variables, only terms, tuples and atoms, so execution can be stopped and inspected at any point, and data handling is effectively transactional.
Processes are even more lightweight than .NET threads, and they are isolated from one another.
It's made for communication. Millions of connections? Fully asynchronous? Maximum thread safety? A big cross-platform environment built for one purpose, to scale and communicate? That's Ericsson's language, the first in this sphere.
You can choose imitators such as F#, Scala/Akka or Haskell; they try to copy features from Erlang, but only Erlang was born from, and for, one purpose: telecom.
Answers to your other questions can be found on erlang.com, and I suggest reading the documentation. Erlang was built with other aims in mind, so it's not for every task, and if you're comparing it to something like PHP, Erlang will not be your language.
I'm no Erlang developer (yet), but from what I have read, one of the features that makes it very scalable is that Erlang has its own lightweight processes that use message passing to communicate with each other. Because of this there is no shared state and no locking, which is what you get with, for example, a multi-threaded Java application.
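A rough sketch of that message-passing style (module and message names are made up): each process owns its own state, and other processes can only read or change it by sending messages, so no locks are needed.

    -module(counter).
    -export([start/0, increment/1, value/1]).

    start() ->
        spawn(fun() -> loop(0) end).

    increment(Pid) ->
        Pid ! increment,
        ok.

    value(Pid) ->
        Pid ! {value, self()},
        receive
            {counter_value, N} -> N
        end.

    %% The counter's state lives only inside this process; there is no
    %% shared memory to protect.
    loop(N) ->
        receive
            increment     -> loop(N + 1);
            {value, From} -> From ! {counter_value, N}, loop(N)
        end.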
Another difference compared to Java is that the Erlang VM garbage-collects each little process separately, in pauses so small they are barely noticeable, whereas the JVM collects one big heap for the whole VM.
If you run into bottlenecks around database connections, you could start by using a connection-pooling application in front of, say, a replicated PostgreSQL cluster, or, if you still have bottlenecks, a multi-replicated NoSQL setup with Mnesia, Riak or CouchDB.
I think process restarts can be very useful when you are experiencing rare bugs that only appear randomly and only when specific criteria are met. Bugs that make the application crash again as soon as it restarts should ideally be fixed, or contained with a circuit breaker so they do not spread further.
Here is one way process restart helps: by not having to deal with every possible error case. Say you have a program that divides numbers, and some user enters a zero to divide by. Instead of checking for that possible error (and tons more), just code the "happy case" and let the process crash when he enters 3/0. It just restarts, and he can figure out what he did wrong.
You can extend this to any number of situations (attempting to read from a non-existent file because the user misspelled it, etc.).
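As a sketch of that idea (the module is hypothetical, not from the answer): the code handles only the happy path, and a bad input crashes the calling process, leaving recovery to the supervisor or the shell.

    -module(calc).
    -export([divide/2]).

    %% No defensive checks: calc:divide(3, 0) raises badarith and the
    %% calling process crashes, which is exactly what we let happen.
    divide(A, B) ->
        A / B.

Under a supervisor, a worker that crashes this way is simply restarted with a clean state.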
The big reason process restart is valuable is that not every error happens every time, and checking that everything worked is verbose.
Error handling is typically verbose, so writing it interspersed with the logic that does the task can make the code harder to understand. Moving that logic out of the task lets you distinguish clearly between "doing things" code and "it broke" code. You just let the thing that had a problem fail, and a supervising party handles it as needed.
Since most errors don't mean the entire program must stop, only that one particular thing isn't working right, restarting just the part that broke lets you keep operating with degraded functionality instead of being down while you repair the problem.
It should also be noted that the failure recovery is bounded. You have to lay out the limits for how much failure in a certain period of time is too much. If you exceed that limit, the failure propagates to another level of supervision. Each restart includes doing any needed process initialization, which is sometimes enough to fix the problem. For example, in dev, I've accidentally deleted a database file associated with a process. The crashes cascaded up to the level where the file was first created, at which point the problem rectified itself, and everything carried on.
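That bound is expressed as the supervisor's restart intensity. A minimal sketch (module name and values are arbitrary): allow at most 3 restarts in any 10-second window, after which this supervisor gives up and the failure escalates to the next level up, exactly the cascade described above.

    -module(bounded_sup).
    -behaviour(supervisor).
    -export([start_link/0, init/1]).

    start_link() ->
        supervisor:start_link({local, ?MODULE}, ?MODULE, []).

    init([]) ->
        %% More than 3 restarts within 10 seconds terminates this
        %% supervisor too, propagating the failure to its parent.
        SupFlags = #{strategy => one_for_one, intensity => 3, period => 10},
        Child = #{id => db_owner, start => {db_owner, start_link, []}},
        {ok, {SupFlags, [Child]}}.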

Erlang OTP based application - architecture ideas

I'm trying to write an Erlang (OTP) application that would parse a list of users and then launch workers that will run 24x7 to collect user data (using three different APIs) from remote servers and store it in ETS.
What would be the ideal architecture for this kind of application? Do I launch a bunch of workers, one for each user (assuming a small number of users)? What will happen if the number of users increases very rapidly?
Also, to call the different APIs I need to set up a timer mechanism in the worker process.
Any hint will be really appreciated.
Spawning a new process for each user is not such a bad idea. There are HTTP servers that do this for every connection, and they do quite fine.
First of all, the cost of creating a new process is minimal, and the cost of maintaining processes is even smaller. If one of them has nothing to do, it won't do anything; there is (almost) no runtime overhead from inactive processes, which in the end means you are doing only the work you have to do (this is in fact the source of the reactivity of Erlang systems).
One issue might be memory usage. Each process has its own stack and heap, and in a use case where the processes do not actually need to store any internal data, you might be allocating some unnecessary memory. But this can be tuned (even at runtime), and in most cases such memory will be garbage collected anyway.
Actually, I would not worry about such things too soon. The issues you might encounter depend on many things, mostly the amount of outside data or user activity, and you cannot really design for them up front. Most probably you won't encounter any of them for quite some time. There's no need for premature optimization, especially if it would bind you to a design that slows down the rest of your development. In Erlang, with processes being the main source of abstraction, you can easily swap this process-per-user design for a pool of workers, and ETS for an external service, but only if you really need to.
What's most important is that representing a "user" as a process is closest to the problem domain. Users are independent entities, and they deserve separate processes (they have their own state, and they can act or react independently of each other). It is quite similar to using objects and classes in other languages (an over-simplification, but it should get you going).
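A rough sketch of such a per-user worker (the module, the 60-second interval, the fetch function and the ETS table name are all invented): a gen_server per user, using erlang:send_after/3 as the timer mechanism for the periodic API calls and writing results into ETS.

    -module(user_worker).
    -behaviour(gen_server).
    -export([start_link/1]).
    -export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

    -define(INTERVAL, 60000).  % poll every 60 seconds (arbitrary choice)

    start_link(UserId) ->
        gen_server:start_link(?MODULE, UserId, []).

    init(UserId) ->
        erlang:send_after(?INTERVAL, self(), poll),
        {ok, #{user => UserId}}.

    handle_info(poll, State = #{user := UserId}) ->
        %% fetch_user_data/1 stands in for the three real API calls, and
        %% the user_data ETS table is assumed to be created elsewhere.
        Data = fetch_user_data(UserId),
        ets:insert(user_data, {UserId, Data}),
        erlang:send_after(?INTERVAL, self(), poll),
        {noreply, State}.

    handle_call(_Req, _From, State) -> {reply, ok, State}.
    handle_cast(_Msg, State) -> {noreply, State}.

    fetch_user_data(_UserId) ->
        not_implemented.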
If you were writing this in Python or C++ would you worry about how many objects you were creating? Only in extreme cases. In Erlang the same general rule applies for processes. Don't worry about how many you are creating.
As for architecture, the only element that is an architectural issue in your question is whether you should design a fixed worker pool or a 1-for-1 worker pool. The shape of the supervision tree would be an outcome of whichever way you choose.
If you are scraping data your real bottleneck isn't going to be how many processes you have, it will be how many network requests you are able to make per second on each API you are trying to access. You will almost certainly get throttled.
(A few months ago I wrote a test demonstration of a very similar system to what you are describing. The limiting factor was API request limits from providers like fb, YouTube, g+, Yahoo, not number of processes.)
As always with Erlang, write some system first, and then benchmark it for real before worrying about performance. You will usually find that performance isn't an issue, and when it is, you will discover that it is much easier to optimize one small part of an existing system than to design an optimized system from scratch. So just go for it: write something that basically does what you want right now, and worry about optimization tweaks after you have it working. Once you have some concrete performance data (memory, request latency, etc.), that is the time to start thinking about performance.
Your problem will almost certainly be on the API providers' side or your network latency, not congestion within the Erlang VM.

Erlang supervisor processes

I have been learning Erlang intensively, and after finishing 'Programming Erlang' from Joe Armstrong, there is one thing that I keep coming back to.
In my mind a Supervisor spawns One process per child handler. So each declared gen_server type handler will run as a separate process.
What happens if you are building a tiny web server and you want each request to be its own process? Do you still conform to OTP principles and use a gen_server somehow (and how?), or do you create your own behaviour?
How does Cowboy handle this, for example? Does it still use gen_server?
tl;dr: I find that trying to figure out the "correct" supervision structure at the beginning of a project is a form of premature optimization.
The "right" way to design your supervision tree depends on what the worker parts of your application are doing. In the case of a web server I would probably first explore something along the lines of:
top supervisor (singular)
    data service supervisor (one per service type)
        worker pool (all workers under the service sup)
    client connection supervisor (one)
        connection worker pool (or one per connection; have to play with it to decide)
    logical supervisor (as appropriate -- massive variance here, depending on problem domain)
        workers or supervisors (as appropriate -- have to explore/know the problem domain to have any idea how this should be structured)
So that's several workers per supervisor type at the lower level. I haven't used Cowboy so I don't know how it is organized. The point I'm trying to make is that while the mechanics of handling data services serving web pages are relatively trivial, the part of the system that actually does the core problem-solving work might not be and this is going to dictate everything interesting about the system.
It is a bad thing to have your problem-solving bits mixed in the same module as your web-displaying or connection handling bits. Ideally you should be able to use the same logic units in a native application, a web application and a networked service without any changes.
Ultimately the answer to whether you should have 1:1 supervisors to workers or 1:n depends on what you're doing and what restart strategy gives you the best balance among recovery to a known consistent state, latency felt by the user, and resource usage.
One of my favorite things about Erlang is that I can start with a naive supervisor structure like the one above, play with it until I see where it's not so good, and rather easily switch things around and experiment with alternatives without fundamentally altering my system. (The same goes for playing with alternative data representations if you write proper abstractions around them.) So first, get something that works in testing. Then load it up and see if you can break it. Then start worrying about the details, after you understand where the problems actually are.
It is a common pattern in Erlang to spawn one server per client. You then use a supervisor with the simple_one_for_one strategy for the child servers, which lets you ask the supervisor to start a child on demand. This is generally used when you don't know how many processes you will need and when the processes are independent (a crash of one process should not impact the others).
There is very good information on learnyousomeerlang.com (the LYSE supervisor chapter); the whole site is worth reading.
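A minimal sketch of that pattern (module and child names are invented): all children are started from the same spec, one per client, on demand.

    -module(client_sup).
    -behaviour(supervisor).
    -export([start_link/0, start_client/1, init/1]).

    start_link() ->
        supervisor:start_link({local, ?MODULE}, ?MODULE, []).

    %% Called once per incoming client; Args is appended to the child's
    %% start_link arguments.
    start_client(Args) ->
        supervisor:start_child(?MODULE, [Args]).

    init([]) ->
        SupFlags = #{strategy => simple_one_for_one,
                     intensity => 5, period => 10},
        Child = #{id      => client_server,
                  start   => {client_server, start_link, []},
                  restart => temporary},  % a crashed client is not restarted
        {ok, {SupFlags, [Child]}}.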

Identify the core an Erlang process is scheduled on

Any way to identify the specific core an Erlang process is scheduled on?
Let's say you spawn a bunch of processes to simply print out the core the process is running on, and then exit. Any way to do this?
I spent some time reading docs and googling but couldn't find anything.
Thanks.
EDIT: "core" = CPU core number (or if not number, another identifier that identifies the CPU core).
There is erlang:system_info(scheduler_id), which in most cases is mapped to a logical core. But this information is pretty ephemeral, because the process may be suspended and resumed on any other scheduler.
What is your use case that you really need that kind of information?
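For what it's worth, a small sketch of that approach: spawn a handful of processes that each report the scheduler they happen to be running on at that moment (usually mapped to a logical core, but not guaranteed to stay the same).

    %% In the shell:
    1> [spawn(fun() ->
                  io:format("pid ~p on scheduler ~p~n",
                            [self(), erlang:system_info(scheduler_id)])
              end) || _ <- lists:seq(1, 8)].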
No, there is not. If you spawn 2000 processes and they terminate quickly, chances are that you will finish the job before any rebalancing occurs. In that case you would have a single core doing all the work.
You could take a look at the scheduler utilization calls, however; see erlang:statistics(scheduler_wall_time). It will tell you how much work each scheduler is really doing.
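A short sketch of using that call (the one-second pause is arbitrary): scheduler_wall_time has to be switched on first, then two snapshots can be compared to get the utilization of each scheduler.

    %% In the shell:
    1> erlang:system_flag(scheduler_wall_time, true).
    2> T0 = lists:sort(erlang:statistics(scheduler_wall_time)).
    3> timer:sleep(1000).   % let the system do some work
    4> T1 = lists:sort(erlang:statistics(scheduler_wall_time)).
    5> [{Id, (A1 - A0) / (W1 - W0)}
        || {{Id, A0, W0}, {Id, A1, W1}} <- lists:zip(T0, T1)].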

Learning Erlang? speedbump thread, common, small problems

I just want to know all the small problems that got between you and your final solution when you were new to Erlang.
For example, here are the first speedbumps I had:
Use controlling_process(Socket, Pid) if you hand a socket off to another process, so the right packet goes to the right process.
Are you going to start talking to another node? Remember to net_adm:ping('car@bsd-server'). in the shell first, or no communication will get through.
timer:sleep(10) if you want to do nothing for a while. Always useful when debugging.
Learning to browse the standard documentation
Once you learn how the OTP documentation is organised it becomes much easier to find what you're looking for (you tend to need to learn which applications provide which modules or kinds of modules).
Also just browsing the documentation for applications is often quite rewarding - I've discovered lots of really useful code this way - sys, dbg, toolbar, etc.
The difference between shell erlang and module erlang
Shell erlang is a slightly different dialect to module erlang. You can't define module functions (only funs), you need to load record definitions in order to work with records (rr/1) and so on. Learning how to write erlang code in terms of anonymous functions is somewhat tricky, but is essential for working on production systems with a remote shell.
Learning the interaction between the shell and {start,spawn}_link ed processes - when you run some shell code that crashes (raises an exception), the shell process exits and will broadcast exit signals to anything you linked to. This will in turn shut down that new gen_server you're working on. ("Why does my server process keep disappearing?")
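A tiny illustration of both points (the record file and the my_server module are made up): the shell works with funs, loads record definitions with rr/1, and a crash at the prompt takes linked processes down with it.

    1> Square = fun(X) -> X * X end.
    2> Square(4).
    16
    3> rr("my_records.hrl").    % load record definitions into the shell
    4> {ok, Pid} = my_server:start_link().
    5> 1 = 2.                   % badmatch: the shell's evaluator crashes,
                                % and the linked my_server is killed too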
The difference between erlang expressions and guard expressions
Guard expressions (when clauses) are not Erlang expressions. They may look similar, but they're quite different. Guards cannot call arbitrary erlang functions, only guard functions (length/1, the type tests, element/2 and a few others specified in the OTP documentation). Guards succeed or fail and don't have side effects. Erlang expressions on the other hand can do what they like.
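A quick sketch of the difference (the module is invented): only guard BIFs are allowed after when, so an ordinary function such as string:uppercase/1 has to be called in the clause body instead.

    -module(guard_demo).
    -export([describe/1]).

    %% Guards may only use guard BIFs (is_integer/1, length/1, ...).
    describe(N) when is_integer(N), N > 0      -> positive_integer;
    describe(L) when is_list(L), length(L) > 3 -> long_list;
    %% string:uppercase/1 is a normal function, so it cannot appear in a
    %% guard; call it in the body instead.
    describe(S) when is_list(S)                -> {upper, string:uppercase(S)};
    describe(_)                                -> other.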
Code loading
Working out when and how code upgrades work, and the incantation to get a gen_server to upgrade to the latest version of a callback module (sys:suspend(Pid), code:load_file(Mod), sys:change_code(Pid, Mod, undefined, undefined), sys:resume(Pid).).
The code server path (code:get_path/0) - I can't count how many times I ran into undefined function errors that turned out to be me forgetting to add an ebin directory to the code search path.
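A quick shell sketch of getting past that particular speedbump (the path and module name are placeholders):

    1> code:which(my_module).     % 'non_existing' means it is not on the path
    2> code:add_patha("/home/me/myapp/ebin").
    3> l(my_module).              % now the module loads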
Building erlang code
Working out a useful combination of emake (make:all/0 and erl -make) and gnu make took quite a long time (about three years so far :).
My current favourite makefiles can be seen at http://github.com/archaelus/esmtp/tree/master
Erlang distribution
Getting node names, DNS, cookies and all the rest right in order to be able to net_adm:ping/1 the other node. This takes practice.
Remote shell IO intricacies
Remembering to pass group_leader() to io:format calls run on the remote node so that the output appears in your shell rather than mysteriously disappearing (I think the SASL report browser rb still has a problem with sending some of its output to the wrong node when used over a remote shell connection)
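A sketch of that trick (the node name is made up): pass your local group leader explicitly as the IO device, so output produced by the remote node comes back to your shell.

    1> GL = group_leader().
    2> rpc:call('other@remotehost', io, format,
                [GL, "this prints in the calling shell~n", []]).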
Integrating it into MSVC 6, so I could use the editor and see the results in the output window. I created a tool with:
Command: the path to erlc
Arguments: +debug_info $(FileName)$(FileExt)
Initial directory: $(FileDir)
and checked "Use Output Window".
Debugging is hard. All I know to do is to stick calls to "error_logger:info_msg" in my code.
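For what it's worth, the dbg application mentioned earlier can often replace those printf-style calls; a minimal trace session might look like this (the module and function are placeholders):

    1> dbg:tracer().                        % start printing trace messages
    2> dbg:p(all, c).                       % trace function calls in all processes
    3> dbg:tpl(my_module, my_function, x).  % show calls, return values and exceptions
    %% ... exercise the code ...
    4> dbg:stop().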
Docs have been spotty -- they're correct, but very very terse.
This is my own fault, but: I started coding before I understood eunit, so a lot of my code is harder to test than it should be.
The thing that took me the most time to get my head around was just the idea of structuring my code entirely around function calls and message passing. The rest of it either just fell out from there (spawning, remote nodes) or felt like the usual stuff you've got to learn in any new language (syntax, stdlib).
