Erlang: When is it logical to spawn a new process? When not?

If we have a really heavyweight system where processes are spawned to distribute the load, that's clear.
If we are talking about a web server: it's a good idea to spawn a new process for each connection, because they can then be distributed. But what else? A separate process each for Model, View and Controller? That sounds strange, because they all run in a linear way, so they cannot be parallelized well and we only get overhead from swapping. Also, those Model, View and Controller parts are so lightweight that they could stay in a single process, couldn't they?
So, where is it good to spawn a new process, apart from the "new connection" situation?
Thank you in advance.

In general, it's anywhere you have a shared resource to manage. It may be a socket, or a database connection, but it may also be some shared in-memory data, or a state machine of some kind.
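For illustration, here is a minimal sketch of a process that owns a shared resource, in this case a simple in-memory counter wrapped in a gen_server (the module and function names are made up):

```erlang
%% A minimal sketch: one process owns a piece of shared state (a counter)
%% and serializes all access to it.
-module(shared_counter).
-behaviour(gen_server).
-export([start_link/0, increment/0, value/0]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

increment() ->
    gen_server:cast(?MODULE, increment).

value() ->
    gen_server:call(?MODULE, value).

init([]) ->
    {ok, 0}.

handle_call(value, _From, Count) ->
    {reply, Count, Count}.

handle_cast(increment, Count) ->
    {noreply, Count + 1}.
```

Any number of client processes can call shared_counter:increment/0 concurrently; the gen_server process serializes the updates, which is exactly the "manage a shared resource" role.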
You may also want to do parallel processing of a list of values (see pmap).
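pmap is not part of the standard library; a commonly shown sketch looks roughly like this:

```erlang
%% A rough sketch of pmap: spawn one process per element, then collect
%% the results in the original order.
pmap(F, List) ->
    Parent = self(),
    Refs = [begin
                Ref = make_ref(),
                spawn(fun() -> Parent ! {Ref, F(X)} end),
                Ref
            end || X <- List],
    [receive {Ref, Result} -> Result end || Ref <- Refs].
```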
To your "swapping" point, you should know that Erlang processes do not use operating-system facilities for scheduling; the VM schedules them itself, and switching between them is all but free.
In the specific case of a web application server, I understand your question. If you are writing a conventional web application with very little shared state, your web framework probably already handles caching, session state and the like (those facilities will spawn processes).
We are all highly indoctrinated into this stateless web application model. We have all been told since we were pups that stateful systems are hard to develop and don't scale. I think you will find that there are those who are challenging that. As browser support for WebSockets improves, and with server-side languages like Erlang and Clojure providing scalable platforms with safe state management, there will be those who are able to make more interactive web applications. As an extreme example, could you imagine WoW as a web application?

One reason to spawn a new process for each connection is that it makes programming the connections much simpler. Since a process handles only one connection, things like blocking database access, long polling or streaming become much easier: the fact that this process blocks will not affect any other connection.
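A minimal sketch of that pattern, assuming a line-based echo service on an arbitrary port, where each accepted connection gets its own process:

```erlang
-module(echo_server).
-export([start/1]).

start(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [binary, {packet, line},
                                         {active, false}, {reuseaddr, true}]),
    accept_loop(Listen).

accept_loop(Listen) ->
    {ok, Socket} = gen_tcp:accept(Listen),
    %% One process per connection; it only starts using the socket
    %% after ownership has been transferred to it.
    Pid = spawn(fun() -> receive go -> handle(Socket) end end),
    ok = gen_tcp:controlling_process(Socket, Pid),
    Pid ! go,
    accept_loop(Listen).

handle(Socket) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, Line} ->
            %% A real handler could block here on a database call or long poll
            %% without affecting any other connection.
            gen_tcp:send(Socket, Line),
            handle(Socket);
        {error, closed} ->
            ok
    end.
```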
In Erlang the general "rule" is that you use processes to model concurrent activity and to manage shared resources. Processes are the fundamental way for structuring your system.

Related

Is Erlang a bad language for this app?

I am building a framework for realtime web applications. I started to do it in Elixir, because that is the modern way to develop applications for the Erlang VM. Erlang should be good if you need concurrent, fault-tolerant, scalable apps (something like a web server, etc.). That is exactly what I need.
Question: a realtime framework always needs to keep information about, for instance, who is interested in what. This will be accomplished using the publish/subscribe pattern. So I will have 1000 clients subscribing to the topic "newest-message". I need to save those clients (the pid of the process representing each client) somewhere, so that I can reach them later when content for the topic "newest-message" appears.
This is where I am unsure whether Erlang is really a good fit for my framework.
ETS is probably the only place to store the shared data, but ETS always copies records when you write or read them. So that means copying 1000 pids every time I need to access them (instead of just iterating over a list, as I would do in C/Java/Python).
This will probably become a serious bottleneck if I keep copying that many records out of ETS (many clients, many subscriptions, etc.), am I right?
Sharing state may be a sign of bad design. You can, for example, have a process for each queue/topic that stores its own list of subscribers. You send a message to that topic process and it in turn sends the message to the clients. This way, you never copy the entire subscriber list.
If you need to process the subscribers in parallel, you can split the subscriber list between several processes.
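A minimal sketch of such a per-topic process that owns its own subscriber list (module and message names are made up):

```erlang
-module(topic).
-export([start/0, subscribe/2, publish/2]).

start() ->
    spawn(fun() -> loop([]) end).

subscribe(Topic, Pid) ->
    Topic ! {subscribe, Pid},
    ok.

publish(Topic, Msg) ->
    Topic ! {publish, Msg},
    ok.

loop(Subscribers) ->
    receive
        {subscribe, Pid} ->
            %% In a real system you would also monitor Pid and drop it when it dies.
            loop([Pid | Subscribers]);
        {publish, Msg} ->
            [Pid ! {topic_message, Msg} || Pid <- Subscribers],
            loop(Subscribers)
    end.
```

The subscriber list never leaves the topic process, so publishing does not involve copying it anywhere.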
Erlang's fault tolerance is achieved because it doesn't let you share state, and you have to put more thought into a design that avoids state sharing while remaining efficient. This pays off in the long run, so Erlang/Elixir is definitely a good language for this kind of app. Just look at RabbitMQ.
In my opinion, if you plan to store state like "who is interested in what", Erlang alone may not be a good idea. Of course, it is sometimes very convenient to pass everything around in messages (as you would in Erlang), but when there is a lot of content to store, the absence of shared state in Erlang starts to hinder you rather than help.
On the other hand, you can keep much of Erlang's convenience and use it together with a Java application, for example. Erlang's Java interface (Jinterface) lets you connect both technologies quite easily, and at the same time you can use the Java app to store information for you (and persist it somewhere when necessary) and Erlang for the whole concurrent, real-time signalling part. Even better: you can still implement OTP with an architecture like that, so you end up with quite a lightweight application (because the real-time logic is done by Erlang for you) that is still able to access stored data easily (because Java helps you there).

Distributing an Erlang Chat system

I just finished Erlang in Practice screencasts (code here), and have some questions about distribution.
Here is the overall architecture:
Here is what the supervision tree looks like:
Reading Distributed Applications leads me to believe that one of the primary motivations is for failover/takeover.
However, is it possible, for example, for the Message Router supervisor and its workers to be on one node, and the rest of the system on another node, without much change to the code?
Or should there be 3 different OTP applications?
Also, how can this system be made to scale horizontally? For example, if I realize that my system can currently handle 100 users, and I've identified the Message Router as the main bottleneck, how can I "just add another node" so that it can handle 200 users?
I've developed Erlang apps only during my studies, but generally we had many small processes, each doing only one thing and sending messages to other processes. And the beauty of Erlang is that it doesn't matter whether you send a message within the same Erlang VM, within the same computer, on the same LAN or over the Internet: the call and the reference (pid) to the other process always look the same to the developer.
So you really want to have one application for every small part of the system.
That being said, it doesn't make it any simpler to construct an application that can scale out. A rule of thumb says that if you want an application to run on ten times as many nodes, you need to rewrite it, since otherwise the messaging overhead becomes too large. And obviously you already need to consider this when you go from one node to two.
So if you have found the bottleneck, the application that is particularly slow when handling too many clients, you will want to run it a second time, and you need to have some additional load balancing in place before you start that second instance.
Let's assume the supervisor checks message content for inappropriate material and is therefore slow. In that case, the node everyone talks to would be a simple router application that forwards the messages to the different instances of the supervisor application in a round-robin manner. If one or two instances are not enough, you could write the router in such a way that the number of instances can be changed by sending it control messages.
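A minimal sketch of such a round-robin router, assuming the worker pids are handed to it at start (all names here are made up):

```erlang
-module(rr_router).
-export([start/1, route/2]).

start(Workers) when Workers =/= [] ->
    spawn(fun() -> loop(Workers, []) end).

route(Router, Msg) ->
    Router ! {route, Msg},
    ok.

%% Cycle through the ready workers; once all have been used, start over.
loop([], Used) ->
    loop(lists:reverse(Used), []);
loop([Worker | Rest], Used) ->
    receive
        {route, Msg} ->
            Worker ! Msg,
            loop(Rest, [Worker | Used]);
        {add_worker, New} ->
            %% Control message that grows the pool at runtime.
            loop([Worker | Rest], [New | Used])
    end.
```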
However, for this to work automatically, you would need another process that monitors the servers and detects when they are overloaded or underutilized.
I know that dynamically adding and removing resources always sounds great when you hear about it, but as you can see it is a lot of work and you need to have some messaging system built which allows it, as well as a monitoring system which can monitor the need.
Hope this gives you some idea of how it could be done, unfortunately it's been over a year since I wrote my last Erlang application, and I didn't want to provide code which would be possibly wrong.

Sandboxing user code with Erlang

As far as I know Erlang provides advanced features for error handling and isolation of processes.
I'm building a system that allows users to submit their code to be executed in a shared server environment, and I need to make it safe.
Requirements are:
limit CPU and memory usage individually for each user process.
forbid a user process from communicating with other processes (except some processes specially designed for that purpose).
forbid access to all system resources (shell, file system, ...).
terminate a user process in case of errors or high resource consumption.
Is it possible to do all this with Erlang and keep it performance-efficient?
In general, Erlang doesn't provide means to sandbox code which a user can inject. You can try writing your own piece of protection code, but it is rather hard.
A better choice would probably be a language like Safe Haskell (http://www.haskell.org/ghc/docs/7.4.2/html/users_guide/safe-haskell.html), which is specifically built to do this kind of thing.
The isolation provided by Erlang is not intended to protect against malicious modules being injected. In fact, there is no such protection in the distributed case either. As soon as two machines are connected, there is no limit to what you can do to the other machine.
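To make that concrete: nothing in the language stops loaded code from evaluating expressions like the following, which is why process isolation must not be mistaken for a security boundary (a deliberately destructive sketch, not something to run):

```erlang
os:cmd("cat /etc/passwd"),     %% arbitrary shell access
{ok, _} = file:list_dir("/"),  %% unrestricted file-system access
erlang:halt().                 %% takes down the whole node
```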
There has been work done on Safe Erlang in the past and you can find several papers about it.
The ErlHive project addresses the problem in an interesting way.

To inetd or not to inetd... when should I use inetd for my network server program?

Can anyone give a concise set of real-world considerations that would drive the choice of whether or not to use inetd to manage a program that acts as a network server?
(If inetd is used, I think it alters the requirements around networking code in the program, so I think it's definitely programming-related and not general IT)
The question is based around an implementation I've seen that uses a control program managed by inetd to start a network listener that then runs forever and takes constant and heavy load. It didn't seem like a good fit with the textbook inetd usage profile (on-demand, infrequently used, lightweight) and got me interested in the more general question.
It depends on the usage pattern for your service. If the startup time for your daemon is low, and you expect it to be used infrequently, then inetd might be a good fit. It reduces or even eliminates the need to write any additional networking code.
If your daemon is more heavyweight or more frequently used, you're probably better off writing it standalone. You can just as easily write an init.d script and some conf.d configuration to go with it and it will be no harder for an admin to manage. Most programming languages these days have easy to use socket libraries so in many cases the networking code may not even be that difficult.
I've found in my experience that few admins these days are familiar with inetd. Most daemons just provide their own init script. In fact, of the few hundred systems which I manage I can't think of a single one that launches anything through inetd at all. That's something worth considering.
Hooking into inetd will make your service slightly easier to manage from an operational point of view as inetd allows a sysadmin to control practically all of how network communications with your program happens. However it will require you to make a few code changes to your program. Also, it may not be as efficient as just making your program run as a daemon to begin with.
EDIT: I personally never use inetd and always opt to write server processes as standalone daemons.
I think another factor worth considering when deciding whether to use inetd is how much memory the process handling a request consumes on average. If this is fairly high, then under high load you risk running out of memory (since inetd forks a process per connection). The same server might be implementable in a multithreaded or select/poll manner, possibly allowing for higher load and less memory per connection.
You can use inetd to give TCP/UDP capabilities to simple programs that operate over stdin/stdout.
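For instance, a trivial line-echo service can be written purely against stdin/stdout and then wired up through inetd; the service name, port and paths below are made up for illustration:

```erlang
#!/usr/bin/env escript
%% A hypothetical inetd-launched echo service. It only reads stdin and
%% writes stdout; inetd attaches both to the accepted socket.
%%
%% Illustrative wiring:
%%   /etc/services:   myecho 7777/tcp
%%   /etc/inetd.conf: myecho stream tcp nowait nobody /usr/local/bin/myecho myecho
main(_Args) ->
    loop().

loop() ->
    case io:get_line("") of
        eof  -> ok;
        Line -> io:format("~s", [Line]),   %% echo the line back to the client
                loop()
    end.
```

Note that starting a VM per connection is exactly the kind of heavyweight startup discussed above, so this style only fits lightly used services.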
Without inetd, your program will need to manage a slew of concerns, including network interfaces, sockets, forking, resource limits, etc.
What alternative strategy are you considering?
inetd is a good way of ensuring your server starts when the OS boots in the appropriate run level. Even if you do design your server to have some other management mechanism, inetd could still wrap all the commands quite simply. It's only shell scripts after all.

What are the requirements for an application health monitoring system?

What, at a minimum, should an application health-monitoring system do for you (the developer) and/or your boss (the IT Manager) and/or the operations (on-call) staff?
What else should it do above the minimum requirements?
Is monitoring the 'infrastructure' applications (ms-exchange, apache, etc.) sufficient or do individual user applications, web sites, and databases also need to be monitored?
if the latter, what do you need to know about them?
ADDENDUM: Thanks for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
Whether the application is running.
Unusual cpu/memory/network usage.
Report any unhandled exceptions.
Status of various modules (if applicable).
Status of external components (databases, webservices, fileservers, etc.)
Number of pending background tasks (if applicable).
Maybe track usage of the application and report statistics on the most/least used functionality so you know where optimizations are most beneficial.
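As a rough illustration, a heartbeat process inside the application could periodically report a few of these figures to whatever monitoring sink you use (io:format is just a stand-in here):

```erlang
-module(heartbeat).
-export([start/1]).

start(IntervalMs) ->
    spawn(fun() -> loop(IntervalMs) end).

loop(IntervalMs) ->
    Report = #{node          => node(),
               memory_bytes  => erlang:memory(total),
               process_count => erlang:system_info(process_count),
               run_queue     => erlang:statistics(run_queue)},
    %% Replace io:format with a push to your monitoring system of choice.
    io:format("health report: ~p~n", [Report]),
    timer:sleep(IntervalMs),
    loop(IntervalMs).
```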
The answer is 'it depends'. Why do you need to monitor? How large is your operations staff? Do you need reporting? What is the application environment? Who cares if the application fails? Who cares if an exception happens? Are any of the errors recoverable? I could ask questions like these for a long time.
Great question.
We were looking for an application-level monitoring solution for our needs some time ago, without any luck. Popular monitoring solutions are mostly aimed at monitoring infrastructure and, in my opinion, they are too complicated for the requirements of most small and mid-sized companies.
We required (mainly) the following features:
alerts - we wanted to know about incidents as fast as possible
painless management - a hosted service would be the best
visualizations - it's good to know what is going on and take some knowledge from the data
Because we didn't find a suitable solution, we started writing our own. We ended up with an up-and-running service called AlertGrid. (You can check it out for free, of course.)
The idea behind it is to provide an easy way to handle custom monitoring scenarios. The integration API is very simple (one function with two required parameters). At the moment we and others are using it to:
monitor scheduled tasks (cron jobs)
monitor entire application logic execution
alert on errors in applications
we are also working on examples of basic infrastructure monitoring using AlertGrid
This is such an open ended question, but I would start with physical measurements.
1. Are all the machines I think are hosting this site pingable?
2. Are all the machines which should be serving content actually serving some content? (Ideally this would be hit from an external network.)
3. Is each expected service on each machine running?
3a. Have those services run recently?
4. Does each machine have hard drive space left? (Don't forget the db)
5. Have these machines been backed up? When was the last time?
Once one has laid out the physical monitoring of the systems, one can address the checks specific to a particular system:
1. Can an automated script log in? How long did it take?
2. How many users are live? Have there been a million fake accounts added?
...
These sorts of questions get more nebulous and can be very system-specific. They can also usually be derived reactively when responding to the physical measurements: the hard drive filled up, maybe because the web server logs grew after a bunch of agents created too many fake users. That kind of thing.
While plan A shouldn't necessarily be reactive, that is the way many a site sets up a monitoring system.
Minimum: make sure it is running :)
However, some other things would be very useful. For example, the CPU load, RAM usage and (in multi-user systems) which user is running what. Also, for applications that access the network, a list of network connections for each app. And (if you have access to the client computer(s)) it would be nice to see the window title of the app - maybe check every 2-3 minutes whether it changed and save it. Also, a list of files opened by the application could be very useful, but it is not a must.
I think this is fairly simple - monitor so that you can be warned early enough before something goes wrong. That means monitor dependencies and the application itself.
It's really hard to provide specifics if you're not going to give details on the application you're monitoring, so I'd say use that as a general rule.
At a minimum you want to know that the system is healthy. What counts as healthy is subjective: are the computers up, do the needed resources exist, is data flowing through the system, is the data producing correct results, and so on.
In my project we monitor most of this and then some. It really comes down to the highest level at which you can verify that everything is working. In our case we need to check all the way down to the data output. If you only need to know whether the machines are up, that saves you from having to show an inexperienced end user what is wrong.
There are also "off the shelf" tools that will do a lot of the hard work for you if you are not digging too deeply into data results. I particularly liked Nagios when I was looking around, but we needed more than it could easily show, so I wrote our own monitoring system. Basically we also watch for "peculiarities" in the system: memory/CPU spikes, etc.
Thanks everyone for the input. I was really looking for application-level monitoring, not infrastructure monitoring, but it is good to know about both.
The difference is:
infrastructure monitoring would be the servers plus MS Exchange Server, Apache, IIS, and so forth
application monitoring would be the user machines and the specific programs they use to do their jobs, and/or the servers plus the data-moving/backend applications they run to keep the data flowing
Sometimes it's hard to draw the line - an oversimplified definition might be "if your team wrote it, it's an application; if you bought it, it's infrastructure".
I think in practice it is best to monitor both.
What you need to do is break down the business process of the application and have the software emit events at major business components. In addition, you'll need to create end-to-end synthetic transactions (e.g. emulating end users clicking on a website). All that data would be fed into a monitoring tool. In the past, I've done JMX for applications, which flowed into Tivoli Monitoring's JMX Adapter, and I've written scripts that implement a "fake user" and pipe the results into Tivoli Monitoring's Script Adapter. Tivoli Monitoring takes the data and creates application health and performance charts from that raw data.
