In Elixir, what's the difference between a node and a process? - erlang

This question is tagged "Erlang" as well, because these Elixir modules more or less just wrap Erlang functionality.
Nodes seem like named processes. They can execute functions concurrently, link to other nodes, and act like process supervisors. Many of the functions in the two modules appear to be the same, reinforcing the similarity.
What is the value of the Node module? What does it offer that Process doesn't?

Nodes seem like named processes.
It seems you've misunderstood what a Node is. A Node is an instance of the Erlang VM, running as one Operating System process. An Erlang Process is a unit executing code, similar to an Operating System thread but lighter. An Erlang Process runs on an Erlang Node, just like Operating System processes run on an Operating System. An Erlang Process cannot run without an Erlang Node.
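To make the relationship concrete, here is a minimal shell sketch (the node names 'a@host' and 'b@host' are hypothetical, and the two nodes are assumed to be already connected):
%% In a shell on node 'a@host':
Pid1 = spawn(fun() -> receive stop -> ok end end).           %% a process on the local node
Pid2 = spawn('b@host', fun() -> receive stop -> ok end end). %% a process on another node
node().     %% 'a@host' - the node (VM instance) this shell runs in
node(Pid1). %% 'a@host' - Pid1 lives here
node(Pid2). %% 'b@host' - same code, hosted by a different VM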

They are two distinct concepts. A node is an instance of an Erlang virtual machine, and a process is a very lightweight thread running inside that virtual machine.
Here is the definition of an Elixir process, according to the documentation:
In Elixir, all code runs inside processes. Processes are isolated from each other, run concurrent to one another and communicate via message passing. Processes are not only the basis for concurrency in Elixir, but they also provide the means for building distributed and fault-tolerant programs.
Elixir's processes should not be confused with operating system processes. Processes in Elixir are extremely lightweight in terms of memory and CPU (unlike threads in many other programming languages).
And a node is the representation of an Erlang virtual machine. Here are some examples of functions from the Node module:
alive?() - Returns true if the local node is alive
connect(node) - Establishes a connection to node
disconnect(node) - Forces the disconnection of a node
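Since these Elixir functions wrap the Erlang built-ins, the distinction is easy to see from a named Erlang shell (node names hypothetical):
%% In an Erlang shell started with: erl -sname a
is_alive().                          %% true - this VM runs as a distributed node
net_kernel:connect_node('b@myhost'). %% true - what Node.connect/1 wraps
nodes().                             %% ['b@myhost'] - currently connected nodes
erlang:disconnect_node('b@myhost').  %% true - what Node.disconnect/1 wraps
self().                              %% <0.84.0> - the process evaluating this line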

Related

Running the erlang vm inside a process

Is it possible to run the Erlang VM inside a process?
I'm asking because I'm trying to use some code written with erl_nif, which is very cool indeed, but I have to send information back to the process that could possibly spawn the VM. The only approach I've thought of is to create some IPC channel, like pipes or reading from stdout, but this imposes the need for some protocol, and it would be nice if I could call what I need directly and get the result from the function's return value.
Never mind that the Erlang VM manages its own OS threads and has its own event loop: how could it stay stable and predictable when embedded inside an unpredictable host process? No, you can't embed the Erlang VM inside your own OS process.
Think about the Erlang VM as an operating system:
Write all your code in Erlang;
Use NIFs/port drivers only if you really need more speed. But be aware - you're in "kernel mode" now!
Use ports/erl_interface/C nodes if you have a lot of code written in some other language (see the sketch below);
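For instance, a plain port keeps foreign code in a separate OS process, so a crash there cannot take the VM down. A minimal sketch (the external program and its framing protocol are hypothetical):
%% {packet, 2} length-prefixes every message with 2 bytes; the external
%% program must use the same framing on its stdin/stdout.
Port = open_port({spawn, "./my_worker"}, [binary, {packet, 2}]),
Port ! {self(), {command, term_to_binary({compute, 42})}},
receive
    {Port, {data, Bin}} ->
        binary_to_term(Bin)   %% the worker's reply, decoded back to a term
after 5000 ->
    timeout                   %% worker unresponsive; the VM itself is unaffected
end.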

What's the best way to run a gen_server on all nodes in an Erlang cluster?

I'm building a monitoring tool in Erlang. When run on a cluster, it should run a set of data collection functions on all nodes and record that data using RRD on a single "recorder" node.
The current version has a supervisor running on the master node (rolf_node_sup) which attempts to run a 2nd supervisor on each node in the cluster (rolf_service_sup). Each of the on-node supervisors should then start and monitor a bunch of processes which send messages back to a gen_server on the master node (rolf_recorder).
This only works locally. No supervisor is started on any remote node. I use the following code to attempt to load the on-node supervisor from the recorder node:
rpc:call(Node, supervisor, start_child, [{global, rolf_node_sup}, [Services]])
I've found a couple of people suggesting that supervisors are really only designed for local processes. E.g.
Starting processes at remote nodes
how: distributed supervision tree
What is the most OTP way to implement my requirement to have supervised code running on all nodes in a cluster?
Distributed applications are suggested as one alternative to a distributed supervision tree, but they don't fit my use case. They provide failover between nodes, not a way to keep code running on a set of nodes.
The pool module is interesting. However, it provides for running a job on the node which is currently the least loaded, rather than on all nodes.
Alternatively, I could create a set of supervised "proxy" processes (one per node) on the master which use proc_lib:spawn_link to start a supervisor on each node. If something goes wrong on a node, the proxy process should die and then be restarted by its supervisor, which in turn should restart the remote processes. The slave module could be very useful here.
Or maybe I'm overcomplicating this. Is directly supervising nodes a bad idea? Perhaps I should architect the application to gather data in a more loosely coupled way: build a cluster by running the app on multiple nodes, tell one to be master, and leave it at that!
Some requirements:
The architecture should be able to cope with nodes joining and leaving the pool without manual intervention.
I'd like to build a single-master solution, at least initially, for the sake of simplicity.
I would prefer to use existing OTP facilities over hand-rolled code in my implementation.
Interesting challenges, to which there are multiple solutions. The following are just my suggestions, which will hopefully help you make a better choice about how to write your program.
As I understand your program, you want to have one master node where you start your application. This will start the Erlang VM on the nodes in the cluster. The pool module uses the slave module to do this, which requires key-based ssh communication in both directions. It also requires working DNS.
A drawback of slave is that if the master dies, so do the slaves. This is by design, as it probably fits the original use case perfectly; in your case, however, it might not be what you want (you may want to keep collecting data even if the master is down, for example).
As for the OTP applications, every node may run the same application. In your code you can determine the node's role in the cluster using configuration or discovery.
I would suggest starting the Erlang VM using some OS facility, daemontools, or similar. Every VM would start the same application, where one would be started as the master and the rest as slaves. This has the drawback of making it harder to "automatically" run the software on machines coming up in the cluster like you could with slave, but it is also much more robust.
In every application you could have a suitable supervision tree based on the role of the node, as sketched below. Removing inter-node supervision and spawning makes the system much simpler.
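A sketch of that role switch, assuming the role is set in each node's application environment (the application name, env key, and supervisor modules here are hypothetical):
%% In the application callback module: pick a supervision tree per role.
start(_Type, _Args) ->
    case application:get_env(rolf, role) of
        {ok, master} -> rolf_master_sup:start_link();
        _            -> rolf_slave_sup:start_link()   %% default to slave
    end.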
I would also suggest having all the nodes push to the master. This way the master does not really need to care about what's going on on the slaves; it might even ignore the fact that a node is down. This also allows new nodes to be added without any change to the master. The cookie could be used for authentication. Multiple masters or "recorders" would also be relatively easy.
The "slave" nodes, however, will need to watch out for the master going down and coming up and take appropriate action, like storing the monitoring data so they can send it when the master is back up.
I would look into riak_core. It provides a layer of infrastructure for managing distributed applications on top of the raw capabilities of erlang and otp itself. Under riak_core, no node needs to be designated as master. No node is central in an otp sense, and any node can take over other failing nodes. This is the very essence of fault tolerance. Moreover, riak_core provides for elegant handling of nodes joining and leaving the cluster without needing to resort to the master/slave policy.
While this sort of "topological" decentralization is handy, distributed applications usually do need logically special nodes. For this reason, riak_core nodes can advertise that they are providing specific cluster services, e.g., as embodied by your use case, a results collector node.
Another interesting feature/architecture consequence is that riak_core provides a mechanism to maintain global state visible to cluster members through a "gossip" protocol.
Basically, riak_core includes a bunch of useful code to develop high performance, reliable, and flexible distributed systems. Your application sounds complex enough that having a robust foundation will pay dividends sooner than later.
otoh, there's almost no documentation yet. :(
Here's a guy who talks about an internal AOL app he wrote with riak_core:
http://www.progski.net/blog/2011/aol_meet_riak.html
Here's a note about a rebar template:
http://lists.basho.com/pipermail/riak-users_lists.basho.com/2011-March/003632.html
...and here's a post about a fork of that rebar template:
https://github.com/rzezeski/try-try-try/blob/7980784b2864df9208e7cd0cd30a8b7c0349f977/2011/riak-core-first-multinode/README.md
...talk on riak_core:
http://www.infoq.com/presentations/Riak-Core
...riak_core announcement:
http://blog.basho.com/2010/07/30/introducing-riak-core/

When I make a C plugin for erlang will it take full advantage of the spawning system? Does it block?

E.g., I have a program that eats a lot of CPU. I make a C plugin that can interact with Erlang. I spawn 16 threads with SMP +16. Will it give me performance similar to something like pthreads on a multicore? The threads do not need to communicate with each other.
"C plugin" is not clearly defined in the erlang context.
Either you are writing a port which basically forks a system process.
Or you are writing a linked in driver which runs in the same context as the Erlang vm.
In both cases you can take advantage of multicore cpu's. The first case just relies on the OS to place the OS processes on different CPU's (which any decent SMP OS should be capable of).
In the second case I'm not so sure but I would expect the drivers to run on different CPU cores also. Unless you have a strong cause for using linked drivers and you know exactly what you are doing I recommend against them for complexity and stability reasons. If a port crashes Erlang is notified and can restart it or take other precautions. If a driver crashes the whole Erlang vm is taken down hard.
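To illustrate the port case: when the external program dies, the owning process just gets a message (program name hypothetical):
%% exit_status asks the port to report the external program's exit code.
Port = open_port({spawn, "./cpu_heavy_worker"}, [binary, exit_status]),
receive
    {Port, {exit_status, Status}} ->
        %% Only the external OS process died; this VM is untouched, so we
        %% can log, restart the worker, or take other precautions here.
        io:format("worker exited with status ~p~n", [Status])
end.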
The main question is which part of the problem you want to solve in Erlang. If you use Erlang only to start your "plugins", this can be solved much more easily by just starting OS processes from the shell: since your "threads" don't need to communicate, why not pass the parameters on the command line and fork worker processes from a shell script?

Is it better to start multiple erlang nodes per machine, or just one per machine?

Preface: When I say "machine" below, I mean either a physical dedicated server, or a virtual private server. When I say "node" I mean, an instance of the erlang virtual machine, of which there could be multiple running as separate processes under a single unix kernel.
I've got a project that involves multiple erlang/OTP applications. The applications will be running together and talking to each other on the same machine. They will all be hitting the disk, using memory and spawning erlang processes. They will also be using network resources because they will be talking to similar machines with the same set of applications running on them in a cluster.
Almost all of this communication is via HTTP. Thus I could separate each erlang OTP application into a separate instance of the erlang VM on the same machine and they could still talk to each other.
My question is: is it better to have them all running under one Erlang VM, so that this VM process can allocate resources among them and schedule the execution of the various Erlang processes?
Or is it better to have separate erlang nodes on a given server?
If one is better than the other, why?
I'm assuming that running all of these apps in a single Erlang VM, which is given essentially full run of the server, will result in better performance. The OS is just managing the disk and RAM at a low level, and only has one significant process (the Erlang VM) to switch between... and the Erlang VM is probably smarter about allocating resources when it has a holistic view of all the Erlang processes.
This may be something that I need to test, but I'm not in a position to do so effectively in the near term.
The answer is: it depends.
Advantages of using a single node:
Memory is controlled by a single Erlang VM. It is way easier.
Inter-application communication (if using Erlang messaging) is faster.
Fewer operating system context switches happen.
Advantages of using multiple nodes:
If the system is linking C code into the VM, the death of one node due to a bug in the C code will not kill the others.
Agree with #I GIVE CRAP ANSWERS
I would go with one VM. Here is why:
dynamic handling of runtime queues belonging to schedulers (important when the CPU load varies in origin)
fewer VMs to monitor
better understanding of memory allocation, and it's easier to spot a misbehaving process (you can compare all of them at once)
much easier inter-app supervision
I wouldn't worry about a VM crash - you need to be prepared for that anyway. Heart works especially well in a cluster of equal units.
We've always used one VM per application because it's easier to manage.
The scheduler and SMP support in Erlang have come a long way in the past few years, so there isn't as much reason as there used to be to run multiple VMs on the same machine.
I agree with the previous answers, but there is one scenario where having multiple nodes per machine is the answer: when a heavy task hits a node. A task may take several minutes to complete, and in such a case a gen_server will hold the node until the task completes.
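Whichever layout you choose, a common mitigation for that particular problem is to keep the long-running work out of the gen_server loop and reply asynchronously. A minimal sketch (heavy_computation/1 is a hypothetical long-running function):
handle_call({analyze, Data}, From, State) ->
    %% Run the expensive work in its own process and reply from there,
    %% so this gen_server keeps serving other callers in the meantime.
    spawn_link(fun() ->
        Result = heavy_computation(Data),   %% hypothetical long-running work
        gen_server:reply(From, Result)
    end),
    {noreply, State}.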

Erlang Documentation/SMP: single-node and multi-node per machine or per application, and the confusion that may follow

I'm studying Erlang's process model at the moment. I have hit a snag in a tech report (section 3, paragraph 2) on Erlang:
This explains why it in some cases can be more efficient to run several SMP VM's with one scheduler each instead on one SMP VM with several schedulers. Of course the running of several VM's require that the application can run in many parallel tasks which has no or very little communication with each other.
Now this paragraph is confusing me. I can see the single-VM, multiple-scheduler scenario, but I am failing to see the multiple-VM, single-scheduler one. Presumably each VM would have a different node name, and this would mean a certain application, without modification, cannot be used with this model; the virtue of not requiring modification has been mentioned as a key feature of SMP in the report. If the multiple VMs shared the same node name, then performance would be disastrous due to inter-Erlang-process messaging storms -- this assumes the use of in-memory Mnesia. Is there some process model that is not introduced in the article and that I am missing here?
What is the author trying to say here? Is he trying to suggest that an application would have to be rewritten (to take multiple unique node names into account) for the multi-VM, single-scheduler case?
-- edit 1: Clarification of Source of Problem --
The question has been answered through discussion; the following is an outline of the trouble I had.
The issue for this question has been that the documentation, as I recall it, does not touch on the scenario of running multiple Erlang emulators per physical machine -- it has always been shown that the emulator represents your physical machine (in industrial usage); also, the scenario of having to explicitly partition a program for computational efficiency had never been considered. This sudden introduction was the source of my woe.
The convention is still biased towards creating LOTS of processes, and the future holds many improvements for the SMP emulator for Erlang, which means that a single node per machine is still a very viable option, assuming favourable application design.
Rewrite after reading article:
This explains why it in some cases can be more efficient to run several SMP VM's with one scheduler each instead on one SMP VM with several schedulers.
The non-SMP VM has no locks, so it runs fast.
A single-scheduler SMP VM is about 10% slower, due to the cost of checking locks.
A multiple-scheduler SMP VM is slower again, due to taking and waiting on locks.
Of course the running of several VM's require that the application can run in many parallel tasks which has no or very little communication with each other.
I think: nodes on the same server have to have different names.
Messaging between processes on different VM nodes will be slower than messaging within a single VM, because it has to cross OS process boundaries.
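For example, the two sends below look alike, but the second one is serialized and shipped over the distribution link to the other VM (registered names and node name hypothetical):
%% Intra-node: the message is copied between process heaps inside one VM.
local_worker ! {sample, 42},
%% Inter-node: encoded, sent over TCP to 'collector@host2', decoded there.
{remote_worker, 'collector@host2'} ! {sample, 42}.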
If you have multiple schedulers in a single VM, they will inevitably contend over various resources (e.g. the ets meta table, the atom table, scheduler run-queues during migration, etc.) because of the inner architecture. If you have a single scheduler, contention obviously cannot occur. Lock checking and acquiring will still be done, though, so running a non-SMP VM instead will yield even better performance (but requires rebuilding the VM from source).
Take a four-core machine for example. Option one means that you run four instances of the Erlang VM, each with a single scheduler, affinity set to different processor cores. Option two means running a single Erlang VM with four schedulers, each scheduler's affinity set to different processor cores.
If you have a whole lot of independent processes to run, option one will result in better performance, because the four cores will be fully utilized (theoretically). In contrast, in option two this won't be possible, because lock contention will make execution on the cores wait for one another every now and then.
On the other hand, if your processes need to chatter a lot, option two is the way to go, because inter-process communication within one VM is way cheaper than communication between different VMs. You gain more with this than you lose with lock contention.
I believe the answer is in the preceding paragraph:
The SMP VM with only one scheduler is slightly slower (10%) than the non SMP VM. This is because the SMP VM need to use locks for all shared datastructures. But as long as there are no lock-conflicts the overhead caused by locking is not that high (it is the lock conflicts that takes time).
A scheduler's reliance on locks for shared data structures can impose an overhead on a given system. It seems to follow that having multiple schedulers on one SMP VM imposes a collectively greater overhead.
There are some advantages to having several nodes on one physical machine.
1) Resource locking overhead as mentioned.
2) Fail-over. In telecom products you really don't want the beam to come crashing down on you. If you have NIFs or linked-in drivers in your system, this might happen.
3) Memory locality. A few nodes gives you a poor man's way of forcing processes onto a few cores. This could be a big boost on NUMA architectures in particular, but also on SMP. The scheduler doesn't take NUMA into account (yet). You can spawn a process onto a specific scheduler and lock it there so it won't migrate, but that is an undocumented feature... or it was removed altogether. I forget.
With several nodes you will need a load balancer between the nodes of course, but that is the usual way to do it anyway: some logic that supervises the nodes.
However, the numbers from the EUC papers are over a year old [#], and I wouldn't recommend a multi-node approach if you don't really need it. The runtime system is much better at handling these types of problems today. A lot of lock overhead has been removed, and the mrq-scheduler has been improved.
# 2009's numbers look like this.
Edit:
Regarding 3), the spawn feature I mentioned is:
spawn_opt(fun() -> ... end, [{scheduler, Id}]) -> pid()
where Id is an integer and refers to a specific scheduler.
I wouldn't recommend using it, since it is undocumented.
