How does an Erlang process bind to a specific scheduler?
Currently, processes do not get bound to specific schedulers (though you can force it via undocumented functions; not recommended). Scheduler threads may be bound to logical processors using the CPU topology and binding types, and the VM does use some of this information to improve performance in its normal scheduling scheme.
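For the curious, the bind type, the current bindings, and the detected CPU topology can be inspected with erlang:system_info/1; binding can be requested at startup with, e.g., erl +sbt db (default bind). Output below is illustrative and varies per machine:

1> erlang:system_info(scheduler_bind_type).
unbound
2> erlang:system_info(scheduler_bindings).
{unbound,unbound,unbound,unbound}
3> erlang:system_info(cpu_topology).
[{processor,[{core,{logical,0}},{core,{logical,1}},
             {core,{logical,2}},{core,{logical,3}}]}]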
Quoting an old mail from Kenneth Lundin:
The Erlang VM without SMP support has 1 scheduler which runs in the
main process thread. The scheduler picks runnable Erlang processes
and IO-jobs from the run-queue and there is no need to lock data
structures since there is only one thread accessing them.
The Erlang VM with SMP support can have one or more schedulers, each running in its own thread. The schedulers pick runnable Erlang processes and IO-jobs from one common run-queue. In the SMP VM all shared data structures are protected with locks; the run-queue is one example of a data structure protected with locks.
From OTP R12B the SMP version of the VM is automatically started by default if the OS reports more than one CPU (or core), with the same number of schedulers as CPUs or cores.
Not sure if this answers your question. Could you expand a bit more?
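For reference, the default scheduler count can be overridden at startup with the +S flag and checked from the shell (output illustrative):

$ erl +S 2
1> erlang:system_info(schedulers).
2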
Related
This question is tagged "Erlang" as well, because these Elixir modules more or less just wrap Erlang functionality.
Nodes seem like named processes. They can execute functions concurrently, link to other nodes, and act like process supervisors. Many of the functions in the two modules appear to be the same, reinforcing the apparent similarity.
What is the value of the Node module? What does it offer that Process doesn't?
Nodes seem like named processes.
It seems you've misunderstood what a Node is. A Node is an instance of the Erlang VM, running as one Operating System process. An Erlang Process is a unit executing code, similar to an Operating System thread but lighter. An Erlang Process runs on an Erlang Node, just like Operating System processes run on an Operating System. An Erlang Process cannot run without an Erlang Node.
These are two distinct concepts. A node is an instance of the Erlang virtual machine, and a process is a very lightweight thread running inside that virtual machine.
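You can see the distinction from an Erlang shell (Elixir's Node and Process modules wrap these same primitives): node() names the VM instance you are running in, while self() identifies one process inside it. Pids and output here are illustrative:

1> node().
nonode@nohost
2> self().
<0.84.0>
3> spawn(fun() -> io:format("process ~p on node ~p~n", [self(), node()]) end).
process <0.86.0> on node nonode@nohost
<0.86.0>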
Here is the definition of an Elixir process, according to the documentation:
In Elixir, all code runs inside processes. Processes are isolated
from each other, run concurrent to one another and communicate via
message passing. Processes are not only the basis for concurrency in
Elixir, but they also provide the means for building distributed and
fault-tolerant programs.
Elixir’s processes should not be confused with operating system processes. Processes in Elixir are extremely lightweight in terms of
memory and CPU (unlike threads in many other programming languages).
And a node is the representation of an Erlang virtual machine. Here are some examples of functions from the Node module:
alive?()
Returns true if the local node is alive
connect(node)
Establishes a connection to node
disconnect(node)
Forces the disconnection of a node
I am writing a concurrent program and I need to know the number of cores in the system, so that the program will know how many processes to spawn.
Is there a command to get this inside Erlang code?
Thanks.
You can use
erlang:system_info(logical_processors_available)
to get the number of cores that can be used by the Erlang runtime system.
There is also:
erlang:system_info(schedulers_online)
which tells you how many scheduler threads are actually running.
To get the number of available cores, use the logical_processors flag to erlang:system_info/1:
1> erlang:system_info(logical_processors).
8
There are two companion flags to this one: logical_processors_online shows how many are in use, and logical_processors_available shows how many are available (it will return unknown when all available logical processors are online).
To know how to parallelize your code, you should rely on schedulers_online which will return the number of actual Erlang schedulers that are available in your current VM instance:
1> erlang:system_info(schedulers_online).
8
Note however that parallelizing on this value alone might not be enough. Sometimes other running processes need CPU time, and sometimes your algorithm would benefit from even more parallelism (when waiting on IO, for example). A rule of thumb is to use the value obtained from schedulers_online together with a multiplier for parallelism, but always test with different multiples to see what works best for your application.
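As a rough sketch of that rule of thumb (the Multiplier value is something you would tune by measurement, and do_work/0 is a placeholder for your actual job):

start_workers() ->
    Multiplier = 2,  %% try 1..4 and measure what works best
    N = Multiplier * erlang:system_info(schedulers_online),
    [spawn_link(fun do_work/0) || _ <- lists:seq(1, N)].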
How this information is exposed will be very operating system specific (unless you happen to be writing an operating system of course).
You didn't say what operating system you're working on. In the case of Linux, you can get the data from /proc/cpuinfo; however, there are subtleties around the meaning of hyperthreading and around multiple cores on the same die sharing an L2 cache (effectively you've got a NUMA architecture).
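If you are on Linux and want to do it by hand anyway (erlang:system_info/1 above is the portable route), a minimal sketch that counts the "processor" entries, with the hyperthreading caveat just mentioned still applying:

count_cores() ->
    {ok, Bin} = file:read_file("/proc/cpuinfo"),
    Lines = string:tokens(binary_to_list(Bin), "\n"),
    %% one "processor : N" line per logical CPU
    length([L || L <- Lines, lists:prefix("processor", L)]).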
E.g., I have a program that eats a lot of CPU. I make a C plugin that can interact with Erlang, and I spawn 16 threads with SMP +16. Will it give me performance similar to something like pthreads on a multicore? The threads do not need to communicate with each other.
"C plugin" is not clearly defined in the erlang context.
Either you are writing a port which basically forks a system process.
Or you are writing a linked in driver which runs in the same context as the Erlang vm.
In both cases you can take advantage of multicore CPUs. The first case just relies on the OS to place the OS processes on different CPUs (which any decent SMP OS should be capable of).
In the second case I'm not so sure, but I would expect the drivers to run on different CPU cores as well. Unless you have a strong reason for using linked-in drivers and you know exactly what you are doing, I recommend against them for complexity and stability reasons: if a port crashes, Erlang is notified and can restart it or take other precautions; if a driver crashes, the whole Erlang VM is taken down hard.
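A minimal sketch of the port approach (the external binary path "./my_cpu_job" is a placeholder): the spawned OS process can be placed on any core by the OS, and if it crashes the owning Erlang process simply receives an exit_status message instead of the VM going down:

run_job() ->
    Port = open_port({spawn, "./my_cpu_job"}, [binary, exit_status]),
    receive
        {Port, {data, Result}}      -> {ok, Result};
        {Port, {exit_status, Code}} -> {error, Code}
    end.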
The main question is what part of the problem you want to solve in Erlang. If you use Erlang only to start your "plugins", this can be solved much more easily by starting processes from the shell: since your "threads" don't need to communicate, why not pass the parameters on the command line and fork worker processes from a shell script?
Preface: When I say "machine" below, I mean either a physical dedicated server or a virtual private server. When I say "node", I mean an instance of the Erlang virtual machine, of which there could be multiple running as separate processes under a single Unix kernel.
I've got a project that involves multiple Erlang/OTP applications. The applications will be running together and talking to each other on the same machine. They will all be hitting the disk, using memory and spawning Erlang processes. They will also be using network resources, because they will be talking to similar machines with the same set of applications running on them in a cluster.
Almost all of this communication is via HTTP. Thus I could separate each Erlang/OTP application into a separate instance of the Erlang VM on the same machine and they could still talk to each other.
My question is: is it better to have them all running under one Erlang VM, so that this one VM process can allocate resources among them and schedule the execution of the various Erlang processes?
Or is it better to have separate erlang nodes on a given server?
If one is better than the other, why?
I'm assuming that running all of these apps in a single Erlang VM which is given, essentially, full run of the server will result in better performance. The OS is just managing the disk and RAM at a low level, and only has one significant process (the Erlang VM) to schedule; and the Erlang VM is probably smarter about allocating resources when it has a holistic view of all the Erlang processes.
This may be something that I need to test, but I'm not in a position to do so effectively in the near term.
The answer is: it depends.
Advantages of using a single node:
Memory is controlled by a single Erlang VM. It is way easier.
Inter-application communication (if using Erlang messaging) is faster; see the sketch after these lists.
Fewer operating system context switches happen.
Advantages of using multiple nodes:
If the system is linking in C code to the VM, death of one node due to a bug in C will not kill the others.
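On the messaging point: within one node you send straight to a pid, while across nodes you typically address a registered name on the remote node, and the message then travels over the distribution channel, which is slower. The node name below is a placeholder:

Pid ! {work, Data},                        %% same node: direct send
{my_server, 'app2@myhost'} ! {work, Data}  %% other node: via distribution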
Agree with #I GIVE CRAP ANSWERS
I would go with one VM. Here is why:
dynamic handling of the run-time queues belonging to schedulers (important when the CPU load comes from varied sources)
fewer VMs to monitor
better understanding of memory allocation, and easier to spot a misbehaving process (you can compare all of them at once)
much easier inter-app supervision
I wouldn't worry about a VM crash: you need to be prepared for that anyway. Heart works especially well in a cluster of equal units.
We've always used one VM per application because it's easier to manage.
The scheduler and SMP support in Erlang have come a long way in the past few years, so there isn't as much reason as there used to be to run multiple VMs on the same machine.
I agree with the previous answers, but there is one scenario where having multiple nodes per CPU is the answer: when a heavy task hits the node. A task may take several minutes to complete, and in such a case a gen_server will hold the node until the task completes.
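To illustrate the point: a gen_server handles one message at a time, so a slow handle_call blocks every other caller of that server until it finishes (heavy_task/1 is a placeholder):

handle_call({run, Job}, _From, State) ->
    Result = heavy_task(Job),  %% may take minutes; other callers queue up
    {reply, Result, State}.

Moving such work to a spawned process, or to a node dedicated to heavy tasks as suggested above, keeps the rest of the system responsive.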
I'm studying Erlang's process model at the moment. I have hit a snag in a tech report (section 3, paragraph 2) on Erlang:
This explains why it in some cases can be more efficient to run several SMP VM's
with one scheduler each instead on one SMP VM with several schedulers. Of course
the running of several VM's require that the application can run in many parallel tasks
which has no or very little communication with each other.
Now this paragraph is confusing me. I can see the one-VM, multiple-scheduler scenario, but I am failing to see the multiple-VMs, one-scheduler-each scenario. Presumably each VM would have a different node name, and this would mean a certain application, without modification, cannot be used with this model; yet the virtue of not requiring modification has been mentioned as a key feature of SMP in the report. If the multiple VMs had the same node name, performance would be disastrous due to inter-Erlang-process messaging storms (this assumes the use of in-memory Mnesia). Is there some process model that is not introduced in the article and that I am missing here?
What is the author trying to say here? Is he trying to suggest that an application would have to be rewritten (to take multiple unique node names into account) for the multiple-VM, single-scheduler case?
-- edit 1: Clarification of Source of Problem --
The question has been answered through discussion; the following is an outline of the trouble I had.
The issue for this question has been that the documentation, as I recall, does not touch on the scenario of running multiple Erlang emulators per physical machine; it has always been shown that the emulator represents your physical machine (in industrial usage). Also, the scenario of having to explicitly partition a program for computational efficiency had never been considered. This sudden introduction was the source of my woe.
The convention is still biased towards creating LOTS of processes, and the future holds many improvements for the SMP emulator for Erlang, which means that a single node per machine remains a very viable option, assuming favourable application design.
Rewrite after reading the article:
This explains why it in some cases can
be more efficient to run several SMP
VM's with one scheduler each instead
on one SMP VM with several schedulers.
The non-SMP VM has no locks, so it runs fast.
A single-scheduler SMP VM is about 10% slower, due to the cost of checking locks.
A multiple-scheduler SMP VM is slower again, due to acquiring and waiting for locks.
Of course the running of several VM's
require that the application can run
in many parallel tasks which has no or
very little communication with each
other.
I think: nodes on the same server have to have different names.
Inter-node messaging will be slower than intra-node messaging within a single VM, because it has to cross OS process boundaries.
If you have multiple schedulers in a single VM, they will inevitably contend for various resources (e.g. the ets meta table, the atom table, the scheduler run-queue during migration, etc.) because of the inner architecture. If you have a single scheduler, such contention obviously cannot occur. Lock checking and acquiring will still be done though, so running a non-SMP VM instead will yield even better performance (but requires rebuilding the VM from source).
Take a four-core machine for example. Option one means running a single Erlang VM with four schedulers, each scheduler's affinity set to a different processor core. Option two means that you run four instances of the Erlang VM, each with a single scheduler, affinity set to different processor cores.
If you have a whole lot of independent processes to run, option two will result in better performance, because the four cores will be fully utilized (theoretically). In contrast, in option one, this won't be possible, because the lock contention will make execution on cores wait for each other every now and then.
On the other hand if your processes need to chatter a lot, option one is the way to go because the inter-process communication is way cheaper than communication between different VMs. You gain more with this than you lose with lock contention.
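Concretely, the two options might be launched like this on Linux (taskset pins an OS process to a given core; node names and flags here are illustrative):

# Option one: a single VM with four schedulers, bound to cores
erl +S 4 +sbt db

# Option two: four VMs with one scheduler each, pinned via taskset
taskset -c 0 erl -sname n0 +S 1
taskset -c 1 erl -sname n1 +S 1
taskset -c 2 erl -sname n2 +S 1
taskset -c 3 erl -sname n3 +S 1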
I believe the answer is in the preceding paragraph:
The SMP VM with only one scheduler is slightly slower (10%) than the non-SMP VM. This is because the SMP VM needs to use locks for all shared data structures. But as long as there are no lock conflicts, the overhead caused by locking is not that high (it is the lock conflicts that take time).
The schedulers' reliance on locks for shared data structures imposes an overhead on a given system. It seems to follow that having multiple schedulers on one SMP VM imposes a collectively greater overhead.
There are some advantages with several nodes on one physical machine.
1) Resource locking overhead, as mentioned.
2) Fail-over. In telecom products you really don't want the beam to come crashing down on you. If you have NIFs or linked-in drivers in your system, this might occur.
3) Memory locality. A few nodes give you a poor man's way of forcing processes onto a few cores. This can be a big boost on NUMA architectures especially, but also on SMP. The scheduler doesn't take NUMA into account (yet). You can spawn a process to a specific scheduler and lock it there so it won't migrate, but that is an undocumented feature ... or it was removed altogether, I forget.
With several nodes you will need a load balancer between the nodes of course, but that is the usual way to do it anyway: some logic that supervises the nodes.
However, the numbers from the EUC papers are over a year old [#], and I wouldn't recommend a multi-node approach unless you really need it. The runtime system is much better at handling these types of problems today. A lot of lock overhead has been removed, and the mrq (multiple run queues) scheduler has been improved.
# 2009's numbers look like this.
Edit:
Regarding 3), the spawn feature I mentioned is
spawn_opt(fun() -> ... end, [{scheduler, Id}]) -> pid(),
where Id is an integer and refers to a specific scheduler.
I wouldn't recommend using it, since it is undocumented.
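For completeness, usage from the shell would look like this (the pid is illustrative; scheduler ids start at 1, and since the option is undocumented it may change or vanish between releases):

1> spawn_opt(fun() -> timer:sleep(infinity) end, [{scheduler, 2}]).
<0.88.0>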