I've done a lot of reading lately about calling my C code from Erlang. My C code will be making a remote call and I expect to have a timeout of ~10 seconds. In order to keep Erlang responsive and handle additional requests while someone else is waiting for the initial call to C, what are the best practices in 2022?
We can use async port drivers or a NIF with ERL_NIF_DIRTY_JOB_IO_BOUND to run it on the dirty IO scheduler.
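For concreteness, as I understand it the Erlang side of a dirty NIF is just an ordinary stub module; the dirty flag lives only in the C-side ErlNifFunc table. A minimal sketch with hypothetical module and library names:

-module(remote_call).
-export([blocking_call/1]).
-on_load(init/0).

%% The C library registers blocking_call/1 with ERL_NIF_DIRTY_JOB_IO_BOUND,
%% so the ~10-second call runs on a dirty IO scheduler thread.
init() ->
    erlang:load_nif("./remote_call_nif", 0).

%% Stub, replaced when the NIF library loads.
blocking_call(_Request) ->
    erlang:nif_error(nif_library_not_loaded).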
My understanding is that the async port driver would be the best option, but looking for more insight here.
Let's say we have an Erlang web server handling two callers, A and B. A makes a call that enters C and blocks for 10 seconds. While that is waiting, B makes a call that will also need to enter C. With an async port driver versus a dirty IO NIF, what are the differences in execution and blocking? Will B get blocked in Erlang, waiting for the port driver or NIF to be ready to start working on this new request (if that's a thing), or waiting for an available thread? (As I understand it, the async port driver will use its own single thread unless I do something else to spawn another, while the NIF would use the dirty scheduler's threadpool.)
If even more callers call in, where will we bottleneck?
As you can see I have quite a few questions and would highly appreciate help from those with Erlang familiarity!
The Erlang Interoperability guide discusses the different interoperability mechanisms. Here are my conclusions:
Ports and Erl_Interface programs: OS scheduled, limit scalability.
Port Drivers: dangerous, because a crash in the port driver brings the emulator down too.
C Nodes: the node server needs to scale as well as the Erlang app does, or scalability is sacrificed.
NIFs: Loic sums them up well.
Some advocate the use of OpenCL, basically delegating resource-hungry computations to the GPU while letting the Erlang emulator own the CPU. This sounds fantastic, but then your servers are required to have a suitable GPU.
Using JInterface and communicating with a Java process that spawns a thread for every request might be an option.
So, has anyone come across a solution that has been tested in practice and turned out to work well?
Actually, all of these solutions see real use. Having worked closely with some of them, I can say the following:
Ports are safe, but port communication is slow. If the port crashes, the VM continues working. If you do not communicate with your port extensively, or if you do not trust the port's code, this is your choice.
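A minimal sketch of the port pattern, assuming a hypothetical external executable ./my_prog that speaks 4-byte length-prefixed packets:

call_port(Request) ->
    %% {packet, 4} length-prefixes every message in both directions.
    Port = open_port({spawn_executable, "./my_prog"},
                     [binary, {packet, 4}, exit_status]),
    Port ! {self(), {command, Request}},
    receive
        {Port, {data, Reply}} ->
            port_close(Port),
            {ok, Reply};
        {Port, {exit_status, Status}} ->
            %% The external program died; the VM itself is unaffected.
            {error, {port_died, Status}}
    after 10000 ->
        {error, timeout}
    end.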
NIFs are extremely fast. If your data flow is heavy, you should use them. Of course, they are unsafe, so you have to program the NIF library carefully, and you had better learn some C (a point most NIF authors skip). The scheduling problems are actually easy to overcome with a specific pattern: right after receiving data from Erlang, start a new C thread that does the actual job, detaching the processing from the Erlang scheduler thread. That way you return from the NIF function very quickly and wait in Erlang for a message from the C code.
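The Erlang side of that pattern might look like the sketch below; my_nif:start_job/3 is a hypothetical NIF that hands the work to a C thread and returns immediately, the C thread later using enif_send to deliver the result as a message:

long_call(Data) ->
    Ref = make_ref(),
    %% Returns at once; the real work happens on a C thread.
    ok = my_nif:start_job(self(), Ref, Data),
    receive
        {nif_result, Ref, Result} ->
            {ok, Result}
    after 10000 ->
        {error, timeout}
    end.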
Java nodes or C nodes are for tasks that can be moved to the node completely, that is, long and heavy jobs.
Bearing in mind the considerations above, choose the way that fits your task best.
I have C++ code that implements a special protocol over the serial port. The code is multi-threaded: internally it polls the serial port and does its own cyclic processing. I would like to call this driver from Erlang and also receive events from it. My concern is that this C++ code is both multi-threaded and stateful, meaning that when I call a certain function on the driver, it caches things internally which are used/required by subsequent calls. My questions are:
1. Does a NIF run in the same OS process as the rest of my Erlang processes, or is the NIF launched in a separate OS process?
2. Does it make sense to wrap this multi-threaded, stateful C++ code in a NIF?
3. If a NIF is not the right approach, what is a better way to make Erlang talk back and forth with this C++ code? I would also prefer my C++ code to live inside the same OS process as the rest of my Erlang processes, so it looks like linked-in drivers are an option, but I am not sure whether the multi-threaded nature of my C++ code fits that model. Plus, I hear they can mess up the Erlang scheduler?
Unlike ports, NIFs run within the Erlang VM process, just like drivers. Because of that, any NIF crash will bring the VM down as well. And, answering your last question in advance: NIFs, like drivers, may block your schedulers.
That depends on the functionality you are implementing in this C++ code. Because of answer 1, you probably want to avoid concurrency in the C++ part, since it is a potential source of errors. That is not always possible, of course. But if you are implementing, say, some worker pool, go ahead and write single-threaded code, spawning it as many times as you need.
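For example, such a pool can be nothing more than N Erlang processes, each owning its own port to a single-threaded instance of the external program. A sketch with hypothetical names:

start_pool(N) ->
    [spawn_link(fun() ->
         %% Each worker owns one single-threaded external instance.
         Port = open_port({spawn_executable, "./worker"},
                          [binary, {packet, 4}]),
         worker_loop(Port)
     end) || _ <- lists:seq(1, N)].

worker_loop(Port) ->
    receive
        {From, Request} ->
            Port ! {self(), {command, Request}},
            receive
                {Port, {data, Reply}} -> From ! {self(), Reply}
            end,
            worker_loop(Port)
    end.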
Drivers can be multi-threaded too, with the same potential problems and quite similar performance (well, still slightly faster than NIFs). If you are not completely sure about your C++ code's stability, use it as an Erlang port.
Speaking of the difference between NIFs and drivers, the former are natively synchronous while the latter can be asynchronous (which can be a real advantage if you don't want answers for most of the commands). Drivers are easier to mess up and harder to implement (but once you grasp the main patterns and problems, they actually seem okay).
Here's a good start for drivers:
http://www.erlang.org/doc/apps/erts/driver.html
And something similar (behold the difference in complexity) for NIFs:
http://www.erlang.org/doc/tutorial/nif.html
I must say that I am impressed by Misultin's support for WebSockets (some examples here). My JavaScript is firing requests and getting responses down the wire with "negligible" delay or lag. Great!
Looking at the data handler loop for WebSockets, it resembles that of normal TCP/IP sockets, at least in its basic Erlang form:
% callback on received websockets data
handle_websocket(Ws) ->
    receive
        {browser, Data} ->
            Ws:send(["received '", Data, "'"]),
            handle_websocket(Ws);
        _Ignore ->
            handle_websocket(Ws)
    after 5000 ->
        Ws:send("pushing!"),
        handle_websocket(Ws)
    end.
This piece of code is executed in a process spawned by Misultin, from a function you hand over when starting your server, like this:
start(Port) ->
    HTTPHandler = fun(Req) -> handle_http(Req, Port) end,
    WebSocketHandler = fun(Ws) -> handle_websocket(Ws) end,
    Options = [{port, Port}, {loop, HTTPHandler}, {ws_loop, WebSocketHandler}],
    misultin:start_link(Options).
For more code about this, check out the examples page. I have several questions.
Question 1: Can I change the controlling process of a WebSocket as we normally do with TCP/IP sockets in Erlang? (We normally use gen_tcp:controlling_process(Socket, NewProcessId).)
Question 2: Is Misultin the only Erlang/OTP HTTP library that supports WebSockets? Where are the rest?
EDIT:
Now, here is the reason why I need to be able to transfer WebSocket control away from Misultin.
Think of a gen_server that will control a pool of WebSockets, say it's a game server. In the current Misultin example, every WebSocket connection has a controlling process; in other words, for every WebSocket there will be a spawned process. Now, I know Erlang is a hero with processes, but I do not want this; I want these initial processes to die as soon as they hand control of the WebSocket over to my gen_server.
I would want this gen_server to switch data amongst these WebSockets. In the current implementation, I need to keep track of the Pid of the Misultin handle_websocket process like this:
%% Here is misultin's control process.
%% I get its Pid, save it somewhere,
%% and link it to my_gen_server so that
%% if it exits I know it's gone.
handle_websocket(Ws) ->
    process_flag(trap_exit, true),
    Pid = self(),
    link(whereis(my_gen_server)),   % link/1 needs a pid, not a registered name
    save_connection(Pid),
    wait_msgs(Ws).

wait_msgs(Ws) ->
    receive
        {browser, Data} ->
            FromPid = self(),
            send_to_gen_server(Data, FromPid),
            wait_msgs(Ws);
        {broadcast, Message} ->
            %% i can broadcast to all connected WebSockets
            Ws:send(Message),
            wait_msgs(Ws);
        _Ignore ->
            wait_msgs(Ws)
    end.
The idea above works very well: I save all the controlling processes into a Mnesia RAM table and look one up against a given criterion whenever the application wants to send a message to a particular user. However, with what I want to achieve, I realise that in the real world the processes may be so many that my server may crash. I want at least one gen_server to control thousands of WebSockets rather than having a process per WebSocket; that way, I could somehow conserve memory.
Suggestion: Misultin's author could create a WebSocket groups implementation for us in his next release, whereby we can have a group of WebSockets controlled by the same process. This would be similar to Nitrogen's comet groups, in which comet connections are grouped together under the same control. If this isn't possible, we will need the control ourselves: provide an API through which we can take over the control of these WebSockets.
What do you engineers think about this? What are your suggestions and/or comments? Misultin's author could say something about this. Thanks to all.
(one) Cowboy developer here.
I wouldn't recommend using any kind of central server responsible for controlling a set of WebSocket connections. The main reason is that this is a premature optimization; you are only speculating about the memory usage.
A test done last year for half a million WebSocket connections on a single server resulted in misultin using 20GB of memory and cowboy using 16.2GB or 14.3GB, depending on whether the websocket processes were hibernating or not. You can assume that all Erlang implementations of WebSockets come very close to these numbers.
The difference between cowboy without hibernation and misultin should be pretty close to the memory overhead of using an extra process per connection. (Feel free to correct me on this, ostinelli.)
I am willing to bet that it is much cheaper to take this into account when buying the servers than it is to design and resolve issues in an application where you don't have a 1:1 mapping between tasks/resources and processes.
https://twitter.com/#!/nivertech/status/114460039674212352
Misultin's author here.
I strongly discourage you from changing the controlling process, because that will break all of Misultin's internals. As Steve suggested, YAWS and Cowboy also support WebSockets, and there are implementations built on Mochiweb, but I'm not aware of any being actively maintained.
You are discussing memory concerns, but I think you are mixing concepts. I cannot understand why you need to control everything 'centrally' from a gen_server: your assumption that 'many processes will crash your VM' is actually wrong. Erlang is built upon the actor model, and this has many advantages:
performance, thanks to multicore usage, which you lose if everything goes through a single gen_server
the ability to use the 'let it crash' philosophy: currently it looks like your gen_server crashing would bring down all available games
...
Erlang is able to handle hundreds of thousands of processes on a single VM, and you'll run out of file descriptors for your open sockets way before that happens.
So, I'd suggest you consider keeping your game logic within the individual WebSocket processes and use message passing to make them interact. You may consider spawning 'game processes' which hold the information on a single game's participants and status, for instance. Finally, a gen_server that keeps track of the available games - and does only that (for instance by owning an ETS table). That's the way I'd probably go, all with the appropriate supervision structure.
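A sketch of that last piece - a gen_server that does nothing but track games in an ETS table; the module and table names below are mine, not Misultin's:

-module(game_registry).
-behaviour(gen_server).
-export([start_link/0, register_game/2, lookup/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

register_game(GameId, GamePid) ->
    gen_server:call(?MODULE, {register, GameId, GamePid}).

%% Reads hit ETS directly and never touch the gen_server.
lookup(GameId) ->
    case ets:lookup(games, GameId) of
        [{GameId, GamePid}] -> {ok, GamePid};
        [] -> not_found
    end.

init([]) ->
    ets:new(games, [named_table, protected, {read_concurrency, true}]),
    {ok, nostate}.

handle_call({register, GameId, GamePid}, _From, State) ->
    ets:insert(games, {GameId, GamePid}),
    {reply, ok, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.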
Obviously, I'm not sure what you are trying to achieve, so I'm just assuming here. But if your concern is memory - well, as TRIAL AND ERROR EXP said here: don't prematurely optimize, especially when you are considering using Erlang in a way that looks like it might actually keep it from doing what it is capable of.
My $0.02.
Not sure about question 1, but regarding question 2, Yaws and Cowboy also support WebSockets.
We have an application server developed with Delphi 2010 and Indy 10. This server receives more than 50 requests per second and it works well. But in some cases, Indy seems very obscure to me. Its components are good, but sometimes I found myself digging into the source code just to understand a simple thing. Indy lacks good documentation and good support.
The last thing I came across was a big problem for me: I must detect when a client disconnects non-gracefully (when the client crashes or shuts down, for instance, without telling the server it will disconnect), and Indy was not able to do that. If I want that, I will have to implement something like heartbeats, polling, or TCP keep-alive. I do not want to spend more time doing what is, at least I think, the component's job. After a bit of study, I found out that this is not Indy's fault, but an issue with all blocking-socket components.
Now I am really thinking of changing the core of the server to another good suite. I must admit I am leaning toward non-blocking sockets. Based on that, I have some questions:
What do I gain by changing from blocking to non-blocking sockets?
Will I be able to detect client disconnects (non-graceful ones)?
Which component suite has the best product? By best product I mean: fast, good support, good tools, and easy to implement.
I know this must be a subjective question, but I really want to hear it from you. The first question is the one I care most about. I do not care if I have to pay 100, 500, 1000, or 10000 dollars, but I want a complete solution. For now, I am thinking about Ip*works.
EDIT
I think some of you did not understand what I want. I don't want to create my own socket component. I have been working with sockets for a long time and I am getting tired of it. Really.
And non-blocking sockets CAN detect client disconnects. That is a fact, and it is well documented all over the internet. A non-blocking socket checks the socket state for new incoming data all the time, which makes it possible to detect that the socket is no longer valid. This is not a heartbeat algorithm. A heartbeat algorithm is used on the client side: it periodically sends packets (aka keep-alives) to the server to tell it the client is still alive.
EDIT
I have not made myself clear, maybe because English is not my first language. I am not saying that it is possible to detect a dropped connection without trying to send or receive data on a socket. What I am saying is that every non-blocking socket is able to do this because it constantly tries to read new incoming data from the socket. Why is that so hard to understand? If you download and run the Ip*works demos, in particular the echoserver and echoclient ones (both use TCP), you can test it yourselves. I have already tested it, and it works as I expected. Even if you use the old TCPSocketServer and TCPSocketClient in non-blocking mode you will see what I mean.
"What do a benefit from changing from blocking to non-blocking sockets? Will I be able to detect client disconnects (non gracefully)?"
Just my two cents to get the ball rolling on this question - I'm not a socket EXPERT, but I do have a good deal of experience with them. If I'm mistaken, I'm sure someone will correct me... :-)
I assume that since you're running a server using blocking sockets with 50 requests per second, you have a threading mechanism in place to handle client requests. If so, you don't really stand to gain anything from non-blocking sockets. On the contrary, you will have to change your server logic to be event-driven, based on events fired in your main thread by the non-blocking sockets, or use constant polling to know what your sockets are up to.
Non-blocking sockets can't detect clients disconnecting without notification any more than blocking sockets can - they don't have telepathic powers... The nature of the TCP/IP 'conversation' between client and server is the same; blocking versus non-blocking only concerns your application's interaction with the socket connection conducting the 'conversation'.
If you need to purge dead connections, you need to implement a heartbeat or timeout mechanism on your socket (I've never seen a modern socket implementation that didn't support timeouts).
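Whatever the stack, the mechanism is the same. As a minimal sketch in Erlang terms (this being an Erlang-heavy collection of threads), a heartbeat loop over a gen_tcp socket might look like this:

heartbeat(Socket, IntervalMs) ->
    %% Actively sending is the only reliable probe: a dead peer
    %% eventually surfaces as a send error.
    case gen_tcp:send(Socket, <<"ping">>) of
        ok ->
            timer:sleep(IntervalMs),
            heartbeat(Socket, IntervalMs);
        {error, Reason} ->
            {connection_lost, Reason}
    end.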
What do I gain by changing from blocking to non-blocking sockets?
Increased speed, availability, and throughput (in my experience). I had an IndySockets client that was getting about 15 requests per second, and when I switched to asynchronous sockets the throughput increased to about 90 requests per second (on the same machine). In a separate benchmark test on a server at a data center with a 30 Mbit connection, I got more than 300 requests per second.
Will I be able to detect client disconnects (non gracefully)?
That's one thing I haven't had to try yet, since all of my code has been on the client side.
Which component suite has the best product? By best product I mean: fast, good support, good tools, and easy to implement.
You can build your own socket client in a couple of days and it can be very robust and fast... much faster than most of the stuff I've seen "off the shelf". Feel free to take a look at my asynchronous socket client: http://codesprout.blogspot.com/2011/04/asynchronous-http-client.html
Update:
(Per Mikey's comments)
I'm asking you for a generic, technical explanation of how NBS increase throughput as opposed to a properly designed BS server.
Let's take a high-load server as an example: say your server is supposed to handle 1000 connections at any given time. With blocking sockets you would have to create 1000 threads, and even if they're mostly idle, the CPU will still spend a lot of time context switching. As the number of clients increases, you will have to increase the number of threads to keep up, and the CPU will inevitably do more context switching. For every connection you establish with a blocking socket, you incur the overhead of spawning a new thread, and eventually the overhead of cleaning up after it. Of course, the first thing that comes to mind is: why not use the ThreadPool? You can reuse the threads and reduce the overhead of creating and cleaning up threads.
Here is how this is handled on Windows (hence the .NET connection): sure you could, but the first thing you'll notice about the .NET ThreadPool is that it has two types of threads, and it's not a coincidence: user threads and I/O completion port threads. Asynchronous sockets use I/O completion ports, which "allow a single thread to perform simultaneous I/O operations on different handles, or even simultaneous read and write operations on the same handle."(1) The I/O completion port threads are specifically designed to handle I/O much more efficiently than you could ever achieve using the user threads in the ThreadPool, unless you wrote your own kernel-mode driver.
"The completion port uses some special voodoo to make sure only a specific number of threads can run at once — if one thread blocks in kernel-mode, it will automatically start up another one."(2)
There are other advantages also: "in addition to the nonblocking advantage of the overlapped socket I/O, the other advantage is better performance because you save a buffer copy between the TCP stack buffer and the user buffer for each I/O call." (3)
I have been using the Indy and Synapse TCP libraries with good results for some years now, and did not find any showstoppers in them. I use the libraries in threads, on both the client and server side; stability and performance were not a problem. (Six thousand request and response messages per second and more, with the server running on the same system, are typical.)
Blocking sockets are very useful if the protocol is more advanced than a simple 'send a string / receive a string'. Non-blocking sockets cause a higher coupling of message protocol handlers with the socket read / write logic, so I quickly moved away from non-blocking code.
No library can overcome the limitations of the TCP/IP protocol regarding detection of connection loss. Only trying to read or send data can tell whether the connection is still present.
In Windows, there is a third option, which is overlapped I/O. Non-blocking sockets are essentially a model using Windows messages, developed to avoid single-threaded GUI apps becoming "blocked" while waiting for data. A modern application, IMHO, would be better designed using threads and overlapped I/O.
See for example http://support.microsoft.com/kb/181611
Aahhrrgghh - the myth of being able to always detect "dropped" connections. If you pull the power on a machine with a client connection, the server cannot tell, without sending data, that the connection is "dead". This is by design of the TCP protocol. Don't take my word for it - read this article (Detection of Half-Open (Dropped) TCP/IP Socket Connections).
This article explains the main differences between blocking and non-blocking:
Introduction to Indy, by Chad Z. Hower
Pros of Blocking
Easy to program - Blocking is very easy to program. All user code can exist in one place, and in a sequential order.
Easy to port to Unix - Since Unix uses blocking sockets, portable code can be written easily. Indy uses this fact to achieve its single source solution.
Work well in threads - Since blocking sockets are sequential they are inherently encapsulated and therefore very easily used in threads.
Cons of Blocking
User Interface "Freeze" with clients - Blocking socket calls do not return until they have accomplished their task. When such calls are made in the main thread of an application, the application cannot process the user interface messages. This causes the User Interface to "freeze" because the update, repaint and other messages cannot be processed until the blocking socket calls return control to the application's message processing loop.
He also wrote:
Blocking is NOT Evil
Blocking sockets have been repeatedly attacked without warrant. Contrary to popular belief, blocking sockets are not evil.
It is not an issue with all blocking socket components that they are unable to detect a client disconnect. There is no technical advantage on the side of non-blocking components in this area.
I want to make a Windows Service using Erlang and Thrift.
The service will have a single thread listening on a port (socket communication) and dispatching requests to worker threads. The Windows service has to respond quickly (milliseconds), and throughput (requests per second) is mandatory.
The worker threads will communicate with each other. I am thinking of Erlang to solve this.
So I think Erlang + Thrift will work well. Am I right? Any suggestions?
Your solution is reasonable. To bring you up to speed, I would suggest reading up on gen_server, supervisor, and application.
Thrift will generate stub files which, when compiled, will give you a transport/acceptor. It is up to you to provide both the Thrift API and the handler for this API.
Moreover, be advised not to synchronize a lot between processes if you need fast response times (i.e., don't design your solution around synchronous calls).
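For instance, one way to keep the acceptor free is to cast the work to a gen_server and reply by message instead of blocking on a call. A sketch with hypothetical names (do_work/1 stands in for your actual job):

-module(job_worker).
-behaviour(gen_server).
-export([start_link/0, submit/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

%% Fire-and-forget: the caller is never blocked waiting on the worker.
submit(Request) ->
    gen_server:cast(?MODULE, {work, self(), Request}).

init([]) ->
    {ok, nostate}.

handle_call(_Msg, _From, State) ->
    {reply, ok, State}.

handle_cast({work, From, Request}, State) ->
    From ! {result, do_work(Request)},
    {noreply, State}.

%% Placeholder for the actual job.
do_work(Request) ->
    Request.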