The difference between passive and once mode of gen_tcp - erlang

I'm reading Programming Erlang 2E. In Active and Passive Sockets of Chapter 17, it says:
You might think that using passive mode for all servers is the correct
approach. Unfortunately, when we’re in passive mode, we can wait for
the data from only one socket. This is useless for writing servers
that must wait for data from multiple sockets.
Fortunately, we can adopt a hybrid approach, neither blocking nor
nonblocking. We open the socket with the option {active, once}. In
this mode, the socket is active but for only one message. After the
controlling process has been sent a message, it must explicitly call
inet:setopts to reenable reception of the next message. The system
will block until this happens. This is the best of both worlds.
Relevant code:
% passive mode
loop(Socket) ->
    case gen_tcp:recv(Socket, N) of
        {ok, B} ->
            ... do something with the data ...
            loop(Socket);
        {error, closed} ->
            ...
    end.
% once mode
loop(Socket) ->
    receive
        {tcp, Socket, Data} ->
            ... do something with the data ...
            %% when you're ready, enable the next message
            inet:setopts(Socket, [{active, once}]),
            loop(Socket);
        {tcp_closed, Socket} ->
            ...
    end.
I don't see any real difference between the two. gen_tcp:recv in passive mode essentially does the same thing as receive in once mode. How does once mode fix this issue with passive mode:
Unfortunately, when we’re in passive mode, we can wait for
the data from only one socket. This is useless for writing servers
that must wait for data from multiple sockets.

The main difference is when you are choosing to react to an event on that socket. With an active socket your process receives a message; with a passive socket you have to decide on your own to call gen_tcp:recv. What does that mean for you?
The typical way to write Erlang programs is to have them react to events. Following that theme most Erlang processes wait for messages which represent outside events, and react to them depending on their nature. When you use an active socket you are able to program in a way that treats socket data in exactly the same way as other events: as Erlang messages. When you write using passive sockets you have to choose when to check the socket to see if it has data, and make a different choice about when to check for Erlang messages -- in other words, you wind up having to write polling routines, and this misses much of the advantage of Erlang.
So the difference between active_once and active...
With an active socket any external actor able to establish a connection can bombard a process with packets, whether the system is able to keep up or not. If you imagine a server with a thousand concurrent connections where receipt of each packet requires some significant computation or access to some other limited, external resource (not such a strange scenario) you wind up having to make choices about how to deal with overload.
With only active sockets you have already made your choice: you will let service degrade until things start failing (timeout or otherwise).
With active_once sockets you have a chance to make some choices. An active_once socket lets you receive one message on the socket and then sets it passive again, until you reset it to active_once. This means you can write a blocking/synchronous call that checks whether it is safe for the overall system to continue processing messages, and insert it between the end of processing and the beginning of the next receive that listens on the socket -- and even choose to enter the receive without reactivating the socket in the event the system is overloaded but your process still needs to deal with other Erlang messages in the meantime.
Imagine a named process called sysmon that lives on this node and checks whether an external database is being overloaded. Your process can receive a packet, process it, and let the system monitor know it is ready for more work before allowing the socket to send it another message. The system monitor can also tell listening processes to temporarily stop accepting packets even while they keep handling other Erlang messages, which isn't possible with the gen_tcp:recv method (because you are either receiving socket data or checking Erlang messages, but not both):
loop(S = {Socket, OtherState}) ->
    sysmon ! {self(), ready},
    receive
        {tcp, Socket, Data} ->
            ok = process_data(Data, OtherState),
            loop(S);
        {tcp_closed, Socket} ->
            retire(OtherState),
            ok;
        {sysmon, activate} ->
            inet:setopts(Socket, [{active, once}]),
            loop(S);
        {sysmon, deactivate} ->
            inet:setopts(Socket, [{active, false}]),
            loop(S);
        {other, message} ->
            system_stuff(OtherState),
            loop(S)
    end.
This is the beginning of a way to implement system-wide throttling, making it easy to deal with the part that is usually the most difficult: elements that are across the network, external to your system, and entirely out of your control. Coupled with some early decision making (like "how much load do we take before refusing new connections entirely?"), this ability to receive socket data as Erlang messages without leaving yourself open to being bombarded by them (or filling up your mailbox, which makes looking for non-socket messages arbitrarily expensive) feels pretty magical compared to manually dealing with sockets the way we used to in the stone age (or even today in other languages).
This is an interesting post by Fred Hebert, author of LYSE, about overload: "Queues Don't Fix Overload". It is not specific to Erlang, but the ideas he writes about are a lot easier to implement in Erlang than in most other languages, which may have something to do with the prevalence of the (misguided) use of queues as a capacity management technique.

Code that takes advantage of this would look something like:
loop(Socket1, Socket2) ->
    receive
        {tcp, Socket1, Data} ->
            ... do something with the data ...
            %% when you're ready, enable the next message
            inet:setopts(Socket1, [{active, once}]),
            loop(Socket1, Socket2);
        {tcp, Socket2, Data} ->
            ... do something entirely different ...
            inet:setopts(Socket2, [{active, once}]),
            loop(Socket1, Socket2);
        ...
    end.
However, in my experience you usually don't do things like that; more often you'll have one process per socket. The advantage with active mode is that you can wait for network data and messages from other Erlang processes at the same time:
loop(Socket) ->
    receive
        {tcp, Socket, Data} ->
            ... do something with the data ...
            %% when you're ready, enable the next message
            inet:setopts(Socket, [{active, once}]),
            loop(Socket);
        reverse_flux_capacitor ->
            reverse_flux_capacitor(),
            %% keep waiting for network data
            loop(Socket)
    end.
Also, when writing a "real" Erlang/OTP application, you would usually write a gen_server module instead of a loop function, and the TCP messages would be handled nicely in the handle_info callback, alongside all the other messages.
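For illustration, here is a minimal sketch of such a gen_server, assuming the socket arrives already connected; the module name, state record, and io:format placeholder are mine, not from the original answer:

-module(tcp_conn).
-behaviour(gen_server).
-export([start_link/1]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

-record(state, {socket}).

start_link(Socket) ->
    gen_server:start_link(?MODULE, Socket, []).

init(Socket) ->
    %% arm the socket for exactly one message
    ok = inet:setopts(Socket, [{active, once}]),
    {ok, #state{socket = Socket}}.

handle_call(_Request, _From, State) ->
    {reply, ok, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.

%% TCP data arrives here as an ordinary Erlang message,
%% alongside anything else other processes send us.
handle_info({tcp, Socket, Data}, State = #state{socket = Socket}) ->
    io:format("got ~p~n", [Data]),                %% stand-in for real work
    ok = inet:setopts(Socket, [{active, once}]),  %% re-arm for the next packet
    {noreply, State};
handle_info({tcp_closed, Socket}, State = #state{socket = Socket}) ->
    {stop, normal, State}.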

Related

Keeping dependencies quiet in Erlang

I am using chumak in an Erlang-based ZMQ server. I am listening and spawning processes to accept connections:
{ok, LSocket} = chumak:socket(rep),
{ok, _} = chumak:bind(LSocket, tcp, "0.0.0.0", ?PORT),
spawn_link(fun() -> loop(LSocket, DBConn, RedisConn) end),
That all works fine. But there is one problem. When something "unexpected" (from chumak's point of view) happens, such as a port scan connecting to its port, the process to accept data may die. That's fine, because it restarts automatically. What's not fine is that when this happens, chumak sprays its errors all over the console. I don't care about them.
Is there any way to shut up a dependency library, in Erlang?
chumak errors are emitted through error_logger. That means that, to prevent them from being displayed, you have to tell your error_logger handler not to display them.
I'll guess you're using sasl for that. If that's the case, what you need to do is add this configuration to the sasl environment: {sasl_error_logger, false}.
But be careful: you'll be disabling all error logs from being displayed if you do so. I'm not sure whether you can tell sasl to skip particular kinds of error reports instead; if that's possible, you'll want to skip printing the bind_error reports.
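For instance, assuming a release started with a sys.config, a minimal sketch of that setting might look like:

%% sys.config -- disables SASL's error report output entirely
[
 {sasl, [
     {sasl_error_logger, false}   %% stop SASL from printing error reports
 ]}
].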

Distributed erlang security how to?

I want to have 2 independent Erlang nodes that can communicate with each other:
so node a@myhost will be able to send messages to b@myhost.
Are there any ways to restrict node a@myhost, so that only a function from a secure_module could be called on b@myhost?
It should be something like:
a@myhost> rpc:call(b@myhost, secure_module, do, [A, B, C]) returning {ok, Result}
and all other calls
a@myhost> rpc:call(b@myhost, Module, Func, Args) returning {error, Reason}
One option would be to use the ZeroMQ library to establish communication between the nodes, but would it be better if it could be done using standard Erlang functions/modules?
In this case distributed Erlang is not what you want. Connecting node A to node B makes a single cluster -- one huge, trusted computing environment. You don't want to trust part of this, so you don't want a single cluster.
Instead write a specific network service. Use the network itself as your abstraction layer. The most straightforward way to do this is to establish a stream connection (just boring old gen_tcp, or gen_sctp or use ssl, or whatever) from A to B.
The socket handling process on A receives messages from whatever parts of node A need to call B -- you write this exactly as you would if they were directly connected. Use a normal Erlang messaging style: Message = {name_of_request, Data} or similar. The connecting process on A simply does gen_tcp:send(Socket, term_to_binary(Message)).
The socket handling process on B shuttles received network messages between the socket and your servicing processes by simply receiving {tcp, Socket, Bin} -> Servicer ! binary_to_term(Bin).
Results of computation go back the other direction through the exact same process using the term_to_binary/binary_to_term translation again.
Your service processes should be receiving well defined messages and disregarding whatever doesn't make sense (usually just logging the nonsense). So in this way you are not doing a direct RPC (which is unsafe in an untrusted environment); you are only responding to valid semantics defined in your (little tiny) messaging protocol. The way the socket handling processes are written is what can abstract this for you and make it feel just as though you are dealing with a trusted environment within distributed Erlang, when actually you have two independent clusters which are limited in what they can request of each other by the definition of your protocol.
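As a rough illustration of B's side, here is a sketch under assumptions of mine (the {name_of_request, Data} shape from above, a socket opened with {packet, 4} and {active, once}, and hypothetical Servicer and message names):

%% B's socket handler: decode framed terms off the wire, forward only
%% recognized requests to the servicer, and ship results back.
serve(Socket, Servicer) ->
    receive
        {tcp, Socket, Bin} ->
            case catch binary_to_term(Bin, [safe]) of  %% [safe] refuses unknown atoms
                {name_of_request, Data} ->
                    Servicer ! {request, self(), Data};
                Nonsense ->
                    error_logger:info_msg("ignoring ~p~n", [Nonsense])
            end,
            ok = inet:setopts(Socket, [{active, once}]),
            serve(Socket, Servicer);
        {result, Term} ->
            ok = gen_tcp:send(Socket, term_to_binary(Term)),
            serve(Socket, Servicer);
        {tcp_closed, Socket} ->
            ok
    end.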

What OTP pattern to use for gen_server socket broadcast?

So I have a non-blocking OTP socket server very similar to the one in Learn You Some Erlang:
http://learnyousomeerlang.com/buckets-of-sockets
The supervisor passes the listening socket to dynamically spawned gen_servers, each of which can accept a single connection; in this way the listening socket isn't blocked by (blocking) calls to gen_tcp:accept, and each gen_server spawned by the supervisor effectively represents a single client.
Now this is all very nice and I can talk to the server via telnet, a simple echo handler echoing my requests.
But what if I want to extend this into a simple chat server? The obvious thing missing here is the ability to send a broadcast message to all connected clients. But currently none of the gen_server clients know about the existence of any of the others!
What's a sensible OTP-compliant pattern for one gen_server to be able to get pids for all the others? The only way I can think of is to have some kind of mnesia/ets table containing pids/usernames as part of the gen_server state variable, but somehow this doesn't seem very OTP-like.
Thoughts?
Thanks in advance.
Using an ETS table to store the Pids would be the way to go. I would use a supervised process as the table manager and set up monitors on Pids that are added to the ETS table; that way you can detect when a process dies and can remove it from the ETS table.
For fault tolerance when working with ETS you need to take some precautions; see "Don't Lose Your ets Tables" for a good intro on how to do this.
But for a real system I would use either the pg2 or gproc modules for this kind of stuff. pg2 is included in OTP and geared more towards distributed systems; gproc is more flexible. Both use ETS tables to store the data.
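A minimal sketch of the monitored table-manager idea described above; the module name, table name, and API are hypothetical:

-module(client_table).
-export([start/0, add/1, pids/0]).

%% The manager owns the ETS table, so the table lives exactly
%% as long as the (ideally supervised) manager process does.
start() ->
    register(?MODULE, spawn(fun init/0)),
    ok.

add(ClientPid) ->
    ?MODULE ! {add, ClientPid},
    ok.

pids() ->
    [Pid || {Pid, _Ref} <- ets:tab2list(clients)].

init() ->
    ets:new(clients, [named_table, public, set]),
    loop().

loop() ->
    receive
        {add, Pid} ->
            Ref = erlang:monitor(process, Pid),    %% notice when the client dies
            ets:insert(clients, {Pid, Ref}),
            loop();
        {'DOWN', _Ref, process, Pid, _Reason} ->
            ets:delete(clients, Pid),              %% drop dead clients automatically
            loop()
    end.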

WebSockets in Relation with TCP/IP Sockets on Misultin Erlang HTTP Library

I must say that I am impressed by Misultin's support for WebSockets (some examples here). My JavaScript is firing requests and getting responses down the wire with "negligible" delay or lag. Great!
Looking at the data handler loop for WebSockets, it resembles that of normal TCP/IP sockets, at least in the basic Erlang form:
% callback on received websockets data
handle_websocket(Ws) ->
    receive
        {browser, Data} ->
            Ws:send(["received '", Data, "'"]),
            handle_websocket(Ws);
        _Ignore ->
            handle_websocket(Ws)
    after 5000 ->
        Ws:send("pushing!"),
        handle_websocket(Ws)
    end.
This piece of code is executed in a process spawned by Misultin, from a function you hand to it when starting your server, like this:
start(Port) ->
    HTTPHandler = fun(Req) -> handle_http(Req, Port) end,
    WebSocketHandler = fun(Ws) -> handle_websocket(Ws) end,
    Options = [{port, Port}, {loop, HTTPHandler}, {ws_loop, WebSocketHandler}],
    misultin:start_link(Options).
For more code, check out the example page. I have several questions.
Question 1: Can I change the controlling process of a WebSocket as we normally do with TCP/IP sockets in Erlang? (We normally use gen_tcp:controlling_process(Socket, NewProcessId).)
Question 2: Is Misultin the only Erlang/OTP HTTP library which supports WebSockets? Where are the rest?
EDIT:
Here is why I need to be able to transfer WebSocket control away from Misultin.
Think of a gen_server that will control a pool of WebSockets, say it's a game server. In the current Misultin example, every WebSocket connection has a controlling process; in other words, for every WebSocket there will be a spawned process. Now, I know Erlang is a hero with processes, but I do not want this: I want these initial processes to die as soon as they hand control of the WebSocket over to my gen_server.
I would want this gen_server to switch data amongst these WebSockets. In the current implementation, I need to keep track of the Pid of Misultin's handle_websocket process like this:
%% Here is Misultin's control process.
%% I get its Pid and save it somewhere,
%% and link it to my_gen_server so that
%% if it exits I know it's gone.
handle_websocket(Ws) ->
    process_flag(trap_exit, true),
    Pid = self(),
    link(whereis(my_gen_server)),
    save_connection(Pid),
    wait_msgs(Ws).

wait_msgs(Ws) ->
    receive
        {browser, Data} ->
            FromPid = self(),
            send_to_gen_server(Data, FromPid),
            wait_msgs(Ws);
        {broadcast, Message} ->
            %% I can broadcast to all connected WebSockets
            Ws:send(Message),
            wait_msgs(Ws);
        _Ignore ->
            wait_msgs(Ws)
    end.
Above, the idea works very well: I save all controlling processes into a Mnesia RAM table and look one up against a given criterion whenever the application wants to send that particular user a message. However, with what I want to achieve, I realise that in the real world the processes may be so many that my server may crash. I want at least one gen_server to control thousands of WebSockets rather than having a process for each WebSocket; this way, I could conserve some memory.
Suggestion: Misultin's author could create a WebSocket groups implementation for us in his next release, whereby we can have a group of WebSockets controlled by the same process. This would be similar to Nitrogen's comet groups, in which comet connections are grouped together under the same control. If this isn't possible, we will need the control ourselves: provide an API where we can take over the control of these WebSockets.
What do you engineers think about this? What is your suggestion and/or comment? Misultin's author could say something about this too. Thanks to all.
(one) Cowboy developer here.
I wouldn't recommend using any type of central server responsible for controlling a set of websocket connections. The main reason is that this is a premature optimization: you are only speculating about the memory usage.
A test done earlier last year for half a million websocket connections on a single server resulted in misultin using 20GB of memory and cowboy using 16.2GB or 14.3GB, depending on whether the websocket processes were hibernating or not. You can assume that all Erlang implementations of websockets are very close to these numbers.
The difference between cowboy not using hibernate and misultin should be pretty close to the memory overhead of using an extra process per connection. (Feel free to correct me on this, ostinelli.)
I am willing to bet that it is much cheaper to take this into account when buying the servers than it is to design and resolve issues in an application where you don't have a 1:1 mapping between tasks/resources and processes.
https://twitter.com/#!/nivertech/status/114460039674212352
Misultin's author here.
I strongly discourage you from changing the controlling process, because that will break all of Misultin's internals. Just as Steve suggested, YAWS and Cowboy support WebSockets, and there are implementations done over Mochiweb, but I'm not aware of any being actively maintained.
You are discussing memory concerns, but I think you are mixing concepts. I cannot understand why you need to control everything 'centrally' from a gen_server: your assumption that 'many processes will crash your VM' is actually wrong. Erlang is built upon the actor model, and this has many advantages:
performance due to multicore usage which is not there if you use a single gen_server
being able to use the 'let it crash' philosophy: currently it looks like your gen_server crashing would bring down all available games
...
Erlang is able to handle hundreds of thousands of processes on a single VM, and you'll be out of available file descriptors for your open sockets way before that happens.
So, I'd suggest you consider having your game logic within individual WebSocket processes, and use message passing to make them interact. You may consider spawning 'game processes' which hold the information of a single game's participants and status, for instance, and eventually a gen_server that keeps track of the available games - and does only that (eventually by owning an ETS table). That's the way I'd probably want to go, all with the appropriate supervisors' structure.
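A rough sketch of that last idea, a gen_server that does nothing but track games in an ETS table it owns; all module, table, and function names here are hypothetical:

-module(game_registry).
-behaviour(gen_server).
-export([start_link/0, add_game/2, lookup/1]).
-export([init/1, handle_call/3, handle_cast/2]).

start_link() ->
    gen_server:start_link({local, ?MODULE}, ?MODULE, [], []).

add_game(GameId, GamePid) ->
    gen_server:call(?MODULE, {add, GameId, GamePid}).

%% reads go straight to the table, bypassing the server process
lookup(GameId) ->
    case ets:lookup(games, GameId) of
        [{GameId, Pid}] -> {ok, Pid};
        []              -> {error, not_found}
    end.

init([]) ->
    ets:new(games, [named_table, protected, set]),  %% owned by this server
    {ok, nostate}.

handle_call({add, GameId, GamePid}, _From, State) ->
    true = ets:insert(games, {GameId, GamePid}),
    {reply, ok, State}.

handle_cast(_Msg, State) ->
    {noreply, State}.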
Obviously, I'm not sure what you are trying to achieve, so I'm just assuming here. But if your concern is memory - well, as TRIAL AND ERROR EXP said here below: don't prematurely optimize, especially when you are considering using Erlang in a way that looks like it might actually limit what it is capable of.
My $0.02.
Not sure about question 1, but regarding question 2, Yaws and Cowboy also support WebSockets.

Threaded Erlang C-Node(cnode) Interoperability howto?

I am at a point in my Erlang development where I need to create a C-Node (see link for C-Node docs). The basic implementation is simple enough; however, there is a huge hole in the docs.
The code implements a single-threaded client and server. Ignoring the client for the moment... the C code that implements the server is single-threaded and can only connect to one Erlang client at a time.
1. Launch EPMD ('epmd -daemon').
2. Launch the server application ('cserver 1234').
3. Launch the Erlang client application ('erl -sname e1 -setcookie secretcookie') [in a different window from #2].
4. Execute a server command ('complex3:foo(3).') from the Erlang shell in #3.
Now that the server is running and a current Erlang shell has connected to it, try it again from another window.
1. Open a new window.
2. Launch an Erlang client ('erl -sname e2 -setcookie secretcookie').
3. Execute a new server command ('complex3:foo(3).').
Notice that the system seems hung, when it should have executed the command. The reason it hangs is that the other Erlang node is connected and there are no other threads listening for connections.
NOTE: there seems to be a bug in the connection handling. I added a timeout in the receive block and caught some errant behavior, but I did not catch it all. Also, I was able to crash cserver without warnings or errors by forcing the first Erlang node to terminate after the indicated steps were performed.
So the question... What is the best way to implement a threaded C-Node? What is a reasonable number of connections?
The cnode implementation example in the cnode tutorial is not meant to handle more than one connected node, so the first symptom you're experiencing is normal.
The erl_accept call is what accepts incoming connections.
if ((fd = erl_accept(listen, &conn)) == ERL_ERROR)
    erl_err_quit("erl_accept");
fprintf(stderr, "Connected to %s\n\r", conn.nodename);

while (loop) {
    got = erl_receive_msg(fd, buf, BUFSIZE, &emsg);
Note that, written this way, the cnode will accept only one connection and then pass the descriptor to the read/write loop. That's why, when the Erlang node closes, the cnode ends with an error: erl_receive_msg fails because fd points to a closed socket.
If you want to accept more than one inbound connection, you'll have to loop accepting connections and implement a way to handle more than one file descriptor. You don't need a multithreaded program to do so; it would probably be easier (and maybe more efficient) to use the poll or select syscall if your OS supports them.
As for the optimum number of connections, I don't think there is a rule for that; you'd need to benchmark your application if you want to support high concurrency in the cnode. But in that case it would probably be better to re-engineer the system so that Erlang copes with the concurrency, relieving the cnode of that burden.
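For example, on the Erlang side you could funnel every call through one registered proxy process, so the tutorial's single-connection cnode only ever sees one peer. This is only a sketch: the module name, API, and timeouts are mine, though the {any, CNode} ! {call, self(), Msg} and {cnode, Result} message shapes follow the tutorial's protocol:

-module(cnode_proxy).
-export([start/1, foo/1]).

%% serialize every request through a single process, which is the
%% only thing that ever talks to the one-connection cnode
start(CNode) ->
    register(?MODULE, spawn(fun() -> loop(CNode) end)),
    ok.

foo(X) ->
    ?MODULE ! {call, self(), {foo, X}},
    receive
        {cnode, Result} -> Result
    after 5000 -> {error, timeout}
    end.

loop(CNode) ->
    receive
        {call, From, Req} ->
            {any, CNode} ! {call, self(), Req},   %% the cnode's registered mailbox
            receive
                {cnode, Result} -> From ! {cnode, Result}
            after 5000 ->
                From ! {cnode, {error, timeout}}
            end,
            loop(CNode)
    end.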
