I am trying to receive data on the client side, but nothing is received.
Server code that sends the message:
client(Socket, Server) ->
    gen_tcp:send(Socket, "Please enter your name"),
    io:format("Sent confirmation"),
    {ok, N} = gen_tcp:recv(Socket, 0),
    case string:tokens(N, "\r\n") of
        [Name] ->
            Client = #client{socket=Socket, name=Name, pid=self()},
            Server ! {'new client', Client},
            client_loop(Client, Server)
    end.
Client code that should receive and print the message:
client(Port) ->
    {ok, Sock} = gen_tcp:connect("localhost", Port, [{active,false}, {packet,2}]),
    A = gen_tcp:recv(Sock, 0),
    A.
I think your client is faulty because it specifies:
{packet, 2}
yet the server specifies (in code not shown):
{packet, 0}
In Programming Erlang (2nd) on p. 269 it says:
Note that the arguments to packet used by the client and the server
must agree. If the server was opened with {packet,2} and the client with {packet,4}, then nothing would work.
The following client can successfully receive text from the server:
%%=== Server: {active,false}, {packet,0} ====
client(Port) ->
    {ok, Socket} = gen_tcp:connect(
        localhost,
        Port,
        [{active,false}, {packet,0}]
    ),
    {ok, Chunk} = gen_tcp:recv(Socket, 0),
    io:format("Client received: ~s", [Chunk]),
    timer:sleep(1000),
    Name = "Marko",
    io:format("Client sending: ~s~n", [Name]),
    gen_tcp:send(Socket, Name),
    loop(Socket).

loop(Socket) ->
    {ok, Chunk} = gen_tcp:recv(Socket, 0),
    io:format("Client received: ~s~n", [Chunk]),
    loop(Socket).
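For reference, here is a minimal sketch of a matching server side, also using {active,false} and {packet,0}. The names and port handling are illustrative only; this is not the actual chatserver:

start_server(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [{active,false}, {packet,0}]),
    {ok, Socket} = gen_tcp:accept(Listen),
    gen_tcp:send(Socket, "Please enter your name"),
    {ok, Name} = gen_tcp:recv(Socket, 0),   %% may be only part of the name (see below)
    io:format("Server received: ~s~n", [Name]),
    gen_tcp:send(Socket, "Welcome, " ++ Name),
    server_loop(Socket).

server_loop(Socket) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, Chunk} ->
            io:format("Server received: ~s~n", [Chunk]),
            server_loop(Socket);
        {error, closed} ->
            ok
    end.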
However, I think that both the chatserver and my client have serious issues. When you send a message through a TCP (or UDP) connection, you have to assume that the message will get split into an indeterminate number of chunks--each with an arbitrary length. When {packet,0} is specified, I think recv(Socket, 0) will only read one chunk from the socket, then return. That chunk may be the entire message, or it might be only a piece of the message. To guarantee that you've read the entire message from the socket, I think you have to loop over the recv():
get_msg(Socket, Chunks) ->
    {ok, Chunk} = gen_tcp:recv(Socket, 0),
    get_msg(Socket, [Chunk|Chunks]).
Then the question becomes: how do you know when you've read the entire message so that you can end the loop? {packet,0} tells Erlang not to prepend a length header to a message, so how do you know where the end of the message is? Are more chunks coming, or did the recv() already read the last chunk? I think the marker for the end of the message is when the other side closes the socket:
get_msg(Socket, Chunks) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, Chunk} ->
            get_msg(Socket, [Chunk|Chunks]);
        {error, closed} ->
            lists:reverse(Chunks);
        {error, Other} ->
            Other
    end.
But that raises another issue. Suppose the chatserver loops on recv() waiting for a message from the client, and the client, after sending its message, loops on recv() waiting for a reply from the server. If both sides need the other side to close the socket to break out of their recv() loops, you get deadlock, because neither side ever closes its socket. In that case the client has to close the socket so that the chatserver can break out of its recv() loop and process the message, but then the server can't send() anything back because the client has closed the socket. As a result, I don't know if you can do two-way communication when {packet,0} is specified.
Here are my conclusions about {packet, N} and {active, true|false} from reading the docs and searching around:
send():
When you call send(), no data* is actually transferred to the destination. Instead, send() blocks until the destination calls recv(), and only then is data transferred to the destination.
* In "Programming Erlang (2nd)", on p. 176 it says that a small amount of data will be pushed to the destination when you call send() due to the way an OS buffers data, and thereafer send() will block until a recv() pulls data to the destination.
Default options:
You can get the defaults for a socket by specifying an empty list for its options, then doing:
Defaults = inet:getopts(Socket, [mode, active, packet]),
io:format("Default options: ~w~n", [Defaults]).
--output:--
Default options: {ok,[{mode,list},{active,true},{packet,0}]}
You can use inet:getopts() to show that gen_tcp:accept(Socket) returns a socket with the same options as Socket.
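A quick way to check that (a sketch; the port number is arbitrary and both sockets live in the same shell process):

check_inherited_opts() ->
    {ok, Listen} = gen_tcp:listen(4040, [{active,false}, {packet,2}]),
    %% Connect to ourselves so accept/1 has something to return.
    {ok, _Client} = gen_tcp:connect("localhost", 4040, [{active,false}, {packet,2}]),
    {ok, Accepted} = gen_tcp:accept(Listen),
    io:format("listen socket:   ~p~n", [inet:getopts(Listen, [mode, active, packet])]),
    io:format("accepted socket: ~p~n", [inet:getopts(Accepted, [mode, active, packet])]).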
                     {active, true}    {active, false}
                   +-----------------+-----------------+
{packet, 1|2|4}:   |     receive     |     recv()      |
                   |     no loop     |     no loop     |
                   +-----------------+-----------------+
{packet, 0|raw}:   |     receive     |     recv()      |
(equivalent)       |      loop       |      loop       |
                   +-----------------+-----------------+
{active, false}
Messages do not land in the mailbox. This option is used to prevent clients from flooding a server's mailbox with messages. Do not try to use a receive block to extract 'tcp' messages from the mailbox--there won't be any. When a process wants to read a message, the process needs to read the message directly from the socket by calling recv().
{packet, 1|2|4}:
The packet tuple specifies the protocol that each side expects messages to conform to. {packet, 2} specifies that each message will be preceded by two bytes, which will contain the length of the message. That way, a receiver of a message will know how long to keep reading from the stream of bytes to reach the end of the message. When you send a message over a TCP connection, you have no idea how many chunks the message will get split into. If the receiver stops reading after one chunk, it might not have read the whole message. Therefore, the receiver needs an indicator to tell it when the whole message has been read.
With {packet, 2}, a receiver will read two bytes to get the length of the message, say 100, then the receiver will wait until it has read 100 bytes from the randomly sized chunks of bytes that are streaming to the receiver.
Note that when you call send(), Erlang automatically calculates the number of bytes in the message, inserts the length into N header bytes, as specified by {packet, N}, and appends the message. Likewise, when you call recv(), Erlang automatically reads N bytes from the stream to get the length of the message, then recv() blocks until it has read that many bytes from the socket, and finally returns the whole message.
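For example, here is a minimal echo pair (illustrative names, arbitrary port) where both ends use {packet,2}; send() adds the 2-byte length header and recv() strips it, so each call deals with exactly one complete message:

echo_server(Port) ->
    {ok, Listen} = gen_tcp:listen(Port, [{active,false}, {packet,2}]),
    {ok, Socket} = gen_tcp:accept(Listen),
    {ok, Msg} = gen_tcp:recv(Socket, 0),   %% whole message, header already stripped
    gen_tcp:send(Socket, Msg),             %% header added automatically
    gen_tcp:close(Socket).

echo_client(Port, Msg) ->
    {ok, Socket} = gen_tcp:connect("localhost", Port, [{active,false}, {packet,2}]),
    gen_tcp:send(Socket, Msg),
    {ok, Reply} = gen_tcp:recv(Socket, 0),
    gen_tcp:close(Socket),
    Reply.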
{packet, 0 | raw} (equivalent):
When {packet, 0} is specified, recv() will read the number of bytes specified by its Length argument. If Length is 0, then I think recv() will read one chunk from the stream, which will be an arbitrary number of bytes. As a result, the combination of {packet, 0} and recv(Socket, 0) requires that you create a loop to read all the chunks of a message, and the indicator for recv() to stop reading because it has reached the end of the message will be when the other side closes the socket:
get_msg(Socket, Chunks) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, Chunk} ->
            get_msg(Socket, [Chunk|Chunks]);
        {error, closed} ->
            lists:reverse(Chunks);
        {error, Other} ->
            Other
    end.
Note that a sender cannot simply call gen_tcp:close(Socket) to signal that it is done sending data (see the description of gen_tcp:close/1 in the docs). Instead, a sender has to signal that it is done sending data by calling gen_tcp:shutdown/2.
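A sketch of what the sending side might look like (the function name is made up; get_msg/2 is the loop from above):

send_and_shutdown(Socket, Data) ->
    ok = gen_tcp:send(Socket, Data),
    %% Half-close: tell the peer "no more data from me", so its recv()
    %% loop gets {error, closed}, while we can still read its reply.
    ok = gen_tcp:shutdown(Socket, write),
    get_msg(Socket, []).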
I think the chatserver is faulty because it specifies {packet, 0} in combination with recv(Socket, 0), yet it does not use a loop for the recv():
client_handler(Sock, Server) ->
    gen_tcp:send(Sock, "Please respond with a sensible name.\r\n"),
    {ok, N} = gen_tcp:recv(Sock, 0),   %% <**** HERE ****
    case string:tokens(N, "\r\n") of
{active, true}
Messages sent through a TCP (or UDP) connection are automatically read from the socket for you and placed in the controlling process's mailbox. The controlling process is the process that called accept() or the process that called connect(). Instead of calling recv() to read messages directly from the socket, you extract messages from the mailbox with a receive block:
get_msg(Socket) ->
    receive
        {tcp, Socket, Chunk} ->   % Socket is already bound!
            ...
    end.
{packet, 1|2|4}:
Erlang automatically reads all the chunks of a message from the socket for you and places a complete message (with the length header stripped off) in the mailbox:
get_msg(Socket) ->
    receive
        {tcp, Socket, CompleteMsg} ->
            CompleteMsg;
        {tcp_closed, Socket} ->
            io:format("Server closed socket.~n")
    end.
{packet, 0 | raw} (equivalent):
Messages will not have a length header, so when Erlang reads from the socket, Erlang has no way of knowing when the end of the message has arrived. As a result, Erlang places each chunk it reads from the socket into the mailbox. You need a loop to extract all the chunks from the mailbox, and the other side has to close the socket to signal that no more chunks are coming:
get_msg(ClientSocket, Chunks) ->
    receive
        {tcp, ClientSocket, Chunk} ->
            get_msg(ClientSocket, [Chunk|Chunks]);
        {tcp_closed, ClientSocket} ->
            lists:reverse(Chunks)
    end.
The recv() docs mention something about recv()'s Length argument only being applicable to sockets in raw mode. But because I didn't know when a socket is in raw mode, I didn't trust the Length argument. But see here: Erlang gen_tcp:recv(Socket, Length) semantics. Okay, now I'm getting somewhere; from the Erlang inet docs:
{packet, PacketType} (TCP/IP sockets)
    Defines the type of packets to use for a socket. Possible values:
    raw | 0
        No packaging is done.
    1 | 2 | 4
        Packets consist of a header specifying the number of bytes in the packet,
        followed by that number of bytes. The header length can be one, two, or four
        bytes, containing an unsigned integer in big-endian byte order. Each send
        operation generates the header, and the header is stripped off on each
        receive operation.
        The 4-byte header is limited to 2Gb [message length].
As the examples at Erlang gen_tcp:recv(Socket, Length) semantics confirm, when {packet,0} is specified, a recv() can specify the Length to read from the TCP stream.
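For example, a sketch of reading with explicit lengths under {packet,0}; the 4-byte length prefix here is a made-up framing, and the socket is assumed to have been opened with [binary, {active,false}, {packet,0}]:

read_framed(Socket) ->
    {ok, <<Len:32/big>>} = gen_tcp:recv(Socket, 4),   %% block until exactly 4 bytes arrive
    {ok, Payload} = gen_tcp:recv(Socket, Len),        %% block until exactly Len bytes arrive
    Payload.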
Related
As a newbie, I'm writing a toy matching (trading) engine using gen_server.
Once a trade/match occurs, both clients need to be notified.
The documentation says:
reply(Client, Reply) -> Result
Types:
Client - see below
Reply = term()
Result = term()
This function can be used by a gen_server to explicitly send a reply
to a client that called call/2,3 or multi_call/2,3,4, when the reply
cannot be defined in the return value of Module:handle_call/3.
Client must be the From argument provided to the callback function. Reply is an arbitrary term, which will be given back to
the client as the return value of call/2,3 or multi_call/2,3,4.
The return value Result is not further defined, and should always be
ignored.
Given the above, how is it possible to send a notification to the other client?
SAMPLE SEQUENCE OF ACTIONS
C1 -> Place order IBM,BUY,100,10.55
Server -> Ack C1 for order
C2 -> Place order IBM,SELL,100,10.55
Server -> Ack C2 for order
-> Trade notification to C2
-> Trade notification to C1 %% Can I use gen_server:reply()
%% If yes - How ?
Well, you can't. Your ACK is already a reply, and only a single reply is allowed by the gen_server:call contract: gen_server:call will only wait for one reply.
Roughly, gen_server:reply can be implemented like:
reply({Pid, Ref}, Result) ->
    Pid ! {Ref, Result}.
That means that if you try sending multiple replies, you just end up with stray messages in the mailbox of the caller process.
Proposal
Instead, I believe, you should associate every trade with some reference and send that reference (CX_Ref) to the caller as part of the ACK procedure. Then, when you have to send a notification, you just emit the message {C1_Ref, Payload} to C1 and {C2_Ref, Payload} to C2.
Also you may want to introduce some monitoring to handle broker crashes.
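A hedged sketch of that idea; the record-keeping helper, message shapes, and function names are illustrative, not a complete matching engine:

%% Ack the order in handle_call/3 with a fresh reference and remember
%% which client pid owns it.
handle_call({place_order, Order}, {Pid, _Tag}, State) ->
    Ref = make_ref(),
    NewState = remember_order(Ref, Pid, Order, State),   %% hypothetical helper
    {reply, {ack, Ref}, NewState}.

%% Later, when a match is found, push plain messages tagged with each
%% client's reference; no second gen_server:reply is involved.
notify_trade(Trade, {Ref1, Pid1}, {Ref2, Pid2}) ->
    Pid1 ! {Ref1, {trade, Trade}},
    Pid2 ! {Ref2, {trade, Trade}},
    ok.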
I'm trying to understand request ordering in Erlang, but I can't seem to grasp it very well. Here's the example code:
test() ->
    Server = start(),
    spawn(fun() -> write(Server, 2),
                   io:format("Child 1 read ~p~n", [read(Server)]) end),
    write(Server, 3),
    write2(Server, 5),
    spawn(fun() -> write2(Server, 1),
                   io:format("Child 2 read ~p~n", [read(Server)]) end),
    io:format("Parent read ~p~n", [read(Server)]).
And here's the server:
start() ->
    spawn(fun() -> init() end).

init() -> loop(0).

loop(N) ->
    receive
        {read, Pid} ->
            Pid ! {value, self(), N},
            loop(N);
        {write, Pid, M} ->
            Pid ! {write_reply, self()},
            loop(M);
        {write2, _Pid, M} -> loop(M)
    end.

read(Serv) ->
    Serv ! {read, self()},
    receive {value, Serv, N} -> N end.

write(Serv, N) ->
    Serv ! {write, self(), N},
    receive {write_reply, Serv} -> ok end.

write2(Serv, N) ->
    Serv ! {write2, self(), N},
    ok.
I understand that different values could be printed by the three different processes created in test/0, but I'm trying to figure out the lowest and highest values that could be printed by those Parent, Child1 and Child2 processes. The answer states:
Parent: lowest 1, highest 5
Child1: lowest 1, highest 5
Child2: lowest 1, highest 2
Can somebody explain this?
Keep in mind that Erlang guarantees message order only from one process to another. If process A sequentially sends message 1 and then message 2 to process B, then B will receive them in that order. But Erlang guarantees no specific ordering of messages arriving at B if multiple concurrent processes are sending them. In this example, Parent, Child1, and Child2 all run concurrently and all send messages concurrently to Server.
The Parent process performs the following sequential steps:
1. Spawns the Server process. This eventually sets the initial value in the Server loop to 0.
2. Spawns Child1. This eventually writes the value 2 to Server, then reads from Server and prints the result.
3. Uses write/2 to send the value 3 to Server. The write case in the loop/1 function first replies to the caller and then installs the value 3 for its next iteration.
4. Uses write2/2 to send 5 to Server. The write2/2 function just sends a message to Server and does not await a reply. The write2 case in the loop/1 function just installs the value 5 for its next iteration.
5. Spawns Child2, which eventually calls write2/2 with the value 1, and then reads Server and prints the result.
6. Reads Server and prints the result.
For Parent, step 3 sends the value 3 to Server, so as far as Parent is concerned, Server now has the value 3. In step 4, Parent calls write2/2 to send 5 to the server, and that message must arrive at Server sometime after the message sent in step 3. In step 6, Parent performs a read, but all we know is that this message has to arrive at Server after the write message in step 4. This message ordering means the highest value Parent can see is 5.
The lowest value Parent can see is 1 because if Child2 gets its write message of 1 to Server after the Parent write message of 5 but before the final Parent read message, then Parent will see the 1.
For Child1, the highest value it can see is 5 because it runs concurrently with Parent and the two messages sent by Parent could arrive at Server before its write message of 2. The lowest Child1 can see is 1 because the Child2 write message of 1 can arrive before the Child1 read message.
For Child2, the lowest value it can see is its own write of 1. The highest value it can see is 2 from the write of Child1 because the Parent writes of 3 and 5 occur before Child2 is spawned, thus Child1 is the only process writing concurrently and so only it has a chance of interleaving its write message between the Child2 write and read messages.
I understand that I can set a seq_trace in erlang to the current process that is executing. But how can I set it on another process from the shell, or remote shell like dbg tracing?
You can enable sequential tracing on another process using dbg. For example, let's say we have a module x with an exported call/2 function:
call(Pid, Msg) ->
    Pid ! {self(), Msg},
    receive
        {Pid, Reply} -> Reply
    end.
This function implements a simple call-response. Let's also say we have a module y that has a looping receiver function:
loop() ->
    receive
        {Pid, Msg} ->
            seq_trace:print({?MODULE, self(), Pid, Msg}),
            Pid ! {self(), {Msg, os:timestamp()}};
        _ -> ok
    end,
    ?MODULE:loop().
This function expects a message of the form sent by x:call/2, and when it receives one it prints a message into the sequential trace, if enabled, and then sends the original message back to the caller augmented with a timestamp. It ignores all other messages.
We also need a function to collect the sequential trace. The recursive systracer/1 function below just collects seq_trace tuples into a list, and produces the list of seq_trace messages when asked:
systracer(Acc) ->
    receive
        {seq_trace,_,_,_}=S ->
            systracer([S|Acc]);
        {seq_trace,_,_}=S ->
            systracer([S|Acc]);
        {dump, Pid} ->
            Pid ! lists:reverse(Acc),
            systracer([]);
        stop -> ok
    end.
Let's assume our systracer/1 function is exported from module x as well.
Let's use our Erlang shell to set this all up. First, let's spawn y:loop/0 and x:systracer/1:
1> Y = spawn(y,loop,[]).
<0.36.0>
2> S = spawn(x,systracer,[[]]).
<0.38.0>
3> seq_trace:set_system_tracer(S).
false
After spawning x:systracer/1 we set the process as the seq_trace system tracer. Now we need to start dbg:
4> dbg:tracer(), dbg:p(all,call).
{ok,[{matched,nonode@nohost,28}]}
These dbg calls are pretty standard, but you can feel free to vary them as needed especially if you plan to use dbg tracing during your debug session as well.
In practice when you enable sequential tracing with dbg, you typically do so by keying on a particular argument to a function. This enables you to get a trace specific to a given function invocation without getting traces for all invocations of that function. Along these lines, we'll use dbg:tpl/3 to turn on sequential trace flags when x:call/2 is invoked with its second argument having the value of the atom trace. First, we use dbg:fun2ms/1 to create the appropriate match specification to enable the sequential tracing flags we want, then we'll apply the match spec with dbg:tpl/3:
5> Ms = dbg:fun2ms(fun([_,trace]) -> set_seq_token(send,true), set_seq_token('receive',true), set_seq_token(print,true) end).
[{['_',trace],
[],
[{set_seq_token,send,true},
{set_seq_token,'receive',true},
{set_seq_token,print,true}]}]
6> dbg:tpl(x,call,Ms).
{ok,[{matched,nonode@nohost,1},{saved,1}]}
Now we can call x:call/2 with the second argument trace to cause sequential tracing to occur. We make this call from a spawned process to avoid having shell I/O-related messages appearing in the resulting trace:
7> spawn(fun() -> x:call(Y, trace), x:call(Y, foo) end).
(<0.46.0>) call x:call(<0.36.0>,trace)
<0.46.0>
The first line of output comes from normal dbg tracing, since we specified dbg:p(all, call) earlier. To get the sequential trace results, we need to get a dump from our systracer/1 process:
8> S ! {dump, self()}.
{dump,<0.34.0>}
This sends all sequential trace collected so far to our shell process. We can use the shell flush() command to view them:
9> flush().
Shell got [{seq_trace,0,{send,{0,1},<0.47.0>,<0.36.0>,{<0.47.0>,trace}}},
{seq_trace,0,{'receive',{0,1},<0.47.0>,<0.36.0>,{<0.47.0>,trace}}},
{seq_trace,0,{print,{1,2},<0.36.0>,[],{y,<0.36.0>,<0.47.0>,trace}}},
{seq_trace,0,
{send,{1,3},
<0.36.0>,<0.47.0>,
{<0.36.0>,{trace,{1423,709096,206121}}}}},
{seq_trace,0,
{'receive',{1,3},
<0.36.0>,<0.47.0>,
{<0.36.0>,{trace,{1423,709096,206121}}}}},
{seq_trace,0,{send,{3,4},<0.47.0>,<0.36.0>,{<0.47.0>,foo}}},
{seq_trace,0,{'receive',{3,4},<0.47.0>,<0.36.0>,{<0.47.0>,foo}}},
{seq_trace,0,{print,{4,5},<0.36.0>,[],{y,<0.36.0>,<0.47.0>,foo}}},
{seq_trace,0,
{send,{4,6},
<0.36.0>,<0.47.0>,
{<0.36.0>,{foo,{1423,709096,206322}}}}},
{seq_trace,0,
{'receive',{4,6},
<0.36.0>,<0.47.0>,
{<0.36.0>,{foo,{1423,709096,206322}}}}}]
And sure enough, these are the sequential trace messages we expected to see. First, for the message containing the trace atom, we have the send from x:call/2 followed by the reception in y:loop/0 and the result of seq_trace:print/1, then the send from y:loop/0 back to the caller of x:call/2. Then, since x:call(Y,foo) is called in the same process, which means all the sequential tracing flags are still enabled, the first set of sequential trace messages is followed by a similar set for the x:call(Y,foo) invocation.
If we just call x:call(Y,foo) we can see we get no sequential trace messages:
10> spawn(fun() -> x:call(Y, foo) end).
<0.55.0>
11> S ! {dump, self()}.
{dump,<0.34.0>}
12> flush().
Shell got []
This is because our match spec enables sequential tracing only when the second argument to x:call/2 is the atom trace.
For more information, see the seq_trace and dbg man pages, and also read the match specification chapter of the Erlang Run-Time System Application (ERTS) User's Guide.
I'm new to Erlang and just having a bit of trouble getting my head around the new paradigm!
OK, so I have this internal function within an OTP gen_server:
my_func() ->
    Result = ibrowse:send_req(?ROOTPAGE, [{"User-Agent", ?USERAGENT}], get),
    case Result of
        {ok, "200", _, Xml} ->
            % <<do some stuff that won't interest you>>
            ok;
        {error, {conn_failed, {error, nxdomain}}} -> <<what the heck do I do here?>>
    end.
If I leave out the case for handling the connection failed then I get an exit signal propagated to the supervisor and it gets shut down along with the server.
What I want to happen (at least I think this is what I want) is that on a connection failure I'd like to pause and then retry send_req, say, 10 times, and only at that point should the supervisor fail.
If I do something ugly like this...
{error,{conn_failed,{error,nxdomain}}} -> stop()
it shuts down the server process and, yes, I get to use my (try 10 times within 10 seconds) restart strategy until it fails, which is also the desired result. However, the return value from the server to the supervisor is 'ok', when I would really like to return {error,error_but_please_dont_fall_over_mr_supervisor}.
I strongly suspect in this scenario that I'm supposed to handle all the business stuff like retrying failed connections within 'my_func' rather than trying to get the process to stop and then having the supervisor restart it in order to try it again.
Question: what is the 'Erlang way' in this scenario ?
I'm new to Erlang too, but how about something like this?
The code is long just because of the comments. My solution (I hope I've understood your question correctly) takes the maximum number of attempts and then makes a tail-recursive call that stops when the attempt counter matches the maximum. It uses timer:sleep() to pause, to keep things simple.
%% @doc Instead of having my_func/0, you have
%% my_func/1, so we can "inject" the max number of
%% attempts. This one will call your tail-recursive
%% one.
my_func(MaxAttempts) ->
    my_func(MaxAttempts, 0).

%% @doc This one will match when the maximum number
%% of attempts has been reached, terminating the
%% tail recursion.
my_func(MaxAttempts, MaxAttempts) ->
    {error, too_many_retries};

%% @doc Here's where we do the work, by having
%% an accumulator that is incremented with each
%% failed attempt.
my_func(MaxAttempts, Counter) ->
    io:format("Attempt #~B~n", [Counter]),
    % Simulating the error here.
    Result = {error, {conn_failed, {error, nxdomain}}},
    case Result of
        {ok, "200", _, Xml} -> ok;
        {error, {conn_failed, {error, nxdomain}}} ->
            % Wait, then make the tail-recursive call.
            timer:sleep(1000),
            my_func(MaxAttempts, Counter + 1)
    end.
EDIT: If this code is in a process that is supervised, I think it's better to use a simple_one_for_one supervisor, where you can dynamically add whatever workers you need. This avoids delaying initialization due to timeouts (in a one_for_one the workers are started in order, and having sleeps at that point will stop the other processes from initializing).
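A rough sketch of such a supervisor (module name and restart values are illustrative):

%% simple_one_for_one: workers are added on demand with
%% supervisor:start_child/2, so a worker that sleeps while retrying does
%% not block its siblings from starting.
init([]) ->
    SupFlags = {simple_one_for_one, 10, 10},   %% max 10 restarts in 10 seconds
    ChildSpec = {my_worker,
                 {my_worker, start_link, []},
                 transient, 5000, worker, [my_worker]},
    {ok, {SupFlags, [ChildSpec]}}.

%% Then, elsewhere:
%% supervisor:start_child(MySupervisor, [Args]).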
EDIT2: Added an example shell execution:
1> c(my_func).
my_func.erl:26: Warning: variable 'Xml' is unused
{ok,my_func}
2> my_func:my_func(5).
Attempt #0
Attempt #1
Attempt #2
Attempt #3
Attempt #4
{error,too_many_retries}
With 1s delays between each printed message.
Can I nest receive {tcp, Socket, Bin} -> calls? For example, I have a top-level loop called Loop which, upon receipt of TCP data, calls a function parse_header to parse the header data (an integer that indicates the kind of data to follow, and thus its size). After that I need to receive the entire payload before moving on. I might only receive 4 bytes when I need a full 20 bytes, and I would like to call receive in a separate function called parse_payload. So the call chain would look like loop -> parse_header -> parse_payload, and I would like parse_payload to call receive {tcp, Socket, Bin} ->. I don't know if this is OK or if I'm going to completely mess things up and can only do it in the Loop function. Can someone enlighten me? If I am allowed to do this, am I violating some sort of best practice?
Maybe you can check the sample code for "Erlang Programming".
The download page is Erlang Programming Source Code.
In the file socket_examples.erl, please check the "receive_data" function.
To parse a message, I think you should first determine how to separate messages from one another (fixed length or a termination byte), then parse the message's header and payload.
receive_data(Socket, SoFar) ->
    receive
        {tcp, Socket, Bin} ->                     %% (3)
            receive_data(Socket, [Bin|SoFar]);
        {tcp_closed, Socket} ->                   %% (4)
            list_to_binary(lists:reverse(SoFar))  %% (5)
    end.
You can also set a gen_tcp socket in passive mode. This way, the owning process won't receive the input as messages but has to fetch it using gen_tcp:recv(Socket, ByteCount), which returns either {ok, Input} or {error, Reason}. As this method waits indefinitely for the bytes, you might want to add a timeout using gen_tcp:recv/3. (See the Erlang documentation of gen_tcp:recv.)
While at first glance it might seem the process is now completely unable to react to messages sent to it, there is the following workaround improving the situation a bit:
f1(X) ->
    receive
        message1 ->
            ... do something ...,
            f1(X);
        message2 ->
            ... do something ...,
            f1(X)
    after 0 ->   % timeout in ms
        {ok, Input} = gen_tcp:recv(Socket, ByteCount, Timeout),
        ... do something ...,   % maybe call gen_tcp:recv again a few times
        f1(X)
    end.
If you don't add a timeout to gen_tcp:recv here, other processes could wait ages for f1 to handle their messages.
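For completeness, a self-contained sketch of that pattern with an explicit timeout; handle_input/1, the stop message, and the 100 ms value are all made up:

f1(Socket, ByteCount) ->
    receive
        stop ->
            ok
    after 0 ->
        case gen_tcp:recv(Socket, ByteCount, 100) of
            {ok, Input} ->
                handle_input(Input),              %% hypothetical handler
                f1(Socket, ByteCount);
            {error, timeout} ->
                f1(Socket, ByteCount);            %% nothing arrived, check the mailbox again
            {error, closed} ->
                ok
        end
    end.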