Erlang drop messages

I have a proxy that prevents the server from getting overloaded with requests.
Clients send their requests to the proxy, and the proxy determines whether or not to pass them on to the server.
NrOfReq is the number of requests that the server is currently handling.
MaxReq is the maximum number of requests that the server can handle before its mailbox gets full.
Every time the server has handled a request it sends the ready_to_serve atom to the proxy.
Whenever the guard after the when keyword is false I want to drop the message from the client and prevent it from ending up in the proxy's mailbox.
How can I do this?
proxy(ServerPid, NrOfReq, MaxReq) ->
    receive
        {client_request, Request, ClientPid} when NrOfReq < MaxReq ->
            New = NrOfReq + 1,
            ServerPid ! {Request, ClientPid, self()};
        ready_to_serve ->
            New = NrOfReq - 1
    end,
    proxy(ServerPid, New, MaxReq).

You cannot stop a message from reaching the proxy's mailbox; once a client has sent it, it will be delivered. What you can do is receive it unconditionally and then discard it. Separate message receiving from message handling:
proxy(ServerPid, NrOfReq, MaxReq) ->
    receive
        {client_request, Request, ClientPid} ->
            if
                NrOfReq < MaxReq ->
                    New = NrOfReq + 1,
                    ServerPid ! {Request, ClientPid, self()};
                true ->
                    New = NrOfReq,            %% drop the message, counter unchanged
                    'message-dropped'
            end;
        ready_to_serve ->
            New = NrOfReq - 1
    end,
    proxy(ServerPid, New, MaxReq).
Or else, add a fallback clause for dropping messages:
proxy(ServerPid, NrOfReq, MaxReq) ->
    receive
        {client_request, Request, ClientPid} when NrOfReq < MaxReq ->
            New = NrOfReq + 1,
            ServerPid ! {Request, ClientPid, self()};
        {client_request, _Request, _ClientPid} when NrOfReq >= MaxReq ->
            New = NrOfReq,                    %% drop the message, counter unchanged
            'message-dropped';
        ready_to_serve ->
            New = NrOfReq - 1
    end,
    proxy(ServerPid, New, MaxReq).
The latter is a bit flatter, but it can cause issues when you later decide to change the message format.
PS: Why aren't you using OTP?
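For reference, here is a minimal sketch of what the same proxy might look like as a gen_server. This is not from the original answer; the module name req_proxy and the use of gen_server:cast for client requests are illustrative choices only:

-module(req_proxy).
-behaviour(gen_server).
-export([start_link/2]).
-export([init/1, handle_call/3, handle_cast/2, handle_info/2]).

start_link(ServerPid, MaxReq) ->
    gen_server:start_link(?MODULE, {ServerPid, MaxReq}, []).

init({ServerPid, MaxReq}) ->
    {ok, {ServerPid, 0, MaxReq}}.

%% Clients use gen_server:cast(Proxy, {client_request, Request, self()}).
handle_cast({client_request, Request, ClientPid}, {ServerPid, NrOfReq, MaxReq})
  when NrOfReq < MaxReq ->
    ServerPid ! {Request, ClientPid, self()},
    {noreply, {ServerPid, NrOfReq + 1, MaxReq}};
handle_cast({client_request, _Request, _ClientPid}, State) ->
    %% over the limit: drop the request
    {noreply, State}.

%% The server still sends the plain ready_to_serve atom, so it arrives here.
handle_info(ready_to_serve, {ServerPid, NrOfReq, MaxReq}) ->
    {noreply, {ServerPid, NrOfReq - 1, MaxReq}}.

handle_call(_Msg, _From, State) ->
    {reply, ok, State}.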

I never tried your proposal; on my side I would use two clauses for the loop:
proxy(ServerPid, MaxReq, MaxReq) ->
    receive
        ready_to_serve ->
            New = MaxReq - 1
    end,
    proxy(ServerPid, New, MaxReq);
proxy(ServerPid, NrOfReq, MaxReq) ->
    receive
        {client_request, Request, ClientPid} ->
            New = NrOfReq + 1,
            ServerPid ! {Request, ClientPid, self()};
        ready_to_serve ->
            New = NrOfReq - 1
    end,
    proxy(ServerPid, New, MaxReq).
This proxy looks weird to me. The pattern I have seen most often in Erlang is to spawn one server per client, with dedicated processes to spawn those servers, manage storage, and so on; see the sketch below.
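A rough sketch of that pattern (not from the original answer; the function names dispatch/0 and handle_client/2 are illustrative):

%% A dispatcher that spawns one handler process per client request.
dispatch() ->
    receive
        {client_request, Request, ClientPid} ->
            spawn(fun() -> handle_client(Request, ClientPid) end),
            dispatch()
    end.

%% Each client gets its own process, so a slow or crashing request
%% only affects that one client.
handle_client(Request, ClientPid) ->
    Reply = {ok, Request},   %% placeholder for the real work
    ClientPid ! {reply, Reply}.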

Related

how to use gun:open in a gen_server module

I have a gen_server module that uses gun as an HTTP client to keep a long-polling connection to an HTTP server. I call gun:open in my module's init, but if gun:open fails, my module fails, so my application fails to start. What is the proper way to do this? The following is my code:
init() ->
    lager:debug("http_api_client: connecting to admin server...~n"),
    {ok, ConnPid} = gun:open("localhost", 5001),
    {ok, Protocol} = gun:await_up(ConnPid),
    {ok, #state{conn_pid = ConnPid, streams = #{}, protocol = Protocol}}.
Basically you have two options: either your process requires the HTTP server to be available (your current solution), or it doesn't, and gracefully handles requests while the connection to the HTTP server is down (by returning error responses). This blog post presents this idea more eloquently: https://ferd.ca/it-s-about-the-guarantees.html
You could do that by separating this code out into a separate function that doesn't crash if the connection fails:
try_connect(State) ->
    lager:debug("http_api_client: connecting to admin server...~n"),
    case gun:open("localhost", 5001) of
        {ok, ConnPid} ->
            {ok, Protocol} = gun:await_up(ConnPid),
            State#state{conn_pid = ConnPid, streams = #{}, protocol = Protocol};
        {error, _} ->
            State#state{conn_pid = undefined}
    end.
And call this function from init. That way, your gen_server will start regardless of whether you can connect:
init(_) ->
    {ok, try_connect(#state{})}.
Then, when you make a request to this gen_server that requires the connection to be present, check whether it is undefined:
handle_call(foo, _, State = #state{conn_pid = undefined}) ->
    {reply, {error, not_connected}, State};
handle_call(foo, _, State = #state{conn_pid = ConnPid}) ->
    %% make a request through ConnPid here
    {reply, ok, State};
Of course, that means that if the connection fails at startup, your gen_server will never try to connect again. You could add a timer, or you could add an explicit reconnect command:
handle_call(reconnect, _, State = #state{conn_pid = undefined}) ->
    NewState = try_connect(State),
    Result = case NewState of
                 #state{conn_pid = undefined} ->
                     reconnect_failed;
                 _ ->
                     ok
             end,
    {reply, Result, NewState};
handle_call(reconnect, _, State) ->
    {reply, already_connected, State}.
The code above doesn't handle the case when the connection goes down while the gen_server is running. You could handle that explicitly, or you could just let your gen_server process crash in that case, so that it restarts into the "not connected" state.
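To illustrate the timer idea mentioned above (this is not from the original answer; the interval and the try_reconnect message name are arbitrary), you could schedule retries with erlang:send_after whenever the connection is down:

-define(RETRY_INTERVAL, 5000).   %% arbitrary retry interval in milliseconds

init(_) ->
    State = try_connect(#state{}),
    maybe_schedule_retry(State),
    {ok, State}.

handle_info(try_reconnect, State = #state{conn_pid = undefined}) ->
    NewState = try_connect(State),
    maybe_schedule_retry(NewState),
    {noreply, NewState};
handle_info(try_reconnect, State) ->
    %% already connected, nothing to do
    {noreply, State}.

%% Keep retrying only while we are disconnected.
maybe_schedule_retry(#state{conn_pid = undefined}) ->
    erlang:send_after(?RETRY_INTERVAL, self(), try_reconnect);
maybe_schedule_retry(_State) ->
    ok.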

Issue when registering two local process with gproc within cowboy websocket handler

I tried to register a bunch of processes under a unique family name with gproc inside a cowboy websocket handler.
In my handler I created two methods; the first one handles registration:
websocket_handle({text, <<"Reg: ",Message/binary>>}, State) ->
io:format("Client ~p requesting to register ~n",[Message]),
MyPID=list_to_binary(pid_to_list(self())),
{[{_,Family}]}=jiffy:decode(Message),
io:format("Client ~p requesting to register ~n",[Family]),
Test = gproc:reg({p, l, Family}),
erlang:display(Test),
io:format("Registration OK, replying ..."),
Result = gproc:lookup_pids({p, l, Family}),
erlang:display(Result),
[PID] = Result,
io:format("PASS ~n"),
io:format("PID ~p FORMATTED ~n",[PID]),
Res= list_to_binary(pid_to_list(PID)),
\"inform\",\"From\" : \"Server\",\"Message\" : \"How you are doing !\"}">>),
    {reply, {text, <<"{\"Type\" : \"fb_server\",\"Action\" : \"registration\",\"From\" : \"Server\",\"Message\" : \"", Res/binary, "\"}">>}, State};
The second one retrieves the registered pids:
websocket_handle({text, <<"Get: ",Message/binary>>}, State) ->
io:format("Client ~p requesting Pids ~n",[Message]),
{[{_,Family}]}=jiffy:decode(Message),
Result = gproc:lookup_pids({p, l, Family}),
erlang:display(Result),
if
Result == [] ->
{reply, {text,<<"{\"Type\" : \"fb_server\",\"Action\" : \"Get Pids\",\"From\" : \"Server\",\"Message\" : \"Empty list\"}">>}, State};
true ->
[PID] = Result,
io:format("PASS ~n"),
io:format("PID ~p FORMATTED ~n",[PID]),
Res= list_to_binary(pid_to_list(PID)),
\"fb_server\",\"Action\" : \"inform\",\"From\" : \"Server\",\"Message\" : \"How you are doing !\"}">>),
            {reply, {text, <<"{\"Type\" : \"fb_server\",\"Action\" : \"Get Pids\",\"From\" : \"Server\",\"Message\" : \"", Res/binary, "\"}">>}, State}
    end.
To test my handler I created two JS files. The first one registers a process family; I start the registration request as follows:
writeToScreen("CONNECTED");
var msg = {family: "Js"};
websocket.send("Reg: "+JSON.stringify(msg) );
The second test file gets the pid of the process already registered by the first file:
function onOpen(evt)
{
    // On opening the connection we send a getPids request to get the pids of processes registered under family "Js"
    writeToScreen("CONNECTED");
    var msg = {family: "Js"};
    //websocket.send("Reg: "+JSON.stringify(msg) );
    getPids(msg);
    //doSend("WebSocket rocks");
}

function getPids(msg)
{
    writeToScreen("get Pids");
    websocket.send("Get: "+JSON.stringify(msg) );
}
My problem is that the first file registers the process successfully, but the second one gets an empty list. Shouldn't it get a list containing the pid already registered by the first file?
Best regards.
@Stefan Zobel, you are right: in my onmessage event handler I had a call to onclose(). Closing the connection terminates the websocket handler process, and gproc removes a process's registrations when it dies, which is why the lookup from the second file returned an empty list.

async computation doesn't catch OperationCanceledException

I'm trying to make an asynchronous web request to a URL, giving up if the request takes too long. I'm using an F# asynchronous workflow and the System.Net.Http library to do this.
However, I am unable to catch the Task/OperationCanceledExceptions raised by the System.Net.Http library inside the async workflow. Instead, the exception is raised at the Async.RunSynchronously call, as you can see in this stack trace:
> System.OperationCanceledException: The operation was canceled.
>   at Microsoft.FSharp.Control.AsyncBuilderImpl.commit[a](Result`1 res)
>   at Microsoft.FSharp.Control.CancellationTokenOps.RunSynchronously[a](CancellationToken token, FSharpAsync`1 computation, FSharpOption`1 timeout)
>   at Microsoft.FSharp.Control.FSharpAsync.RunSynchronously[T](FSharpAsync`1 computation, FSharpOption`1 timeout, FSharpOption`1 cancellationToken)
>   at <StartupCode$FSI_0004>.$FSI_0004.main#()
The code:
#r "System.Net.Http"
open System.Net.Http
open System
let readGoogle () = async {
try
let request = new HttpRequestMessage(HttpMethod.Get, "https://google.co.uk")
let client = new HttpClient()
client.Timeout <- TimeSpan.FromSeconds(0.01) //intentionally low to always fail in this example
let! response = client.SendAsync(request, HttpCompletionOption.ResponseContentRead) |> Async.AwaitTask
return Some response
with
| ex ->
//is never called
printfn "TIMED OUT"
return None
}
//exception is raised here
readGoogle ()
|> Async.RunSynchronously
|> ignore
Cancellation has always been treated differently from errors. In your case you can replace the default behavior of Async.AwaitTask, which invokes the cancel continuation when the task is cancelled, and handle cancellation differently:
open System.Threading.Tasks // needed for Task<_>

let readGoogle () = async {
    try
        let request = new HttpRequestMessage(HttpMethod.Get, "https://google.co.uk")
        let client = new HttpClient()
        client.Timeout <- TimeSpan.FromSeconds(0.01) //intentionally low to always fail in this example
        return! (
            let t = client.SendAsync(request, HttpCompletionOption.ResponseContentRead)
            Async.FromContinuations(fun (s, e, _) ->
                t.ContinueWith(fun (t: Task<_>) ->
                    // if the task is cancelled, treat it as a timeout and continue on the success path
                    if t.IsCanceled then s(None)
                    elif t.IsFaulted then e(t.Exception)
                    else s(Some t.Result)
                )
                |> ignore
            )
        )
    with
    | ex ->
        // only reached for failures other than cancellation
        printfn "TIMED OUT"
        return None
}

F# Async.RunSynchronously with timeout and cancellationToken

When calling Async.RunSynchronously with a timeout and a CancellationToken, the timeout value seems to be ignored. I can work around this by calling CancelAfter on the CancellationTokenSource, but ideally I'd like to be able to distinguish between exceptions that occur in the workflow, TimeoutExceptions, and OperationCanceledExceptions.
I believe the sample code below demonstrates this.
open System
open System.Threading
let work =
    async {
        let endTime = DateTime.UtcNow.AddMilliseconds(100.0)
        while DateTime.UtcNow < endTime do
            do! Async.Sleep(10)
            Console.WriteLine "working..."
        raise ( Exception "worked for more than 100 millis" )
    }

[<EntryPoint>]
let main argv =
    try
        Async.RunSynchronously(work, 50)
    with
    | e -> Console.WriteLine (e.GetType().Name + ": " + e.Message)

    let cts = new CancellationTokenSource()
    try
        Async.RunSynchronously(work, 50, cts.Token)
    with
    | e -> Console.WriteLine (e.GetType().Name + ": " + e.Message)

    cts.CancelAfter(80)
    try
        Async.RunSynchronously(work, 50, cts.Token)
    with
    | e -> Console.WriteLine (e.GetType().Name + ": " + e.Message)

    Console.ReadKey(true) |> ignore
    0
This outputs the following, showing that the timeout is only effective in the first case (where no CancellationToken is specified):
working...
working...
TimeoutException: The operation has timed out.
working...
working...
working...
working...
working...
working...
working...
Exception: worked for more than 100 millis
working...
working...
working...
working...
working...
working...
OperationCanceledException: The operation was canceled.
Is this the intended behaviour? Is there any way to get the behaviour I'm after?
Thanks!
I'm not sure if this is intended behaviour - at least, I do not see any reason why it would be. However, this behaviour is implemented directly in the handling of parameters of RunSynchronously. If you look at the library source code, you can see:
static member RunSynchronously (p:Async<'T>,?timeout,?cancellationToken) =
    let timeout,token =
        match cancellationToken with
        | None -> timeout,(!defaultCancellationTokenSource).Token
        | Some token when not token.CanBeCanceled -> timeout, token
        | Some token -> None, token
In your case (with both timeout and a cancellation token that can be cancelled), the code goes through the last branch and ignores the timeout. I think this is either a bug or it is something that should be mentioned in the documentation.
As a workaround, you can create a separate CancellationTokenSource to specify the timeout and link it to the cancellation token that the caller provides (using CreateLinkedTokenSource). When you get an OperationCanceledException, you can then detect whether the source was an actual cancellation or a timeout:
type Microsoft.FSharp.Control.Async with
    static member RunSynchronouslyEx(a:Async<'T>, timeout:int, cancellationToken) =
        // Create cancellation token that is cancelled after 'timeout'
        let timeoutCts = new CancellationTokenSource()
        timeoutCts.CancelAfter(timeout)
        // Create a combined token that is cancelled either when
        // 'cancellationToken' is cancelled, or after a timeout
        let combinedCts =
            CancellationTokenSource.CreateLinkedTokenSource
                (cancellationToken, timeoutCts.Token)
        // Run synchronously with the combined token
        try Async.RunSynchronously(a, cancellationToken = combinedCts.Token)
        with :? OperationCanceledException as e ->
            // If the timeout occurred, then we throw timeout exception instead
            if timeoutCts.IsCancellationRequested then
                raise (new System.TimeoutException())
            else reraise()

Problems gen_tcp:accept

I've made a TCP server which spawns a process to listen for incoming connections. Here is the sample code (I removed a few things from my original code):
module a:
main([]) ->
    { ok, Pid } = b:start(),
    receive
        _ ->
            ok
    end.
module b:
-define(TCP_OPTIONS, [binary, { active, false}, { packet, 0 } , {reuseaddr, true}]).
...
start_link(Port) ->
    Pid = spawn_link(server_listener, init, [ Port ]),
    { ok , self() }.

init(Port, Processor) ->
    case gen_tcp:listen(Port, ?TCP_OPTIONS) of
        { ok , LSocket } ->
            accept_loop(LSocket);
        { error, Reason } ->
            { stop, Reason }
    end.
accept_loop(LSocket) ->
    ?LOG("Current socket acceptor PID [~w]~n", [self()]),
    case gen_tcp:accept(LSocket) of
        { ok, Socket } ->
            %do stuff here
            spawn(server_listener , accept_loop, [ LSocket ]);
        { error, Reason } ->
            ?LOG("Error accepting socket! [ ~s ]~n", [ Reason ])
    end.
The problem is: EVERY time that I try to connect from telnet on this port, I'm receiving an error { error, closed } on gen_tcp:accept. This is already driving me nuts trying to figure out what is happening.
Thanks,
Your "accept loop" isn't really a loop... and it is contrived.
You probably want "do_accept_loop" and a proper "server_loop" for handling a connection. Have a look at this.
You want something along the lines of:
% Call echo:listen(Port) to start the service.
listen(Port) ->
    {ok, LSocket} = gen_tcp:listen(Port, ?TCP_OPTIONS),
    accept(LSocket).

% Wait for incoming connections and spawn the echo loop when we get one.
accept(LSocket) ->
    {ok, Socket} = gen_tcp:accept(LSocket),
    spawn(fun() -> loop(Socket) end),
    accept(LSocket).

% Echo back whatever data we receive on Socket.
loop(Socket) ->
    case gen_tcp:recv(Socket, 0) of
        {ok, Data} ->
            gen_tcp:send(Socket, Data),
            loop(Socket);
        {error, closed} ->
            ok
    end.
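For completeness, a sketch of how you might wrap and try the code above, assuming it lives in a module named echo (as the first comment suggests), reusing the TCP_OPTIONS define from the question, and with an arbitrary free port:

-module(echo).
-export([listen/1]).
-define(TCP_OPTIONS, [binary, {active, false}, {packet, 0}, {reuseaddr, true}]).

%% listen/1, accept/1 and loop/1 from the answer above go here.

%% From the Erlang shell:
%% 1> c(echo).
%% 2> spawn(fun() -> echo:listen(12345) end).
%% Then connect with: telnet localhost 12345 and whatever you type is echoed back.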
