how to use gun:open in a gen_server module - erlang

I have a gen_server module that uses gun as an HTTP client to keep a long-polling connection open to an HTTP server. I call gun:open in my module's init, but if gun:open fails, my module fails, and so my application fails to start. What is the proper way to do this? The following is my code:
init(_) ->
    lager:debug("http_api_client: connecting to admin server...~n"),
    {ok, ConnPid} = gun:open("localhost", 5001),
    {ok, Protocol} = gun:await_up(ConnPid),
    {ok, #state{conn_pid = ConnPid, streams = #{}, protocol = Protocol}}.

Basically you have two options: either your process requires the HTTP server to be available (your current solution), or it doesn't, and it handles requests gracefully while the connection to the HTTP server is down (by returning error responses). This blog post presents the idea more eloquently: https://ferd.ca/it-s-about-the-guarantees.html
You could do that by separating this code out into a separate function that doesn't crash if the connection fails:
try_connect(State) ->
    lager:debug("http_api_client: connecting to admin server...~n"),
    case gun:open("localhost", 5001) of
        {ok, ConnPid} ->
            {ok, Protocol} = gun:await_up(ConnPid),
            State#state{conn_pid = ConnPid, streams = #{}, protocol = Protocol};
        {error, _} ->
            State#state{conn_pid = undefined}
    end.
And call this function from init. That is, regardless of whether you can connect, your gen_server will start.
init(_) ->
    {ok, try_connect(#state{})}.
Then, when you make a request to this gen_server that requires the connection to be present, check whether it is undefined:
handle_call(foo, _, State = #state{conn_pid = undefined}) ->
    {reply, {error, not_connected}, State};
handle_call(foo, _, State = #state{conn_pid = ConnPid}) ->
    %% make a request through ConnPid here
    {reply, ok, State};
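From the caller's side, a request made while the connection is down then simply returns the error tuple. A hypothetical shell session, assuming the server is registered locally as http_api_client:
1> gen_server:call(http_api_client, foo).
{error,not_connected}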
Of course, that means that if the connection fails at startup, your gen_server will never try to connect again. You could add a timer, or you could add an explicit reconnect command:
handle_call(reconnect, _, State = #state{conn_pid = undefined}) ->
    NewState = try_connect(State),
    Result = case NewState of
                 #state{conn_pid = undefined} ->
                     reconnect_failed;
                 _ ->
                     ok
             end,
    {reply, Result, NewState};
handle_call(reconnect, _, State) ->
    {reply, already_connected, State}.
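If you would rather use the timer approach mentioned above, a minimal sketch could re-arm erlang:send_after/3 on every tick and retry while disconnected (the 5000 ms interval is only an illustrative value):
init(_) ->
    erlang:send_after(5000, self(), try_reconnect),
    {ok, try_connect(#state{})}.

handle_info(try_reconnect, State = #state{conn_pid = undefined}) ->
    erlang:send_after(5000, self(), try_reconnect),
    {noreply, try_connect(State)};
handle_info(try_reconnect, State) ->
    %% Already connected; just re-arm the timer.
    erlang:send_after(5000, self(), try_reconnect),
    {noreply, State}.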
None of the code above handles the case where the connection goes down while the gen_server is running. You could handle that explicitly, or you could just let your gen_server process crash in that case, so that it restarts into the "not connected" state.
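Handling it explicitly might look roughly like the sketch below. Gun sends the owner process a gun_down message when the connection is lost, but the exact shape of that tuple differs between gun versions, so this only matches on the tag and the connection pid:
handle_info(Msg, State = #state{conn_pid = ConnPid})
        when is_tuple(Msg),
             element(1, Msg) =:= gun_down,
             element(2, Msg) =:= ConnPid ->
    %% Mark the connection as lost; calls will get {error, not_connected}
    %% until a reconnect succeeds.
    {noreply, State#state{conn_pid = undefined}};
handle_info(_Msg, State) ->
    {noreply, State}.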

Related

How to cancel a `ResponseSocket` server?

module Main

open System
open System.Threading
open System.Threading.Tasks
open NetMQ
open NetMQ.Sockets

let uri = "ipc://hello-world"

let f (token : CancellationToken) =
    use server = new ResponseSocket()
    use poller = new NetMQPoller()
    poller.Add(server)
    printfn "Server is binding to: %s" uri
    server.Bind(uri)
    printfn "Done binding."
    use __ = server.ReceiveReady.Subscribe(fun x ->
        if token.CanBeCanceled then poller.Stop()
        )
    use __ = server.SendReady.Subscribe(fun x ->
        if token.CanBeCanceled then poller.Stop()
        )
    poller.Run()
    printfn "Server closing."
    server.Unbind(uri)

let src = new CancellationTokenSource()
let token = src.Token
let task = Task.Run((fun () -> f token), token)
src.CancelAfter(100)
task.Wait() // Does not trigger.
My failed attempt looks something like this. The problem is that the poller only checks the cancellation token when it receives or sends a message. I guess one way to do it would be to send a special cancel message from the client rather than using these tokens, but that would not work if the server gets into a send state.
What would be a reliable way of closing the server in NetMQ?

Issue when registering two local processes with gproc within a cowboy websocket handler

I tried to register a bunch of processes with a unique family name with gproc under a cowboy websocket handler.
In my handler I created two methods. The first one handles registration:
websocket_handle({text, <<"Reg: ",Message/binary>>}, State) ->
io:format("Client ~p requesting to register ~n",[Message]),
MyPID=list_to_binary(pid_to_list(self())),
{[{_,Family}]}=jiffy:decode(Message),
io:format("Client ~p requesting to register ~n",[Family]),
Test = gproc:reg({p, l, Family}),
erlang:display(Test),
io:format("Registration OK, replying ..."),
Result = gproc:lookup_pids({p, l, Family}),
erlang:display(Result),
[PID] = Result,
io:format("PASS ~n"),
io:format("PID ~p FORMATTED ~n",[PID]),
Res= list_to_binary(pid_to_list(PID)),
\"inform\",\"From\" : \"Server\",\"Message\" : \"How you are doing !\"}">>),
{reply, {text,<<"{\"Type\" : \"fb_server\",\"Action\" : \"registration\",\"From\" : \"Server\",\"Message\" : \"",Res/binary,"\"}">>}, State};
The second one handles PID retrieval:
websocket_handle({text, <<"Get: ",Message/binary>>}, State) ->
io:format("Client ~p requesting Pids ~n",[Message]),
{[{_,Family}]}=jiffy:decode(Message),
Result = gproc:lookup_pids({p, l, Family}),
erlang:display(Result),
if
Result == [] ->
{reply, {text,<<"{\"Type\" : \"fb_server\",\"Action\" : \"Get Pids\",\"From\" : \"Server\",\"Message\" : \"Empty list\"}">>}, State};
true ->
[PID] = Result,
io:format("PASS ~n"),
io:format("PID ~p FORMATTED ~n",[PID]),
Res= list_to_binary(pid_to_list(PID)),
\"fb_server\",\"Action\" : \"inform\",\"From\" : \"Server\",\"Message\" : \"How you are doing !\"}">>),
{reply, {text,<<"{\"Type\" : \"fb_server\",\"Action\" : \"Get Pids\",\"From\" : \"Server\",\"Message\" : \"",Res/binary,"\"}">>}, State}
end.
To test my handler I created two JS files. The first one registers a process family; I start the registration request as follows:
writeToScreen("CONNECTED");
var msg = {family: "Js"};
websocket.send("Reg: "+JSON.stringify(msg) );
The second test file gets the pid of the process already registered by the first file:
function onOpen(evt)
{
    // On opening the connection we send a getPids request to get the pids of
    // processes registered under family "Js"
    writeToScreen("CONNECTED");
    var msg = {family: "Js"};
    //websocket.send("Reg: "+JSON.stringify(msg) );
    getPids(msg);
    //doSend("WebSocket rocks");
}

function getPids(msg)
{
    writeToScreen("get Pids");
    websocket.send("Get: "+JSON.stringify(msg) );
}
My problem is that the first file registers the process successfully, but the second one gets an empty list; shouldn't it get a list with the pid already registered by the first file?
Best regards.
@Stefan Zobel, you are right. In my onmessage event I had a call to the onclose() event, so the websocket handler process terminated, and gproc dropped the registration along with it.
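A quick, illustrative shell experiment showing that behaviour (it assumes the gproc application is already started; the <<"Js">> key matches the family used in the test files):
Owner = spawn(fun() ->
                  gproc:reg({p, l, <<"Js">>}),
                  timer:sleep(1000)
              end),
timer:sleep(100),
[Owner] = gproc:lookup_pids({p, l, <<"Js">>}),   %% still registered here
timer:sleep(2000),
[] = gproc:lookup_pids({p, l, <<"Js">>}).        %% empty once Owner has exited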

Erlang drop messages

I have a proxy that prevents the server from getting overloaded with requests.
Clients send their requests to the proxy, and the proxy determines whether or not to pass them on to the server.
NrOfReq is the current number of requests that the server is handling.
MaxReq is the maximum number of requests that the server can handle before its mailbox gets full.
Every time the server has handled a request, it sends the ready_to_serve atom to the proxy.
Whenever the guard after the when keyword is false, I want to drop the message from the client and prevent it from ending up in the proxy's mailbox.
How can I do this?
proxy(ServerPid, NrOfReq, MaxReq) ->
    receive
        {client_request, Request, ClientPid} when NrOfReq < MaxReq ->
            New = NrOfReq + 1,
            ServerPid ! {Request, ClientPid, self()};
        ready_to_serve ->
            New = NrOfReq - 1
    end,
    proxy(ServerPid, New, MaxReq).
Separate message receiving from message handling
proxy(ServerPid, NrOfReq, MaxReq) ->
    receive
        {client_request, Request, ClientPid} ->
            if
                NrOfReq < MaxReq ->
                    New = NrOfReq + 1,
                    ServerPid ! {Request, ClientPid, self()};
                true ->
                    %% message dropped
                    New = NrOfReq
            end;
        ready_to_serve ->
            New = NrOfReq - 1
    end,
    proxy(ServerPid, New, MaxReq).
Or else make a fallback clause for dropping messages:
proxy(ServerPid, NrOfReq, MaxReq) ->
    receive
        {client_request, Request, ClientPid} when NrOfReq < MaxReq ->
            New = NrOfReq + 1,
            ServerPid ! {Request, ClientPid, self()};
        {client_request, _Request, _ClientPid} when NrOfReq >= MaxReq ->
            %% message dropped
            New = NrOfReq;
        ready_to_serve ->
            New = NrOfReq - 1
    end,
    proxy(ServerPid, New, MaxReq).
The latter is a bit flatter but can cause issues when you decide to change message format.
PS: Why you no use OTP?
I never tried your proposal; on my side I would use two clauses for the loop:
proxy(ServerPid, MaxReq, MaxReq) ->
    receive
        ready_to_serve ->
            New = MaxReq - 1
    end,
    proxy(ServerPid, New, MaxReq);
proxy(ServerPid, NrOfReq, MaxReq) ->
    receive
        {client_request, Request, ClientPid} ->
            New = NrOfReq + 1,
            ServerPid ! {Request, ClientPid, self()};
        ready_to_serve ->
            New = NrOfReq - 1
    end,
    proxy(ServerPid, New, MaxReq).
This proxy looks weird to me. The pattern I have seen most often in Erlang is to spawn one server per client, and have dedicated processes to spawn those servers, manage storage, and so on.
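A rough sketch of that per-client pattern (every name here is illustrative, not taken from the question): each client gets its own worker process, so one slow client only backs up its own worker's mailbox instead of a shared proxy.
start_worker(ClientPid) ->
    spawn(fun() -> worker_loop(ClientPid) end).

worker_loop(ClientPid) ->
    receive
        {request, Request} ->
            %% do_work/1 stands in for whatever the real server does
            ClientPid ! {response, do_work(Request)},
            worker_loop(ClientPid);
        stop ->
            ok
    end.

do_work(Request) ->
    Request.   %% placeholder: echo the request back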

async computation doesn't catch OperationCanceledException

I'm trying to make an asynchronous web request to a URL, giving up if the request takes too long. I'm using the F# asynchronous workflow and the System.Net.Http library to do this.
However, I am unable to catch the Task/OperationCanceledException that is raised by the System.Net.Http library inside the async workflow. Instead, the exception surfaces at the Async.RunSynchronously call, as you can see in this stack trace:
> System.OperationCanceledException: The operation was canceled. at
> Microsoft.FSharp.Control.AsyncBuilderImpl.commit[a](Result`1 res)
> at
> Microsoft.FSharp.Control.CancellationTokenOps.RunSynchronously[a](CancellationToken
> token, FSharpAsync`1 computation, FSharpOption`1 timeout) at
> Microsoft.FSharp.Control.FSharpAsync.RunSynchronously[T](FSharpAsync`1
> computation, FSharpOption`1 timeout, FSharpOption`1 cancellationToken)
> at <StartupCode$FSI_0004>.$FSI_0004.main#()
The code:
#r "System.Net.Http"
open System.Net.Http
open System
let readGoogle () = async {
try
let request = new HttpRequestMessage(HttpMethod.Get, "https://google.co.uk")
let client = new HttpClient()
client.Timeout <- TimeSpan.FromSeconds(0.01) //intentionally low to always fail in this example
let! response = client.SendAsync(request, HttpCompletionOption.ResponseContentRead) |> Async.AwaitTask
return Some response
with
| ex ->
//is never called
printfn "TIMED OUT"
return None
}
//exception is raised here
readGoogle ()
|> Async.RunSynchronously
|> ignore
Cancellation has always been treated differently from errors. In your case you can override the default behavior of Async.AwaitTask, which invokes the "cancel continuation" when the task is cancelled, and handle it differently:
// Task<_> below also needs: open System.Threading.Tasks
let readGoogle () = async {
    try
        let request = new HttpRequestMessage(HttpMethod.Get, "https://google.co.uk")
        let client = new HttpClient()
        client.Timeout <- TimeSpan.FromSeconds(0.01) //intentionally low to always fail in this example
        return! (
            let t = client.SendAsync(request, HttpCompletionOption.ResponseContentRead)
            Async.FromContinuations(fun (s, e, _) ->
                t.ContinueWith(fun (t: Task<_>) ->
                    // if task is cancelled treat it as timeout and process on success path
                    if t.IsCanceled then s(None)
                    elif t.IsFaulted then e(t.Exception)
                    else s(Some t.Result)
                )
                |> ignore
            )
        )
    with
    | ex ->
        //now only reached for non-cancellation failures
        printfn "TIMED OUT"
        return None
}

Problems with gen_tcp:accept

I've made a TCP server which spawns a process to listen for incoming connections. Here is the sample code (I removed a few things from my original code):
module a:
main([]) ->
    {ok, Pid} = b:start(),
    receive
        _ ->
            ok
    end.
module b:
-define(TCP_OPTIONS, [binary, {active, false}, {packet, 0}, {reuseaddr, true}]).
...
start_link(Port) ->
    Pid = spawn_link(server_listener, init, [Port]),
    {ok, self()}.

init(Port, Processor) ->
    case gen_tcp:listen(Port, ?TCP_OPTIONS) of
        {ok, LSocket} ->
            accept_loop(LSocket);
        {error, Reason} ->
            {stop, Reason}
    end.

accept_loop(LSocket) ->
    ?LOG("Current socket acceptor PID [~w]~n", [self()]),
    case gen_tcp:accept(LSocket) of
        {ok, Socket} ->
            %do stuff here
            spawn(server_listener, accept_loop, [LSocket]);
        {error, Reason} ->
            ?LOG("Error accepting socket! [ ~s ]~n", [Reason])
    end.
The problem is: every time I try to connect with telnet on this port, gen_tcp:accept returns {error, closed}. This is driving me nuts trying to figure out what is happening.
Thanks,
Your "accept loop" isn't really a loop... and it is contrived.
You probably want "do_accept_loop" and a proper "server_loop" for handling a connection. Have a look at this.
You want something along the lines of:
% Call echo:listen(Port) to start the service.
listen(Port) ->
{ok, LSocket} = gen_tcp:listen(Port, ?TCP_OPTIONS),
accept(LSocket).
% Wait for incoming connections and spawn the echo loop when we get one.
accept(LSocket) ->
{ok, Socket} = gen_tcp:accept(LSocket),
spawn(fun() -> loop(Socket) end),
accept(LSocket).
% Echo back whatever data we receive on Socket.
loop(Socket) ->
case gen_tcp:recv(Socket, 0) of
{ok, Data} ->
gen_tcp:send(Socket, Data),
loop(Socket);
{error, closed} ->
ok
end.
