Distributed Erlang: multicall exceeds requested timeout - erlang

We use a distributed Erlang cluster and I am now testing how it behaves under net splits.
To get information from all nodes of the cluster I use gen_server:multi_call/4 with a defined timeout. What I need is to get information from the available nodes as soon as possible, so the timeout is not too big (about 3000 ms).
Here is an example call:
Timeout = 3000
Nodes = AllConfiguredNodes
gen_server:multi_call(Nodes, broker, get_score, Timeout)
I expect this call to return a result within Timeout ms, but in case of a net split it does not: it waits approximately 8 seconds.
What I found is that the multi_call request is halted for an additional 5 seconds in the call erlang:monitor(process, {Name, Node}) before the request is even sent.
I really do not care that some node does not reply, is busy, or is unavailable; I can use any other node. But because of this halting I am forced to wait until the Erlang VM tries to establish a new connection to the dead/unavailable node.
The question is: do you know a solution that can prevent this halting? Or maybe another RPC mechanism that is suitable for my situation?

I'm not sure if I totally understand the problem you are trying to solve, but if it is to get all the answers that can be retrieved in X amount of time and ignore the rest, you might try a combination of async_call and nb_yield.
Maybe something like
somefun() ->
    SmallTimeMs = 50,
    Nodes = AllConfiguredNodes,
    Promises = [rpc:async_call(N, some_mod, some_fun, ArgList) || N <- Nodes],
    get_results([], Promises, SmallTimeMs).

get_results(Results, _Promises, _SmallTimeMs) when length(Results) > 1 -> % Replace 1 with whatever is the minimum acceptable number of results
    lists:flatten(Results);
get_results(Results, Promises, SmallTimeMs) ->
    Rs = get_promises(Promises, SmallTimeMs),
    get_results([Results | Rs], Promises, SmallTimeMs).

get_promises(Promises, WaitMs) ->
    [rpc:nb_yield(Key, WaitMs) || Key <- Promises].
See: http://erlang.org/doc/man/rpc.html#async_call-4 for more details.
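For reference, rpc:nb_yield/2 returns {value, Result} once the answer has arrived and the atom timeout otherwise, so polling a single promise looks roughly like this (an illustrative sketch, not part of the original answer):
%% Illustrative only: poll one promise for up to 50 ms.
Key = rpc:async_call(Node, erlang, node, []),
case rpc:nb_yield(Key, 50) of
    {value, Value} -> {ok, Value};
    timeout -> not_yet_ready
end.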

My solution to the problem.
I've made my own implementation of multicall that uses gen_server:call.
The basic idea is to call each node with gen_server:call() from a separate process and to collect the results of these calls. Collection is done by receiving messages in the mailbox of the calling process.
To control the timeout, I calculate the deadline at which the timeout expires and then use that as the reference point for computing the timeout of the after clause in each receive.
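In other words, the arithmetic used by read_all/read_all_impl below boils down to this (a minimal restatement using the same monotonic clock):
Deadline = erlang:monotonic_time(millisecond) + Timeout,
%% before each receive:
Remaining = Deadline - erlang:monotonic_time(millisecond).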
Implementation
The main function is:
multicall(Nodes, Name, Req, Timeout) ->
    Refs = lists:map(fun(Node) -> call_node(Node, Name, Req, Timeout) end, Nodes),
    Results = read_all(Refs, Timeout),
    PosResults = [ { Node, Result } || { ok, { ok, { Node, Result } } } <- Results ],
    { PosResults, calc_bad_nodes(Nodes, PosResults) }.
The idea here is to call all nodes and wait for all results within one Timeout.
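For reference, a usage sketch under the question's assumptions (the broker name, the get_score request, and the 3000 ms timeout come from the question above):
%% Mirrors gen_server:multi_call(Nodes, broker, get_score, Timeout) from the question.
{Replies, BadNodes} = multicall(AllConfiguredNodes, broker, get_score, 3000).
%% Replies is a list of {Node, Result}; BadNodes lists nodes that did not answer in time.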
Calling a single node is performed from a spawned process, which catches the exits that gen_server:call uses to signal errors.
call_node(Node, Name, Req, Timeout) ->
    Ref = make_ref(),
    Self = self(),
    spawn_link(fun() ->
        try
            Result = gen_server:call({Name, Node}, Req, Timeout),
            Self ! { Ref, { ok, { Node, Result } } }
        catch
            exit:Exit ->
                Self ! { Ref, { error, { 'EXIT', Exit } } }
        end
    end),
    Ref.
Bad nodes are calculated as those that did not respond within the Timeout.
calc_bad_nodes(Nodes, PosResults) ->
    { GoodNodes, _ } = lists:unzip(PosResults),
    [ BadNode || BadNode <- Nodes, not lists:member(BadNode, GoodNodes) ].
Results are collected by reading the mailbox within the Timeout.
read_all(ReadList, Timeout) ->
    Now = erlang:monotonic_time(millisecond),
    Deadline = Now + Timeout,
    read_all_impl(ReadList, Deadline, []).
The implementation reads until the Deadline passes.
read_all_impl([], _, Results) ->
    lists:reverse(Results);
read_all_impl([ W | Rest ], expired, Results) ->
    R = read(0, W),
    read_all_impl(Rest, expired, [ R | Results ]);
read_all_impl([ W | Rest ] = L, Deadline, Results) ->
    Now = erlang:monotonic_time(millisecond),
    case Deadline - Now of
        Timeout when Timeout > 0 ->
            R = read(Timeout, W),
            case R of
                { ok, _ } ->
                    read_all_impl(Rest, Deadline, [ R | Results ]);
                { error, { read_timeout, _ } } ->
                    read_all_impl(Rest, expired, [ R | Results ])
            end;
        Timeout when Timeout =< 0 ->
            read_all_impl(L, expired, Results)
    end.
A single read is just a receive from the mailbox with a Timeout.
read(Timeout, Ref) ->
    receive
        { Ref, Result } ->
            { ok, Result }
    after Timeout ->
        { error, { read_timeout, Timeout } }
    end.
Further improvements:
The rpc module spawns a separate process so that late answers do not leave garbage in the caller's mailbox; it would be useful to do the same in this multicall function (see the sketch below).
An infinity timeout could be handled in the obvious way.
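A minimal sketch of the first improvement, assuming the multicall/4 function defined above: run the whole collection inside a throwaway middleman process, so that replies arriving after the deadline land in the middleman's (already gone) mailbox instead of the caller's.
%% Hedged sketch, not part of the original implementation.
multicall_clean(Nodes, Name, Req, Timeout) ->
    Caller = self(),
    Ref = make_ref(),
    {Middleman, MonRef} = spawn_monitor(fun() ->
        %% All call_node/4 workers reply to this process, which is gone
        %% by the time any late answers arrive.
        Caller ! {Ref, multicall(Nodes, Name, Req, Timeout)}
    end),
    receive
        {Ref, Result} ->
            erlang:demonitor(MonRef, [flush]),
            Result;
        {'DOWN', MonRef, process, Middleman, Reason} ->
            erlang:error({multicall_failed, Reason})
    end.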

Related

What causes Erlang runtime error {undef,[{rand,uniform,[2],[]},...]}?

When executing an implementation of the Tarry distributed algorithm, a problem occurs that I don't know how to address: a crash containing the error {undef,[{rand,uniform,[2],[]}. My module is below:
-module(assign2_ex).
-compile(export_all).

%% Tarry's Algorithm with depth-first version
start() ->
    Out = get_lines([]),
    Nodes = createNodes(tl(Out)),
    Initial = lists:keyfind(hd(Out), 1, Nodes),
    InitialPid = element(2, Initial),
    InitialPid ! {{"main", self()}, []},
    receive
        {_, List} ->
            Names = lists:map(fun(X) -> element(1, X) end, List),
            String = lists:join(" ", lists:reverse(Names)),
            io:format("~s~n", [String])
    end.

get_lines(Lines) ->
    case io:get_line("") of
        %% End of file, reverse the input for correct order
        eof -> lists:reverse(Lines);
        Line ->
            %% Split each line on spaces and new lines
            Nodes = string:tokens(Line, " \n"),
            %% Check next line and add nodes to the result
            get_lines([Nodes | Lines])
    end.

%% Create Nodes
createNodes(List) ->
    NodeNames = [[lists:nth(1, Node)] || Node <- List],
    Neighbours = [tl(SubList) || SubList <- List],
    Pids = [spawn(assign2_ex, midFunction, [Name]) || Name <- NodeNames],
    NodeIDs = lists:zip(NodeNames, Pids),
    NeighbourIDs = [getNeighbours(N, NodeIDs) || N <- lists:zip(NodeIDs, Neighbours)],
    [Pid ! NeighbourPids || {{_, Pid}, NeighbourPids} <- NeighbourIDs],
    NodeIDs.

getNeighbours({{Name, PID}, NeighboursForOne}, NodeIDs) ->
    FuncMap = fun(Node) -> lists:keyfind([Node], 1, NodeIDs) end,
    {{Name, PID}, lists:map(FuncMap, NeighboursForOne)}.

midFunction(Node) ->
    receive
        Neighbours -> tarry_depth(Node, Neighbours, [])
    end.

%% Tarry's Algorithm with depth-first version
%% Doesn't visit the nodes which have been visited
tarry_depth(Name, Neighbours, OldParent) ->
    receive
        {Sender, Visited} ->
            Parent = case OldParent of [] -> [Sender]; _ -> OldParent end,
            Unvisited = lists:subtract(Neighbours, Visited),
            Next = case Unvisited of
                [] -> hd(Parent);
                _ -> lists:nth(rand:uniform(length(Unvisited)), Unvisited)
            end,
            Self = {Name, self()},
            element(2, Next) ! {Self, [Self | Visited]},
            tarry_depth(Name, Neighbours, Parent)
    end.
An undef error means that the program tried to call an undefined function. There are three reasons this can happen:
There is no module with that name (in this case rand), or it cannot be found and loaded for some reason
The module doesn't define a function with that name and arity. In this case, the function in question is uniform with one argument. (Note that in Erlang, functions with the same name but different numbers of arguments are considered separate functions.)
There is such a function, but it isn't exported.
You can check the first by typing l(rand). in an Erlang shell, and the second and third by running rand:module_info(exports)..
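For example, a quick shell session running those checks might look like this (illustrative output from a release that ships rand):
1> l(rand).
{module,rand}
2> lists:member({uniform,1}, rand:module_info(exports)).
true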
In this case, I suspect that the problem is that you're using an old version of Erlang/OTP. As noted in the documentation, the rand module was introduced in release 18.0.
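A quick, illustrative way to confirm which release is actually running (on a pre-18 release this returns "17" or older, which would explain the undef):
1> erlang:system_info(otp_release).
"18"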
It would be good if you could provide the version of Erlang/OTP you are using in future questions, as Erlang has changed a lot over the years. As far as I know there is no rand:uniform with arity 2, at least in recent Erlang versions, and that is why you are getting the undef error. For that case you could use crypto:rand_uniform/2, like crypto:rand_uniform(Low, High). Hope this helps :)
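If you went down that route, the random pick in tarry_depth would become something like the following (a hedged sketch; note that crypto:rand_uniform(Low, High) returns an N with Low =< N < High, so the upper bound must be length(Unvisited) + 1):
%% Sketch only; crypto:rand_uniform/2 was available on older OTP releases.
Next = lists:nth(crypto:rand_uniform(1, length(Unvisited) + 1), Unvisited)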

proper way to split / rejoin a Flux

Is the following the proper/idiomatic way to split a Flux into different processing paths and join them back? For the purpose of the question, events shouldn't be discarded, ordering is unimportant, and memory is unlimited.
Flux<Integer> beforeFork = Flux.range(1, 10);
ConnectableFlux<Integer> forkPoint = beforeFork
.publish()
;
Flux<String> slowPath = forkPoint
.filter(i -> i % 2 == 0)
.map(i -> "slow"+"_"+i)
.delayElements(Duration.ofSeconds(1))
;
Flux<String> fastPath = forkPoint
.filter(i -> i % 2 != 0)
.map(i -> "fast"+"_"+i)
;
// merge vs concat since we need to eagerly subscribe to
// the ConnectableFlux before the connect()
Flux.merge(fastPath, slowPath)
.map(s -> s.toUpperCase()) // pretend this is a more complex sequence
.subscribe(System.out::println)
;
forkPoint.connect();
I suppose I could also groupBy() then filter() on key() if the filter() function were slower than %.
NOTE that I do want the slowPath and fastPath to consume the same events from the beforeFork point, since beforeFork is slow to produce.
NOTE that I do have a more complex follow-up (i.e. change to range(1, 100); the behavior around the prefetch boundary is confusing to me), but it only makes sense if the above snippet is legal.
I believe that it is more common to see this written this way:
Flux<Integer> beforeFork = Flux.range(1, 10).publish().autoConnect(2);
Flux<String> slowPath = beforeFork
.filter(i -> i % 2 == 0)
.map(i -> "slow"+"_"+i)
.delayElements(Duration.ofSeconds(1));
Flux<String> fastPath = beforeFork
.filter(i -> i % 2 != 0)
.map(i -> "fast"+"_"+i);
Flux.merge(fastPath, slowPath)
.map(s -> s.toUpperCase())
.doOnNext(System.out::println)
.blockLast();
A summary of the changes:
autoConnect(N) - allows us to specify that the beforeFork publisher is connected to after N subscribers. If we know the number of expected paths upfront, we can specify this and prevent caching or duplicate execution of the publisher.
blockLast() - we block on the joining Flux itself. You might have noticed that if you ran your original code, only the fast results appeared to be logged. This is because nothing was actually waiting for the slow results to complete.
This is assuming that your original Publisher is finite with a fixed number of elements. Other changes would need to be made for something like Flux.interval or an ongoing stream.
For prefetch I can refer you to this question:
What does prefetch mean in Project Reactor?

How do I implement Failover within an Akka.NET cluster using the Akka.FSharp API?

How do I implement Failover within an Akka.NET cluster using the Akka.FSharp API?
I have the following cluster node that serves as a seed:
open Akka
open Akka.FSharp
open Akka.Cluster
open System
open System.Configuration
let systemName = "script-cluster"
let nodeName = sprintf "cluster-node-%s" Environment.MachineName
let akkaConfig = Configuration.parse("""akka {
actor {
provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
}
remote {
log-remote-lifecycle-events = off
helios.tcp {
hostname = "127.0.0.1"
port = 2551
}
}
cluster {
roles = ["seed"] # custom node roles
seed-nodes = ["akka.tcp://script-cluster#127.0.0.1:2551"]
# when node cannot be reached within 10 sec, mark is as down
auto-down-unreachable-after = 10s
}
}""")
let actorSystem = akkaConfig |> System.create systemName
let clusterHostActor =
spawn actorSystem nodeName (fun (inbox: Actor<ClusterEvent.IClusterDomainEvent>) ->
let cluster = Cluster.Get actorSystem
cluster.Subscribe(inbox.Self, [| typeof<ClusterEvent.IClusterDomainEvent> |])
inbox.Defer(fun () -> cluster.Unsubscribe(inbox.Self))
let rec messageLoop () =
actor {
let! message = inbox.Receive()
// TODO: Handle messages
match message with
| :? ClusterEvent.MemberJoined as event -> printfn "Member %s Joined the Cluster at %O" event.Member.Address.Host DateTime.Now
| :? ClusterEvent.MemberLeft as event -> printfn "Member %s Left the Cluster at %O" event.Member.Address.Host DateTime.Now
| other -> printfn "Cluster Received event %O at %O" other DateTime.Now
return! messageLoop()
}
messageLoop())
I then have an arbitrary node that could die:
open Akka
open Akka.FSharp
open Akka.Cluster
open System
open System.Configuration
let systemName = "script-cluster"
let nodeName = sprintf "cluster-node-%s" Environment.MachineName
let akkaConfig = Configuration.parse("""akka {
actor {
provider = "Akka.Cluster.ClusterActorRefProvider, Akka.Cluster"
}
remote {
log-remote-lifecycle-events = off
helios.tcp {
hostname = "127.0.0.1"
port = 0
}
}
cluster {
roles = ["role-a"] # custom node roles
seed-nodes = ["akka.tcp://script-cluster#127.0.0.1:2551"]
# when node cannot be reached within 10 sec, mark is as down
auto-down-unreachable-after = 10s
}
}""")
let actorSystem = akkaConfig |> System.create systemName
let listenerRef =
spawn actorSystem "temp2"
<| fun mailbox ->
let cluster = Cluster.Get (mailbox.Context.System)
cluster.Subscribe (mailbox.Self, [| typeof<ClusterEvent.IMemberEvent>|])
mailbox.Defer <| fun () -> cluster.Unsubscribe (mailbox.Self)
printfn "Created an actor on node [%A] with roles [%s]" cluster.SelfAddress (String.Join(",", cluster.SelfRoles))
let rec seed () =
actor {
let! (msg: obj) = mailbox.Receive ()
match msg with
| :? ClusterEvent.MemberRemoved as actor -> printfn "Actor removed %A" msg
| :? ClusterEvent.IMemberEvent -> printfn "Cluster event %A" msg
| _ -> printfn "Received: %A" msg
return! seed () }
seed ()
What is the recommended practice for implementing failover within a cluster?
Specifically, is there a code example of how a cluster should behave when one of its nodes is no longer available?
Should my cluster node spin-up a replacement or is there a different behavior?
Is there a configuration that automatically handles this that I can just set without having to write code?
What code would I have to implement and where?
First of all, it's a better idea to rely on the MemberUp and MemberRemoved events (both implement the ClusterEvent.IMemberEvent interface, so subscribe to that), as they mark the points when the node joining/leaving procedure has completed. The Joined and Left events don't necessarily ensure that the node is fully operable at the signaled point in time.
Regarding the failover scenario:
Automatic spinning-up of replacements can be done via the Akka.Cluster.Sharding plugin (read articles 1 and 2 to get more info about how it works). There is no equivalent for it in Akka.FSharp, but you may use the Akkling.Cluster.Sharding plugin instead: see the example code.
Another way is to create replacement actors up-front on each of the nodes. You can route messages to them by using clustered routers or distributed publish/subscribe. This is, however, more suited to stateless scenarios, where each actor is perfectly able to pick up the work of another actor at any time. It is a more general solution for distributing work among many actors living on many different nodes.
You may also set watchers over the processing actors. By using the monitor function, you can order your actor to watch over another actor (no matter where it lives). In case of a node failure, information about the dying actor will be sent to all of its watchers in the form of a Terminated message. This way you can implement your own logic, e.g. recreating the actor on another node. This is actually the most generic approach, as it doesn't use any extra plugins or configuration, but you need to describe the behavior yourself.

Global state and Async Workflows in F#

A common example used to illustrate asynchronous workflows in F# is retrieving multiple web pages in parallel. One such example is given at http://en.wikibooks.org/wiki/F_Sharp_Programming/Async_Workflows. The code is shown here in case the link changes in the future:
open System.Text.RegularExpressions
open System.Net

let download url =
    let webclient = new System.Net.WebClient()
    webclient.DownloadString(url : string)

let extractLinks html = Regex.Matches(html, @"http://\S+")

let downloadAndExtractLinks url =
    let links = (url |> download |> extractLinks)
    url, links.Count

let urls =
    [@"http://www.craigslist.com/";
     @"http://www.msn.com/";
     @"http://en.wikibooks.org/wiki/Main_Page";
     @"http://www.wordpress.com/";
     @"http://news.google.com/";]

let pmap f l =
    seq { for a in l -> async { return f a } }
    |> Async.Parallel
    |> Async.Run

let testSynchronous() = List.map downloadAndExtractLinks urls
let testAsynchronous() = pmap downloadAndExtractLinks urls

let time msg f =
    let stopwatch = System.Diagnostics.Stopwatch.StartNew()
    let temp = f()
    stopwatch.Stop()
    printfn "(%f ms) %s: %A" stopwatch.Elapsed.TotalMilliseconds msg temp

let main() =
    printfn "Start..."
    time "Synchronous" testSynchronous
    time "Asynchronous" testAsynchronous
    printfn "Done."

main()
What I would like to know is how one should handle changes in global state such as loss of a network connection? Is there an elegant way to do this?
One could check the state of the network prior to making the Async.Parallel call, but the state could change during execution. Assuming what one wanted to do was pause execution until the network was available again rather than fail, is there a functional way to do this?
First of all, there is one issue with the example - it uses Async.Parallel to run multiple operations in parallel, but the operations themselves are not implemented as asynchronous, so this does not avoid blocking an excessive number of threads in the thread pool.
Asynchronous. To make the code fully asynchronous, the download and downloadAndExtractLinks functions should be asynchronous too, so that you can use the AsyncDownloadString method of WebClient:
let asyncDownload url = async {
    let webclient = new System.Net.WebClient()
    return! webclient.AsyncDownloadString(System.Uri(url : string)) }

let asyncDownloadAndExtractLinks url = async {
    let! html = asyncDownload url
    let links = extractLinks html
    return url, links.Count }

let pmap f l =
    seq { for a in l -> async { return! f a } }
    |> Async.Parallel
    |> Async.RunSynchronously
Retrying. Now, to answer the question - there is no built-in mechanism for handling errors such as network failure, so you will need to implement this logic yourself. The right approach depends on your situation. One common approach is to retry the operation a certain number of times (e.g. 10) and throw the exception only if it still does not succeed. You can write this as a primitive that takes another asynchronous workflow:
let rec asyncRetry times op = async {
    try
        return! op
    with e ->
        if times <= 1 then return (raise e)
        else return! asyncRetry (times - 1) op }
Then you can change the main function to build a workflow that retries the download 10 times:
let testAsynchronous() =
    pmap (asyncDownloadAndExtractLinks >> asyncRetry 10) urls
Shared state. Another problem is that Async.Parallel will only return once all the downloads have completed (if there is one faulty web site, you will have to wait). If you want to show the results as they come back, you will need something more sophisticated.
One nice way to do this is to use an F# agent - create an agent that stores the results obtained so far and can handle two messages - one that adds a new result and another that returns the current state. Then you can start multiple async tasks that will send the results to the agent and, in a separate async workflow, you can use polling to check the current status (and e.g. update the user interface).
I wrote an MSDN series about agents and also two articles for developerFusion that have plenty of code samples with F# agents.

Erlang Dictionary fetch crash

EDIT
I have two modules and both cause badarg errors when fetching from the dictionary (the gen_server state).
Here is code from one module
init([ChunkSize, RunningCounter]) ->
    D0 = dict:new(),
    D1 = dict:store(chunkSize, ChunkSize, D0),
    D2 = dict:store(torrentUploadSpeed, 0, D1),
    D3 = dict:store(torrentDownloadSpeed, 0, D2),
    TorrentDownloadQueue = queue:new(),
    TorrentUploadQueue = queue:new(),
    D4 = dict:store(torrentDownloadQueue, TorrentDownloadQueue, D3),
    D5 = dict:store(torrentUploadQueue, TorrentUploadQueue, D4),
    D6 = dict:store(runningCounter, RunningCounter, D5),
    {ok, D6}.
I then call set_peer_state, which sets up a peer dictionary (one unique dictionary for each peer). That dictionary holds the peer's download and upload data (queue and speed), and I add it to the main gen_server state (a dictionary). So I have the main torrent data in the main dictionary, with a dictionary for each peer stored under the peer id.
set_peer_state(Id) ->
    gen_server:cast(?SERVER, {setPeerState, Id}).

handle_cast({setPeerState, Id}, State) ->
    io:format("In the Set Peer State ~p~n", [dict:fetch(runningCounter, State)]),
    Id0 = dict:new(),
    PeerDownloadQueue = queue:new(),
    PeerUploadQueue = queue:new(),
    Id1 = dict:store(peerDownloadQueue, PeerDownloadQueue, Id0),
    Id2 = dict:store(peerUploadQueue, PeerUploadQueue, Id1),
    Id3 = dict:store(peerDownloadSpeed, 0, Id2),
    Id4 = dict:store(peerUploadSpeed, 0, Id3),
    D = dict:store(Id, Id4, State),
    {noreply, D};
This seems to work so far. But when I try updating the torrent state it crashes when fetching from the dictionary.
handle_cast({updateTorrentDownloadState, Time}, State) ->
    % fetch the counter for the speed calculation and queue length
    RunningCounter = dict:fetch(runningCounter, State),
    % Fetch the Torrents download queue
    TorrentDownloadQueue = dict:fetch(torrentDownloadQueue, State),
    io:format("The fetched queue is ~p~n", [dict:fetch(torrentDownloadQueue, State)]),
    % Add the item to the queue (main torrent upload queue)
    TorrentDownloadQueue2 = queue:in(Time, TorrentDownloadQueue),
    % Get the lenght of the downloadQueue
    TorrentDownloadQueueLength = queue:len(TorrentDownloadQueue2),
    % If the queue is larger than the running counter remove item
    if
        TorrentDownloadQueueLength >= RunningCounter ->
            % Remove item from the queue
            TorrentDownloadQueue3 = queue:drop(TorrentDownloadQueue2),
            update_torrent_download(TorrentDownloadQueue3, State);
        TorrentDownloadQueueLength < RunningCounter ->
            update_torrent_download(TorrentDownloadQueue2, State)
    end;
And here are the two internal functions:
update_torrent_download(TorrentDownloadQueue, State) ->
    % Store the queue to the new torrent dict
    State2 = dict:store(torrentDownLoadQueue, TorrentDownloadQueue, State),
    Speed = calculate_speed(TorrentDownloadQueue, State2),
    State3 = dict:store(torrentDownloadSpeed, Speed, State2),
    {noreply, State3}.

calculate_speed(Queue, State) ->
    List = queue:to_list(Queue),
    Sum = lists:sum(List),
    Count = queue:len(Queue),
    ChunkSize = dict:fetch(chunkSize, State),
    Speed = (Count * ChunkSize) div Sum,
    {ok, Speed}.
Could it be that passing incorrect data to the setters crashes the server?
Or does the state get lost along the way?
This way of doing it seems messy, with all the new dicts to store in the old dict. Is there a better way to handle this data structure (the main torrent data plus the data for each peer)?
I know I could build the dictionaries from lists, but that was messing with my mind at the point when I was testing this module.
Thanks
Your problem is that State is not a dict.
1> dict:fetch(runningCounter, not_a_dict).
** exception error: {badrecord,dict}
     in function  dict:get_slot/2
     in call from dict:fetch/2
As YOUR ARGUMENT IS VALID suggested, your state, at that point in your code, is not a dict.
Answering your comments now.
The state of your gen_server is set up in the init function, where you return {ok, State}.
Every time your gen_server receives a message, handle_call or handle_cast is called (depending on whether the call is synchronous or asynchronous). Inside these functions, the State that you set up during the init phase can be updated and transformed into ANYTHING. You can't rely on the assumption that the "type" of your initial state stays the same during the whole execution of your server.
In other words, if you do something like:
init(_Args) -> {ok, dict:new()}.

handle_call(_Request, _From, _State) ->
    {reply, ok, completely_new_state}.
you've just "converted" your state from a dict into an atom and that (the atom) is what you will get in subsequent calls.
For this kind of error, the Erlang tracing tool dbg is quite helpful, allowing you to see how functions are called and which results are returned. Have a look at this short tutorial to learn how to use it:
http://aloiroberto.wordpress.com/2009/02/23/tracing-erlang-functions/
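For instance, a minimal dbg session that traces calls to dict:fetch/2 could look like this (illustrative only; it prints each call with its arguments, which quickly shows whether the second argument is still a dict):
1> dbg:tracer().
2> dbg:p(all, c).
3> dbg:tp(dict, fetch, 2, []).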
UPDATE:
What you should do:
init(_Args) -> {ok, dict:new()}.

handle_call(_Request, _From, State) ->
    NewState = edit_dict(State),
    {reply, ok, NewState}.
Where the edit_dict function is a function which takes a dict and returns an updated dict.
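For concreteness, a minimal sketch of what such an edit_dict function could look like against the dict state from the question (runningCounter comes from the init/1 shown above; the increment itself is purely hypothetical):
%% Hypothetical example: takes the current dict and returns a NEW dict,
%% which then becomes the new gen_server state.
edit_dict(State) ->
    Counter = dict:fetch(runningCounter, State),
    dict:store(runningCounter, Counter + 1, State).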
