I have two modules, and both cause badarg errors when fetching from the dictionary (the gen_server state).
Here is the code from one module:
init([ChunkSize, RunningCounter]) ->
    D0 = dict:new(),
    D1 = dict:store(chunkSize, ChunkSize, D0),
    D2 = dict:store(torrentUploadSpeed, 0, D1),
    D3 = dict:store(torrentDownloadSpeed, 0, D2),
    TorrentDownloadQueue = queue:new(),
    TorrentUploadQueue = queue:new(),
    D4 = dict:store(torrentDownloadQueue, TorrentDownloadQueue, D3),
    D5 = dict:store(torrentUploadQueue, TorrentUploadQueue, D4),
    D6 = dict:store(runningCounter, RunningCounter, D5),
    {ok, D6}.
I then call set_peer_state, which sets up a peer dictionary (one unique dict for each peer). That dictionary holds the download and upload queue and speed, and I add it to the main gen_server state (a dictionary). So I have the main torrent data in the main dictionary, with a dictionary for each peer stored under the peer id.
set_peer_state(Id) ->
    gen_server:cast(?SERVER, {setPeerState, Id}).

handle_cast({setPeerState, Id}, State) ->
    io:format("In the Set Peer State ~p~n", [dict:fetch(runningCounter, State)]),
    Id0 = dict:new(),
    PeerDownloadQueue = queue:new(),
    PeerUploadQueue = queue:new(),
    Id1 = dict:store(peerDownloadQueue, PeerDownloadQueue, Id0),
    Id2 = dict:store(peerUploadQueue, PeerUploadQueue, Id1),
    Id3 = dict:store(peerDownloadSpeed, 0, Id2),
    Id4 = dict:store(peerUploadSpeed, 0, Id3),
    D = dict:store(Id, Id4, State),
    {noreply, D};
This seems to work so far, but when I try updating the torrent state, it crashes when fetching from the dictionary:
handle_cast({updateTorrentDownloadState, Time}, State) ->
    % Fetch the counter for the speed calculation and queue length
    RunningCounter = dict:fetch(runningCounter, State),
    % Fetch the torrent's download queue
    TorrentDownloadQueue = dict:fetch(torrentDownloadQueue, State),
    io:format("The fetched queue is ~p~n", [dict:fetch(torrentDownloadQueue, State)]),
    % Add the item to the queue (main torrent download queue)
    TorrentDownloadQueue2 = queue:in(Time, TorrentDownloadQueue),
    % Get the length of the download queue
    TorrentDownloadQueueLength = queue:len(TorrentDownloadQueue2),
    % If the queue is larger than the running counter, remove an item
    if
        TorrentDownloadQueueLength >= RunningCounter ->
            % Remove an item from the queue
            TorrentDownloadQueue3 = queue:drop(TorrentDownloadQueue2),
            update_torrent_download(TorrentDownloadQueue3, State);
        TorrentDownloadQueueLength < RunningCounter ->
            update_torrent_download(TorrentDownloadQueue2, State)
    end;
and here are the two internal functions:
update_torrent_download(TorrentDownloadQueue, State) ->
    % Store the queue to the new torrent dict
    State2 = dict:store(torrentDownLoadQueue, TorrentDownloadQueue, State),
    Speed = calculate_speed(TorrentDownloadQueue, State2),
    State3 = dict:store(torrentDownloadSpeed, Speed, State2),
    {noreply, State3}.

calculate_speed(Queue, State) ->
    List = queue:to_list(Queue),
    Sum = lists:sum(List),
    Count = queue:len(Queue),
    ChunkSize = dict:fetch(chunkSize, State),
    Speed = (Count * ChunkSize) div Sum,
    {ok, Speed}.
Could it be that passing incorrect data to the setters crashes the server?
Or does the state get lost along the way?
This way of doing it seems messy, with all the new dicts to store in the old dict. Is there a better way to handle this data structure (the main torrent data plus the data for each peer)?
I know I could build the dictionaries from lists, but that was messing with my mind at the point I was testing this module.
Thanks
Your problem is that State is not a dict.
1> dict:fetch(runningCounter, not_a_dict).
** exception error: {badrecord,dict}
in function dict:get_slot/2
in call from dict:fetch/2
As YOUR ARGUMENT IS VALID suggested, your state, at that point in your code, is not a dict.
Answering your comments now.
The state of your gen_server is set up in the init function, where you return: {ok, State}.
Every time your gen_server receives a message, handle_call or handle_cast is called (depending on whether the call is synchronous or asynchronous). Inside these functions, the State that you set up during the init phase can be updated and transformed into ANYTHING. You can't rely on the assumption that the "type" of your initial state stays the same during the whole execution of your server.
In other words, if you do something like:
init(_Args) -> {ok, dict:new()}.

handle_call(_Request, _From, State) ->
    {reply, ok, completely_new_state}.
you've just "converted" your state from a dict into an atom and that (the atom) is what you will get in subsequent calls.
For this kind of error, the Erlang tracing tool dbg is quite helpful, allowing you to see how functions are called and which results are returned. Have a look at this short tutorial to learn how to use it:
http://aloiroberto.wordpress.com/2009/02/23/tracing-erlang-functions/
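For example, a minimal tracing session in the shell of the node running your gen_server could look like this (dict:fetch/2 is just the call from the question; trace whatever function crashes for you):

1> dbg:tracer().
2> dbg:p(all, c).
3> dbg:tpl(dict, fetch, x).

The x match-spec shortcut asks dbg to report return values and exceptions as well, so you can see exactly which dict:fetch/2 call fails and what its second argument (your State) looked like at that moment.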
UPDATE:
What you should do:
init(_Args) -> {ok, dict:new()}.

handle_call(_Request, _From, State) ->
    NewState = edit_dict(State),
    {reply, ok, NewState}.
where edit_dict is a function that takes a dict and returns an updated dict.
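A minimal sketch of that shape, assuming a hypothetical increment request and the runningCounter key from the question:

handle_call(increment, _From, State) ->
    NewState = edit_dict(State),
    {reply, ok, NewState}.

edit_dict(Dict) ->
    % Bump the counter; the new state is still a dict.
    dict:update_counter(runningCounter, 1, Dict).

The important point is that every clause keeps returning a dict as the new state, so later calls to dict:fetch/2 keep working.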
Related
When executing an implementation of the Tarry distributed algorithm, a problem occurs that I don't know how to address: a crash containing the error {undef,[{rand,uniform,[2],[]}. My module is below:
-module(assign2_ex).
-compile(export_all).

%% Tarry's Algorithm with depth-first version
start() ->
    Out = get_lines([]),
    Nodes = createNodes(tl(Out)),
    Initial = lists:keyfind(hd(Out), 1, Nodes),
    InitialPid = element(2, Initial),
    InitialPid ! {{"main", self()}, []},
    receive
        {_, List} ->
            Names = lists:map(fun(X) -> element(1, X) end, List),
            String = lists:join(" ", lists:reverse(Names)),
            io:format("~s~n", [String])
    end.

get_lines(Lines) ->
    case io:get_line("") of
        %% End of file, reverse the input for correct order
        eof -> lists:reverse(Lines);
        Line ->
            %% Split each line on spaces and new lines
            Nodes = string:tokens(Line, " \n"),
            %% Check next line and add nodes to the result
            get_lines([Nodes | Lines])
    end.

%% Create Nodes
createNodes(List) ->
    NodeNames = [[lists:nth(1, Node)] || Node <- List],
    Neighbours = [tl(SubList) || SubList <- List],
    Pids = [spawn(assign2_ex, midFunction, [Name]) || Name <- NodeNames],
    NodeIDs = lists:zip(NodeNames, Pids),
    NeighbourIDs = [getNeighbours(N, NodeIDs) || N <- lists:zip(NodeIDs, Neighbours)],
    [Pid ! NeighbourPids || {{_, Pid}, NeighbourPids} <- NeighbourIDs],
    NodeIDs.

getNeighbours({{Name, PID}, NeighboursForOne}, NodeIDs) ->
    FuncMap = fun(Node) -> lists:keyfind([Node], 1, NodeIDs) end,
    {{Name, PID}, lists:map(FuncMap, NeighboursForOne)}.

midFunction(Node) ->
    receive
        Neighbours -> tarry_depth(Node, Neighbours, [])
    end.

%% Tarry's Algorithm with depth-first version
%% Doesn't visit the nodes which have been visited
tarry_depth(Name, Neighbours, OldParent) ->
    receive
        {Sender, Visited} ->
            Parent = case OldParent of [] -> [Sender]; _ -> OldParent end,
            Unvisited = lists:subtract(Neighbours, Visited),
            Next = case Unvisited of
                       [] -> hd(Parent);
                       _ -> lists:nth(rand:uniform(length(Unvisited)), Unvisited)
                   end,
            Self = {Name, self()},
            element(2, Next) ! {Self, [Self | Visited]},
            tarry_depth(Name, Neighbours, Parent)
    end.
An undef error means that the program tried to call an undefined function. There are three reasons this can happen:
There is no module with that name (in this case rand), or it cannot be found and loaded for some reason
The module doesn't define a function with that name and arity. In this case, the function in question is uniform with one argument. (Note that in Erlang, functions with the same name but different numbers of arguments are considered separate functions.)
There is such a function, but it isn't exported.
You can check the first by typing l(rand). in an Erlang shell, and the second and third by running rand:module_info(exports).
In this case, I suspect that the problem is that you're using an old version of Erlang/OTP. As noted in the documentation, the rand module was introduced in release 18.0.
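For instance, you can check all of this from the shell (the calls below are standard; the output depends on your install):

1> erlang:system_info(otp_release).
2> code:which(rand).
3> lists:keymember(uniform, 1, rand:module_info(exports)).

code:which/1 returns the atom non_existing when the module cannot be found, and the last call (which only works if the module does load) tells you whether any uniform function is exported at all.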
It would be good if you provided the version of Erlang/OTP you are using in future questions, as Erlang has changed a lot over the years. As far as I know there is no rand:uniform with arity 2, at least in recent Erlang versions, and that is why you are getting the undef error; in that case you could use crypto:rand_uniform/2, as in crypto:rand_uniform(Low, High). Hope this helps :)
Now I'm playing with gen_server.
I have two modules: one is the gen_server module, the second is the logic module,
and I would like to send a message to a PID through gen_server:call.
Here is a snippet of the code:
lookup_by_date(FromTime, ToTime) ->
    gen_server:call({global, ?MODULE}, {lookup_by_date, FromTime, ToTime}).
Here is the handle_call function:
handle_call({lookup_by_date, FromTime, ToTime}, _From, _State) ->
    FromSec = calendar:datetime_to_gregorian_seconds(FromTime),
    ToSec = calendar:datetime_to_gregorian_seconds(ToTime),
    Pid = spawn(fun() -> logic:handler() end),
    {reply, Pid ! {lookup_by_date, FromSec, ToSec}, _State};
and the logic module code:
lookup_by_date(FromTime, ToTime) ->
    lookup_by_date(FromTime, ToTime, ets:first(auth), []).

lookup_by_date(_FromTime, _ToTime, '$end_of_table', Acc) ->
    {reply, Acc, ok};
lookup_by_date(FromTime, ToTime, Key, Acc) ->
    case ets:lookup(auth, Key) of
        [{Login, Pass, TTL, Unix, Unix2}] ->
            F = calendar:datetime_to_gregorian_seconds(Unix2),
            T = calendar:datetime_to_gregorian_seconds(Unix2),
            if
                F >= FromTime, T =< ToTime ->
                    NewAcc = [{Login, Pass, TTL, Unix, Unix2} | Acc],
                    N = ets:next(auth, Key),
                    lookup_by_date(FromTime, ToTime, N, NewAcc);
                true ->
                    N = ets:next(auth, Key),
                    lookup_by_date(FromTime, ToTime, N, Acc)
            end
    end.

handler() ->
    receive
        {lookup_by_date, FromTime, ToTime} ->
            lookup_by_date(FromTime, ToTime),
            handler();
        Other ->
            io:format("Error message for ~p~n", [Other]),
            handler()
    end.
but I am getting the following (actually not an error):
2> c(cache_server).
{ok,cache_server}
3> c(logic).
{ok,logic}
4> cache_server:start([{ttl, 15000}]).
{ok,<0.73.0>}
5> cache_server:insert(test, root, 15000).
{auth,test,root,15000,1484309726435,
{{2017,1,13},{14,15,11}}}
6> cache_server:lookup_by_date({{2017,1,13},{14,15,11}},{{2017,1,13},{14,15,11}}).
{lookup_by_date,63651536111,63651536111}
I am receiving data from {reply, Pid ! {lookup_by_date, FromSec, ToSec}, _State};
but I don't receive data from the logic:lookup_by_date function.
Is there any way you could show me the right direction? I'm stuck a little bit.
Thanks...
In your code, the reply to the gen_server call is:
Pid ! {lookup_by_date, FromSec, ToSec}
In Erlang, messages are asynchronous: they are just sent to the process, so this code doesn't wait for a response. The send expression simply returns, immediately, the message you are sending, which is why you get the reply {lookup_by_date, FromSec, ToSec}.
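You can see this in any Erlang shell; the send operator evaluates to the message itself:

1> self() ! {lookup_by_date, 1, 2}.
{lookup_by_date,1,2}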
In your case you don't have to spawn a process, but simply call the lookup_by_date function:
handle_call({lookup_by_date, FromTime, ToTime}, _From, _State) ->
    FromSec = calendar:datetime_to_gregorian_seconds(FromTime),
    ToSec = calendar:datetime_to_gregorian_seconds(ToTime),
    {reply, logic:lookup_by_date(FromSec, ToSec), _State};
Note: your gen_server doesn't use the result and its state is not modified by the request, so you could also call the function lookup_by_date directly and include the time conversion in it.
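A minimal sketch of that variant (it assumes you also change lookup_by_date/4 to return the plain accumulator instead of {reply, Acc, ok}):

%% in the gen_server module:
handle_call({lookup_by_date, FromTime, ToTime}, _From, State) ->
    {reply, logic:lookup_by_date(FromTime, ToTime), State};

%% in logic.erl, do the time conversion here and return the list directly:
lookup_by_date(FromTime, ToTime) ->
    FromSec = calendar:datetime_to_gregorian_seconds(FromTime),
    ToSec = calendar:datetime_to_gregorian_seconds(ToTime),
    lookup_by_date(FromSec, ToSec, ets:first(auth), []).

lookup_by_date(_FromSec, _ToSec, '$end_of_table', Acc) ->
    Acc;
%% ... the remaining clause stays as in the question, recursing with FromSec/ToSec.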
I have a map organized as follows. The key is a simple term, let's say an integer, but the value is a complex tuple {BB, CC, DD}. What is the best way to find the minimum CC in the map? So far I have the following:
-module(test).
-author("andre").

%% API
-export([init/0]).

init() ->
    TheMap = build(maps:new(), 20),
    io:format("Map: ~p~n", [TheMap]),
    AKey = hd(maps:keys(TheMap)),
    AValue = maps:get(AKey, TheMap),
    maps:fold(fun my_min/3, {AKey, AValue}, TheMap).

build(MyMap, Count) when Count == 0 ->
    MyMap;
build(MyMap, Count) ->
    NewMap = maps:put(Count, {random:uniform(100), random:uniform(100), random:uniform(100)}, MyMap),
    build(NewMap, Count - 1).

my_min(Key, {A, B, C}, {MinKey, {AA, BB, CC}}) ->
    if
        B < BB -> {Key, {A, B, C}};
        B >= BB -> {MinKey, {AA, BB, CC}}
    end.
My map is small, so I am not too worried about using AKey and AValue to find the initial value for the fold, but I was wondering if there is a better way, or a better data structure.
--
Thanks.
What you have is close to a good solution, but it can be improved. There's no need to dig out the first key and value to use as the initial value for the fold, since you can just pass an artificial value instead and make your fold function deal with it. Also, you can improve your use of pattern matching in function heads. Lastly, use start instead of init, since that makes it easier to invoke when calling erl from the command line.
Here's an improved version:
-module(test).
-author("andre").

%% API
-export([start/0]).

start() ->
    TheMap = build(maps:new(), 20),
    io:format("Map: ~p~n", [TheMap]),
    maps:fold(fun my_min/3, {undefined, undefined}, TheMap).

build(MyMap, 0) ->
    MyMap;
build(MyMap, Count) ->
    NewMap = maps:put(Count, {random:uniform(100), random:uniform(100), random:uniform(100)}, MyMap),
    build(NewMap, Count - 1).

my_min(Key, Value, {undefined, undefined}) ->
    {Key, Value};
my_min(Key, {_, B, _} = Value, {_, {_, BB, _}}) when B < BB ->
    {Key, Value};
my_min(_Key, _Value, Acc) ->
    Acc.
The my_min/3 fold function has three clauses. The first matches the special start value {undefined, undefined} and returns as the new accumulator value whatever {Key, Value} it was passed. The benefit of this is not only that you avoid special processing before starting the fold, but also that if the map is empty, you'll get the special value {undefined, undefined} as the result and you can handle it accordingly. The second clause uses a guard to check if B of the value is less than the BB value in the fold accumulator, and if it is, return {Key, Value} as the new accumulator value. The final clause just returns the existing accumulator value, since this clause is called only for values greater than or equal to that in the existing accumulator.
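As an example of that last point, a (hypothetical) caller could pattern match on the special value, keeping in mind that the values here are {BB, CC, DD} tuples:

case maps:fold(fun my_min/3, {undefined, undefined}, TheMap) of
    {undefined, undefined} ->
        %% The map was empty, no minimum exists.
        empty_map;
    {MinKey, {_BB, MinCC, _DD}} ->
        {MinKey, MinCC}
end.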
You might also look into using a simple list of key/value tuples, since for a small number of elements it might outperform a map. If your measurements indicate you should use a list, a similar fold would work for it as well.
-module(test).
-author("andre").

%% API
-export([init/0]).

init() ->
    TheMap = build(maps:new(), 24),
    io:format("Map: ~p~n", [TheMap]),
    List = maps:to_list(TheMap),
    io:format("List: ~p~n", [List]),
    Fun = fun({_, {_, V1, _}} = Element, {_, {_, V2, _}}) when V1 < V2 ->
                  Element;
             (_, Res) ->
                  Res
          end,
    Res = lists:foldl(Fun, hd(List), tl(List)),
    io:format("Res: ~p~n", [Res]).

build(MyMap, Count) when Count == 0 ->
    MyMap;
build(MyMap, Count) ->
    NewMap = maps:put(Count, {random:uniform(100), random:uniform(100), random:uniform(100)}, MyMap),
    build(NewMap, Count - 1).
You can use maps:to_list/1 to convert the map to a list, then you can use lists:foldl/3 to calculate the minimum value.
I am currently playing around with minimal web servers, like Cowboy. I want to pass a number in the URL, load lines of a file, sort these lines and print the element in the middle to test IO and sorting.
So the code takes a path like /123, makes a padded "00123" out of the number, loads the file "input00123.txt", sorts its content, and then returns something like "input00123.txt 0.50000".
At the same time I have a test tool which makes 50 simultaneous requests, of which only 2 get answered; the rest time out.
My handler looks like the following:
-module(toppage_handler).

-export([init/3]).
-export([handle/2]).
-export([terminate/3]).

init(_Transport, Req, []) ->
    {ok, Req, undefined}.

readlines(FileName) ->
    {ok, Device} = file:open(FileName, [read]),
    get_all_lines(Device, []).

get_all_lines(Device, Accum) ->
    case io:get_line(Device, "") of
        eof -> file:close(Device), Accum;
        Line -> get_all_lines(Device, Accum ++ [Line])
    end.

handle(Req, State) ->
    {PathBin, _} = cowboy_req:path(Req),
    case PathBin of
        <<"/">> ->
            Output = <<"Hello, world!">>;
        _ ->
            PathNum = string:substr(binary_to_list(PathBin), 2),
            Num = string:right(PathNum, 5, $0),
            Filename = string:concat("input", string:concat(Num, ".txt")),
            Filepath = string:concat("../data/", Filename),
            SortedLines = lists:sort(readlines(Filepath)),
            MiddleIndex = erlang:trunc(length(SortedLines) / 2),
            MiddleElement = lists:nth(MiddleIndex, SortedLines),
            Output = iolist_to_binary(io_lib:format("~s\t~s", [Filename, MiddleElement]))
    end,
    {ok, ReqRes} = cowboy_req:reply(200, [], Output, Req),
    {ok, ReqRes, State}.

terminate(_Reason, _Req, _State) ->
    ok.
I am running this on Windows to compare it with .NET. Is there anything I can do to make this more performant, like running the sorting/IO in separate processes, or how else can I improve it? Running it under Cygwin didn't change the result a lot; I got about 5-6 requests answered.
Thanks in advance!
The most glaring issue: get_all_lines is O(N^2) because list concatenation (++) is O(N), and the Erlang list type is a singly linked list. The typical approach here is to use the "cons" operator, prepending to the head of the list, and reverse the accumulator at the end:
get_all_lines(Device, Accum) ->
    case io:get_line(Device, "") of
        eof -> file:close(Device), lists:reverse(Accum);
        Line -> get_all_lines(Device, [Line | Accum])
    end.
Pass the binary flag to file:open to use binaries instead of strings (which are just lists of characters in Erlang); they are much more memory- and CPU-friendly.
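A minimal sketch of the same loop in binary mode (untested against your handler; file:read_line/1 is used here because io:get_line/2 does not work with raw files):

readlines(FileName) ->
    {ok, Device} = file:open(FileName, [read, binary, raw, {read_ahead, 65536}]),
    try
        get_all_lines(Device, [])
    after
        file:close(Device)
    end.

get_all_lines(Device, Accum) ->
    case file:read_line(Device) of
        eof -> lists:reverse(Accum);
        {ok, Line} -> get_all_lines(Device, [Line | Accum])
    end.

lists:sort/1 and lists:nth/2 work on a list of binaries just as they do on a list of strings, so the rest of the handler shouldn't need to change.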
One process listens to a server on an async socket, and on each message {tcp, Socket, Bin} it takes its buffer and does:
Data = list_to_binary([Buffer, Bin]),
{next_state, 'READY', State#state{buffer = Data}}.
On some events it flushes the buffer:
'READY'({flush}, #state{buffer = Buffer} = State) ->
    {reply, {Buffer}, 'READY', State#state{buffer = <<>>}}.
Is this expensive? Would it be better to just build a list and do list_to_binary(lists:reverse()) once on flush?
Your first method appears to be much slower than your second method (by a factor of about 3000 on my platform):
-module(test).
-export([test/0, performance_test/4]).

-define(ITERATIONS, 100000).
-define(NEW_DATA, <<1, 2, 3, 4, 5, 6, 7, 8, 9, 10>>).

accumulate_1(AccumulatedData, NewData) ->
    list_to_binary([AccumulatedData, NewData]).

extract_1(AccumulatedData) ->
    AccumulatedData.

accumulate_2(AccumulatedData, NewData) ->
    [NewData | AccumulatedData].

extract_2(AccumulatedData) ->
    list_to_binary(lists:reverse(AccumulatedData)).

performance_test(AccumulateFun, ExtractFun) ->
    {Time, _Result} = timer:tc(test, performance_test, [AccumulateFun, ExtractFun, [], ?ITERATIONS]),
    io:format("Test run: ~p microseconds~n", [Time]).

performance_test(_AccumulateFun, ExtractFun, AccumulatedData, _MoreIterations = 0) ->
    ExtractFun(AccumulatedData);
performance_test(AccumulateFun, ExtractFun, AccumulatedData, MoreIterations) ->
    NewAccumulatedData = AccumulateFun(AccumulatedData, ?NEW_DATA),
    performance_test(AccumulateFun, ExtractFun, NewAccumulatedData, MoreIterations - 1).

test() ->
    performance_test(fun accumulate_1/2, fun extract_1/1),
    performance_test(fun accumulate_2/2, fun extract_2/1),
    ok.
Output:
7> test:test().
Test run: 57204314 microseconds
Test run: 18996 microseconds
In current releases, the handling of binaries by the emulator has been significantly improved, so now you could also take the simpler path and generate the binary chunk by chunk:
    State#state{buffer = <<Buffer/binary, Bin/binary>>}
I didn't test it against the other approaches, but it shouldn't be bad.
Performance between different implementations will also likely depend on how many times you are performing this on the same buffer and how big each chunk is.
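If you want to measure the binary-append approach with the harness above, a minimal sketch is to add a third accumulate/extract pair (accumulate_3, extract_3 and test_3 are just illustrative names) and time it with an empty binary as the initial accumulator instead of []:

accumulate_3(AccumulatedData, NewData) ->
    %% Append directly to the binary buffer.
    <<AccumulatedData/binary, NewData/binary>>.

extract_3(AccumulatedData) ->
    AccumulatedData.

test_3() ->
    {Time, _Result} = timer:tc(test, performance_test,
                               [fun accumulate_3/2, fun extract_3/1, <<>>, ?ITERATIONS]),
    io:format("Test run: ~p microseconds~n", [Time]).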