Querying mnesia Fragmented Tables using QLC returns wrong results - mnesia

I am Josh from Uganda. I created a mnesia fragmented table (64 fragments) and managed to populate it with 9948723 records. Each fragment is of type disc_copies, with two replicas.
Searching for a record with QLC (query list comprehensions) was too slow and returned inaccurate results.
I found that the overhead comes from QLC using mnesia's select, which traverses the entire table to match records. So I tried something else, shown below.
-define(ACCESS_MOD, mnesia_frag).
-define(DEFAULT_CONTEXT, transaction).
-define(NULL, '_').

-record(address, {tel, zip_code, email}).
-record(person, {name, sex, age, address = #address{}}).

match() ->
    fun(Spec) -> mnesia:match_object(Spec) end.

match_object(Pattern) ->
    Match = match(),
    mnesia:activity(?DEFAULT_CONTEXT, Match, [Pattern], ?ACCESS_MOD).
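For instance, a call that matches on the nested telephone field looks like this (the value 1234 is illustrative; every field left as '_' is a wildcard):
match_object(#person{address = #address{tel = 1234, _ = '_'}, _ = '_'}).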
Trying this gave me good results. But I found that I have to dynamically build a pattern for every search my stored procedures may make.
I decided to go through the trouble of doing this, so I wrote functions that dynamically build wildcard patterns for my records depending on which parameter is to be searched.
%% This gives me the default pattern for all searches ::= {person,'_','_','_','_'}
pattern(Record_name) ->
    N = length(my_record_info(Record_name)) + 1,
    erlang:setelement(1, erlang:make_tuple(N, ?NULL), Record_name).

%% This finds the position of the provided field and places the value in that
%% position while keeping '_' in the other positions.
%% The caller can apply this function repeatedly until
%% it has built the full search pattern of interest.
pattern({Field, Value}, Pattern_sofar) ->
    N = position(Field, my_record_info(element(1, Pattern_sofar))),
    case N of
        -1 -> Pattern_sofar;
        Int when Int >= 1 -> erlang:setelement(N + 1, Pattern_sofar, Value);
        _ -> Pattern_sofar
    end.

my_record_info(Record_name) ->
    case Record_name of
        staff_dynamic -> record_info(fields, staff_dynamic);
        person -> record_info(fields, person);
        _ -> []
    end.

%% These locate the position of an element in the list
%% returned by "record_info(fields, person)".
position(_, []) -> -1;
position(Value, List) ->
    find(lists:member(Value, List), Value, List, 1).

find(false, _, _, _) -> -1;
find(true, V, [V | _], N) -> N;
find(true, V, [_ | X], N) -> find(V, X, N + 1).

find(V, [V | _], N) -> N;
find(V, [_ | X], N) -> find(V, X, N + 1).
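For example, a search on the age field would be built and run like this (hypothetical usage of the helpers above, with the #person{} record defined earlier):
P0 = pattern(person),           %% {person,'_','_','_','_'}
P1 = pattern({age, 33}, P0),    %% {person,'_','_',33,'_'}
match_object(P1).               %% runs the match over all fragments in a transaction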
This was working very well, though it was computationally intensive.
It keeps working even after changing a record definition, since it picks up the new record info at compile time.
The problem is that when I start even 25 of these processes on a 3.0 GHz Pentium 4 processor running Windows XP, the node hangs and takes a long time to return results.
If I am to use QLC on these fragments and get accurate results, I have to specify which fragment to search in, like this:
find_person_by_tel(Tel) ->
    select(qlc:q([X || X <- mnesia:table(Frag), (X#person.address)#address.tel == Tel])).

select(Q) ->
    case ?transact(fun() -> qlc:e(Q) end) of
        {atomic, Val} -> Val;
        {aborted, _} = Error -> report_mnesia_event(Error)
    end.
QLC was returning [] when I searched for something, yet with match_object/1 I get accurate results. I found that using match expressions can help:
mnesia:table(Tab, Props).
where Props is a list of options that defines the match specification, the chunk size of returned values, etc.
I ran into a problem when I tried building match expressions dynamically.
mnesia:read/1 and mnesia:read/2 require that you have the primary key.
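For reference, a call that passes an explicit match specification through those options might look roughly like this (a sketch only; the option values are illustrative, and the match specification would normally be built with ets:fun2ms/1):
find_person_by_tel(Tel) ->
    MatchSpec = [{#person{address = #address{tel = Tel, _ = '_'}, _ = '_'}, [], ['$_']}],
    Q = qlc:q([X || X <- mnesia:table(person, [{traverse, {select, MatchSpec}},
                                               {n_objects, 100},
                                               {lock, read}])]),
    mnesia:activity(transaction, fun() -> qlc:e(Q) end, mnesia_frag).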
Now I am asking myself: how can I efficiently use QLC to search for records in a large fragmented table? Please help.
I know that using the tuple representation of records makes code hard to upgrade. This is why
I hate using mnesia:select/1 and mnesia:match_object/1 and want to stick to QLC. Yet QLC gives me wrong results when querying a mnesia table of 64 fragments, even on the same node.
Has anyone ever used QLC to query a fragmented table? Please help.

Do you invoke the QLC query inside the activity context?
tfn_match(Id) ->
    Search = #person{address = #address{tel = Id, _ = '_'}, _ = '_'},
    trans(fun() -> mnesia:match_object(Search) end).

tfn_qlc(Id) ->
    Q = qlc:q([X || X <- mnesia:table(person), (X#person.address)#address.tel == Id]),
    trans(fun() -> qlc:e(Q) end).

trans(Fun) ->
    try
        Res = mnesia:activity(transaction, Fun, mnesia_frag),
        {atomic, Res}
    catch
        exit:Error -> {aborted, Error}
    end.

Related

What is the Erlang way to do stream manipulations?

Suppose I wanted to do something like:
dict
.values()
.map(fun scrub/1)
.flatMap(fun split/1)
.groupBy(fun keyFun/1, fun count/1)
.to_dict()
What is the most elegant way to achieve this in Erlang?
There is no direct, easy way of doing that. All attempts I have seen looked even worse than straightforward composition. If you look at the majority of open source projects in Erlang, you will find that they use plain composition. Reusing your example:
to_dict(
    groupBy(fun keyFun/1, fun count/1,
        flatMap(fun split/1,
            map(fun scrub/1,
                values(dict))))).
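If you want something that reads more like a pipeline, one small trick (a sketch, not a standard library function) is to fold a list of funs over the starting value:
%% Threads Value through each fun in turn, left to right.
pipe(Value, Funs) ->
    lists:foldl(fun(F, Acc) -> F(Acc) end, Value, Funs).

%% Hypothetical usage, with scrub/1 and split/1 from the question and Dict a map:
pipe(Dict, [fun maps:values/1,
            fun(Vs) -> lists:map(fun scrub/1, Vs) end,
            fun(Vs) -> lists:flatmap(fun split/1, Vs) end]).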
This isn't a construct that's natural in Erlang. If you have a couple of functions, regular composition is what I'd use:
lists:flatten(lists:map(fun (A) ->
                            do_stuff(A)
                        end,
                        generate_list())).
For a longer series of operations, intermediary variables:
Dict = #{hello => world, ...},
Values = maps:values(Dict),
ScrubbedValues = lists:map(fun scrub/1, Values),
SplitValues = lists:flatten(lists:map(fun split/1, ScrubbedValues)),
GroupedValues = basil_lists:group_by(fun keyFun/1, fun count/1, SplitValues),
Dict2 = maps:from_list(GroupedValues).
That's how it'd look if you wanted all of those operations grouped in one shot together.
However, I'd more likely write this in a different way:
-spec remap_values(map()) -> map().
remap_values(Map) ->
    map_values(maps:values(Map)).

-spec map_values(list()) -> map().
map_values(Values) ->
    map_values(Values, [], []).

-spec map_values(list(), list(), list()) -> map().
map_values([], OutList, OutGroup) ->
    %% Base case: transform the accumulated values and groups into a map
    Grouped = lists:zip(OutGroup, OutList),
    lists:foldl(fun ({Group, Element}, Acc) ->
                        Existing = maps:get(Group, Acc, []),
                        Acc#{Group => [Element | Existing]}
                end,
                #{},
                Grouped);
map_values([First | Rest], OutList, OutGroup) ->
    %% Recursive case: process the first element and categorize the results
    Processed = split(scrub(First)),
    Categories = lists:map(fun categorize/1, Processed),
    map_values(Rest, OutList ++ Processed, OutGroup ++ Categories).
The actual correct implementation depends a lot on how the code's going to be run -- what I've written here is pretty simple, but might not perform well on large amounts of data. If you're actually looking to process an endless stream of data you'll need to write that yourself (though you may find Gen Servers to be a very useful framework for doing so).

this clause cannot match because of different types/sizes

I tried to implement binary search in Erlang:
binary_search(X , List) ->
    case {is_number(x) , is_list(List)} of
        {false , false} -> {error};
        {false , true} -> {error} ;
        {true , false} -> {error} ;
        {true , true} ->
            Length = length(List) ,
            case Length of
                0 -> {false};
                1 -> case lists:member(X , List) of
                         true -> {true};
                         false -> {false}
                     end ;
                _ ->
                    Middle = (Length + 1) div 2 ,
                    case X >= Middle of
                        true -> binary_search(X , lists:sublist(List , Middle , Length));
                        false -> binary_search(X , lists:sublist(List , 1 , Middle))
                    end
            end
    end .
However, when I try to compile it, I get the following message, "this clause cannot match because of different types/sizes", on these two lines:
{true , false} -> {error} ;
{true , true} ->
is_number(x) will always return false since you made a typo: x instead of X, an atom instead of a variable.
BTW, I don't know what you are experiencing, but the whole code can be written as:
binary_search(X , [_|_] = List) when is_number(X) ->
    {lists:member(X, List)};
binary_search(_, _) -> {error}.
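Exported from some module m (a hypothetical name), this behaves as follows:
1> m:binary_search(3, [1,2,3]).
{true}
2> m:binary_search(9, [1,2,3]).
{false}
3> m:binary_search(foo, [1,2,3]).
{error}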
Context: the OP's post appears to be a learning example -- an attempt to understand binary search in Erlang -- and is treated as one below (hence the calls to io:format/2 on each iteration of the inner function). In production, lists:member/2 should be used, as noted by Steve Vinoski in a comment below, or lists:member/2 guarded by a function head as in Pascal's answer. What follows is a manual implementation of binary search.
Pascal is correct about the typo, but this code has more fundamental problems. Instead of just finding the typo, let's see if we can obviate the need for this nested case checking entirely.
(The code as written above won't work anyway, because X should not be compared with an index, but rather with the value held at that index, so Middle will likely never match X. There is another issue as well: you don't cover all the base cases, the cases in which you should stop recursing. The inner function below covers them all up front as matches in the function heads, which makes it more obvious how the search works. Note the Middle + 1 when X > Value, by the way; contemplate why this is necessary.)
Two main notes on Erlang style
First: If you receive the wrong sort of data, just crash, don't return an error. With that in mind, consider using a guard.
Second: If you find yourself doing lots of cases, you can usually simplify your life by making them named functions. This gives you two advantages:
A much better crash report than you will get within nested case expressions.
A named, pure function can be tested and even formally verified rather easily if it is small enough -- which is also pretty cool. (As a side note, the religion of testing tests my patience and sanity at times, but when you have pure functions you actually can test at least those parts of your program -- so distilling out as much of this sort of thing as possible is a big win.)
Below I do both, and this should obviate the issue you ran into as well as make things a bit easier to read/sort through mentally:
%% Don't return errors, just crash.
%% Only check the data on entry.
%% Guarantee the data is sorted, as this is fundamental to binary search.
binary_search(X, List)
        when is_number(X),
             is_list(List) ->
    bs(X, lists:sort(List)).

%% Get all of our obvious base cases out of the way as matches.
%% Note the lack of type checking; it's already been done.
bs(_, []) -> false;
bs(X, [X]) -> true;
bs(_, [_]) -> false;
bs(X, List) ->
    ok = io:format("bs(~p, ~p)~n", [X, List]),
    Length = length(List),
    Middle = (Length + 1) div 2,
    Value = lists:nth(Middle, List),
    % This is one of those rare times I find an 'if' to be more
    % clear in meaning than a 'case'.
    if
        X == Value -> true;
        X > Value  -> bs(X, lists:sublist(List, Middle + 1, Length));
        X < Value  -> bs(X, lists:sublist(List, 1, Middle))
    end.
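Assuming these functions live in a module named bsearch (a hypothetical name), a quick session looks like this, with the io:format/2 trace showing how the search narrows:
1> bsearch:binary_search(4, [5,3,4,1,2]).
bs(4, [1,2,3,4,5])
bs(4, [4,5])
true
2> bsearch:binary_search(7, [5,3,4,1,2]).
bs(7, [1,2,3,4,5])
bs(7, [4,5])
false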

Maps, filters, folds and more? Do we really need these in Erlang?

Maps, filters, folds and more : http://learnyousomeerlang.com/higher-order-functions#maps-filters-folds
The more I read, the more confused I get.
Can anybody help simplify these concepts?
I am not able to understand the significance of these concepts. In what use cases will they be needed?
I think it is mainly because of the syntax; I find it difficult to follow the flow.
The concepts of mapping, filtering and folding prevalent in functional programming actually are simplifications - or stereotypes - of different operations you perform on collections of data. In imperative languages you usually do these operations with loops.
Let's take map as an example. These three loops all take a sequence of elements and return a sequence of squares of the elements:
// C - a lot of bookkeeping
int data[] = {1,2,3,4,5};
int squares_1_to_5[sizeof(data) / sizeof(data[0])];
for (int i = 0; i < sizeof(data) / sizeof(data[0]); ++i)
    squares_1_to_5[i] = data[i] * data[i];
// C++11 - less bookkeeping, still not obvious
std::vector<int> data{1,2,3,4,5};
std::vector<int> squares_1_to_5;
for (auto i = begin(data); i < end(data); i++)
    squares_1_to_5.push_back((*i) * (*i));
# Python - quite readable, though still not obvious
data = [1,2,3,4,5]
squares_1_to_5 = []
for x in data:
    squares_1_to_5.append(x * x)
The property of a map is that it takes a collection of elements and returns the same number of somehow modified elements. No more, no less. Is it obvious at first sight in the above snippets? No, at least not until we read loop bodies. What if there were some ifs inside the loops? Let's take the last example and modify it a bit:
data = [1,2,3,4,5]
squares_1_to_5 = []
for x in data:
    if x % 2 == 0:
        squares_1_to_5.append(x * x)
This is no longer a map, though it's not obvious before reading the body of the loop. It's not clearly visible that the resulting collection might have less elements (maybe none?) than the input collection.
We filtered the input collection, performing the action only on some elements from the input. This loop is actually a map combined with a filter.
Tackling this in C would be even more noisy due to allocation details (how much space to allocate for the output array?) - the core idea of the operation on data would be drowned in all the bookkeeping.
A fold is the most generic one, where the result doesn't have to contain any of the input elements, but somehow depends on (possibly only some of) them.
Let's rewrite the first Python loop in Erlang:
lists:map(fun (E) -> E * E end, [1,2,3,4,5]).
It's explicit. We see a map, so we know that this call will return a list as long as the input.
And the second one:
lists:map(fun (E) -> E * E end,
          lists:filter(fun (E) when E rem 2 == 0 -> true;
                           (_) -> false
                       end,
                       [1,2,3,4,5])).
Again, filter will return a list at most as long as the input, and map will modify each element in some way.
The latter of the Erlang examples also shows another useful property - the ability to compose maps, filters and folds to express more complicated data transformations. It's not possible with imperative loops.
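For completeness, here is a fold over the same data, added for illustration: it collapses the collection to a single value, in this case the sum of the squares of the even numbers.
lists:foldl(fun (E, Acc) -> Acc + E * E end,
            0,
            lists:filter(fun (E) -> E rem 2 == 0 end,
                         [1,2,3,4,5])).
%% returns 4 + 16 = 20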
They are used in almost every application, because they abstract different kinds of iteration over lists.
map is used to transform one list into another. Let's say you have a list of key-value tuples and you want just the keys. You could write:
keys([]) -> [];
keys([{Key, _Value} | T]) ->
    [Key | keys(T)].
Then you want the values:
values([]) -> [];
values([{_Key, Value} | T]) ->
    [Value | values(T)].
Or a list of only the third element of each tuple:
third([]) -> [];
third([{_First, _Second, Third} | T]) ->
    [Third | third(T)].
Can you see the pattern? The only difference is what you take from the element, so instead of repeating the code, you can simply write what you do for one element and use map:
Third = fun({_First, _Second, ThirdElem}) -> ThirdElem end,
lists:map(Third, List).
This is much shorter, and the shorter your code is, the fewer bugs it has. Simple as that.
You don't have to think about corner cases (what if the list is empty?), and for an experienced developer it is much easier to read.
filter searches lists. You give it a function that takes an element; if it returns true, the element will be in the returned list, and if it returns false, it will not. For example, filtering the logged-in users out of a list.
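A quick sketch of that example (the #user{} record and its logged_in field are hypothetical):
LoggedIn = lists:filter(fun(User) -> User#user.logged_in end, Users).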
foldl and foldr are used when you have to do additional bookkeeping while iterating over the list - for example, summing all the elements or counting something.
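For instance, a fold that counts how many elements are greater than 2 (a small illustrative sketch):
lists:foldl(fun(E, Count) when E > 2 -> Count + 1;
               (_E, Count) -> Count
            end, 0, [1,2,3,4,5]).
%% returns 3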
The best explanations I've found of those functions are in books about Lisp: "Structure and Interpretation of Computer Programs" and "On Lisp", Chapter 4.

Erlang: erl shell hangs after building a large data structure

As suggested in answers to a previous question, I tried using Erlang proplists to implement a prefix trie.
The code seems to work decently well... But, for some reason, it doesn't play well with the interactive shell. When I try to run it, the shell hangs:
> Trie = trie:from_dict(). % Creates a trie from a dictionary
% ... the trie is printed ...
% Then nothing happens
I see the new trie printed to the screen (i.e., the call to trie:from_dict() has returned), then the shell just hangs. No new > prompt comes up and ^g doesn't do anything (but ^c will eventually kill it off).
With a very small dictionary (the first 50 lines of /usr/share/dict/words), the hang only lasts a second or two (and the trie is built almost instantly)... but it seems to grow exponentially with the size of the dictionary (100 words takes 5 or 10 seconds; I haven't had the patience to try larger wordlists). Also, while the shell is hanging, I notice that the beam.smp process starts eating up a lot of memory (somewhere between 1 and 2 gigs).
So, is there anything obvious that could be causing this shell hang and incredible memory usage?
A few additional comments:
I have a hunch that the garbage collector is at fault, but I don't know how to profile or create an experiment to test that.
I've tried profiling with eprof and nothing obvious showed up.
Here is my "add string to trie" function:
add([], Trie) ->
    [ stop | Trie ];
add([Ch|Rest], Trie) ->
    SubTrie = proplists:get_value(Ch, Trie, []),
    NewSubTrie = add(Rest, SubTrie),
    NewTrie = [ { Ch, NewSubTrie } | Trie ],
    % Arbitrarily decide to compress key/value list once it gets
    % more than 60 pairs.
    if
        length(NewTrie) > 60 ->
            proplists:compact(NewTrie);
        true ->
            NewTrie
    end.
The problem is (amongst others -- see my comment) that you are always adding a new {Ch, NewSubTrie} tuple to your proplist Trie, no matter whether Ch already existed or not.
Instead of
    NewTrie = [ { Ch, NewSubTrie } | Trie ]
you need something like:
    NewTrie = lists:keystore(Ch, 1, Trie, {Ch, NewSubTrie})
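Applied to the add/2 function above, that change would look roughly like this (a sketch; the compaction step is left out for brevity):
add([], Trie) ->
    [stop | Trie];
add([Ch | Rest], Trie) ->
    SubTrie = proplists:get_value(Ch, Trie, []),
    NewSubTrie = add(Rest, SubTrie),
    %% keystore/4 replaces an existing {Ch, _} entry instead of prepending
    %% a duplicate, so each character appears at most once per level.
    lists:keystore(Ch, 1, Trie, {Ch, NewSubTrie}).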
You're not really building a trie here. Your end result is effectively a randomly ordered proplist of proplists that requires full scans at each level when walking the list. Tries typically imply ordering based on position in the array (or list).
Here's an implementation that uses tuples as the storage mechanism. Calling set only rebuilds the root and direct path tuples.
(Note: you would probably have to make the pair a triple (adding a size) to make delete work with any efficiency.)
I believe Erlang tuples are really just arrays (I thought I read that somewhere), so lookup should be super fast, and modify is probably straightforward. Maybe this is faster with the array module, but I haven't really played with it enough to know.
This version also stores an arbitrary value, so you can do things like:
1> c(trie).
{ok,trie}
2> trie:get("ab",trie:set("aa",bar,trie:new("ab",foo))).
foo
3> trie:get("abc",trie:set("aa",bar,trie:new("ab",foo))).
undefined
4>
Code (entire module); note: it assumes lowercase, non-empty string keys.
-module(trie).
-compile(export_all).

-define(NEW, { %% 26 pairs, to avoid the cost of calculating a new level at runtime
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth},{undefined,nodepth},{undefined,nodepth},
    {undefined,nodepth},{undefined,nodepth}
}).

-define(POS(Ch), Ch - $a + 1).

new(Key, V) -> set(Key, V, ?NEW).

set([H], V, Trie) ->
    Pos = ?POS(H),
    {_, SubTrie} = element(Pos, Trie),
    setelement(Pos, Trie, {V, SubTrie});
set([H|T], V, Trie) ->
    Pos = ?POS(H),
    {SubKey, SubTrie} = element(Pos, Trie),
    case SubTrie of
        nodepth -> setelement(Pos, Trie, {SubKey, set(T, V, ?NEW)});
        SubTrie -> setelement(Pos, Trie, {SubKey, set(T, V, SubTrie)})
    end.

get([H], Trie) ->
    {Val, _} = element(?POS(H), Trie),
    Val;
get([H|T], Trie) ->
    case element(?POS(H), Trie) of
        {_, nodepth} -> undefined;
        {_, SubTrie} -> get(T, SubTrie)
    end.

Erlang and run-time record limitations

I'm developing an Erlang system and having recurring problems with the fact that records are (almost) compile-time preprocessor macros, and that they can't be manipulated at runtime...
Basically, I'm working with a property pattern, where properties are added at run-time to objects on the front-end (AS3). Ideally, I would reflect this with a list on the Erlang side, since it's a fundamental data type, but then using records in QLC [to query ETS tables] would not be possible, since to use them I have to say specifically which record field I want to query over... I have at least 15 columns in the largest table, so listing them all in one huge switch statement (case X of) is just plain ugly.
Does anyone have any ideas how to solve this elegantly? Maybe some built-in functions for creating tuples with appropriate signatures for use in pattern matching (for QLC)?
Thanks
It sounds like you want to be able to do something like get_record_field(Field, SomeRecord), where Field is determined at runtime by user interface code, say.
You're right in that you can't do this in standard Erlang, as records and the record_info function are expanded and eliminated at compile time.
There are a couple of solutions that I've used or looked at. My solution is as follows: (the example gives runtime access to the #dns_rec and #dns_rr records from inet_dns.hrl)
%% Retrieves the value stored in the record Rec in field Field.
info(Field, Rec) ->
    Fields = fields(Rec),
    info(Field, Fields, tl(tuple_to_list(Rec))).

info(_Field, _Fields, []) -> erlang:error(bad_record);
info(_Field, [], _Rec) -> erlang:error(bad_field);
info(Field, [Field | _], [Val | _]) -> Val;
info(Field, [_Other | Fields], [_Val | Values]) -> info(Field, Fields, Values).

%% The fields function provides the list of field positions
%% for all the kinds of record you want to be able to query
%% at runtime. You'll need to modify this to use your own records.
fields(#dns_rec{}) -> fields(dns_rec);
fields(dns_rec) -> record_info(fields, dns_rec);
fields(#dns_rr{}) -> fields(dns_rr);
fields(dns_rr) -> record_info(fields, dns_rr).

%% Turns a record into a proplist suitable for use with the proplists module.
to_proplist(R) ->
    Keys = fields(R),
    Values = tl(tuple_to_list(R)),
    lists:zip(Keys, Values).
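A hypothetical session, assuming fields/1 has been extended with a clause for the #person{} record from the first question and that the module is named rec_test:
1> rec_test:info(age, #person{name = "Josh", sex = male, age = 30}).
30
2> rec_test:to_proplist(#person{name = "Josh", sex = male, age = 30}).
[{name,"Josh"},{sex,male},{age,30},{address,{address,undefined,undefined,undefined}}]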
A version of this that compiles is available here: rec_test.erl
You can also extend this dynamic field lookup to dynamic generation of matchspecs for use with ets:select/2 or mnesia:select/2 as shown below:
%% Generates a matchspec that does something like this
%% QLC pseudocode: [ V || #RecordKind{MatchField = V} <- mnesia:table(RecordKind) ]
match(MatchField, RecordKind) ->
    MatchTuple = match_tuple(MatchField, RecordKind),
    {MatchTuple, [], ['$1']}.

%% Generates a matchspec that does something like this
%% QLC pseudocode: [ T || T <- mnesia:table(RecordKind),
%%                        T#RecordKind.Field =:= MatchValue ]
match(MatchField, MatchValue, RecordKind) ->
    MatchTuple = match_tuple(MatchField, RecordKind),
    {MatchTuple, [{'=:=', '$1', MatchValue}], ['$_']}.

%% Generates a matchspec that does something like this
%% QLC pseudocode: [ T#RecordKind.ReturnField
%%                   || T <- mnesia:table(RecordKind),
%%                      T#RecordKind.MatchField =:= MatchValue ]
match(MatchField, MatchValue, RecordKind, ReturnField)
  when MatchField =/= ReturnField ->
    MatchTuple = list_to_tuple([RecordKind
                                | [if F =:= MatchField -> '$1';
                                      F =:= ReturnField -> '$2';
                                      true -> '_'
                                   end || F <- fields(RecordKind)]]),
    {MatchTuple, [{'=:=', '$1', MatchValue}], ['$2']}.

match_tuple(MatchField, RecordKind) ->
    list_to_tuple([RecordKind
                   | [if F =:= MatchField -> '$1';
                         true -> '_'
                      end || F <- fields(RecordKind)]]).
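A hypothetical call site, assuming fields/1 knows about a #person{} record backing an mnesia table of the same name (note that mnesia:select/2 expects a list of match-spec clauses):
select_by_age(Age) ->
    MatchSpec = [match(age, Age, person)],
    mnesia:transaction(fun() -> mnesia:select(person, MatchSpec) end).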
Ulf Wiger has also written a parse_transform, Exprecs, that more or less does this for you automagically. I've never tried it, but Ulf's code is usually very good.
I solve this problem (in development) by using the parse transform tools to read the .hrl files and generate helper functions.
I wrote a tutorial on it at Trap Exit.
We use it all the time to generate match specs. The beauty is that you don't need to know anything about the current state of the record at development time.
However, once you are in production, things change! If your record is the basis of a table (as opposed to the definition of a field in a table), then changing an underlying record is more difficult (to put it mildly!).
I'm not sure I fully understand your problem, but I have moved from records to proplists in most cases. They are much more flexible, but also much slower. Using (d)ets, I usually use a few record fields for coarse selection and then check the proplists of the remaining records for detailed selection.
