So I have made 2 databases:
Db1 that contains: [{james,london}]
Db2 that contains: [{james,london},{fredrik,berlin},{fred,berlin}]
I have a match function that looks like this:
match(Element, Db) -> proplists:lookup_all(Element, Db).
When I do: match(berlin, Db2) I get: []
What I am trying to get is a way to input a value and get back the keys, like this: [fredrik,fred]
According to the documentation, proplists:lookup_all works the other way around:
Returns the list of all entries associated with Key in List.
So you can only look up by keys:
(kilter#127.0.0.1)1> Db = [{james,london},{fredrik,berlin},{fred,berlin}].
[{james,london},{fredrik,berlin},{fred,berlin}]
(kilter#127.0.0.1)2> proplists:lookup_all(berlin, Db).
[]
(kilter#127.0.0.1)3> proplists:lookup_all(fredrik, Db).
[{fredrik,berlin}]
You can use lists:filter and lists:map instead:
(kilter#127.0.0.1)7> lists:filter(fun ({K, V}) -> V =:= berlin end, Db).
[{fredrik,berlin},{fred,berlin}]
(kilter#127.0.0.1)8> lists:map(fun ({K,V}) -> K end, lists:filter(fun ({K, V}) -> V =:= berlin end, Db)).
[fredrik,fred]
So, finally:
match(Element, Db) ->
    lists:map(fun({K, _V}) -> K end,
              lists:filter(fun({_K, V}) -> V =:= Element end, Db)).
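For what it's worth, the same lookup can also be written as a single list comprehension (a behavior-equivalent sketch of the function above):
match(Element, Db) -> [K || {K, V} <- Db, V =:= Element].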
proplists:lookup_all/2 takes a key as its first argument; in your example, berlin is a value, not a key, therefore an empty list is returned.
Naturally, you can use recursion and find all the elements yourself (meaning that you treat it like an ordinary list rather than a proplist).
Another solution is to change the encoding scheme:
[{london,james},{berlin,fredrik},{berlin,fred}]
and then use proplists:lookup_all/2
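For example (a sketch; the output shown is what proplists:lookup_all/2 returns for these inputs):
1> Db = [{london,james},{berlin,fredrik},{berlin,fred}].
[{london,james},{berlin,fredrik},{berlin,fred}]
2> proplists:lookup_all(berlin, Db).
[{berlin,fredrik},{berlin,fred}]
3> [Name || {_City, Name} <- proplists:lookup_all(berlin, Db)].
[fredrik,fred]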
The correct way to encode it depends on how you will access the data (what kind of "queries" you will perform most often); but unless you manipulate large amounts of data (in which case you might want to use some other data structure), it isn't really worth analyzing.
I want to know how I can update multiple records of a table.
For example, I have a table named: transaction.
I want to modify the id of a transaction.
I tried without success with:
testupdate() ->
    Key = 20,
    Update = #transaction{id = Key},
    Fun = fun() ->
                  List = mnesia:match_object(Update),
                  lists:foreach(fun(X) ->
                                        mnesia:write_object(X)
                                end, List)
          end,
    mnesia:transaction(Fun).
When I test it, I don't get an error:
1> model:testupdate().
{atomic,ok}
but the ids of the transactions are not changed.
It depends a lot on the type of table and on which field of the record you are updating. Unfortunately, you have not told us some details about your table. Let's say you are updating multiple records based on a given field.
Note that I do not recommend using the word transaction as a table name. But for the purposes of learning, let's continue. Pseudocode:
Get all records where: Obj#transaction.field == Key
Then, for each one, set: Obj#transaction.field = Key2
Then consider this example based on Query List Comprehensions (QLC):
-include_lib("stdlib/include/qlc.hrl").
select(Q)->
case mnesia:is_transaction() of
false ->
F = fun(QH)-> qlc:e(QH) end,
mnesia:activity(transaction,F,[Q],mnesia_frag);
true -> qlc:e(Q)
end.
gen_update(FilterFun,UpdateFun)->
A = select(qlc:q([R || R <- mnesia:table(transaction),FilterFun(R) == true])),
[UpdateFun(X) || X <- A],
ok.
update_by_key(OldKey,NewKey)->
FilterFun = fun(#transaction{key = OldKey}) -> true;
(_) -> false
end,
UpdateFun = fun(T) ->
NewT = T#transaction{key = NewKey},
mnesia:write(NewT),
ok
end,
gen_update(FilterFun,UpdateFun),
ok.
That should do it. Look at the function gen_update/2: I have used funs to create generic building blocks, one which filters the records in any desired way and another which performs the update. You can construct any fun of your choice as long as it takes a record as its argument. Note that this method is applicable to tables of type set, depending on what you are doing. If you are updating the primary key itself, then you need to make some additional changes.
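For example, when key is the primary key of a set table, mnesia:write/1 with a new key inserts a new record and leaves the old one behind, so the update fun also needs a delete. A minimal sketch (same #transaction{} record; NewKey is bound in the enclosing scope as above):
UpdateFun = fun(T) ->
                    %% remove the copy stored under the old key first
                    mnesia:delete({transaction, T#transaction.key}),
                    %% then write the record back under the new key
                    mnesia:write(T#transaction{key = NewKey}),
                    ok
            end.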
I tried this code snippet:
print_next(Current) ->
    case mnesia:dirty_next(muppet, Current) of
        '$end_of_table' ->
            io:format("~n", []),
            ok;
        Next ->
            [Muppet] = mnesia:dirty_read({muppet, Next}),
            io:format("~p~n", [Muppet]),
            print_next(Next),
            ok
    end.

print() ->
    case mnesia:dirty_first(muppet) of
        '$end_of_table' ->
            ok;
        First ->
            [Muppet] = mnesia:dirty_read({muppet, First}),
            io:format("~p~n", [Muppet]),
            print_next(First),
            ok
    end.
But it is so long. I could also use mnesia:dirty_all_keys/1 and then iterate through the key list, but I want to know if there is a better way to print out Mnesia table contents.
If you just want a quick and dirty way to print the contents of a Mnesia table in the shell, and if your table is not of type disc_only_copies, then you can take advantage of the fact that Mnesia stores its data in ETS tables and run:
ets:tab2list(my_table).
or, if you think the shell truncates the data too much:
rp(ets:tab2list(my_table)).
Not recommended for "real" code, of course.
For a simple and quick look at your table contents, you can use mnesia's select function with a catch-all match specification, as follows:
CatchAll = [{'_',[],['$_']}].
mnesia:dirty_select(TableName, CatchAll).
You can also run it inside a transaction context:
CatchAll = [{'_',[],['$_']}].
SelectFun = fun() -> mnesia:select(TableName, CatchAll) end.
mnesia:transaction(SelectFun).
However, be careful if you are in a production environment with a lot of data: this selects the entire table.
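A similar quick dump can be had without writing the match specification by hand, using the table's wild pattern (a sketch; mnesia:table_info(Tab, wild_pattern) returns a tuple of '_' wildcards sized for the table's records):
Wild = mnesia:table_info(TableName, wild_pattern),
mnesia:dirty_match_object(Wild).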
Well, if the intent is to see the contents of your table, there is the application called tv (Table Visualizer), which can view both ETS and Mnesia tables. (In recent OTP releases tv has been removed; the observer application provides a similar table viewer.)
If you wish to see all the table contents on your terminal, then try something like this:
traverse_table_and_show(Table_name)->
Iterator = fun(Rec,_)->
io:format("~p~n",[Rec]),
[]
end,
case mnesia:is_transaction() of
true -> mnesia:foldl(Iterator,[],Table_name);
false ->
Exec = fun({Fun,Tab}) -> mnesia:foldl(Fun, [],Tab) end,
mnesia:activity(transaction,Exec,[{Iterator,Table_name}],mnesia_frag)
end.
Then if your table is called muppet, you use the function as follows:
traverse_table_and_show(muppet).
Advantages of this:
- If it is executed within a transaction, it has no problems with nested transactions.
- It is less work, because everything happens in one mnesia transaction using mnesia's iterator functionality, as compared to your implementation of get_next_key -> do_read_with_key -> read the record (these are many operations). Mnesia will automatically tell when it has covered all the records in your entire table.
- If the table is fragmented, your function will only display records in the first fragment; this one will iterate through all the fragments that belong to that table.
In this mnesia iteration method, I do nothing with the accumulator variable that goes along with the iterator fun, and that is why you see the underscore for the second argument.
Details of this iteration can be found here: http://www.erlang.org/doc/man/mnesia.html#foldl-3
As Muzaaya said, you can use tv (the table visualizer tool) to view both mnesia and ETS tables.
Alternatively, you can use the following code to get mnesia table data and print it on the terminal, or store the result in a file:
-include_lib("stdlib/include/qlc.hrl").   %% needed for qlc:q/1

select_all() ->
    mnesia:transaction(
      fun() ->
              %% select all data from the table named 'tableName'
              P = qlc:e(qlc:q([E || E <- mnesia:table(tableName)])),
              io:format(" ~p ~n ", [P]),  % print the table data on the terminal
              to_file("fileName.txt", P)  % to_file/2 writes the data to a file
      end).

%% Plain file I/O; the extra mnesia:transaction wrapper around it
%% was unnecessary (and side effects inside a transaction may run
%% more than once if the transaction is retried).
to_file(File, L) ->
    {ok, S} = file:open(File, [write]),
    lists:foreach(fun(X) -> io:format(S, "~p.~n", [X]) end, L),
    file:close(S).
Let's say I have a record:
-record(foo, {bar}).
What I would like to do is to be able to pass the record name to a function as a parameter and get back a new record. The function should be generic, so that it can accept any record, something like this:
make_record(foo, [bar], ["xyz"])
When implementing such a function I've tried this:
make_record(RecordName, Fields, Values) ->
    NewRecord = #RecordName{} %% this line gives me an error: syntax error before RecordName
Is it possible to use the record name as a parameter?
You can't use the record syntax if you don't have access to the record definition at compile time.
But because records are simply transformed into tuples during compilation, it is really easy to construct them manually:
-record(some_rec, {a, b}).

make_record(Rec, Values) ->
    list_to_tuple([Rec | Values]).

test() ->
    R = make_record(some_rec, ["Hej", 5]),  % dynamically create the record
    #some_rec{a = A, b = B} = R,            % access it using record syntax
    io:format("a = ~p, b = ~p~n", [A, B]).
Alternative solution
Alternatively, if you make a list at compile time of all the records the function should be able to construct, you can use the field names as well:
%% List of record info created with the record_info macro at compile time
-define(recs,
        [
         {some_rec, record_info(fields, some_rec)},
         {some_other_rec, record_info(fields, some_other_rec)}
        ]).

make_record_2(Rec, Fields, Values) ->
    ValueDict = lists:zip(Fields, Values),
    %% Look up the record name and fields in the record list
    Body = lists:map(
             fun(Field) -> proplists:get_value(Field, ValueDict, undefined) end,
             proplists:get_value(Rec, ?recs)),
    list_to_tuple([Rec | Body]).

test_2() ->
    R = make_record_2(some_rec, [b, a], ["B value", "A value"]),
    #some_rec{a = A, b = B} = R,
    io:format("a = ~p, b = ~p~n", [A, B]).
With the second version you can also do some verification to make sure you are using the right fields etc.
Other tips
Other useful constructs to keep in mind when working with records dynamically are the #some_rec.a expression, which evaluates to the (1-based) position of the a field within the some_rec tuple, and the element(N, Tuple) function, which, given a tuple and an index, returns the element at that index.
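A minimal sketch of the two constructs working together (get_a/set_a are hypothetical helper names):
get_a(R) -> element(#some_rec.a, R).           %% equivalent to R#some_rec.a
set_a(R, V) -> setelement(#some_rec.a, R, V).  %% equivalent to R#some_rec{a = V}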
This is not possible, as records are compile-time only structures. At compilation they are converted into tuples. Thus the compiler needs to know the name of the record, so you cannot use a variable.
You could also use some parse-transform magic (see exprecs) to create record constructors and accessors, but this design seems to go in the wrong direction.
If you need to dynamically create record-like things, you can use some other structure instead, like key-value lists or dicts.
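For instance, a key-value list stand-in for the #foo{} record above keeps the structure fully dynamic (a sketch using the stdlib proplists module):
Foo = [{bar, "xyz"}],                   %% plays the role of #foo{bar = "xyz"}
Value = proplists:get_value(bar, Foo).  %% -> "xyz"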
To cover all cases: if you have the fields and values but don't necessarily have them in the correct order, you can make your function take the result of record_info(fields, Record), with Record being the atom of the record you want to build. Then it has the ordered field names to work with. And a record is just a tuple with its atom name in the first slot, so you can build it that way. Here's how I build an arbitrary shallow record from a JSON string (not thoroughly tested and not optimized, but tested and working):
% Converts the given JSON string to a record
% WARNING: Only for shallow records. Won't work for nested ones!
%
% Record: The atom representing the type of record to be converted to
% RecordInfo: The result of calling record_info(fields, Record)
% JSON: The JSON string
jsonToRecord(Record, RecordInfo, JSON) ->
    JiffyList = element(1, jiffy:decode(JSON)),
    Struct = erlang:make_tuple(length(RecordInfo) + 1, ""),
    Struct2 = erlang:setelement(1, Struct, Record),
    recordFromJsonList(RecordInfo, Struct2, JiffyList).

% private methods
recordFromJsonList(_RecordInfo, Struct, []) -> Struct;
recordFromJsonList(RecordInfo, Struct, [{Name, Val} | Rest]) ->
    FieldNames = atomNames(RecordInfo),
    Index = index_of(erlang:binary_to_list(Name), FieldNames),
    recordFromJsonList(RecordInfo, erlang:setelement(Index + 1, Struct, Val), Rest).
% Converts a list of atoms to a list of strings
%
% Atoms: The list of atoms
atomNames(Atoms) ->
    F = fun(Field) ->
                lists:flatten(io_lib:format("~p", [Field]))
        end,
    lists:map(F, Atoms).
% Gets the index of an item in a list (one-indexed)
%
% Item: The item to search for
% List: The list
index_of(Item, List) -> index_of(Item, List, 1).
% private helper
index_of(_, [], _) -> not_found;
index_of(Item, [Item|_], Index) -> Index;
index_of(Item, [_|Tl], Index) -> index_of(Item, Tl, Index+1).
Brief explanation: the JSON represents key:value pairs corresponding to field:value pairs in the record we're trying to build. We might not get the key:value pairs in the correct order, so we need the list of record fields passed in, so that we can insert each value at its correct position in the tuple.
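A hypothetical call, reusing the -record(foo, {bar}) definition from the earlier question and assuming jiffy is available:
R = jsonToRecord(foo, record_info(fields, foo), "{\"bar\": \"xyz\"}").
%% R is now {foo, <<"xyz">>}, i.e. #foo{bar = <<"xyz">>}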
I am Josh, from Uganda. I created a fragmented Mnesia table (64 fragments) and managed to populate it with 9,948,723 records. Each fragment was of type disc_copies, with two replicas.
Now, searching for a record using QLC (query list comprehensions) was too slow and returned inaccurate results.
I found that this overhead comes from QLC using mnesia's select function, which traverses the entire table in order to match records. So I tried something else, below.
-define(ACCESS_MOD, mnesia_frag).
-define(DEFAULT_CONTEXT, transaction).
-define(NULL, '_').

-record(address, {tel, zip_code, email}).
-record(person, {name, sex, age, address = #address{}}).

match() -> fun(Spec) -> mnesia:match_object(Spec) end.

match_object(Pattern) ->
    Match = match(),
    mnesia:activity(?DEFAULT_CONTEXT, Match, [Pattern], ?ACCESS_MOD).
Trying this gave me good results. But I found that I had to dynamically build patterns for every search that might be made in my stored procedures.
I decided to go through the hassle of doing this, so I wrote functions which dynamically build wild patterns for my records, depending on which parameter is to be searched:
%% This below gives me the default pattern for all searches ::= {person,'_','_','_','_'}
pattern(Record_name) ->
    N = length(my_record_info(Record_name)) + 1,
    erlang:setelement(1, erlang:make_tuple(N, ?NULL), Record_name).
%% This finds the position of the provided field and places the value at
%% that position while keeping '_' in the other positions.
%% The caller can apply this function repeatedly until it has built
%% the full search pattern of interest.
pattern({Field, Value}, Pattern_sofar) ->
    N = position(Field, my_record_info(element(1, Pattern_sofar))),
    case N of
        -1 -> Pattern_sofar;
        Int when Int >= 1 -> erlang:setelement(N + 1, Pattern_sofar, Value);
        _ -> Pattern_sofar
    end.
my_record_info(Record_name) ->
    case Record_name of
        staff_dynamic -> record_info(fields, staff_dynamic);
        person -> record_info(fields, person);
        _ -> []
    end.
%% These below help locate the position of a field in the list
%% returned by record_info(fields, person)
position(_, []) -> -1;
position(Value, List) ->
    find(lists:member(Value, List), Value, List, 1).

find(false, _, _, _) -> -1;
find(true, V, [V | _], N) -> N;
find(true, V, [_ | X], N) ->
    find(V, X, N + 1).

find(V, [V | _], N) -> N;
find(V, [_ | X], N) -> find(V, X, N + 1).
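For illustration, a hypothetical search built with these helpers (field positions follow the #person{} definition above):
P0 = pattern(person),         %% {person,'_','_','_','_'}
P1 = pattern({age, 26}, P0),  %% {person,'_','_',26,'_'}
Results = match_object(P1).   %% all persons aged 26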
This worked very well, though it was computationally intensive.
It keeps working even after the record definition changes, since it picks up the new record info at compile time.
The problem is that when I start even 25 processes on a 3.0 GHz Pentium 4 running Windows XP, it hangs and takes a long time to return results.
If I am to use QLC on these fragments and get accurate results, I have to specify which fragment to search, like this:
find_person_by_tel(Tel) ->
    select(qlc:q([X || X <- mnesia:table(Frag),
                       (X#person.address)#address.tel == Tel])).

select(Q) ->
    case ?transact(fun() -> qlc:e(Q) end) of
        {atomic, Val} -> Val;
        {aborted, _} = Error -> report_mnesia_event(Error)
    end.
QLC was returning [] when I searched for something, yet when I used match_object/1 I got accurate results. I found that using match expressions can help:
mnesia:table(Tab, Props).
where Props is a data structure that defines the match expression, the chunk size of returned values, etc.
I ran into problems when I tried building match expressions dynamically.
The functions mnesia:read/1 and mnesia:read/2 require that you have the primary key.
Now I am asking myself: how can I efficiently use QLC to search for records in a large fragmented table? Please help.
I know that using the tuple representation of records makes code hard to upgrade. This is why
I hate using mnesia:select/1 and mnesia:match_object/1 and want to stick to QLC. Yet QLC is giving me wrong results in my queries from a mnesia table of 64 fragments, even on the same node.
Has anyone ever used QLC to query a fragmented table? Please help.
Do you invoke the qlc in the activity context?
tfn_match(Id) ->
    Search = #person{address = #address{tel = Id, _ = '_'}, _ = '_'},
    trans(fun() -> mnesia:match_object(Search) end).

tfn_qlc(Id) ->
    Q = qlc:q([X || X <- mnesia:table(person),
                    (X#person.address)#address.tel == Id]),
    trans(fun() -> qlc:e(Q) end).

trans(Fun) ->
    try
        Res = mnesia:activity(transaction, Fun, mnesia_frag),
        {atomic, Res}
    catch
        exit:Error -> {aborted, Error}
    end.
I'm developing an Erlang system and keep running into problems with the fact that records are (almost) compile-time-only preprocessor macros and can't be manipulated at runtime...
Basically, I'm working with a property pattern, where properties are added at run-time to objects on the front end (AS3). Ideally, I would reflect this with a list on the Erlang side, since it is a fundamental data type, but then using records in QLC [to query ETS tables] would not be possible, since to use them I have to say specifically which record field I want to query over... I have at least 15 columns in the largest table, so listing them all in one huge case expression (case X of) is just plain ugly.
Does anyone have any ideas how to solve this elegantly? Maybe some built-in functions for creating tuples with appropriate signatures for use in pattern matching (for QLC)?
thanks
It sounds like you want to be able to do something like get_record_field(Field, SomeRecord), where Field is determined at runtime by, say, user interface code.
You're right in that you can't do this in standard Erlang, as records and the record_info function are expanded and eliminated at compile time.
There are a couple of solutions that I've used or looked at. My own is as follows (the example gives runtime access to the #dns_rec and #dns_rr records from inet_dns.hrl):
%% Retrieves the value stored in field Field of record Rec.
info(Field, Rec) ->
    Fields = fields(Rec),
    info(Field, Fields, tl(tuple_to_list(Rec))).

info(_Field, _Fields, []) -> erlang:error(bad_record);
info(_Field, [], _Rec) -> erlang:error(bad_field);
info(Field, [Field | _], [Val | _]) -> Val;
info(Field, [_Other | Fields], [_Val | Values]) -> info(Field, Fields, Values).

%% The fields function provides the list of field names
%% for all the kinds of record you want to be able to query
%% at runtime. You'll need to modify this to use your own records.
fields(#dns_rec{}) -> fields(dns_rec);
fields(dns_rec) -> record_info(fields, dns_rec);
fields(#dns_rr{}) -> fields(dns_rr);
fields(dns_rr) -> record_info(fields, dns_rr).

%% Turns a record into a proplist suitable for use with the proplists module.
to_proplist(R) ->
    Keys = fields(R),
    Values = tl(tuple_to_list(R)),
    lists:zip(Keys, Values).
A version of this that compiles is available here: rec_test.erl
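A quick usage sketch (assuming the #dns_rec definition from inet_dns.hrl is included, where anlist is one of the fields):
R = #dns_rec{},       %% record with its default field values
A = info(anlist, R),  %% field looked up by name at runtime
to_proplist(R).       %% [{header, ...}, {qdlist, ...}, ...]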
You can also extend this dynamic field lookup to dynamic generation of matchspecs for use with ets:select/2 or mnesia:select/2 as shown below:
%% Generates a matchspec that does something like this
%% QLC pseudocode: [ V || #RecordKind{MatchField = V} <- mnesia:table(RecordKind) ]
match(MatchField, RecordKind) ->
    MatchTuple = match_tuple(MatchField, RecordKind),
    {MatchTuple, [], ['$1']}.

%% Generates a matchspec that does something like this
%% QLC pseudocode: [ T || T <- mnesia:table(RecordKind),
%%                        T#RecordKind.Field =:= MatchValue ]
match(MatchField, MatchValue, RecordKind) ->
    MatchTuple = match_tuple(MatchField, RecordKind),
    {MatchTuple, [{'=:=', '$1', MatchValue}], ['$$']}.

%% Generates a matchspec that does something like this
%% QLC pseudocode: [ T#RecordKind.ReturnField
%%                   || T <- mnesia:table(RecordKind),
%%                      T#RecordKind.MatchField =:= MatchValue ]
match(MatchField, MatchValue, RecordKind, ReturnField)
  when MatchField =/= ReturnField ->
    MatchTuple = list_to_tuple(
                   [RecordKind
                    | [if F =:= MatchField -> '$1';
                          F =:= ReturnField -> '$2';
                          true -> '_'
                       end || F <- fields(RecordKind)]]),
    {MatchTuple, [{'=:=', '$1', MatchValue}], ['$2']}.

match_tuple(MatchField, RecordKind) ->
    list_to_tuple([RecordKind
                   | [if F =:= MatchField -> '$1';
                         true -> '_'
                      end || F <- fields(RecordKind)]]).
Ulf Wiger has also written a parse_transform, Exprecs, that more or less does this for you automagically. I've never tried it, but Ulf's code is usually very good.
I solve this problem (in development) by using parse transform tools to read the .hrl files and generate helper functions.
I wrote a tutorial on it at Trap Exit.
We use it all the time to generate match specs. The beauty is that you don't need to know anything about the current state of the record at development time.
However, once you are in production, things change! If your record is the basis of a table (as opposed to the definition of a field in a table), then changing the underlying record is more difficult (to put it mildly!).
I'm not sure I fully understand your problem, but I have moved from records to proplists in most cases. They are much more flexible, but also much slower. Using (d)ets I usually use a few record fields for coarse selection and then check the proplists on the remaining records for detailed selection.
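A sketch of that layout (record name and fields are hypothetical): a couple of fixed record fields drive the coarse ETS match, and one props field holds the flexible key-value pairs for the fine check.
-record(obj, {id, type, props = []}).   %% props is a proplist

lookup(Tab, Type, Key, Val) ->
    %% coarse: let ETS match on the fixed 'type' field
    Coarse = ets:match_object(Tab, #obj{type = Type, _ = '_'}),
    %% fine: filter on the dynamic properties
    [O || O <- Coarse, proplists:get_value(Key, O#obj.props) =:= Val].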