I have a table person with this record definition:
-record(person, {id, firstname, lastname, phone}).
I want to update the phone field of all records in this table.
I tried with:
test() ->
    Newphone = "216",
    Update = #person{phone=Newphone},
    Fun = fun() ->
              List = mnesia:match_object(Update),
              lists:foreach(fun(X) ->
                                mnesia:write_object(X)
                            end, List)
          end,
    mnesia:transaction(Fun).
The table person contains:
12 alen dumas 97888888
15 franco mocci 55522225
13 ali othmani 44444449
I want the table to become like this:
12 alen dumas 216
15 franco mocci 216
13 ali othmani 216
I tried with:
test() ->
    Newphone = "216",
    Update = X#person{phone=Newphone, _ = '_'},
    Fun = fun() ->
              List = mnesia:match_object(Update),
              lists:foreach(fun(X) ->
                                mnesia:write(X)
                            end, List)
          end,
    mnesia:transaction(Fun).
but with this code I get this error:
Variable X is unbound
which is related to this line:
Update=X#person{phone=Newphone, _ = '_'}
To resolve this problem I did:
test() ->
    Newphone = "216",
    Update = #person{phone=Newphone, _ = '_'},
    Fun = fun() ->
              List = mnesia:match_object(Update),
              lists:foreach(fun(X) ->
                                mnesia:write(X)
                            end, List)
          end,
    mnesia:transaction(Fun).
When I test it I get this message:
{atomic,ok}
but when I consult the database I find that the records have not changed.
The difficulty in my code is to change all records of the table person, so 97888888, 55522225 and 44444449 should all become 216.
Continuing from what legoscia has started, there are a couple of problems left with your code:
In the mnesia:match_object/1 call, Update is used as a pattern, so by setting the phone field to phone=Newphone in Update you are actually asking match_object to give you all the records which already have a phone of "216", which is not what you want.
You are writing back exactly the same record as you matched. You are not changing the record before writing it back.
A solution could be (untested):
test() ->
    Newphone = "216",
    Match = #person{_ = '_'},   %% will match all records
    Fun = fun() ->
              List = mnesia:match_object(Match),
              lists:foreach(fun(X) ->
                                %% create a new record with phone=Newphone and write it back
                                Update = X#person{phone=Newphone},
                                mnesia:write(Update)
                            end, List)
          end,
    mnesia:transaction(Fun).
Any fields you set in Match will limit which records match_object returns. For example, Match = #person{phone="123", _='_'} will match all records which have phone "123".
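As a quick, untested sketch of that (update_phone/2 is just an illustrative name), here is how you could change only the people whose phone is currently OldPhone:
update_phone(OldPhone, NewPhone) ->
    Match = #person{phone = OldPhone, _ = '_'},   %% only match records that still have the old phone
    Fun = fun() ->
              lists:foreach(fun(X) ->
                                mnesia:write(X#person{phone = NewPhone})
                            end, mnesia:match_object(Match))
          end,
    mnesia:transaction(Fun).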
A few things need changing here:
If you are going to use a record as a template for mnesia:match_object, you should fill in the record fields that you don't care about with the atom '_'. There is a special syntax to do that:
Update=#person{phone=Newphone, _ = '_'}
You probably don't want Newphone in there: the record that you pass to match_object should match the objects that are already in the table and need to be changed, not what you want the objects to be changed to.
Which records are you trying to change? That will determine what you should pass to match_object.
The only thing you do in the transaction is reading records and writing them back unchanged. Have a look at the Records chapter of the Erlang reference manual; you probably want something like X#person{phone = Newphone} to change the phone field in X.
The function mnesia:write_object doesn't exist; you probably meant mnesia:write.
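For instance, the record-update expression on its own, using values from the table in the question:
P = #person{id = 12, firstname = "alen", lastname = "dumas", phone = "97888888"},
P2 = P#person{phone = "216"}.   %% P2 is a copy of P with only the phone field changed; P itself is untouched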
Related
mnesia:read returns an empty list when using table fragmentation in mnesia, but I do have a record:
My code is like this:
F = fun() ->
        mnesia:dirty_read({offline_msg, <<0,0,0,0,0,0,0,11>>})
    end.
Result = mnesia:activity(transaction, F, [], mnesia_frag).
Result is:
[#offline_msg{userid = <<0,0,0,0,0,0,0,11>>,timestamp =1547039796317984,from = 123}]
but
F = fun() ->
        mnesia:read({offline_msg, <<0,0,0,0,0,0,0,11>>})
    end.
Result = mnesia:activity(transaction, F, [], mnesia_frag).
Result is []
Table info:
PrimProps = [{n_fragments, 64}, {n_disc_only_copies, 1}, {node_pool, [node()]}],
mnesia:create_table(offline_msg,
                    [{disc_only_copies, [node()]},
                     {type, bag},
                     {attributes, record_info(fields, offline_msg)},
                     {frag_properties, PrimProps}])
Did you write the record to the table using mnesia:dirty_write?
The "dirty" functions (dirty_read, dirty_write etc) bypass Mnesia's table fragmentation, even if used inside mnesia:activity as in your first example: they always access the first fragment of the table. So I suspect that what happened is this:
the record was written to the first fragment using mnesia:dirty_write
in your first example, mnesia:dirty_read looked for the record in the first fragment, and found it
in your second example, mnesia:read inside mnesia:activity used a hash of the record key to figure out which fragment the record should be in, and looked in that fragment - but the record is not present, since it was written to the wrong fragment.
If you want to use dirty operations with fragmented tables, call mnesia:activity with sync_dirty or async_dirty:
mnesia:activity(sync_dirty, F, [],mnesia_frag).
For example, to write the record to the table:
OfflineMsg = #offline_msg{...},
F = fun() -> mnesia:write(OfflineMsg) end,
mnesia:activity(sync_dirty, F, [],mnesia_frag).
This will let mnesia_frag ensure that the record gets written to the correct table fragment.
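Similarly, a read that goes through mnesia_frag will look in the correct fragment (a small sketch, reusing the key from your example):
ReadF = fun() -> mnesia:read({offline_msg, <<0,0,0,0,0,0,0,11>>}) end,
mnesia:activity(sync_dirty, ReadF, [], mnesia_frag).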
I want to know how I can update multiple records of a table.
For example, I have a table named transaction.
I want to modify the id of a transaction.
I tried, without success, with:
testupdate() ->
    Key = 20,
    Update = #transaction{id=Key},
    Fun = fun() ->
              List = mnesia:match_object(Update),
              lists:foreach(fun(X) ->
                                mnesia:write_object(X)
                            end, List)
          end,
    mnesia:transaction(Fun).
When I test it I don't get an error:
1> model:testupdate().
{atomic,ok}
but the ids of the transactions are not changed.
It depends a lot on the type of table, and on which field of the records you are updating. Unfortunately, you have not told us some details about your table. Let's say you are updating multiple records based on a given field.
NOTE that I do not recommend using the word transaction as a table name, but for purposes of learning, let's continue. Pseudocode:
Get all records whose: Obj#transaction.field == Key
Then, for each, set: Obj#transaction.field = Key2
Then consider this example, based on Query List Comprehensions (QLC):
-include_lib("stdlib/include/qlc.hrl").

select(Q) ->
    case mnesia:is_transaction() of
        false ->
            F = fun(QH) -> qlc:e(QH) end,
            mnesia:activity(transaction, F, [Q], mnesia_frag);
        true -> qlc:e(Q)
    end.

gen_update(FilterFun, UpdateFun) ->
    F = fun() ->
            A = select(qlc:q([R || R <- mnesia:table(transaction), FilterFun(R) == true])),
            [UpdateFun(X) || X <- A]
        end,
    %% run the select and the updates inside one transaction so that
    %% mnesia:write/1 in UpdateFun has a transaction context
    mnesia:activity(transaction, F, [], mnesia_frag),
    ok.

update_by_key(OldKey, NewKey) ->
    FilterFun = fun(#transaction{key = OldKey}) -> true;
                   (_) -> false
                end,
    UpdateFun = fun(T) ->
                    NewT = T#transaction{key = NewKey},
                    mnesia:write(NewT),
                    ok
                end,
    gen_update(FilterFun, UpdateFun),
    ok.
That should do it. Look at the function gen_update: I have used funs to create generic objects, one which filters according to any desired form and another which does the update. You can construct any fun of your choice as long as it takes a record as an argument. Note that this method may be applicable to tables of type set, depending on what you are doing. If you are updating by primary key, then you need to make some new changes, as sketched below.
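A rough, untested sketch of updating by primary key (assuming key is the first field of #transaction{}, i.e. the mnesia key; update_primary_key/2 is just an illustrative name). Simply writing the record back with a new key would leave the old copy behind, so the old record has to be deleted first:
update_primary_key(OldKey, NewKey) ->
    F = fun() ->
            case mnesia:read({transaction, OldKey}) of
                [Old | _] ->
                    mnesia:delete({transaction, OldKey}),          %% remove the copy stored under the old key
                    mnesia:write(Old#transaction{key = NewKey});   %% write it back under the new key
                [] ->
                    not_found
            end
        end,
    mnesia:activity(transaction, F, [], mnesia_frag).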
I tried this code snippet:
print_next(Current) ->
    case mnesia:dirty_next(muppet, Current) of
        '$end_of_table' ->
            io:format("~n", []),
            ok;
        Next ->
            [Muppet] = mnesia:dirty_read({muppet, Next}),
            io:format("~p~n", [Muppet]),
            print_next(Next),
            ok
    end.

print() ->
    case mnesia:dirty_first(muppet) of
        '$end_of_table' ->
            ok;
        First ->
            [Muppet] = mnesia:dirty_read({muppet, First}),
            io:format("~p~n", [Muppet]),
            print_next(First),
            ok
    end.
But it is so long. I could also use dirty_all_keys and then iterate through the key list, but I want to know if there is a better way to print out Mnesia table contents.
If you just want a quick and dirty way to print the contents of a Mnesia table in the shell, and if your table is not of type disc_only_copies, then you can take advantage of the fact that Mnesia stores its data in ETS tables and run:
ets:tab2list(my_table).
or, if you think the shell truncates the data too much:
rp(ets:tab2list(my_table)).
Not recommended for "real" code, of course.
For a simple and quick look at your table contents you can use mnesia's select function with a catch-all match specification, as follows:
CatchAll = [{'_',[],['$_']}].
mnesia:dirty_select(TableName, CatchAll).
and also you can run it inside a transaction context:
CatchAll = [{'_',[],['$_']}].
SelectFun = fun() -> mnesia:select(TableName, CatchAll) end.
mnesia:transaction(SelectFun).
However, be careful if you are in a production environment with a lot of data.
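If you would rather not write the match specification by hand, the same catch-all spec can be generated with ets:fun2ms (inside a module this needs -include_lib("stdlib/include/ms_transform.hrl"); in the shell it works directly):
CatchAll = ets:fun2ms(fun(Record) -> Record end),   %% => [{'$1',[],['$1']}]
mnesia:dirty_select(TableName, CatchAll).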
Well, if the intent is to see the contents of your table, there is the application called tv, which can view both ETS and mnesia tables.
If you wish to see all the table contents on your terminal, then try something like this:
traverse_table_and_show(Table_name) ->
    Iterator = fun(Rec, _) ->
                   io:format("~p~n", [Rec]),
                   []
               end,
    case mnesia:is_transaction() of
        true -> mnesia:foldl(Iterator, [], Table_name);
        false ->
            Exec = fun({Fun, Tab}) -> mnesia:foldl(Fun, [], Tab) end,
            mnesia:activity(transaction, Exec, [{Iterator, Table_name}], mnesia_frag)
    end.
Then if your table is called muppet, you use the function as follows:
traverse_table_and_show(muppet).
Advantages of this:
If it is executed within a transaction, it will have no problems with nested transactions.
It is less work, because it is done within one mnesia transaction using mnesia's iterator functionality, as compared to your implementation of get_next_key -> do_read_with_key -> then read the record (these are many operations); mnesia will automatically tell when it has covered all the records in your entire table.
If the table is fragmented, your implementation will only display records in the first fragment, while this one will iterate through all the fragments that belong to the table.
In this mnesia iteration method I do nothing with the accumulator that goes along with the Iterator fun, which is why you see the underscore for its second argument.
Details of this iteration can be found here: http://www.erlang.org/doc/man/mnesia.html#foldl-3
As Muzaaya said, you can use tv (the table visualizer tool) to view both mnesia and ETS tables.
Alternatively, you can use the following code to get the mnesia table data and print it on the terminal, or store the result in a file:
-include_lib("stdlib/include/qlc.hrl").   %% needed for qlc:q/1

select_all() ->
    mnesia:transaction(
      fun() ->
          %% query to select all data from the table named 'tableName'
          P = qlc:e(qlc:q([E || E <- mnesia:table(tableName)])),
          io:format(" ~p ~n ", [P]),    % prints table data on the terminal
          to_file("fileName.txt", P)    % to_file/2 writes the data to a file
      end).

to_file(File, L) ->
    mnesia:transaction(
      fun() ->
          {ok, S} = file:open(File, write),
          lists:foreach(fun(X) -> io:format(S, "~p.~n", [X]) end, L),
          file:close(S)
      end).
Let's say I have a record:
-record(foo, {bar}).
What I would like to do is to be able to pass the record name to a function as a parameter and get back a new record. The function should be generic, so that it can accept any record, something like this:
make_record(foo, [bar], ["xyz"])
When implementing such a function I've tried this:
make_record(RecordName, Fields, Values) ->
NewRecord = #RecordName{} %% this line gives me an error: syntax error before RecordName
Is it possible to use the record name as a parameter?
You can't use the record syntax if you don't have access to the record during compile time.
But because records are simply transformed into tuples during compilation it is really easy to construct them manually:
-record(some_rec, {a, b}).

make_record(Rec, Values) ->
    list_to_tuple([Rec | Values]).

test() ->
    R = make_record(some_rec, ["Hej", 5]),   % dynamically create the record
    #some_rec{a = A, b = B} = R,             % access it using record syntax
    io:format("a = ~p, b = ~p~n", [A, B]).
Alternative solution
Or, if at compile time you make a list of all the records that the function should be able to construct, you can use the field names as well:
%% List of record info created with the record_info macro at compile time
-define(recs,
        [
         {some_rec, record_info(fields, some_rec)},
         {some_other_rec, record_info(fields, some_other_rec)}
        ]).

make_record_2(Rec, Fields, Values) ->
    ValueDict = lists:zip(Fields, Values),
    %% Look up the record name and fields in the record list
    Body = lists:map(
             fun(Field) -> proplists:get_value(Field, ValueDict, undefined) end,
             proplists:get_value(Rec, ?recs)),
    list_to_tuple([Rec | Body]).

test_2() ->
    R = make_record_2(some_rec, [b, a], ["B value", "A value"]),
    #some_rec{a = A, b = B} = R,
    io:format("a = ~p, b = ~p~n", [A, B]).
With the second version you can also do some verification to make sure you are using the right fields etc.
Other tips
Other useful constructs to keep in mind when working with records dynamically are the #some_rec.a expression, which evaluates to the index of the a field in a #some_rec{} tuple, and the element(N, Tuple) function, which given a tuple and an index returns the element at that index.
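A quick illustration of those, reusing the some_rec record from above (field_demo/0 is just an example; setelement/3 is the writing counterpart of element/2):
field_demo() ->
    R  = #some_rec{a = "Hej", b = 5},
    A  = element(#some_rec.a, R),          % #some_rec.a is the tuple index of field a (here 2)
    R2 = setelement(#some_rec.b, R, 99),   % dynamically replace field b by its index
    {A, R2}.                               % => {"Hej", #some_rec{a = "Hej", b = 99}}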
This is not possible, as records are compile-time only structures. At compilation they are converted into tuples. Thus the compiler needs to know the name of the record, so you cannot use a variable.
You could also use some parse-transform magic (see exprecs) to create record constructors and accessors, but this design seems to go in the wrong direction.
If you need to dynamically create record-like things, you can use some structures instead, like key-value lists, or dicts.
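For example, a plain key-value list gives you record-like access without any compile-time knowledge of the fields (a minimal sketch; make_kv/2 and get_kv/2 are just illustrative names, and a map would work just as well on modern Erlang/OTP):
make_kv(Fields, Values) ->
    lists:zip(Fields, Values).          %% make_kv([bar], ["xyz"]) -> [{bar, "xyz"}]

get_kv(Field, KV) ->
    proplists:get_value(Field, KV).     %% get_kv(bar, [{bar, "xyz"}]) -> "xyz"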
To cover all cases: if you have fields and values but don't necessarily have them in the correct order, you can make your function take the result of record_info(fields, Record), with Record being the atom naming the record you want to build. Your function then has the ordered field names to work with, and since a record is just a tuple with its name atom in the first slot, you can build it that way. Here's how I build an arbitrary shallow record from a JSON string (not thoroughly tested and not optimized, but working):
% Converts the given JSON string to a record
% WARNING: Only for shallow records. Won't work for nested ones!
%
% Record: The atom representing the type of record to be converted to
% RecordInfo: The result of calling record_info(fields, Record)
% JSON: The JSON string
jsonToRecord(Record, RecordInfo, JSON) ->
    JiffyList = element(1, jiffy:decode(JSON)),
    Struct = erlang:make_tuple(length(RecordInfo)+1, ""),
    Struct2 = erlang:setelement(1, Struct, Record),
    recordFromJsonList(RecordInfo, Struct2, JiffyList).

% private methods
recordFromJsonList(_RecordInfo, Struct, []) -> Struct;
recordFromJsonList(RecordInfo, Struct, [{Name, Val} | Rest]) ->
    FieldNames = atomNames(RecordInfo),
    Index = index_of(erlang:binary_to_list(Name), FieldNames),
    recordFromJsonList(RecordInfo, erlang:setelement(Index+1, Struct, Val), Rest).
% Converts a list of atoms to a list of strings
%
% Atoms: The list of atoms
atomNames(Atoms) ->
    F = fun(Field) ->
            lists:flatten(io_lib:format("~p", [Field]))
        end,
    lists:map(F, Atoms).
% Gets the index of an item in a list (one-indexed)
%
% Item: The item to search for
% List: The list
index_of(Item, List) -> index_of(Item, List, 1).
% private helper
index_of(_, [], _) -> not_found;
index_of(Item, [Item|_], Index) -> Index;
index_of(Item, [_|Tl], Index) -> index_of(Item, Tl, Index+1).
Brief explanation: the JSON represents key:value pairs corresponding to field:value pairs in the record we're trying to build. We might not get the key:value pairs in the correct order, so we need the list of record fields passed in so we can insert each value into its correct position in the tuple.
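A hypothetical call, from inside a module that defines -record(foo, {bar}). and has jiffy available:
R = jsonToRecord(foo, record_info(fields, foo), <<"{\"bar\": \"xyz\"}">>).
%% R is now {foo, <<"xyz">>}, i.e. #foo{bar = <<"xyz">>}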
I am Josh from Uganda. I created a fragmented mnesia table (64 fragments) and populated it with up to 9948723 records. Each fragment was of type disc_copies, with two replicas.
Now, searching for a record using qlc (query list comprehensions) was too slow and was returning inaccurate results.
I found out that this overhead comes from qlc using mnesia's select function, which traverses the entire table in order to match records. So I tried something else, shown below.
-define(ACCESS_MOD, mnesia_frag).
-define(DEFAULT_CONTEXT, transaction).
-define(NULL, '_').

-record(address, {tel, zip_code, email}).
-record(person, {name, sex, age, address = #address{}}).

match() -> Z = fun(Spec) -> mnesia:match_object(Spec) end, Z.

match_object(Pattern) ->
    Match = match(),
    mnesia:activity(?DEFAULT_CONTEXT, Match, [Pattern], ?ACCESS_MOD).
Trying this functionality gave me good results. But I found that I have to dynamically build patterns for every search that may be made in my stored procedures.
I decided to go through the havoc of doing this, so I wrote functions which dynamically build wild patterns for my records depending on which parameter is to be searched:
%% This below gives me the default pattern for all searches ::= {person,'_','_','_'}
pattern(Record_name) ->
    N = length(my_record_info(Record_name)) + 1,
    erlang:setelement(1, erlang:make_tuple(N, ?NULL), Record_name).

%% This finds the position of the provided value and places it in that
%% position while keeping '_' in the other positions.
%% The caller function can use this function recursively until
%% it has built the full search pattern of interest.
pattern({Field, Value}, Pattern_sofar) ->
    N = position(Field, my_record_info(element(1, Pattern_sofar))),
    case N of
        -1 -> Pattern_sofar;
        Int when Int >= 1 -> erlang:setelement(N + 1, Pattern_sofar, Value);
        _ -> Pattern_sofar
    end.

my_record_info(Record_name) ->
    case Record_name of
        staff_dynamic -> record_info(fields, staff_dynamic);
        person -> record_info(fields, person);
        _ -> []
    end.

%% These below help locate the position of an element in the list
%% returned by record_info(fields, person)
position(_, []) -> -1;
position(Value, List) ->
    find(lists:member(Value, List), Value, List, 1).

find(false, _, _, _) -> -1;
find(true, V, [V|_], N) -> N;
find(true, V, [_|X], N) ->
    find(V, X, N + 1).

find(V, [V|_], N) -> N;
find(V, [_|X], N) -> find(V, X, N + 1).
This was working very well, though it was computationally intensive.
It could still work even after changing the record definition, since it picks up the new record info at compile time.
The problem is that when I initiate even 25 processes on a 3.0 GHz Pentium 4 processor running WinXP, it hangs and takes a long time to return results.
If I am to use qlc on these fragments, then to get accurate results I have to specify which fragment to search in, like this:
find_person_by_tel(Tel) ->
    select(qlc:q([X || X <- mnesia:table(Frag), (X#person.address)#address.tel == Tel])).

select(Q) ->
    case ?transact(fun() -> qlc:e(Q) end) of
        {atomic, Val} -> Val;
        {aborted, _} = Error -> report_mnesia_event(Error)
    end.
Qlc was returning [] when I searched for something, yet when I use match_object/1 I get accurate results. I found that using match expressions can help:
mnesia:table(Tab, Props).
where Props is a data structure that defines the match expression, the chunk size of the return values, etc.
I got a problem when I tried building match expressions dynamically.
The functions mnesia:read/1 and mnesia:read/2 require that you have the primary key.
Now I am asking myself: how can I efficiently use QLC to search for records in a large fragmented table? Please help.
I know that using the tuple representation of records makes code hard to upgrade. This is why I hate using mnesia:select/1 and mnesia:match_object/1 and want to stick to QLC. Yet QLC is giving me wrong results in my queries from a mnesia table of 64 fragments, even on the same node.
Has anyone ever used QLC to query a fragmented table? Please help.
Do you invoke the qlc in the activity context?
tfn_match(Id) ->
    Search = #person{address = #address{tel = Id, _ = '_'}, _ = '_'},
    trans(fun() -> mnesia:match_object(Search) end).

tfn_qlc(Id) ->
    Q = qlc:q([X || X <- mnesia:table(person), (X#person.address)#address.tel == Id]),
    trans(fun() -> qlc:e(Q) end).

trans(Fun) ->
    try
        Res = mnesia:activity(transaction, Fun, mnesia_frag),
        {atomic, Res}
    catch
        exit:Error ->
            {aborted, Error}
    end.
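A quick way to compare the two from the shell (a sketch; the telephone value is made up):
{atomic, ByMatch} = tfn_match("0772123456"),
{atomic, ByQlc}   = tfn_qlc("0772123456"),
lists:sort(ByMatch) =:= lists:sort(ByQlc).   %% same records either way; ordering may differ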