I am using an Mnesia table. This table has two attributes (a primary key and its value).
Now I am trying to delete a tuple from the Mnesia table. I am using the delete/1 function of Mnesia for this purpose. This function takes the table name and the key corresponding to the tuple for which the deletion has to be made. My problem is how to handle the scenario where no tuple corresponding to the passed key is present: this delete function returns {atomic,ok} every time.
In your case you have to read the record first and delete it only after that. To prevent other transactions from accessing the record between the 'read' and 'delete' operations, use the 'write' lock kind when reading the record. That gives your transaction exclusive access to it:
delete_record(Table, Key) ->
    F = fun() ->
            case mnesia:read(Table, Key, write) of
                [Record] ->
                    mnesia:delete({Table, Key}),
                    {ok, Record};
                [] ->
                    mnesia:abort(not_exist)
            end
        end,
    mnesia:transaction(F).
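A hypothetical call then lets the caller tell the two outcomes apart (table, key, and value names here are placeholders):
1> delete_record(mytable, existing_key).
{atomic,{ok,{mytable,existing_key,some_value}}}
2> delete_record(mytable, missing_key).
{aborted,not_exist}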
Problem statement
I have an Mnesia backup file and would like to extract values from it. There are 3 tables (to keep it simple): Employee, Skill, and Attendance. The Mnesia backup file contains all the data from these three tables.
The Employee table is:
Empid (Key)
Name
SkillId
AttendanceId
The Skill table is:
SkillId (Key)
Skill Name
The Attendance table is:
Code (Key)
AttendanceId
Percentage
What I have tried
I have used:
ets:foldl(Fetch, OutputFile, Table)
Fetch: a separate function that traverses each fetched record to bring it into the desired output format.
OutputFile: the file it writes to.
Table: the name of the table.
Expecting
I am getting records with AttendanceId (as this is the key), whereas I want to get Code only. It displays employee information and the attendance id.
Help me out.
Backup and restore are described in the Mnesia User's Guide here.
To read an existing backup without restoring it, use mnesia:traverse_backup/4 (or /6, as below).
1> mnesia:backup(backup_file).
ok
2> Fun = fun(BackupItems, Acc) -> {[], []} end.
#Fun<erl_eval.12.90072148>
3> mnesia:traverse_backup(backup_file, mnesia_backup, [], read_only, Fun, []).
{ok,[]}
Now add something to the Fun to get what you want.
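For example, assuming attendance rows appear in the backup as {attendance, Code, AttendanceId, Percentage} tuples (adjust the pattern to your actual record layout), a Fun that collects only the Code fields might look like this:
Fun = fun(BackupItems, Acc) ->
          %% keep the Code (the key) of every attendance row,
          %% pass all backup items through unchanged
          Codes = [Code || {attendance, Code, _AttId, _Pct} <- BackupItems],
          {BackupItems, Codes ++ Acc}
      end,
mnesia:traverse_backup(backup_file, mnesia_backup, [], read_only, Fun, []).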
I have a table with the following attributes:
SortCode Index Created
SortCode is the primary key and Index is secondary key. Given an Index value, how do I get the associated SortCode value?
I have tried ets:lookup/2, but it takes only the primary key.
There is no such thing as a secondary index in ets. You can:
do a full scan using ets:match or ets:select (see the sketch below), or
make your own reverse-index ets table, or
use mnesia with an added (secondary) index.
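A minimal sketch of the first option, assuming an ets table holding {SortCode, Index, Created} tuples (the table name passed in is up to you):
%% full scan with a match specification: return the first element
%% ('$1', the SortCode) of every tuple whose second field is Index
lookup_by_index(Tab, Index) ->
    ets:select(Tab, [{{'$1', Index, '_'}, [], ['$1']}]).
For example, lookup_by_index(sort_codes, 1) returns all SortCodes stored under Index 1.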
Adding to what Hynek -Pichi- Vychodil has said.
There is no way in ets to fetch a record using some attribute other than the key. It can be done in Mnesia using mnesia:dirty_index_read/3.
If you want to use ets only, you can follow the suggestion above or the following code. Assuming your record pattern is something like {"one", 1, "27092015"}:
the key is "one", but you have to fetch using 1.
%% lists:filter/2 returns a list, so match on a one-element result
FilterSuspCodeFun = fun({_, I, _}) when I == 1 -> true;
                       (_) -> false
                    end,
ListData = ets:tab2list(susp_code),
[{SortCode, _, Created}] = lists:filter(FilterSuspCodeFun, ListData),
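For comparison, the Mnesia route mentioned above keeps a secondary index on the field and reads through it; a minimal sketch, assuming a sort_code record with fields code, index and created (names are illustrative):
-record(sort_code, {code, index, created}).

%% create the table with a secondary index on the index field
create() ->
    mnesia:create_table(sort_code,
        [{attributes, record_info(fields, sort_code)},
         {index, [index]}]).

%% dirty read through the secondary index; returns matching records
by_index(Value) ->
    mnesia:dirty_index_read(sort_code, Value, #sort_code.index).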
I have in my Mnesia database two tables which have this syntax:
-record(person, {firstname, lastname,adress}).
-record(personBackup, {firstname, lastname,adress}).
I want to transfer the data from the table person to the table personBackup.
I think that I should create the two tables with this syntax (I agree with your idea):
mnesia:create_table(person,
    [{disc_copies, [node()]},
     {attributes, record_info(fields, person)}]),
mnesia:create_table(person_backup,
    [{disc_copies, [node()]},
     {attributes, record_info(fields, person)},
     {record_name, person}]),
Now I have a function named verify.
In this function I do a test, and if the test passes I should transfer the data from person to person_backup and then do a reset.
This is my function:
verify(Form) ->
    if Form =:= 40 ->
           %% here I should transfer data from person to person_backup:
           %% read all rows from person and write them into person_backup
           reset();
       Form =/= 40 ->
           io:format("it is ok")
    end.
This is the function reset:
reset() ->
    stop(),
    destroy(),
    create(),
    start(),
    {ok}.
You don't have to use a separate record definition for each table. mnesia:create_table takes a record_name option, so you could create your tables like this:
mnesia:create_table(person,
    [{disc_copies, [node()]},
     {attributes, record_info(fields, person)}]),
mnesia:create_table(person_backup,
    [{disc_copies, [node()]},
     {attributes, record_info(fields, person)},
     {record_name, person}]),
The value for record_name defaults to the name of the table, so there's no need to specify it for person. (I changed personBackup to person_backup, as Erlang atoms are usually written without camel case, unlike variables.)
Then you can put the same kind of records in both tables. Read or select from person, and write to person_backup, no conversion necessary.
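The transfer step from the question can then be a single transaction; a minimal sketch using mnesia:foldl (the function name transfer/0 is illustrative):
%% copy every person record into person_backup; both tables
%% store records of type person, so no conversion is needed
transfer() ->
    F = fun() ->
            mnesia:foldl(fun(Person, Count) ->
                             ok = mnesia:write(person_backup, Person, write),
                             Count + 1
                         end, 0, person)
        end,
    mnesia:transaction(F).
A successful run returns {atomic, N}, where N is the number of records copied.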
I have a record:
-record(bigdata, {mykey,some1,some2}).
Is doing a
mnesia:match_object({bigdata, mykey, some1, '_'})
the fastest way of fetching more than 5000 rows?
Clarification:
Creating "custom" keys is an option (so I can do a read), but is doing 5000 reads faster than match_object on one single key?
I'm curious as to the problem you are solving, how many rows are in the table, etc.; without that information this might not be a relevant answer, but...
If you have a bag, then it might be better to use read/2 on the key and then traverse the list of records being returned. It would be best, if possible, to structure your data to avoid selects and match.
In general select/2 is preferred to match_object as it tends to better avoid full table scans. Also, dirty_select is going to be faster than select/2, assuming you do not need transactional support. And, if you can live with the constraints, Mnesia allows you to go against the underlying ets table directly, which is very fast, but look at the documentation as it is appropriate only in very rarefied situations.
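As a sketch of the select route, assuming the bigdata record from the question and that you want all rows whose some1 field equals a given value:
-record(bigdata, {mykey, some1, some2}).

%% match specification: keep whole records ('$_') whose
%% some1 field equals Value; no guards needed
fetch_by_some1(Value) ->
    MatchSpec = [{#bigdata{some1 = Value, _ = '_'}, [], ['$_']}],
    mnesia:dirty_select(bigdata, MatchSpec).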
Mnesia is more of a key-value storage system, and it will traverse all of its records to find a match.
To fetch in a fast way, you should design the storage structure to directly support the query: make some1 a key or an index, then fetch by read or index_read.
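A minimal sketch of that index approach, reusing the bigdata record from the question:
%% one-off: add a secondary index on the some1 field
add_index() ->
    mnesia:add_table_index(bigdata, some1).

%% then fetch directly through the index, no full scan
by_some1(Value) ->
    mnesia:dirty_index_read(bigdata, Value, #bigdata.some1).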
The statement "fastest way to return more than 5000 rows" depends on the problem in question. What is the database structure? What do we want? What is the record structure? After those, it boils down to how you write your read functions. If we are sure about the primary key, then we use mnesia:read/1 or mnesia:read/2; if not, it's better and more beautiful to use query list comprehensions (QLC). They are more flexible for searching nested records and for complex conditional queries. See the usage below:
-include_lib("stdlib/include/qlc.hrl").
-record(bigdata, {mykey, some1, some2}).

%% query list comprehensions
select(Q) ->
    %% to guard against nested transactions,
    %% and to ensure it also works whether the table
    %% is fragmented or not, we use mnesia:activity/4
    case mnesia:is_transaction() of
        false ->
            F = fun(QH) -> qlc:e(QH) end,
            mnesia:activity(transaction, F, [Q], mnesia_frag);
        true -> qlc:e(Q)
    end.
%% to read by a given field, or even several,
%% you use a list comprehension and pass the guards
%% to filter those records accordingly
read_by_field(some2, Value) ->
    QueryHandle = qlc:q([X || X <- mnesia:table(bigdata),
                              X#bigdata.some2 == Value]),
    select(QueryHandle).
%% selecting by several conditions
read_by_several() ->
    %% you can pass as many guard expressions
    QueryHandle = qlc:q([X || X <- mnesia:table(bigdata),
                              X#bigdata.some2 =< 300,
                              X#bigdata.some1 > 50]),
    select(QueryHandle).
%% it is possible to pass a fun which will do the
%% record selection in the query list comprehension
auto_reader(ValidatorFun) ->
    QueryHandle = qlc:q([X || X <- mnesia:table(bigdata),
                              ValidatorFun(X) == true]),
    select(QueryHandle).

read_using_auto() ->
    F = fun({bigdata, _SomeKey, _, _Some2}) -> true;
           (_) -> false
        end,
    auto_reader(F).
So I think if you want the fastest way, we need more clarification and problem detail. Speed depends on many factors!
I want to create the following schema in Mnesia: three tables, called t1, t2 and t3, each of them storing elements of the following record:
-record(pe, {pid, event}).
I tried creating the tables with:
Attrs = record_info(fields, pe),
Tbls = [t1, t2, t3],
[mnesia:create_table(Tbl, [{attributes, Attrs}]) || Tbl <- Tbls],
and then write some content using the following line (P and E have values):
mnesia:write(t1, #pe{pid=P, event=E}, write)
but I got a bad type error. (Relevant commands were passed to transactions, so it's not a sync problem.)
All the textbook examples of Mnesia show how to create different tables for different records. Can someone please reply with an example for creating different tables for the same record?
Regarding your "DDT" for creating the tables, I don't see any mistake at first sight; just remember that using tables with names different from the record names makes you lose the "simple" commands (like mnesia:write/1), because they use element(1, RecordTuple) to retrieve the table name.
When defining tables, you can use the option {record_name, RecordName} (in your case: {record_name, pe}) to tell Mnesia that the first atom in a tuple representing a record in the table is not the table name, but instead the atom you passed with record_name; so in the case of your table t1 it makes Mnesia expect 'pe' records when inserting or looking up records.
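Applied to the table-creation snippet from the question, that would be (a sketch, reusing the same Attrs and Tbls bindings):
Attrs = record_info(fields, pe),
Tbls = [t1, t2, t3],
[mnesia:create_table(Tbl, [{record_name, pe},
                           {attributes, Attrs}]) || Tbl <- Tbls],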
If you want to insert a record into all tables, you might use a script similar to the one used to create the tables (wrapped in a function for the Mnesia transaction context):
insert_record_in_all_tables(Pid, Event, Tables) ->
    mnesia:transaction(fun() ->
        [mnesia:write(T, #pe{pid = Pid, event = Event}, write) || T <- Tables]
    end).
Hope this helps!