mnesia memory allocation - erlang

I was testing the application by inserting about 1000 users, each with 1000 contacts, into a database table under Mnesia. Part way through the insertion I got the following error:
Crash dump was written to: erl_crash.dump
binary_alloc: Cannot allocate 422879872 bytes of memory (of type "binary").
Aborted
I started the Erlang emulator with erl +MBas af (B = binary allocator, af = a fit) and tried again, but the error was the same.
Note: I am using Erlang R12B, and the machine has 8 GB of RAM running Ubuntu 10.04.
So how can I solve this?
The record definitions are:
%% database
-record(database,{dbid,guid,data}).
%% changelog
-record(changelog,{dbid,timestamp,changelist,type}).
Here data is a vcard (contact info), dbid and type are "contacts", and guid is an integer automatically generated by the server.
The database table contains the vcard data of all users: if there are 1000 users and each user has 1000 contacts, we end up with 10^6 records.
The changelog table records which changes were made to the database table at a given timestamp.
The code for creating the tables is:
mnesia:create_table(database, [{type, bag},
                               {attributes, Record_of_database},
                               {record_name, database},
                               {index, [guid]},
                               {disc_copies, [node()]}])

mnesia:create_table(changelog, [{type, set},
                                {attributes, Record_of_changelog},
                                {record_name, changelog},
                                {index, [timestamp]},
                                {disc_copies, [node()]}])
The code that inserts records into the tables is:
commit_data(DataList = [#database{dbid=DbID}|_]) ->
    io:format("commit data called~n"),
    [mnesia:dirty_write(database, {database, DbId, Guid, Key}) || {database, DbId, Guid, X} <- DataList].

write_changelist(Username, Dbname, Timestamp, ChangeList) ->
    Type = "contacts",
    mnesia:dirty_write(changelog, {changelog, DbID, Timestamp, ChangeList, Type}).

I suppose that the list DataList is huge and should not be sent at once from a remote node; it should be sent in small pieces, for example the client could send the items of DataList one by one as it generates them. Also, because the problem occurs during insertion, we could parallelise the list comprehension, e.g. with a parallel map where the insertion of each item is done in a separate process. Then, something is still wrong with the list comprehension itself: the variable Key is unbound and the variable X is unused. Otherwise, the entire methodology probably needs a change. Let's see what others think. Thanks.
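
As a rough sketch of the batching idea (commit_in_batches/1, safe_split/2 and the batch size of 1000 are my own, not from the question), the list could be written in chunks inside Mnesia transactions instead of all at once:

%% Sketch: write DataList in batches instead of in one huge pass.
commit_in_batches(DataList) ->
    commit_in_batches(DataList, 1000).

commit_in_batches([], _BatchSize) ->
    ok;
commit_in_batches(DataList, BatchSize) ->
    {Batch, Rest} = safe_split(BatchSize, DataList),
    {atomic, ok} = mnesia:transaction(
        fun() ->
            lists:foreach(fun(Record) -> mnesia:write(database, Record, write) end, Batch)
        end),
    commit_in_batches(Rest, BatchSize).

%% lists:split/2 crashes if the list is shorter than N, so guard it.
safe_split(N, List) when length(List) =< N -> {List, []};
safe_split(N, List) -> lists:split(N, List).

Each transaction then only has to hold one batch at a time, and the same batching can be applied on the sending side so the remote node never ships the whole list in a single message.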

This error normally occurs when the ERTS memory allocator responsible for the binary heap, called binary_alloc, cannot allocate any more memory. Check the current binary heap size using erlang:system_info/1, erlang:memory/0 or erlang:memory(binary). If the binary heap size is huge, run erlang:garbage_collect() to free all unreferenced binary objects in the binary heap. This will free the memory.
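
For example (an illustrative shell session; the figures are invented), you can inspect the binary memory and then force a collection in every process, since erlang:garbage_collect/0 only collects the calling process:

1> erlang:memory(binary).    %% bytes currently held for binary data
423114872
2> [erlang:garbage_collect(P) || P <- erlang:processes()], ok.
ok
3> erlang:memory(binary).    %% unreferenced binaries have now been released
1048576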

In case you use long strings (a string is just a list in Erlang) for the vcard or somewhere else, they consume a lot of memory.
If this is the case, change them to binaries to reduce the memory usage (use list_to_binary before inserting into Mnesia).
This may not be helpful, because I don't know your data structure (type, length and so on)...
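
For illustration (assuming the vcard is currently held as a flat Latin-1 string; store_contact/3 is a made-up helper using the record from the question):

%% A string costs at least two machine words (16 bytes on a 64-bit VM) per
%% character on the process heap, while a binary costs roughly one byte per
%% character, and large binaries live off-heap.
store_contact(DbId, Guid, VcardString) ->
    Record = #database{dbid = DbId, guid = Guid, data = list_to_binary(VcardString)},
    mnesia:dirty_write(database, Record).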

Related

In how many ways can we create a PS file on Mainframes?

How many ways can we create a PS file using JCL?
z/OS supports a couple of different data set organizations, DSORGs for short. The data set organizations are:
Physical Sequential, DSORG=PS
Partitioned Organized, DSORG=PO. There are two subtypes: PDS and PDS/E
Direct Access, DSORG=DA
Virtual Storage Access Method (the name is irritating, it is a DISK data set), DSORG=VS. There are a couple of subtypes.
The creation of a new data set is called allocation. Allocation is ultimately performed by calling internal system services. That means you can code an assembler or C program to offer the allocation of a new data set. Therefore, the number of ways is unlimited.
However, the following ways are offered by z/OS, so you don't need to write a program on your own.
1. JCL
You can allocate a new PS, PO, or DA data set with JCL keywords on a DD statement. The main indicator that you want to allocate a new data set is DISP=(NEW,...). (Any other DISP= option allocates an existing data set.) The keyword DSORG= determines the type of data set. If it is omitted, the keyword SPACE= determines whether a PS or a PDS is allocated: if SPACE= specifies primary, secondary, and directory space amounts, a PDS is allocated, otherwise a PS.
With limitations, you can also allocate a new VSAM data set with JCL keywords.
2. TSO/E
You can use the TSO/E ALLOC command to do (almost) the same as with JCL DD statement keywords (see above). Again, the keyword DISP(NEW,...) is used to allocate a new data set.
3. ISPF
You can use ISPF panels in a TSO/E session to allocate a new data set.
4. Program IDCAMS
You can run program IDCAMS in a batch job to allocate a new data set. This is the preferred method to allocate VSAM data sets.
Other
There are other possibilities, depending on the tools that are available at your installation.
Note that members of PDS and PDS/E data sets are not allocated. Once a PDS or PDS/E has been allocated, members can be created, but we do not say "allocate a member". A new allocation is tied to reserving some disk space for the data set; members make use of that space.
There is more behind allocation; too much to write it all down here.

Talend- Memory issues. Working with big files

Before the admins start eating me alive, I would like to say in my defense that I cannot comment on the original posts because I do not have the reputation, therefore I have to ask about this again.
I have issues running a job in Talend (Open Studio for Big Data). I have an archive of 3 GB. I do not consider that too much, since I have a computer with 32 GB of RAM.
While trying to run my job, first I got an error related to heap memory, then it changed to a garbage collector error, and now it doesn't even give me an error (it just does nothing and then stops).
I found these solutions:
a) Talend performance
#Kailash commented that parallelization is only available if I am subscribed to one of the Talend Platform solutions. My comment/question: so is there no other similar option to parallelize a job with a 3 GB archive?
b) Talend 10 GB input and lookup out of memory error
#54l3d mentioned that one option is to split the lookup file into manageable chunks (maybe 500 MB), then perform the join in several stages, one for each chunk. My comment/cry for help/question: how can I do that? I do not know how to split the lookup; can someone explain this to me a bit more graphically?
c) How to push a big file data in talend?
Just to mention that I also went through "c", but I don't have any comment about it.
The job I am performing (thanks to #iMezouar) looks like this:
1) I have an inputFile MySQLInput coming from a DB in MySQL (3GB)
2) I used tFirstRows to make it easier for the process (not working)
3) I used tSplitRow to transform the data from many similar columns to only one column.
4) MySQLOutput
Thanks again for reading me and double thanks for answering.
From what I understand, your query returns a lot of data (3GB), and that is causing an error in your job. I suggest the following :
1. Filter data on the database side: replace tSampleRow by a WHERE clause in your tMysqlInput component in order to retrieve fewer rows in Talend.
2. The MySQL JDBC driver by default retrieves all data into memory, so you need to use the stream option in tMysqlInput's advanced settings in order to stream rows.

eheap allocated in erlang

I use recon_alloc:memory(allocated_types) and get info like below.
34> recon_alloc:memory(allocated_types).
[{binary_alloc,1546650440},
{driver_alloc,21504840},
{eheap_alloc,28704768840},
{ets_alloc,526938952},
{fix_alloc,145359688},
{ll_alloc,403701800},
{sl_alloc,688968},
{std_alloc,67633992},
{temp_alloc,21504840}]
The eheap_alloc is using 28 GB. But summing up the heap_size of all processes:
>lists:sum([begin {_, X}=process_info(P, heap_size), X end || P <- processes()]).
683197586
Only 683M! Any idea where the 28G is?
You are not comparing the right values. From erlang:process_info
{heap_size, Size}
Size is the size in words of youngest heap generation of the
process. This generation currently include the stack of the process.
This information is highly implementation dependent, and may change if
the implementation change.
recon_alloc:memory(allocated_types) reports bytes by default; you can change the unit with set_unit. It is not the memory that is currently used, but the memory reserved by the VM, grouped into the different allocators. You can use recon_alloc:memory(used) instead. More details in allocator() - Recon Library.
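
For example (an illustrative shell session; the figures are invented), you can compare the reserved and the actually used memory in gigabytes:

1> recon_alloc:set_unit(gigabyte).
ok
2> recon_alloc:memory(allocated).   %% memory reserved by the VM's allocators
29.3
3> recon_alloc:memory(used).        %% memory actually in use
9.8

The gap between the two is memory the allocators have already carved out from the OS but are not currently using.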
Searching through the Erlang source code for the eheap_alloc keyword I didn't come up with much. The most relevant piece of code was this XML code from erts_alloc.xml (https://github.com/erlang/otp/blob/172e812c491680fbb175f56f7604d4098cdc9de4/erts/doc/src/erts_alloc.xml#L46):
<tag><c>eheap_alloc</c></tag>
<item>Allocator used for Erlang heap data, such as Erlang process heaps.</item>
This says that process heaps are stored in eheap_alloc, but it doesn't say what else is stored there. The eheap_alloc stores everything your application needs to run, along with some additional space, so the VM doesn't have to request more memory from the OS every time something needs to be added. There are also things the VM must keep in memory that aren't associated with a specific process. For example, large binaries, even though they may be used within a process, are not stored inside that process's heap; they are stored in a shared binary heap called binary_alloc. The binary heap, along with the process heaps and some extra memory, is what makes up eheap_alloc.
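
A quick way to see this for large binaries (an illustrative shell session; the sizes are examples) is to create a big binary and note that it shows up under erlang:memory(binary) rather than in the owning process's heap_size:

1> Before = erlang:memory(binary).
1242880
2> B = binary:copy(<<0>>, 100 * 1024 * 1024), ok.   %% build a 100 MB binary
ok
3> erlang:memory(binary) - Before.                  %% roughly 100 MB more in binary_alloc
104858392
4> process_info(self(), heap_size).                 %% the shell process heap stays small (in words)
{heap_size,2586}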
In your case it looks like you have a lot of memory in your binary_alloc. binary_alloc is probably using a significant portion of your eheap_alloc.
For more details on binary handling checkout these pages:
http://blog.bugsense.com/post/74179424069/erlang-binary-garbage-collection-a-love-hate
http://www.erlang.org/doc/efficiency_guide/binaryhandling.html#id65224

Sorting 20GB of data

In the past I had to work with big files, somewhere in the 0.1-3 GB range. Not all the 'columns' were needed, so it was OK to fit the remaining data in RAM.
Now I have to work with files in the 1-20 GB range, and they will probably grow as time passes. That is totally different, because you cannot fit the data in RAM anymore.
My file contains several million 'entries' (I have found one with 30 million entries). One entry consists of about 10 'columns': one string (50-1000 Unicode chars) and several numbers. I have to sort the data by 'column' and show it. For the user only the top entries (1-30%) are relevant; the rest is low-quality data.
So, I need some suggestions about which direction to head in. I definitely don't want to put the data in a DB, because databases are hard to install and configure for non-computer-savvy persons. I like to deliver a monolithic program.
Showing the data is not difficult at all. But sorting... without loading the data into RAM, on regular PCs (2-6 GB of RAM)... will kill some good hours.
I was looking a bit into MMFs (memory-mapped files), but this article from Danny Thorpe shows that they may not be suitable: http://dannythorpe.com/2004/03/19/the-hidden-costs-of-memory-mapped-files/
So, I was thinking about loading into RAM only the data from the column that has to be sorted, plus a pointer to the address (in the disk file) of the 'entry'. I sort the 'column', then I use the pointer to find the entry corresponding to each column cell and restore the entry. The 'restoration' will be written directly to disk, so no additional RAM will be required.
PS: I am looking for a solution that works with both Lazarus and Delphi, because Lazarus (actually FPC) has 64-bit support for the Mac. 64 bit means more RAM available = faster sorting.
I think the way to go is merge sort; it's a great algorithm for sorting a large number of fixed-size records with limited memory (see the sketch after the list below).
General idea:
read N lines from the input file (a value that allows you to keep the lines in memory)
sort these lines and write the sorted lines to file 1
repeat with the next N lines to obtain file 2
...
you reach the end of the input file and you now have M files (each of which is sorted)
merge these files into a single file (you'll have to do this in steps as well)
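
As a rough illustration of that chunk-then-merge scheme, here is a sketch written in Erlang only because that is the language used elsewhere on this page (the question itself targets Delphi/Lazarus). It assumes one newline-terminated, byte-oriented record per line; the module name, temporary file names and pairwise merging strategy are all my own choices:

-module(extsort).
-export([sort_file/3]).

%% sort_file("in.txt", "out.txt", 100000) sorts in.txt line by line using at
%% most ChunkSize lines of memory at a time.
sort_file(InFile, OutFile, ChunkSize) ->
    ChunkFiles = split_into_sorted_chunks(InFile, ChunkSize),
    merge_all(ChunkFiles, OutFile).

%% Pass 1: read ChunkSize lines at a time, sort them in memory, and write each
%% sorted chunk to its own temporary file.
split_into_sorted_chunks(InFile, ChunkSize) ->
    {ok, In} = file:open(InFile, [read, raw, read_ahead]),
    Chunks = read_chunks(In, ChunkSize, 0, []),
    ok = file:close(In),
    Chunks.

read_chunks(In, ChunkSize, N, Acc) ->
    case read_lines(In, ChunkSize, []) of
        []    -> lists:reverse(Acc);
        Lines ->
            Name = "chunk_" ++ integer_to_list(N) ++ ".tmp",
            ok = file:write_file(Name, lists:sort(Lines)),
            read_chunks(In, ChunkSize, N + 1, [Name | Acc])
    end.

read_lines(_In, 0, Acc) -> lists:reverse(Acc);
read_lines(In, N, Acc) ->
    case file:read_line(In) of
        {ok, Line} -> read_lines(In, N - 1, [Line | Acc]);
        eof        -> lists:reverse(Acc)
    end.

%% Pass 2: merge the sorted chunk files two at a time until one file is left.
merge_all([], OutFile) ->
    ok = file:write_file(OutFile, <<>>);
merge_all([Single], OutFile) ->
    {ok, _} = file:copy(Single, OutFile),
    ok = file:delete(Single);
merge_all([A, B | Rest], OutFile) ->
    Merged = A ++ ".m",
    ok = merge_two(A, B, Merged),
    ok = file:delete(A),
    ok = file:delete(B),
    merge_all(Rest ++ [Merged], OutFile).

merge_two(F1, F2, OutName) ->
    {ok, In1} = file:open(F1, [read, raw, read_ahead]),
    {ok, In2} = file:open(F2, [read, raw, read_ahead]),
    {ok, Out} = file:open(OutName, [write, raw, delayed_write]),
    merge_loop(read1(In1), read1(In2), In1, In2, Out),
    ok = file:close(In1),
    ok = file:close(In2),
    ok = file:close(Out).

read1(In) ->
    case file:read_line(In) of
        {ok, Line} -> Line;
        eof        -> eof
    end.

merge_loop(eof, eof, _In1, _In2, _Out) -> ok;
merge_loop(L1, eof, In1, In2, Out) ->
    ok = file:write(Out, L1),
    merge_loop(read1(In1), eof, In1, In2, Out);
merge_loop(eof, L2, In1, In2, Out) ->
    ok = file:write(Out, L2),
    merge_loop(eof, read1(In2), In1, In2, Out);
merge_loop(L1, L2, In1, In2, Out) when L1 =< L2 ->
    ok = file:write(Out, L1),
    merge_loop(read1(In1), L2, In1, In2, Out);
merge_loop(L1, L2, In1, In2, Out) ->
    ok = file:write(Out, L2),
    merge_loop(L1, read1(In2), In1, In2, Out).

The same structure translates directly to Delphi/FPC: read N lines, sort them in memory, write a temporary file, then repeatedly merge pairs of sorted temporary files while reading them line by line.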
You could also consider a solution based on an embedded database, e.g. Firebird Embedded: it works well with Delphi/Windows and you only have to add some DLLs to your program folder (I'm not sure about Lazarus/OSX).
If you only need a fraction of the whole data, scan the file sequentially and keep only the entries needed for display. For instance, let's say you need only 300 entries out of 1 million. Read the first 300 entries in the file and sort them in memory. Then, for each remaining entry, check whether it is lower than the lowest one in memory, and if so skip it. If it is higher than the lowest entry in memory, insert it into the correct place inside the 300 and throw away the lowest, which makes the second lowest the new lowest. Repeat until the end of the file.
Really, there are no sorting algorithms that can make moving 30gb of randomly sorted data fast.
If you need to sort in multiple ways, the trick is not to move the data itself at all, but instead to create an index for each column that you need to sort.
I do it like that with files that are also tens of gigabytes long, and users can sort, scroll and search the data without noticing that it's a huge dataset they're working with.
Please find here a class which sorts a file using a slightly optimized merge sort. I wrote it a couple of years ago for fun. It uses a skip list for sorting files in memory.
Edit: The forum is German and you have to register (for free). It's safe, but requires a bit of knowledge of German.
If you cannot fit the data into main memory then you are into the realms of external sorting. Typically this involves external merge sort. Sort smaller chunks of the data in memory, one by one, and write back to disk. And then merge these chunks.

Neo4j throws java heap exception while creating relationships

I did a bulk upload into Neo4j of two different files, say file "A" (containing 10,000 records) and file "B" (containing 9,000 records).
Now I have a third file, say file "C", having 10 million records (rows).
File "C" describes the relationships between file "A" and file "B".
When the processing starts for file "C", it throws a Java heap size exception. I have 4 GB of RAM and increased the heap size up to 3 GB. However, if I reduce the size of file "C" to 2 million records, it works fine.
I am using Neo4j version 1.9.
Please suggest why this is so, and how to solve it.
Thanks in advance :-)
Are you doing this with Neo4j's normal API, or with the bulk inserter? I'm assuming the normal API, and I'm assuming you're doing everything in one transaction? Either use the bulk inserter, or break up your transactions, as transactions are kept in memory until they are flushed to disk on commit, which is most likely what is causing your heap errors.
