I have a table in Mnesia and have read that the size limit of a table is only 4 GB, and that to store more data in a single table, the table has to be fragmented. I also noticed that CPU usage is high when using an unfragmented disc_only_copies table, though I'm not sure why.
I wanted to know: will adding more fragments improve Mnesia's performance and reduce the CPU usage, or is fragmentation just a way to store more data in a single table?
You didn't specify what kind of table you use:
disc_only_copies: uses DETS and is limited to 2 GB per table (don't use this!)
ram_copies: stored only in RAM (an ETS table); limited to under 4 GB on 32-bit machines, but much larger tables are possible on 64-bit Erlang VMs, bounded only by available memory
disc_copies: kept in RAM and in a transaction log on disk; the DETS limitations don't apply, but the RAM limitations remain. If you have enough RAM and use a 64-bit VM, you are fine.
For more details, see LYSE on Mnesia table types.
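For reference, here is a minimal sketch of declaring one table of each type (the record and table names are made up for illustration):

    -record(thing, {id, data}).

    create_tables() ->
        %% RAM only (ETS): fast, but no durability.
        {atomic, ok} = mnesia:create_table(thing_ram,
            [{ram_copies, [node()]}, {record_name, thing},
             {attributes, record_info(fields, thing)}]),
        %% RAM plus a transaction log on disk: durable, but must fit in RAM.
        {atomic, ok} = mnesia:create_table(thing_disc,
            [{disc_copies, [node()]}, {record_name, thing},
             {attributes, record_info(fields, thing)}]),
        %% DETS-backed, disk only: subject to the 2 GB DETS limit.
        {atomic, ok} = mnesia:create_table(thing_disc_only,
            [{disc_only_copies, [node()]}, {record_name, thing},
             {attributes, record_info(fields, thing)}]).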
Related
I'd like to use a Neo4j database in a Docker container on an Odroid XU4. The database is not big; approximately 20,000 nodes will be in it. The Odroid has only 2 GB of memory, and I'd like to run a Samba server, some Node.js applications, and at least one PostgreSQL database too, so the system is short on memory. The Neo4j manual says 2 GB of memory is the minimum, but I've seen Docker examples running it with 512 MB, so I am a little confused. What is the minimum memory I can run the Neo4j Docker image with?
I have similar trouble with disk space. The system is on a 32 GB SD card. I'd like to keep the database data there and back up to an external hard drive, so I could spend at most 16 GB on Neo4j. The data certainly doesn't require that much space; I'm not sure why Neo4j needs it (according to the manual, again).
First, you can use http://neo4j.com/hardware-sizing-calculator/ to get a rough estimate of memory and disk usage.
A second option is to do some math; you can use the information on page 12 of http://graphaware.com/assets/bachman-msc-thesis.pdf
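As a back-of-the-envelope illustration (the per-record sizes are the ones commonly cited for Neo4j 2.x store files - roughly 15 B per node and 34 B per relationship - and the relationship count is a made-up assumption; check the thesis for the exact figures):

    20\,000~\text{nodes} \times 15~\text{B} \approx 0.3~\text{MB}
    100\,000~\text{relationships} \times 34~\text{B} \approx 3.4~\text{MB}
    \Rightarrow \text{store size} \approx \text{a few MB} \ll 16~\text{GB}

Even with properties and indexes on top, a 20,000-node graph is tiny on disk; the 2 GB figure in the manual is presumably about comfortable JVM heap and cache sizing rather than a hard floor.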
Keep in mind that for performance reasons it's good to have all the data in memory.
From my point of view you shouldn't have a problem with memory, but you can't expect great performance.
It's better to try it yourself before you ask here ;)
Can I set up a replica set in MongoDB 1.8 using servers with different amounts of RAM?
server1: 5 GB
server2: 2 GB
server3: 4 GB
If yes, what are the pros and cons?
No, you do not need equal RAM. (Yes, you could set up a replica set as described.)
MongoDB uses memory-mapped files for all caching, which means that cache paging is handled by the operating system. The replicas with more memory will keep more of the database in memory; those with less will page more to disk.
MongoDB will eventually bring the entire database into memory if it can. If you're using two replicas for reads and one for writes, you might want to use the 5gb and 4gb machines for reads, so they are more likely to be hitting RAM.
Yes, you can configure a replica set this way.
If yes, what are the pros and cons?
Here's a doc explaining the major features of replica sets. Let's take a look at these in light of the RAM differences.
Pros:
More computers means better data redundancy. Having that 2GB node at least means that you have one more copy of the data.
Having a full three nodes in a replica set makes it easier to take one down for maintenance.
Cons:
Having servers of different sizes isn't great for automated failover. Say your 5 GB server is the primary: what happens when it goes down and the 2 GB server wins the election? You still have automated failover, but your performance has probably dropped dramatically. (You can mitigate this by giving the 2 GB member a priority of 0, so it is never elected primary.)
Read scaling may not work very well. Depending on your read patterns, sending reads to the 2GB server may result in lots of extra disk hits and slower performance.
So the big problem here is really one of performance. If you're just doing this for a dev setup, it will basically work. But in production you run the risk of completely tanking your app: if your app is used to living on 4 GB+ of RAM and suddenly drops to 2 GB, it may become unusable.
Most production setups want to fail over to another "equally-powered" computer.
I'm trying to compare Mnesia with more traditional databases.
As I understand it, tables in Mnesia can be stored as (see Memory consumption in Mnesia):
ram_copies - tables are stored in ETS only, so there is no durability as in ACID.
disc_copies - tables are kept in both ETS and DETS, so a table cannot be bigger than the available memory? And even if the table is fragmented, the database cannot be bigger than the available memory?
disc_only_copies - tables are kept in DETS only, so there is no caching in memory and performance is worse. And the size of a table is limited to the size of a DETS file, unless the table is fragmented.
So if I want the performance of reads from RAM and the durability of writes to disk, the size of the tables is very limited compared to a traditional RDBMS like MySQL or PostgreSQL.
I know that Mnesia isn't meant to replace a traditional RDBMS, but can it be used as a big RDBMS, or do I have to look for another database?
The server I will use is a VPS with a limited amount of memory, around 512 MB, but I want good database performance.
Are disc_copies and the other table types in Mnesia as limited as I have understood? Can't the database be partially in memory with a full copy on disk?
The storage capacity of the Mnesia database for the different types of tables has been discussed in this previous SO question:
What is the storage capacity of a Mnesia database?
where a great answer is already available.
Obviously (but I guess you've already seen it) the official doc is available at:
http://www.erlang.org/doc/man/mnesia.html
Also, reading from the Mnesia FAQ:
11.5 How much data can be stored in Mnesia?
Dets uses 32 bit integers for file offsets, so the largest possible mnesia table (for now) is 4Gb.
In practice your machine will slow to a crawl way before you reach this limit.
Finally, Mnesia tables can be fragmented. This is discussed here and there.
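As a minimal sketch of what activating fragmentation looks like at table creation (the table name, attribute list, and fragment count below are made up for illustration):

    %% Split the table into 8 fragments, each a disc_only_copies
    %% primitive table, so each fragment is its own DETS file.
    mnesia:create_table(big_table,
        [{attributes, [key, value]},
         {frag_properties, [{node_pool, [node()]},
                            {n_fragments, 8},
                            {n_disc_only_copies, 1}]}]).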
These are my 2p.
I'm working on a desktop application that will produce several in-memory datasets as an intermediary before being committed to a database.
Obviously I'm going to try to keep the size of these to a minimum, but are there any guidelines on thresholds I shouldn't cross for good functionality on an 'average' machine?
Thanks for any help.
There is no "average" machine. There is a wide range of still-in-use computers, including those that run DOS/Win3.1/Win9x and have less than 64MB of installed RAM.
If you don't set any minimum hardware requirements for your application, at least consider the oldest OS you're planning to support, and use that OS's official minimum hardware requirements as a lower-bound assessment.
Generally, if your application is going to consume a considerable amount of RAM, you may want to let the user configure the upper bounds of the application's memory management mechanism.
That said, if you decide to dynamically manage the upper bounds based on realtime data, there are quite a few things you can do.
If you're developing a Windows application, you can use WMI to get the system's total amount of memory, and base your limitations on that value (say, use up to 5% of the total memory).
In .NET, if your data structures are complex and you find it hard to assess the amount of memory you consume, you can query the Garbage Collector for the amount of allocated memory using GC.GetTotalMemory(false), or use a System.Diagnostics.Process object.
Some places state 2 GB, period. Some places state it depends on the number of nodes.
Quite large, if your question is "what's the storage capacity of an mnesia database made up of a huge number of disc_only_copies tables?" - you're largely limited by available disk space.
An easier question to answer is: what's the maximum capacity of a single mnesia table of each type? ram_copies tables are limited by available memory. disc_copies tables are limited by their dets backend (Hakan Mattsson on Mnesia) - this limit is 4Gb of data at the moment.
So the simple answer is that a plain disc_copies table can store up to 4Gb of data before it runs into problems. (Mnesia doesn't actually crash if you exceed the on-disk size limit - the ram_copies portion of the table continues running, so you can repair this by deleting data or making other arrangements at runtime.)
However if you consider other mnesia features, then the answer is more complicated.
local_content tables. If the table is a local_content table, then it can have different contents on each node in the mnesia cluster, so the capacity of the table is 4Gb * <number of nodes>.
fragmented tables. Mnesia supports user-configurable table partitioning or sharding using table fragments. In this case you can effectively distribute and redistribute the data in your table over a number of primitive tables. These primitive tables can each have their own configuration - say, one ram_copies table and the rest disc_only_copies tables. The primitive tables have the same size limits as mentioned earlier, so the effective capacity of the fragmented table is 4Gb * <number of fragments>. (Sadly, if you fragment your table, you then have to modify your table access code to use mnesia:activity/4 instead of mnesia:write and friends - see the sketch after this list - but if you plan this in advance it's manageable.)
external copies. If you like living on the extreme bleeding edge, you could apply the mnesiaex patches to mnesia and store your table data in an external system such as Amazon S3 or Tokyo Cabinet. In this case the capacity of the table is limited by the backend storage.
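To illustrate the access-code change mentioned above, here is a minimal sketch (the table name big_table and the helper names are made up; the point is the use of mnesia:activity/4 with the mnesia_frag access module):

    %% Reads and writes against a fragmented table should go through
    %% mnesia:activity/4 with the mnesia_frag access module, so that
    %% each key is hashed to the correct fragment.
    write_rec(Rec) ->
        mnesia:activity(transaction,
                        fun() -> mnesia:write(big_table, Rec, write) end,
                        [], mnesia_frag).

    read_rec(Key) ->
        mnesia:activity(transaction,
                        fun() -> mnesia:read(big_table, Key) end,
                        [], mnesia_frag).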
TL;DR: the storage capacity of a Mnesia database is limited only* by available RAM.
* Assuming you use table types ram_copies or disc_copies. Also, if you store a lot of data in a disc_copies table, it needs to be read from disk at startup, which might increase startup time beyond what's acceptable.
This answer contradicts the two existing answers when it comes to tables of type disc_copies. Let me first get a few general points out of the way:
A mnesia table of type ram_copies is only limited by available RAM (except if you're on a 32-bit machine). Data is stored in an ETS table.
A mnesia table of type disc_only_copies is stored in a Dets table. Dets tables are limited to 2 GB, because of limits in the file format.
The obvious way to circumvent that limit is to create more tables, possibly through table fragmentation.
The schema is also stored in a Dets table, so the information describing all existing tables is also limited to 2 GB. You are likely to run into other limits before you hit that one, though.
A mnesia table of type disc_copies is stored both in RAM and on disk, so it is limited by available RAM - and perhaps something else?
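As an aside, rather than reasoning from documented limits alone, you can inspect a table's actual footprint at runtime. A minimal sketch (the table name is made up; note that memory is reported in words for ram_copies/disc_copies tables, but in bytes for disc_only_copies):

    Size  = mnesia:table_info(my_table, size),    %% number of records
    Words = mnesia:table_info(my_table, memory),  %% allocated memory
    Bytes = Words * erlang:system_info(wordsize).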
I'm going to try to show below that there is no specific limit imposed by Mnesia on the size of a disc_copies table. Note however that many Erlang programmers believe that disc_copies tables are limited to 2 GB. That is stated in the accepted answer to this question, which at the time of writing outscores this answer by a factor of 7.
disc_copies moved from dets to disk_log in 2001
It is commonly believed that disc_copies tables are backed by Dets tables. As far as I can tell, this was the case until Erlang/OTP R7B-4 (released on 30th September 2001). From the README:
-- mnesia -----------------------------------------------------------------
OTP-3712 - Speed/load improvements. disc_copies tables are not implemented with dets anymore.
Look at the diff for more details, in particular mnesia_lib.erl and mnesia_loader.erl.
Sources supporting dets and a 2 / 4 GB limit
archelaus's answer draws from http://erlang.org/~hakan/mnesia_consumption.txt, which explains that disc_copies tables reside in ets and dets tables. However, looking at the index for the directory, we see that this document is dated 1999:
[TXT] mnesia_consumption.txt 26-Oct-1999 10:57 10k
It makes sense that it would say this, as it was written two years before the change.
Ray Boosen's answer draws from the Erlang FAQ:
11.5 How much data can be stored in Mnesia?
Dets uses 32 bit integers for file offsets, so the largest possible mnesia table (for now) is 4Gb.
In practice your machine will slow to a crawl way before you reach this limit.
The FAQ has been saying that since at least January 2001 (see the earliest copy in the Wayback Machine). That means that this FAQ entry dates from before the switch to disk_log, and hasn't been updated for a long time. (Anyway, the Dets table size limit is 2 GB, not 4 GB.) I submitted a pull request for the FAQ.
Sources supporting higher limits
The Learn You Some Erlang chapter on Mnesia says:
ram_copies
This option makes it so all data is stored exclusively in ETS, so memory only. Memory should be limited to a theoretical 4GB (and practically around 3GB) for virtual machines compiled on 32 bits, but this limit is pushed further away on 64 bits virtual machines, assuming there is more than 4GB of memory available.
disc_only_copies
This option means that the data is stored only in DETS. Disc only, and as such the storage is limited to DETS' 2GB limit.
disc_copies
This option means that the data is stored both in ETS and on disk, so both memory and the hard disk. disc_copies tables are not limited by DETS limits, as Mnesia uses a complex system of transaction logs and checkpoints that allow to create a disk-based backup of the table in memory.
I'm not sure when this was written, but the text above exists in the earliest Wayback Machine copy, dated April 2012.
In a post on erlang-questions titled "beating mnesia to death (was RE: Using 4Gb of ram with Erlang VM)", dated 7th November 2005, Ulf Wiger writes:
On a 16 GB machine, you can:
run 6 million simultaneous processes (through use of erlang:hibernate, I was actually able to run 20 million - spawn time: 6.3 us, message passing time: 5.3 us, and I had 1.8 GB to spare.)
populate mnesia with at least 12 GB of data, but think through how you want to represent it, since the 64-bit word size blows things up a bit.
keep a 10 GB+ disc_copy table in mnesia. The load times and log dump cost seem acceptable (10 minutes to load, dumping takes a while but runs in the background quite nicely.)
Conclusions
The confusion seems to stem from missing or outdated information from official sources:
The Mnesia documentation doesn't mention any table size limits
The Erlang FAQ says that Mnesia is subject to a 4 GB Dets size limit, but that entry was written before the dets to disk_log change
The only other document on the erlang.org domain is Håkan Mattsson's document, dating from before the dets to disk_log change
LYSE seems to be the first "authoritative" source that mentions disc_copies tables not being subject to the Dets table size limit.
As per the documentation, this is 4 GB. See section 11.5:
http://erlang.org/faq/mnesia.html