Number of triples in Jena Fuseki - jena

I have to upload 3 billion triples, but the maximum would seem to be 1.7 billion: https://www.w3.org/wiki/LargeTripleStores#Jena_TDB_.281.7B.29
Is that correct? What is the maximum number of triples I can load into Fuseki?

(FYI: that page is not kept up to date, as far as I can see.)
3 billion triples is possible with TDB2.
There isn't a specific hard limit - the system just gets slower.
The database will be very big (it is data dependent and I don't have figures).
There are a number of bulk loaders for TDB2. The best choice depends on hardware.
In the next Apache Jena release, there is one specifically designed for large loads when loading to rotating disk on modest hardware (called colloquially "xloader").
Ask on the users@jena.apache.org mailing list to talk to others who have worked with large datasets.
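For completeness, here is a minimal sketch of loading into TDB2 through the Java API; the store location and file name are hypothetical, and for billions of triples the command-line bulk loaders mentioned above will be far faster than a transactional load like this:

    import org.apache.jena.query.Dataset;
    import org.apache.jena.riot.RDFDataMgr;
    import org.apache.jena.system.Txn;
    import org.apache.jena.tdb2.TDB2Factory;

    public class Tdb2LoadSketch {
        public static void main(String[] args) {
            // Hypothetical on-disk location for the TDB2 store.
            Dataset dataset = TDB2Factory.connectDataset("/data/tdb2-store");

            // Load a (hypothetical) gzipped N-Triples file inside a write
            // transaction. Fine for small files; use the bulk loaders for
            // billion-triple loads.
            Txn.executeWrite(dataset, () -> RDFDataMgr.read(dataset, "triples.nt.gz"));

            dataset.close();
        }
    }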

Related

ios: large number of database records - possible?

I may need to process a large number of database records, stored locally on an iPad, using Swift. I've yet to choose the database (but likely SQLite unless suggestions point elsewhere). The records could number up to 700,000 and would need to be processed by adding up totals, working out percentages, etc.
Is this even possible on an iPad with limited processing power? Is there a software storage limit? Is the iPad up to processing such large amounts of data on the fly?
Another option may be to split the data into smaller chunks of around 30,000 records and work with those. Even then I am not sure it's a practical thing to attempt.
Any advice on how, or if, to approach this and what limitations may apply?
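For what it's worth, totals and percentages don't require holding all 700,000 records in memory: they can be pushed into the database itself (SELECT SUM(...), COUNT(...) in SQLite) or computed in a single streaming pass over a cursor. The pattern is language-agnostic; here is a minimal sketch in Java for consistency with the other examples on this page (the record type and source are hypothetical):

    import java.util.Iterator;

    // One pass, constant memory: the total record count doesn't matter.
    public class StreamingTotals {

        // Hypothetical record type; in practice this comes from a DB cursor.
        record Sale(double amount) {}

        static void aggregate(Iterator<Sale> rows) {
            double total = 0;
            long count = 0;
            while (rows.hasNext()) {     // only one record in memory at a time
                total += rows.next().amount();
                count++;
            }
            double average = (count == 0) ? 0 : total / count;
            System.out.printf("count=%d total=%.2f average=%.2f%n", count, total, average);
        }
    }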

Neo4j sharding aspect

I was looking on the scalability of Neo4j, and read a document written by David Montag in January 2013.
Concerning sharding, he said the first release of 2014 would come with a first solution.
Does anyone know if it was done or its status if not?
Thanks!
Disclosure: I'm working as VP Product for Neo Technology, the sponsor of the Neo4j open source graph database.
Now that we've just released Neo4j 2.0 (actually 2.0.1 today!) we are embarking on a 2.1 release that is mostly oriented around (even more) performance & scalability. This will increase the upper limits of the graph to an effectively unlimited number of entities, and improve various other things.
Let me set some context first, and then answer your question.
As you probably saw from the paper, Neo4j's current horizontal-scaling architecture allows read scaling, with writes all going to master and fanning out. This gets you effectively unlimited read scaling, and into the tens of thousands of writes per second.
Practically speaking, there are production Neo4j customers (including Snap Interactive and Glassdoor) with around a billion people in their social graph... in all cases behind an active and heavily-hit web site, handled by comparatively modest Neo4j clusters (no more than 5 instances). So that's one key feature: the Neo4j of today has incredible computational density, and so we regularly see fairly small clusters handling substantially large production workloads... with very fast response times.
More on the current architecture can be found here: www.neotechnology.com/neo4j-scales-for-the-enterprise/
And a list of customers (which includes companies like Wal-Mart and eBay) can be found here: neotechnology.com/customers/ One of the world's largest parcel delivery carriers uses Neo4j to route all of their packages, in real time, with peaks of 3000 routing operations per second, and zero downtime. (This arguably is the world's largest and most mission-critical use of a graph database and of a NOSQL database; though unfortunately I can't say who it is.)
So in one sense the tl;dr is that if you're not yet as big as Wal-Mart or eBay, then you're probably ok. That oversimplifies it only a bit. There is the 1% of cases where you have sustained transactional write workloads into the 100s of thousands per second. However even in those cases it's often not the right thing to load all of that data into the real-time graph. We usually advise people to do some aggregation or filtering, and bring only the more important things into the graph. Intuit gave a good talk about this. They filter a billion B2B transactions into a much smaller number of aggregate monthly transaction relationships with aggregated counts and currency amounts by direction.
Enter sharding... Sharding has gained a lot of popularity these days. This is largely thanks to the other three categories of NOSQL, where joins are an anti-pattern and most queries involve reading or writing just a single piece of discrete data. Just as joining is an anti-pattern for key-value stores and document databases, sharding is an anti-pattern for graph databases. What I mean by that is: the very best performance will occur when all of your data is available in memory on a single instance, because hopping back and forth all over the network whenever you're reading and writing will slow things down significantly, unless you've been really, really smart about how you distribute your data... and even then. Our approach has been twofold:
Do as many smart things as possible in order to support extremely high read & write volumes without having to resort to sharding. This gets you the best and most predictable latency and efficiency. In other words: if we can be good enough to support your requirement without sharding, that will always be the best approach. The link above describes some of these tricks, including the deployment pattern that lets you shard your data in memory without having to shard it on disk (a trick we call cache-sharding). There are other tricks along similar lines, and more coming down the pike...
Add a secondary architecture pattern into Neo4j that does support sharding. Why do this if sharding is best avoided? As more people find more uses for graphs, and data volumes continue to increase, we think eventually it will be an important and inevitable thing. This would allow you to run all of Facebook for example, in one Neo4j cluster (a pretty huge one)... not just the social part of the graph, which we can handle today. We've already done a lot of work on this, and have an architecture developed that we believe balances the many considerations. This is a multi-year effort, and while we could very easily release a version of Neo4j that shards naively (that would no doubt be really popular), we probably won't do that. We want to do it right, which amounts to rocket science.
TL;DR: With 2018 just days away, Neo4j still does not support sharding as it is typically understood.
Details: Neo4j still requires all data to fit on a single node. The node's contents can be replicated within a cluster, but actual sharding is not part of the picture.
When Neo4j talks of sharding, they are referring to caching portions of the database in memory: different slices are cached on different replicated nodes. That differs from, say, MySQL sharding, in which each node contains only a portion of the total data.
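To make that distinction concrete, here is a minimal sketch of the cache-sharding idea: every replica holds the full database, but a consistent routing rule sends related queries to the same replica, so each replica keeps a different slice of the graph warm in memory. The replica addresses and the routing key are made up for illustration:

    import java.util.List;

    // Cache sharding: the data is NOT partitioned; only the cache is.
    // Routing the same key to the same replica keeps its cache hot for
    // that slice of the graph.
    public class CacheShardRouter {
        private final List<String> replicas;

        public CacheShardRouter(List<String> replicas) {
            this.replicas = replicas;
        }

        // All queries about the same user land on the same replica.
        public String replicaFor(long userId) {
            return replicas.get((int) Math.floorMod(userId, (long) replicas.size()));
        }

        public static void main(String[] args) {
            CacheShardRouter router = new CacheShardRouter(
                    List.of("neo4j-a:7474", "neo4j-b:7474", "neo4j-c:7474"));
            System.out.println(router.replicaFor(42L)); // always the same instance
        }
    }

Contrast this with MySQL-style sharding, where the row for user 42 physically exists on only one node.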
Here is a summary of their "take" on scalability (their product term is "High Availability"): https://neo4j.com/blog/neo4j-scalability-infographic/
Note that High Availability is not the same as scalability: they do not actually support the latter in the traditional sense of the term.

What is the status on Neo4j's horizontal scalability project Rassilon?

Just wondering if anyone has any information on the status of project Rassilon, Neo4j's side project which focuses on improving horizontal scalability of Neo4j?
It was first announced in January 2013 here.
I'm particularly interested in knowing more about when the graph size limitation will be removed and when sharding across clusters will become available.
The node & relationship limits are going away in 2.1, which is the next release post 2.0 (which now has a release candidate).
Rassilon is definitely still in the mix. That said, that work is not taking precedence over things like the significant bundle of new features that are in 2.0. The reason is that Neo4j as it stands today is extremely capable of scaling, using the variety of architecture features outlined below (with some live examples):
www.neotechnology.com/neo4j-scales-for-the-enterprise/
There's lots of cleverness in the current architecture that allows the graph to perform and scale well without sharding, because once you start sharding, you are destined to traverse over the network, which is a bad thing (for latency, query predictability, etc.). So while there are some extremely large graphs that, largely for write-throughput reasons, must trade off performance for sheer scale (by sharding), the happy thing is that most graphs don't require this compromise. Sharding is required only in the 1% case, which means that nearly everyone can have their cake and eat it too. There are currently Neo4j clusters in production at customers with 1B+ individuals in their graph, backing web applications with tens of millions of users, using comparatively small (but very fast, very efficient) clusters. To give you some idea of the kind of price-performance we regularly see: we've had users tell us that a single Neo4j instance could do the same work as 10 Oracle instances, only faster.
A well-tuned Neo4j cluster can support upwards of 10K transactional writes per second, and an arbitrarily high number of reads per second. Read throughput scales linearly as instances are elastically plugged in. Cache sharding is a design pattern that ensures that you don't have to keep the entire graph in memory.

System trading applications - caching historical data in local database?

Most trading applications receive a datafeed from commercial providers such as IQFeed, or from brokerages that support a trading API. Is there merit in storing it in a local database? Intraday datafeed is just massive in size, and the database would grow very quickly with 1-minute data for just 50 stocks, never mind tick-by-tick data. I suspect this would be a nightmare for database backup and may impact performance.
If you get historical data in text files on DVD or online, then storing it in the database is the only logical choice, but would it be still a good idea if you get it through API?
It's all about storage space, really. You can definitely do it through the API, but make sure you don't do it using the same application that is doing the automated trading for you.
As you said, tick data is pretty much out of the question; 1-minute data would mean approximately 400 bars/day per symbol, or about 20,000 bars/day for 50 symbols.
The storage requirement can be estimated from that: if you are storing OHLC bars, each bar can be held in four values of type int (plus a timestamp).
As the other answer pointed out, performance may become an issue with more and more symbols, but it shouldn't be a problem with 50 symbols on 1-minute bars.
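As a back-of-the-envelope check on those numbers (the 400 bars/day and 50 symbols come from the answer above; the 64-bit timestamp per bar and ~252 trading days per year are assumptions):

    // Rough storage estimate for 1-minute OHLC bars.
    public class BarStorageEstimate {
        public static void main(String[] args) {
            int barsPerDayPerSymbol = 400;
            int symbols = 50;
            int bytesPerBar = 4 * Integer.BYTES + Long.BYTES; // OHLC + timestamp = 24 bytes
            long bytesPerDay = (long) barsPerDayPerSymbol * symbols * bytesPerBar;
            long bytesPerYear = bytesPerDay * 252;            // assumed trading days/year
            System.out.printf("per day:  %,d bytes (~%d KB)%n", bytesPerDay, bytesPerDay / 1024);
            System.out.printf("per year: %,d bytes (~%d MB)%n", bytesPerYear, bytesPerYear / (1024 * 1024));
        }
    }

So even a year of 1-minute bars for 50 symbols is on the order of 100 MB, which is trivial for any modern database.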
This is a performance question. If the API is fast enough, then use that. If it's not and caching will help, then cache it. Only your application and your usage patterns can tell you whether those conditions hold.

breadth-first-search on huge graph with little ram

I currently have a graph that has about 10 million nodes and 35 million edges. For now the complete graph is loaded into memory at program start. This takes a couple of minutes (it is Java after all) and needs about half a gigabyte of RAM. For now it runs on a machine with a dual core processor and 4 gigabytes of RAM.
When the graph is searched using a breadth-first-search the memory usage rises to a peak of one gigabyte and it takes ten seconds on average.
I would like to deploy the program on a couple of computers. The functionality apart from the graph search requires very few resources. My target system is very small and has only 512 megabytes of RAM.
Any suggestions on how to implement a method (probably using a database) to search that graph without consuming too much memory? The program is idle most of the time as it is accessing a hardware device, so the path-finding could take about 5 minutes max for the mentioned graph...
Thanks for any thoughts thrown in my direction.
UPDATE:
Just found neo4j. Anybody knows if it would be suitable for this kind of humongous graph?
Your question is a little vague, but in general, a good strategy that mostly follows breadth-first semantics while using only as much memory as depth-first search is iterative deepening. The idea is that you do a depth-first search limited to 1 level at first; if that fails to find a solution, start from scratch and limit it to 2 levels; if that fails, try 3 levels, and so on.
This may seem a bit redundant at first, but since you're doing a depth-first search, you keep far fewer nodes in memory, and you always search one level less than a straightforward breadth-first search would. Since the number of nodes per level grows exponentially, on larger graphs it's very likely that saving that one last level more than pays for redundantly re-searching all the preceding levels.
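A minimal sketch of the idea (the adjacency-map representation and names are illustrative; on a graph with cycles you would also track the nodes on the current path to avoid re-walking them):

    import java.util.List;
    import java.util.Map;

    // Iterative deepening: repeated depth-limited DFS. Memory use is
    // proportional to the current depth limit, not to the frontier size
    // as in plain BFS.
    public class IterativeDeepening {

        // True if target is reachable from start within maxDepth levels.
        static boolean iddfs(Map<Integer, List<Integer>> graph,
                             int start, int target, int maxDepth) {
            for (int limit = 0; limit <= maxDepth; limit++) {
                if (dls(graph, start, target, limit)) {
                    return true; // found at the shallowest possible depth
                }
            }
            return false;
        }

        // Depth-limited DFS; the stack holds at most 'limit' nodes.
        static boolean dls(Map<Integer, List<Integer>> graph,
                           int node, int target, int limit) {
            if (node == target) return true;
            if (limit == 0) return false;
            for (int next : graph.getOrDefault(node, List.of())) {
                if (dls(graph, next, target, limit - 1)) return true;
            }
            return false;
        }

        public static void main(String[] args) {
            Map<Integer, List<Integer>> g = Map.of(
                    1, List.of(2, 3),
                    2, List.of(4),
                    3, List.of(4),
                    4, List.of());
            System.out.println(iddfs(g, 1, 4, 5)); // true
        }
    }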
I would say that Neo4j is definitely a good way to go when you have a decent-sized graph such as this. Not only does it have built-in BFS algorithms, it also persists your data on disk, thus reducing your start-up time.
Check this out on highscalability.com: NEO4J - A GRAPH DATABASE THAT KICKS BUTTOX
I've used Neo4j and their documentation is very good, and they provide some nice getting started examples, which really do take just a few minutes to get going.
Check out their "Getting started in 10 minutes" guide.
Neo4j stores data on disk as a graph; the data is persisted, and you can access it using the graph traversal API (BFS, DFS, A*, Dijkstra, ...) or the Cypher query language.
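For a feel of that traversal API, here is a minimal sketch against the embedded Java API of the Neo4j 3.x era (the store path, relationship type, start node, and depth limit are made up for illustration):

    import java.io.File;

    import org.neo4j.graphdb.Direction;
    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.Path;
    import org.neo4j.graphdb.RelationshipType;
    import org.neo4j.graphdb.Transaction;
    import org.neo4j.graphdb.factory.GraphDatabaseFactory;
    import org.neo4j.graphdb.traversal.Evaluators;
    import org.neo4j.graphdb.traversal.TraversalDescription;

    public class BfsTraversalSketch {
        // Hypothetical relationship type for the edges of the graph.
        enum RelTypes implements RelationshipType { CONNECTED }

        public static void main(String[] args) {
            GraphDatabaseService db =
                    new GraphDatabaseFactory().newEmbeddedDatabase(new File("/data/graph.db"));
            try (Transaction tx = db.beginTx()) {
                Node start = db.getNodeById(0); // hypothetical start node
                TraversalDescription bfs = db.traversalDescription()
                        .breadthFirst()                                 // BFS order
                        .relationships(RelTypes.CONNECTED, Direction.OUTGOING)
                        .evaluator(Evaluators.toDepth(4));              // depth limit
                for (Path path : bfs.traverse(start)) {
                    System.out.println(path);
                }
                tx.success();
            }
            db.shutdown();
        }
    }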
