How much does the architecture of data affect the speed of a query - neo4j

I have the following nodes and relationships in Neo4j database.
The grey and the pink nodes are further connected to more nodes. Running the following query:
MATCH (n:RealNode {gid:'$obj_id'})-[:CONTAINS*..3]-(z)
RETURN DISTINCT ID(z), z.id, n.id AS InternalID
I get a result very fast (the node n:RealNode is not one of the nodes in the image).
If I increase the depth to 4 like:
MATCH (n:RealNode {gid:'$obj_id'})-[:CONTAINS*..4]-(z)
RETURN DISTINCT ID(z), z.id, n.id AS InternalID
The response gets extremely slow, and I never get a response at all with depth 5 or more.
Depth 4 is exactly where the relationship between the blue and the pink node is reached. So my question is: can the architecture of the data (in this case) affect query speed to such a degree? If so, what should I do?
I have also tried running the query with parameters, but the result was the same. The gid property of n:RealNode is indexed.

The architecture of your data has a huge, no... massive impact on query performance. There's a lot you can do to improve performance by reformulating your query, but you can do even more by changing your data model.
The model needs to be chosen in a way that's an accurate depiction of the real-world domain, but it often also has to make certain concessions to usage patterns. If you know you're going to do certain queries over and over, it makes sense to choose a data model that makes it easy for the DBMS to answer that query. In the RDBMS world, that entire line of thinking gets summarized in the word "denormalization". In graph databases, the concept is the same but the way you go about it is different.
The thing to keep in mind when adjusting your data model is that neo4j traverses relationships very fast, and that, as with all queries, the less data you have to consider, the faster the query will go.
So in your case, I don't know how many nodes branch off of each node via a :CONTAINS relationship, but I'm guessing that at each level of the hierarchy you have many items below it. Going from level 4 to level 5 probably doesn't just add a fixed number of additional nodes: if each level of the hierarchy has, say, 3x the number of nodes as the level above, then the deeper you go, the more you multiply how much data you have to consider. If it's 10x... then ouch.
You have many different options. One is to create short-cut relationships, and "pre-materialize" the query. Imagine creating :grandfather and :greatgrandfather relationships to "hop" levels of the tree. That would make it faster. Another way would be to filter intermediate nodes, or the return nodes, so that you're not considering everything, but some subset.
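As a hedged sketch of the shortcut idea (the :CONTAINS_2 relationship type is made up for illustration), you could pre-materialize two-hop containment and then traverse fewer, coarser hops:
// Pre-materialize a shortcut for every two-hop :CONTAINS path.
MATCH (a)-[:CONTAINS*2]->(c)
MERGE (a)-[:CONTAINS_2]->(c);
// Depth 4 now needs at most 2 hops instead of 4.
MATCH (n:RealNode {gid: $obj_id})-[:CONTAINS|CONTAINS_2*..2]-(z)
RETURN DISTINCT ID(z), z.id, n.id AS InternalID;
The trade-off is that every write to the hierarchy must now also maintain the shortcut relationships.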
In the end, really huge queries will always take longer than really small ones. Begin with a careful understanding of what data you want and how often you have to run this query. I would not optimize your data model for infrequently run queries, but if you run this one all the time, you should look at your options. Your query, to me, looks like it's going to return a whole lot of data no matter what you do.

Related

Is there a definite answer for the performance of 1st level depth query search in Neo4j (or any other graph db) vs. SQL/NoSQL?

I am testing the use of Neo4j for a social-like graph, but I also have many use cases that require 1st-level depth queries (e.g. get my likes/views). Consequently, I want to decide whether I need another SQL/NoSQL store to support this type of query (in terms of performance).
Up until now, I was only able to find benchmarks and quantitative data concerning searches deeper than the 1st level (i.e. friends of my friends...).
Is it common knowledge that a SQL/NoSQL db will have better performance for such queries? Are there any research results/benchmarks about this?
A depth of 1 shouldn't result in much difference in performance, I'd think. However, that does depend on two factors: the indexes you've set up, and the depth of expansion.
For both Neo4j and your relational db, you would want a supporting index on the starting node in the graph (the person or post whose likes/views you want to get). For the relational db, you would also want an index to support the join operation being used to get at the connected nodes.
For Neo4j, the expansion to the connected nodes is directly proportional to the number of nodes you are expanding to, since this is just pointer chasing between the nodes and relationships forming your graph. No indexes are used for that.
For a relational database, the relationship would likely be modeled as a table join (which should be index-backed), and that cost is proportional to the size of the tables being joined. So as more data is added (regardless of whether it is connected to the user you are querying for), it will impact your execution time.
Thankfully for your case only a single table join would be needed. You may not see a big difference between a graph db and a relational db. Neo4j tends to shine when many (possibly an unbounded number) of traversals are needed, like the friend-of-a-friend queries or those with longer patterns. If your use cases include longer patterns, especially if the types of the node expanded to are not known ahead of time, then Neo4j would be very helpful, especially as the data in your database grows, since traversal performance is proportional only to the directly connected data, not the total number of nodes of the given labels.
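For concreteness, a minimal sketch of the depth-1 case above, assuming illustrative :Person/:Post labels and a :LIKES type (index syntax from the 3.x era):
// Index the starting node's lookup property.
CREATE INDEX ON :Person(id);
// Depth-1 expansion: one index lookup, then pure pointer chasing.
MATCH (me:Person {id: $userId})-[:LIKES]->(post:Post)
RETURN post;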

When should inferred relationships and nodes be used over explicit ones?

I was looking up how to utilise temporary relationships in Neo4j when I came across this question: Cypher temp relationship
and the comment underneath it made me wonder when they should be used; since no one argued against it, I thought I would bring it up here.
I come from a mainly SQL background and my main reason for using virtual relationships was to eliminate duplicated data and do traversals to get properties of something instead.
For a more specific example, let's say we have a robust cake recipe, which has sugar as an ingredient. The sugar is what makes the cake sweet.
Now imagine a use case where I don't like sweet cakes so I want to get all the ingredients of the recipe that make the cake sweet and possibly remove them or find alternatives.
Then there's another use case where I just want foods that are sweet. I could work backwards from the sweet ingredients to get to the food, or just store that the cake is sweet in general, which saves traversal time and makes the query easier. However, as I mentioned before, this duplicates data that can be inferred.
Sorry if the example is too strange, I suck at making them. I hope the main question comes across, though.
My feeling is that the only valid scenario for creating redundant "shortcut" relationships is this:
Your use case has a stringent time constraint (e.g., average query time must be less than 200ms), but your neo4j query -- despite optimization -- exceeds that constraint, and you have verified that adding "shortcut" relationships will indeed make the response time acceptable.
You should be aware that adding redundant "shortcut" relationships comes with its own costs:
Queries that modify the DB would need to be more complex (to keep the redundant relationships in sync) and also slower (see the sketch after this list).
You'd always have to add the redundant relationships, even if you never actually need some (most?) of them.
If you want to make concurrent updates to the DB, the chances that you may lose some updates and introduce inconsistencies into the DB would increase -- meaning that you'd have to work even harder to avoid inconsistencies.
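To make the first cost concrete, here is a minimal sketch, assuming hypothetical :Recipe/:Ingredient labels and an IS_SWEET_BECAUSE_OF shortcut type: deleting a base relationship must now also clean up the shortcut.
// Without the shortcut, this would be a single DELETE.
MATCH (r:Recipe {name: $recipe})-[h:HAS_INGREDIENT]->(i:Ingredient {name: 'sugar'})
DELETE h
WITH r, i
// The redundant shortcut must be kept consistent by hand.
OPTIONAL MATCH (r)-[s:IS_SWEET_BECAUSE_OF]->(i)
DELETE s;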
NOTE: For visualization purposes, you can use virtual nodes and relationships, which are temporary and not actually stored in the DB.
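For that visualization-only case, a sketch using APOC's virtual relationships (this assumes the APOC library is installed; the labels and the TASTES type are again hypothetical):
MATCH (r:Recipe)-[:HAS_INGREDIENT]->(i:Ingredient {taste: 'sweet'})
// TASTES exists only in the query result, never in the store.
RETURN r, i, apoc.create.vRelationship(r, 'TASTES', {quality: 'sweet'}, i) AS viz;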

(neo4j) Best practice for the number of properties in relationships and nodes

I've just started using neo4j and, having done a few experiments, am ready to start organizing the database itself. I've therefore begun by designing a basic diagram (on paper) and ran into the following question:
Most examples in the material I'm using (Cypher and neo4j tutorials) present only a few properties per relationship/node, but I have to wonder what the cost of carrying a long list of properties is.
Q: Is it more efficient to favor a wide variety of relationship types (GOODFRIENDS_WITH, FRIENDS_WITH, ACQUAINTANCE, RIVAL, ENEMIES, etc.) or fewer types with varying properties (a single SEES_AS with type: good friend, friend, acquaintance, rival, enemy, etc.)?
The same holds for nodes. The first draft of my diagram has a staggering number of properties (title, first name, second name, first surname, second surname, suffix, nickname, and then there are physical characteristics, personality, age, jobs...), and I'm worried this may lower the performance of the db. Of course some nodes won't need all of the properties, but the basic ones will still be quite a few.
Q: What is the actual, and the advisable, limit for the number of properties, in both nodes and relationships?
FYI, I am going to remake my draft so as to reduce the number of properties by using nodes instead (create a node for family names, another for jobs, and so on), but I've only just started thinking it over, as I'll need to carefully analyse which 'would-be properties' make sense to keep, especially since the change will multiply the number of relationship types I'll be dealing with.
Background information:
1) I'm using neo4j to map out all relationships between the people living in a fictional small town. The queries I'll perform will mostly be as follows:
a. find all possible paths between 2 (or more) characters
b. find all locations which 2 (or more) characters frequent
c. find all characters which have certain types of relationship (friends, cousins, neighbors, etc) to character X
d. find all characters with the same age (or similar age) who studied in the same school
e. find all characters with the same age / first name / surname / hair color / height / hobby / job / temper (easy to anger) / ...
and variations of the above.
2) I'm not a programmer, but having taught myself HTML and advanced Excel, I feel confident I'll pick up the intuitive Cypher quickly enough.
First off, for small "sandbox" use, this is a moot point. Even with the most inefficient data layout, as long as you avoid Cartesian products and the like, the only thing you will notice is how intuitive your data is to you. So if this is a "toy"-scale project, just focus on what makes the most organizational sense to you. If you change your mind later, reformatting via Cypher won't be too hard.
Now assuming this is a business project that needs to scale to some degree, remember that non-indexed properties are essentially invisible to the Cypher planner. The more meaningful and diverse your relationships, the better the Cypher planner will be at finding your data quickly. Favor relationships for connections you want to be able to explore, and favor properties for data you just want to see. Index any properties, or use labels, that will be key for finding a particular node (or set of nodes) in your queries.
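To illustrate the relationship-versus-property trade-off from the question (the :Person label and property names are illustrative): a distinct relationship type lets the planner expand only the edges you care about, while a property filter forces it to load and test every edge of the shared type.
// Distinct type: only GOODFRIENDS_WITH relationships are expanded.
MATCH (me:Person {name: $name})-[:GOODFRIENDS_WITH]->(f)
RETURN f;
// One generic type: every SEES_AS relationship is loaded, then filtered.
MATCH (me:Person {name: $name})-[r:SEES_AS]->(f)
WHERE r.type = 'good friend'
RETURN f;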

Neo4J using properties on relationships for quicker lookup?

I am currently trying to use neo4j to perform a complex query (similar to a shortest-path search, except that I have unusual conditions on this search, like a minimum path length in terms of the number of nodes traversed).
My dataset contains around 2.5M nodes of one single type and around 1.5 billion edges (also of a single type). Each node has on average 1000 directed relationships to a "next" node.
I do have a query that retrieves this shortest path under all of my conditions, but the only way I found to get a decent response time (under one second) is to limit the number of results after each new node added to the path, filter them, order them, and then proceed to the next node (a kind of greedy algorithm, I suppose).
I'd like to limit them far less than I do in order to yield more paths as a result, but the problem is the exponential complexity of this search, which makes going from LIMIT 40 to LIMIT 60 typically a matter of x10 ~ x100 in processing time.
That said, I am currently evaluating several solutions to speed up the request, but I'm quite unsure of the results they will yield, as I don't know how neo4j really stores my data internally.
The solution I'm considering is to add a property to my relationships: an integer between 1 and 15. I will usually query only the relationships that have at most one or two distinct values for this property (for example, only relationships whose property is 8 or 9).
As far as I can tell, for each relationship neo4j currently has to fetch the target node's properties and use them to apply my filters, which takes a very long time when crossing 4-node-long paths with 1000 relationships each (I guess O(1000^4)). Am I right?
With relationship properties, will it have direct access to them without further data fetching? Is there any chance this will make my queries faster? How are neo4j edge properties stored?
UPDATE
Following @logisima's advice, I wrote a procedure directly with the Java traversal API of neo4j. I then switched to the raw Java procedure API of Neo4j to get even more power and flexibility, as my use case required it.
The results are really good: the lower-bound complexity is overall a little less than it was before, but the upper bound is about ten times faster, and when at least some of the nodes used for the traversal are in Neo4j's cache, the performance becomes astonishing (depth 20 in less than a second for one of my tests, when I usually only need depth 4).
But that's not all. The procedure approach is very easily customisable while keeping performance at its best and letting me optimize every single operation. The result is that I can use far more powerful filters in far less computing time and can easily update my procedure to add new features. Last but not least, procedures are very easy to plug into spring-data for neo4j (which I use to connect neo4j to my HTTP API), whereas with Cypher I would have had to auto-generate the queries (they were so complex that it took about 30 Java classes to do properly) and use JDBC for neo4j, with a separate connection pool just for this request. I cannot recommend the awesome neo4j Java API enough.
Thanks again @logisima
If you're trying to implement a custom shortest-path algorithm, you should write a Cypher procedure using the traversal API.
The principle of Cypher is pattern matching, whereas you want to traverse the graph in a specific way to find your solution.
The response time should be much faster for your use case!
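Once deployed, such a procedure is called from Cypher like a built-in one. A sketch, where the procedure name, arguments, and yielded columns are hypothetical:
// Invoke the custom traversal-based procedure from Cypher.
CALL custom.constrainedShortestPath($startId, $endId, {minLength: 4})
YIELD path, cost
RETURN path, cost
ORDER BY cost ASC
LIMIT 10;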

One Node to Gather Them All

Say I am managing collectibles. I have thousands of baseball trading cards, thousands still of gaming cards (think Magic: the Gathering), and then thousands and thousands of doilies.
The part of me that's been steeped in relational databases for 20+ years is uncomfortable with the idea of thousands of Neo4J nodes floating out in space.
So I am inclined to gather them all with a node such as (:BASEBALL_CARDS), (:MTG_CARDS), and of course (:DOILIES). The idea is that these are singletons.
Now if I want all baseball cards that perhaps refer to a certain player, I could do something like:
(:BASEBALL_CARDS)-[:GATHERS]->(:BASEBALL_CARD)-[:FEATURES]->(p:PLAYER {name: '...'})
It's very comforting to have the :BASEBALL_CARDS singleton, but does it do anything more than could be accomplished by indexing :BASEBALL_CARD?
(:BASEBALL_CARD)-[:FEATURES]->(p:PLAYER {name: '...'})
Is it best-practice to have thousands of free-ranging nodes?
One exceptionally strong point of the graph database is the local query: the relationship lives in the instance, not in the type. A particular challenge (apart from modelling well) is determining the starting point of the local query (and keeping it local, i.e., avoiding path explosions).
In Neo4j 1.x your One Node was a way to achieve a starting point for a certain kind of query. With 2.x and the introduction of labels, indexing :BaseballCard is the standard way to accomplish the same. If the purpose of that One Node is as a starting point for the kind of query in your example, then you are better off using a label index.
A common problem in 1.x was that a node with an increasing number of relationships of the same type and direction eventually became a bottleneck for traversals. People started partitioning such a One Node into A Paged Handful of Nodes, something like
(:BaseballCards)-[:GATHERS]->(:BaseballCards1to10000)-[:GATHERS]->(:BaseballCard)
The purpose of finding a starting point for the local query is often better served by labels, perhaps in combination with a basic, ordinary, local traversal, than by A Handful of Nodes. Then again, if it calms your nerves or satisfies your sense of the epic to have such a node, by all means have it. Because of the locality of queries, it will do you no harm.
In your example, however, neither the One Node nor an index on :BaseballCard would best serve as the starting point of the local query. The most particular pattern of interest is instead the name of the player. If you index (:Player) on name you will get the best starting point. The traversal across the one or handful* of [:FEATURES] relationships is very cheap and with a simple test on the other end for the :BaseballCard label, you are done. You could of course maintain the One Node for all players that share a name...
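A sketch of that advice, using the 2.x-era index syntax the answer refers to (parameter and property names are illustrative):
// Index players by name so the query has a cheap starting point.
CREATE INDEX ON :Player(name);
// Start at the player, hop the few :FEATURES relationships,
// and keep only baseball cards.
MATCH (p:Player {name: $name})<-[:FEATURES]-(c:BaseballCard)
RETURN c;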
In my most humble opinion there is little need for discomfort. I do, however, want to affirm and commend your unease in this one regard: the graph is most powerful for connected data. The particular connection gathering the baseball cards doesn't seem to add new understanding or improve performance, but wherever there is disconnected data there is the potential for discovering exciting and meaningful patterns. Perhaps in the future the cards will be connected through patterns that signify their range of value, or the quality of their lamination, or a linked list of previous owners, or how well they work as conversation starters on a date. The absence of relationships is a call to find that One Missing Link that brings tremendous insight and value into your data.
* Handful, assuming that more than one baseball card features the same player, or some baseball players are also featured on cards of Magic: The Gathering. I'm illiterate in both domains, so I want to at least allow for the possibility.
It is ironic that you are concerned about nodes "floating out in space", when the whole idea behind graph DBs is making the connections between nodes a first class DB construct.
But I think your actual concern is that nodes do not "belong to a table" (in relational DB parlance). So, you would feel more comfortable creating a special singleton node that in some sense takes the place of a table, from which you can access all the nodes that ought to belong to that table.
A node label can be seen as the equivalent of a "table name". So, not only is there no need for you to also create a singleton "table node"; doing so would waste DB resources and complicate and slow down your queries. Neo4j can quickly access all the nodes with the same label.
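For instance, a label scan gives you everything a singleton "gathering" node would, with no extra relationships to maintain (a trivial sketch using the question's own label):
// All the baseball cards, straight from the label.
MATCH (c:BASEBALL_CARD)
RETURN count(c);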
