How to partition a single Neo4j database?

Can a single Neo4j database be divided up so that there are multiple starting points in one database, allowing all queries to be isolated, instead of having multiple databases?
I have thought about this and I think it can work up to a point, but once labels are used the idea breaks down, as a label query will always span the whole database.
Anyway I would like to know if anyone has successfully done this and how they did it.

What you are describing sounds like multitenancy. Neo4j (2.0.1 at the time of writing) does not support multitenancy as a built-in feature, but there are various methods and strategies for implementing a multitenant architecture within your Neo4j database instance.
You can partition sets of your property graph by label. Since nodes can have multiple labels, you can label one partition with a unique identifying label for that partition.
Please refer to documentation on labels here: http://docs.neo4j.org/chunked/milestone/graphdb-neo4j-labels.html
With this strategy, you must ensure that every Cypher query includes the partition's identifying label, so that the partitions stay isolated from one another within the graph. It is equally important that relationships from one partition never span into another partition.
For example, partition 1 could be the label Partition1. Assuming your application context is operating on Partition1:
MERGE (user:User:Partition1 { name: 'Peter' })
RETURN user
Assuming your application context is operating on Partition2:
MERGE (user:User:Partition2 { name: 'Peter' })
RETURN user
When executing these two queries, two separate Peters are created for Partition1 and Partition2.
You'll just need to ensure that your application appends the label of the partition it is operating on to each of your queries. While this is tedious, it is the suggested way to approach multitenancy at this time.
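For example, a read in the context of Partition1 would carry the same label (a minimal sketch reusing the names from the example above):
MATCH (user:User:Partition1 { name: 'Peter' })
RETURN user
Leaving the partition label off such a query would silently return matching nodes from every tenant, which is exactly the isolation failure this strategy is meant to prevent.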

Related

Neo4J - Which is better to store element as a property of user or as a node & relationship?

I have a problem designing a graph model with a million users. I need to store whether each user is registered or not.
As I see it, we have 2 options:
Store a property "registered = true/false" on each user node. With 1 million users, we then have 1 million "registered" properties.
Store a single Registered node and create a relationship to it for each registered user. The number of relationships then equals exactly the number of registered users.
Which option is better for search performance, and which needs the least storage?
Thanks in advance,
There is rarely a single right way to model your data as a graph. Typically, when it comes to NoSQL databases, the most important thing to consider is how you will be using your data, and to model it based on that.
Using the external node might run into performance problems, as Neo4j typically starts to struggle during traversal as a single node approaches around 10,000 relationships. A lone "Registered" node would be well above that limit; on the other hand, as long as you are not anchoring your search on that node, it should be okay.
No matter which route you go, the query you described in the comments will likely anchor on (start with) the user, traverse to their friends, and then for each friend check whether it:
A. has the "registered" property set to true, or
B. has a relationship to the "Registered" node.
Each of these checks appears to have a similar execution time, and indexing on the "registered" property will have negligible impact because it is not being used as an anchor (presumably; you would have to PROFILE your query with both methods to know for sure). So, as you mentioned, you might let storage constraints decide.
Besides that, there is not much difference from a performance analysis perspective between the two methods that I can see.
A third option, mentioned by @InverseFalcon, is to add a label, :Registered, to those nodes that are registered. This may well be faster to check than a property, as labels are inlined in the node store and can be tested there, whereas reading a property can require an additional level of indirection into the property store.
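As a rough sketch of that third option (the :User label and the FRIEND relationship type are assumptions for illustration, not from the question), marking a user and then finding their registered friends could look like:
MATCH (u:User { name: 'Peter' })
SET u:Registered

MATCH (u:User { name: 'Peter' })-[:FRIEND]-(f:User:Registered)
RETURN f
The label test in the second query happens in the node store during the traversal, with no detour into the property store.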

Is there a race condition when creating unique paths?

I recently discovered that a race condition exists when executing concurrent MERGE statements. Specifically, duplicate nodes can be created in the scenario where a node is created after the MATCH step but before the CREATE step of a given MERGE.
This can be worked around in some instances using unique constraints on the merged nodes; however, that falls short in scenarios where:
there is no single unique property to enforce (e.g. a pair of properties needs to be unique while the individual ones don't), or
you are merging relationships and paths.
Does using CREATE UNIQUE solve this problem (or do the same pitfalls exist)? If so, is it the only option? It feels like the usefulness of MERGE is fairly heavily diminished when it effectively can't guarantee the uniqueness of the path or node being merged...
When MERGE statements are executed concurrently, these situations can occur. Basically, each transaction gets a view of the graph at its first point of reading and won't see updates made after that point (with some variations). The main exception is uniquely constrained nodes, where Neo4j will initialise a fresh reader from the index when reading, regardless of what was previously read in the transaction.
A workaround could be to create a 'dummy' property and put a unique constraint on it for one of the node labels. As of Neo4j 2.2.5, this should work to get around your problem.
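A minimal sketch of that workaround (the merge_key property name and its concatenated value format are made up for illustration; nothing here is prescribed by Neo4j):
CREATE CONSTRAINT ON (u:User) ASSERT u.merge_key IS UNIQUE;

MERGE (u:User { merge_key: 'Peter|1980-01-01' })
ON CREATE SET u.first_name = 'Peter', u.born = '1980-01-01'
RETURN u
Because the concatenated pair of properties is uniquely constrained, the constraint's index-backed read path keeps concurrent MERGE statements from creating duplicates.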

Are there Label properties - Is there a Neo4j schema for Labels?

How can I ensure that all nodes with a given label have some common properties?
For example, I want every node with the label "Person" to have a "name" property, but I could make a mistake when writing the property name ("namee", for example).
There is no such mechanism built into Neo4j today (the current version at the time of writing is 2.1.6). What you are describing is a schema of sorts (compare e.g. DDL in an RDBMS), and Neo4j is basically schema-free. This type of structural integrity is quite often handled in the application layer for NoSQL databases.
The only schema operations that are available today for Neo4j are described here.
Currently they include:
Unique - e.g. CREATE CONSTRAINT ON (p:Person) ASSERT p.name IS UNIQUE
Indexes - create an index on a label e.g. CREATE INDEX ON :Person(name)
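Until richer constraints arrive, one application-layer safeguard is a periodic audit query that flags nodes missing the expected property (a sketch; has() is the Cypher 2.x predicate for property existence):
MATCH (p:Person)
WHERE NOT has(p.name)
RETURN p
A node accidentally created with a misspelled property ("namee") would surface here, since its correctly spelled "name" would be absent.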
A comment on this answer from Michael Hunger who is part of team behind Neo4j indicates that more constraints will be available for Neo4j in the future. Furthermore, Michael points to the following alternatives:
Take a look at Structr, a layer above Neo4j that among other things enforces a stricter schema (check the schema docs here)
SylvaDB, an easy-to-use layer above Neo4j that also has schema support.
In addition to this, FrobberOfBits pointed to the tool NeoProfiler, which contains a number of profilers, most of which run very simple Cypher queries against your database and provide summary statistics. Some profilers will actually discover data in your graph and then spawn other profilers to run later. For example, if a label called "Person" is discovered in the data, a label profiler is added to the run queue to inspect the population of nodes with that label.

Creating nodes and relationships at the same time in neo4j

I am trying to build a database in Neo4j with a structure that contains seven different types of nodes, in total around 4-5000 nodes, and between them around 40000 relationships. The Cypher code I am currently using first creates the nodes:
Create (node1:type {name:'example1', type:'example2'})
Around 4000 statements like that, each creating a unique node.
Then I've got the relationships, stated as such:
Create
(node1)-[:r]->(node51),
(node2)-[:r]->(node5),
(node3)-[:r]->(node2);
Around 40000 of such unique relationships.
With smaller-scale graphs this has not been any problem at all. But with this one, the "executing query" message never stops loading.
Any suggestions on how I can make this type of query work? Or what should I do instead?
Edit: What I'm trying to build is a big graph of a product, with its releases, release versions, features etc., in the same way as the Movie graph example is built.
The product has about 6 releases in total, and each release has around 20 release versions. In total there are 371 features, and for those 371 features there are also 438 feature versions. Every release version (120 in total) then has around 2-300 feature versions each. These feature versions are mapped to their feature, which has dependencies towards a little bit of everything in the db. I have also involved HW dependencies, such as the possible HW to run these features on, releases, etc. So basically I'm using Cypher code such as:
Create (Product1:Product {name:'ABC', type:'Product'})
Create (Release1:Release {name:'12A', type:'Release'})
Create (Release2:Release {name:'13A', type:'Release'})
Create (ReleaseVersion1:ReleaseVersion {name:'12.0.1', type:'ReleaseVersion'})
Create (ReleaseVersion2:ReleaseVersion {name:'12.0.2', type:'ReleaseVersion'})
and below those I've structured them using
Create (Product1)<-[:Is_Version_Of]-(Release1),
(Product1)<-[:Is_Version_Of]-(Release2),
(Release2)<-[:Is_Version_Of]-(ReleaseVersion21),
All the way down to features, and then I've also added dependencies between them such as:
(Feature1)-[:Requires]->(Feature239),
(Feature239)-[:Requires]->(Feature51);
Since I had to gather all this information from many different Excel sheets etc., I wrote the code this way, thinking I could just put it together in one mass Cypher query and run it in the browser on localhost. That worked really well as long as I did not use more than 4-5000 statements at a time; then it created the entire database in at most 5-10 seconds. But now that I'm trying to run around 45000 statements at once, it has been running for almost 24 hours and still says "executing query...". I wonder if there is any way I can improve the time it takes. Will the database eventually be created, or can I add some smarter indexes or do other things to improve the performance? The way my Cypher is written now, I cannot divide it into pieces, since everything in the database has some sort of connection to the product. Do I need to rewrite the code, or is there any smooth way around this?
You can create multiple nodes and relationships interlinked with a single create statement, like this:
create (a { name: "foo" })-[:HELLO]->(b {name : "bar"}),
(c {name: "Baz"})-[:GOODBYE]->(d {name:"Quux"});
So that's one approach, rather than creating each node individually with a single statement, then each relationship with a single statement.
You can also create multiple relationships from objects by matching first, then creating:
match (a {name: "foo"}), (d {name:"Quux"}) create (a)-[:BLAH]->(d);
Of course you could have multiple match clauses, and multiple create clauses there.
You might try matching a given type of node and then creating all the necessary relationships from that type of node. You have enough relationships that this is going to take many queries. Make sure you've indexed the property you're using to match the nodes; as your DB gets big, that's going to be important for fast lookup of the nodes you're creating new relationships from.
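For example, with the labels from the question (the feature names here are placeholders), the index and relationship creation could look like:
CREATE INDEX ON :Feature(name);

MATCH (f1:Feature { name: 'Feature1' }), (f2:Feature { name: 'Feature239' })
CREATE (f1)-[:Requires]->(f2);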
You haven't specified which query you're running that isn't "stopping loading". Update your question with specifics, and let us know what you've tried, and maybe it's possible to help.
If you have one of the nodes already created then a simple approach would be:
MATCH (n: user {uid: "1"}) CREATE (n) -[r: posted]-> (p: post {pid: "42", title: "Good Night", msg: "Have a nice and peaceful sleep.", author: n.uid});
Here the user node already exists, and you have created a new relationship and a new post node.
Another interesting approach might be to generate your statements directly in Excel, see http://blog.bruggen.com/2013/05/reloading-my-beergraph-using-in-graph.html?view=sidebar for an example. You can run a lot of CREATE statements in one transaction, so this should not be overly complicated.
If you're able to use the Neo4j 2.1 prerelease milestones, then you should try using the new LOAD CSV and PERIODIC COMMIT features. They are designed for just this kind of use case.
LOAD CSV allows you to describe the structure of your data with one or more Cypher patterns, while providing the values in CSV to avoid duplication.
PERIODIC COMMIT can help make large imports more reliable and also improve performance by reducing the amount of memory that is needed.
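A minimal sketch of that approach, assuming a file relationships.csv with source and target columns holding feature names (both the file name and the column names are made up for illustration):
USING PERIODIC COMMIT 1000
LOAD CSV WITH HEADERS FROM "file:///relationships.csv" AS row
MATCH (a:Feature { name: row.source }), (b:Feature { name: row.target })
CREATE (a)-[:Requires]->(b);
Combined with an index on :Feature(name), this turns one enormous 45000-statement script into a stream of small, periodically committed transactions.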
It is possible to use a single Cypher query to create a new node as well as relate it to an existing node.
As an example, assume you're starting with:
an existing "One" node which has an "id" property "1"
And your goal is to:
create a second node, let's call that "Two", and it should have a property id:"2"
relate the two nodes together
You could achieve that goal using a single Cypher query like this:
MATCH (one:One {id:'1'})
CREATE (one) -[:RELATED_TO]-> (two:Two {id:'2'})

spring data neo4j 3.0.0 - why two labels set by default

I need to write a batch importing utility for my Neo4j database, but I don't want to lose the repository features of SDN. To achieve this goal, I want to insert nodes in such a way that they can still be queried using the auto-generated repository methods.
I inserted some nodes into my database and looked at their properties and labels to see how they are set, and I noticed that SDN-inserted nodes have two labels. For example, nodes representing the class SomeClass have the labels ["_SomeClass", "SomeClass"]. My question is: why set two almost identical labels on each node?
Oh, that's actually simple. We have to record somewhere that the current node is of the concrete type SomeClass, which we do by prepending the "_". Because labels are also added for each super-type, you need a way to differentiate what the actual type of the node is in Spring Data Neo4j.
So you could have: _Developer, Developer, Employee, Person for a class hierarchy from Person down to Developer. And then there could be additional labels for interfaces.
When you now call DeveloperRepository.findAll(), you only want the nodes labeled _Developer back, not ones that merely derive from Developer.
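Conceptually (the exact statement SDN generates may differ), findAll() on the DeveloperRepository filters on the underscore-prefixed label:
MATCH (n:`_Developer`) RETURN n
whereas matching on :Developer alone would also return any subtypes of Developer, since every node carries the labels of its whole super-type hierarchy.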
