I want to do something in Neo4j that I hope will work well: I want to make "fuzzy" path matches, where the links sometimes count as a relationship and sometimes don't, depending on the query.
Here's an example: let's say I have a (p:Person)-[:HAS]->(n:Name). A search has found a Person (say, by phone number). I want to go from this Person to other Persons with similar names, to get their phone numbers. Also, I want the similarity to be adjustable, so the user might ask to match very similar names, or not very similar names.
I could get the first person's name and then do a search against other names with some Lucene patterns - this is easy enough, but it means doing a full Lucene search on the Name values, which in my use case is not ideal, as I think it might be a bit slow (there are very many names - let's say a billion, remembering this is just an example). I hope there is a better way.
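For what it's worth, here's a rough sketch of what that Lucene-based lookup might look like, assuming a legacy full-text index called "names" on a name property of the Name nodes (the index name and the fuzzy similarity value are just illustrative):

START n2=node:names("name:Jonathon~0.7")
MATCH (n2)<-[:HAS]-(p2:Person)
RETURN p2.phone;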
One approach I can imagine is having a "similarity" relationship between Names. Whenever a new Name node is added, we check for similar names and link them (creating these relationships would be slow, but we could push it onto a batch process, and it's ok if it takes some minutes). We would only link names that were fairly similar (so the number of links would hopefully not get too large). I suppose we could then craft a query on this, matching similarities greater than my threshold. Something like this:
MATCH (p1:Person {phone:"555-234234"})-->(n1:Name)-[s:SIMILAR]->(n2:Name)-->(p2:Person)
WHERE s.matchLevel >=2
RETURN p2.phone;
Is this approach better or worse than just doing the lucene search? Has anyone else wanted to do something like this?
Also, based on the suggestion at http://graphaware.com/neo4j/2013/10/24/neo4j-qualifying-relationships.html, I believe I'll be better off having many relationships (SIMILAR_1, SIMILAR_2 ..) instead of using a "match level" attribute on my relationship.
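With qualified relationship types, the threshold query above would then enumerate the acceptable levels instead of filtering on a property - something like this sketch, assuming levels 1 to 3 with 3 being the closest match (the level names are illustrative):

MATCH (p1:Person {phone:"555-234234"})-->(n1:Name)-[:SIMILAR_2|:SIMILAR_3]->(n2:Name)-->(p2:Person)
RETURN p2.phone;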
BTW, I know there are many similar questions to this (e.g. Neo4j 2 Cypher fuzzy search), but afaik this exact question isn't on Stack Overflow (and I have looked).
I've just started using Neo4j and, having done a few experiments, I'm ready to start organizing the database itself. So I've started by designing a basic diagram (on paper) and ran into the following question:
Most examples in the material I'm using (Cypher and Neo4j tutorials) present only a few properties per relationship/node, but I have to wonder what the cost of carrying a long list of properties is.
Q: Is it more efficient to favor a wide variety of relationship types (GOODFRIENDS_WITH, FRIENDS_WITH, ACQUAINTANCE, RIVAL, ENEMIES, etc.) or fewer types with varying properties (SEES_AS with a type property: good friend, friend, acquaintance, rival, enemy, etc.)?
The same holds for nodes. The first draft of my diagram has a staggering number of properties (title, first name, second name, first surname, second surname, suffix, nickname, and then there's physical characteristics, personality, age, jobs...), and I'm thinking it may lower the performance of the db. Of course some nodes won't need all of the properties, but the basic properties will still be quite a few.
Q: What is the actual, and the advisable, limit for the number of properties, in both nodes and relationships?
FYI, I am going to remake my draft in a way that reduces the number of properties by using nodes instead (create a node for :family names, another for :job, and so on), but I've only just started thinking it over, as I'll need to carefully analyse which 'would-be properties' make sense to keep, especially since the change will increase the number of relationship types I'll be dealing with.
Background information:
1) I'm using Neo4j to map out all relationships between the people living in a fictional small town. The queries I'll perform will mostly be as follows:
a. find all possible paths between 2 (or more) characters
b. find all locations which 2 (or more) characters frequent
c. find all characters which have certain types of relationship (friends, cousins, neighbors, etc) to character X
d. find all characters with the same age (or similar age) who studied in the same school
e. find all characters with the same age / first name / surname / hair color / height / hobby / job / temper (easy to anger) / ...
and variations of the above.
2) I'm not a programmer, but having taught myself HTML and advanced Excel, I feel confident I'll pick up the intuitive Cypher quickly enough.
First off, for small data "sandbox" use, this is a moot point. Even with the most inefficient data layout, as long as you avoid Cartesian products and the like, the only thing you will notice is how intuitive your data is to yourself. So if this is a "toy"-scale project, just focus on what makes the most organizational sense to you. If you change your mind later, reformatting via Cypher won't be too hard.
Now assuming this is a business project that needs to scale to some degree, remember that non-indexed properties are basically invisible to the Cypher planner. The more meaningful and diverse your relationships, the better the Cypher planner is going to be at finding your data quickly. Favor relationships for connections you want to be able to explore, and favor properties for data you just want to see. Index any properties or use labels that will be key for finding a particular (or set of) node(s) in your queries.
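To make that concrete, here is a minimal sketch using assumed labels and property names (nothing here is meant to prescribe your model):

// Index the property you will use to look nodes up
CREATE INDEX ON :Person(surname);

// Traverse a meaningful relationship type rather than filtering a generic one
MATCH (x:Person {surname: "Silva"})-[:GOODFRIENDS_WITH]->(friend:Person)
RETURN friend.first_name, friend.surname;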
My database contains hotels, reviews of hotels, terms (i.e. words) in reviews, and topics (e.g. there could be a topic called "Staff" containing terms describing the hotel staff) as nodes. Indices on all nodes are present. Relationships are as follows: Hotel<--Review-->Term-->Topic
I am currently trying to find an efficient way of querying for topics that have paths to two or more specified hotels. In other words, I am interested in the common topics of two hotels. If hotel A has paths to topics 1,2,3 and hotel B has paths to topics 2,3,4 then the result should be 2,3.
I tried the query below, but it seems very inefficient, which is very likely due to the number of possible paths between hotels and topics. Basically, each word in a review could create a new path that has to be checked.
// show all topics that two hotels have in common
MATCH (h2:Hotel)<--(r2:Review)-->(t2:Term)-->(to:Topic)<--(t1:Term)<--(r1:Review)-->(h1:Hotel)
WHERE h1.id IN ["id1","id2"] AND h2.id IN ["id1","id2"] AND NOT h1.id=h2.id
RETURN h1.id,to.topic, count(to) AS topic_mentions
I am wondering if there's a faster way of dealing with this, if I were to implement this in java or similar language I'd probably try doing a BFS starting at each hotel and then taking the overlap of what I find. I am fairly certain that adding the transitive edges as direct edges Hotel-->Topic would speed this up, but my limited database design knowledge told me that this might be unnecessarily redundant and not a good practice?
I tried to do the id matching before the pattern matching with another MATCH and WITH clause, but this didn't speed up anything; I think the problem really lies in the pattern matching itself.
I created something similar for searching KBs, and a direct relationship between Hotels and Topics will make this search dead easy, and it'll be faster. For example, for your search for all topics that two hotels have in common, you'd use:
MATCH (h1:Hotel)-[:TOPIC]->(t:Topic)
MATCH (h2:Hotel)-[:TOPIC]->(t:Topic)
WHERE h1 <> h2
RETURN h1.id, h2.id, t.topic, count(t) AS topic_mentions
Note that this will return a count of all topics these two hotels have in common, which may or may not be what you want.
"I am fairly certain that adding the transitive edges as direct edges Hotel--Topic would speed this up, but my limited database design knowledge told me that this might be unnecessarily redundant and not a good practice?"
All that would be doing is making an implicit relationship explicit, which is one of the things that make graph DBs so powerful. There is the maintenance aspect to be concerned about - namely, if someone updates the words in a review, then you have to make sure that the (hotel)-[:TOPIC]->(topic) relationships are still valid - but you'd have to do that in your original design anyway, so no loss there.
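If you do add the direct relationship, a sketch for materializing it from the paths you already have might look like this (it assumes the labels from your description; MERGE keeps the operation idempotent, so it can be re-run after reviews change):

MATCH (h:Hotel)<--(:Review)-->(:Term)-->(to:Topic)
WITH DISTINCT h, to
MERGE (h)-[:TOPIC]->(to);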
I'm trying to create an application to find the best paths to use when traveling by bus within my local city. I have found some useful answers on here so far, but I'm currently struggling with my approach in general and I'd like to get some feedback.
Current data model
There are stations and stages modeled as nodes, and two relationship types between stations and stages. Each stage node has a start and an end time as a string in "HH:mm" format and belongs to a higher-level structure I call a route; routes are connected to these stage nodes to describe a trajectory along stations with time details. Each :FROM relationship has a duration property to model the travel time for reduce statements.
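A minimal sketch of that model as I understand it (all names, types and values here are illustrative, not prescriptive):

CREATE (glosberg:Station {name: "Glosberg"}),
       (knellendorf:Station {name: "Knellendorf"}),
       (line3:Route {name: "Line 3"}),
       (stage:Stage {start: "14:05", end: "14:20"}),
       (line3)-[:HAS_STAGE]->(stage),
       (glosberg)-[:FROM {duration: 15}]->(stage),
       (stage)-[:TO]->(knellendorf);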
So the following query would return something like this. The stage nodes show the start property in the picture.
MATCH (from:Station {name: "Glosberg"})
MATCH (to:Station {name: "Knellendorf"})
MATCH paths = (from)-[:FROM|:TO*..10]->(to)
RETURN paths;
Problems so far:
ShortestPath/AllShortestPaths is not a valid option, as the smallest number of hops does not mean the best path. What I want is to minimize travel duration, which I can achieve with a reduce statement, which I have already done (a rough sketch of it is shown below). Since I have to check out all paths, I'm using the general pattern matcher with a limit (as seen above). The limit I use in my queries is actually the length of the shortest path between from and to plus 10% or so, to also include paths that might consist of more hops but take less time. This is not necessarily accurate but seems like a fair trade-off.
Using Dijkstra gives me all paths from A to B. Since the stage nodes have time data on them, most of the paths do not make sense, because they are either combined in reverse order (2pm -> 1pm) or produce unnecessarily long waiting times (2pm -> 4pm). Therefore I have to filter out bad paths, either in Cypher or at some API level. However, with the current data model there are simply too many paths to check for validity. With some sample data, which would also run in production, I have a route that visits 24 stations 2 times a day, resulting in 2^23 paths to take. I'm pretty sure that my data model is the problem, but I can't see any way to solve this; any ideas?
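Here is the kind of reduce-based ordering mentioned above, as a rough sketch (it assumes duration is a number of minutes on the :FROM relationships and ignores waiting time between stages):

MATCH (from:Station {name: "Glosberg"}), (to:Station {name: "Knellendorf"})
MATCH paths = (from)-[:FROM|:TO*..10]->(to)
WITH paths, reduce(total = 0, r IN relationships(paths) | total + coalesce(r.duration, 0)) AS totalDuration
RETURN paths, totalDuration
ORDER BY totalDuration ASC
LIMIT 5;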
Questions:
More of a side problem: how would you solve ordering paths with stages that go past midnight? "23:59" is greater than "00:01" as a string, but not chronologically.
What would you change about the data model?
Would you suggest any trade-offs in how the path finding works to reduce the complexity (e.g. simply using shortestPath)?
Would you suggest separating the actual route data (timetable, who stops where and when) from the infrastructure data (stations and which stations are close to which)? That way I'd use Neo4j to find a path/set of stations to travel along and then try to find a suitable set of entries from a timetable, similar to wanderu's approach.
I've read that the Traversal API is a better way to describe how the graph should be accessed, instead of using Cypher, which only describes what to look for, but I'd like to get feedback on my thoughts so far before I dive into that.
I'm pretty new to Neo4j; I've only gotten as far as writing a hello world. Before I proceed, I want to make sure I have the right idea about how Neo4j works and what it can do for me.
As an example, say you wanted to write a Neo4j back end for a site like this. Questions would be nodes. Naïvely, tags would be represented by an array property on the question node. If you wanted to find questions with a certain tag, you'd have to scan every question in the database.
I think a better approach would be to represent tags as nodes. If you wanted to find all questions with a certain tag, you'd start at the tag node and follow the relationships to the questions. If you wanted to find questions with all of a set of tags, you'd start at one of the tag nodes (preferably the least common/most specific one, if you know which one that is), follow its relationships to questions, and then select the questions with relationships to the other tags. I don't know how to express that in Cypher yet, but is that the right idea?
In my real application, I'm going to have entities with a potentially long list of tags, and I'm going to want to find entities that have all of the requested tags. Is this something where Neo4j would have significant advantages over SQL?
Kevin, correct - you'd do it like that. I even created a model for Stack Overflow some time ago that does this.
For Cypher you can imagine queries like these
Find the User who was most active
MATCH (u:User)
OPTIONAL MATCH (u)-[:AUTHORED|ASKED|COMMENTED]->()
RETURN u,count(*)
ORDER BY count(*) DESC
LIMIT 5
Find co-used Tags
MATCH (t:Tag)
OPTIONAL MATCH (t)<-[:TAGGED]-(question)-[:TAGGED]->(t2)
RETURN t.name,t2.name,count(distinct question) as questions
ORDER BY questions DESC
MATCH (t:Tag)<-[r:TAGGED]-(question)
RETURN t,r,question
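And for Kevin's "questions that have all of a set of tags" case, a sketch could look like this (it assumes questions carry a :Question label; adjust to however your questions are labelled):

MATCH (q:Question)-[:TAGGED]->(t:Tag)
WHERE t.name IN ["neo4j", "cypher"]
WITH q, count(DISTINCT t) AS matched
WHERE matched = 2 // = the number of tags in the list
RETURN q;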
START names = node(*),
target=node:node_auto_index(target_name="TARGET_1")
MATCH names
WHERE NOT names-[:contains]->()
AND HAS (names.age)
AND (names.qualification =~ ".*(?i)B.TECH.*$"
OR names.qualification =~ ".*(?i)B.E.*$")
CREATE UNIQUE (names)-[r:contains{type:"declared"}]->(target)
RETURN names.name,names,names.qualification
I have nearly 180,000 name nodes. I have iterated the above process to create unique relationships more than 100 times by changing the target, and it's taking too much time. How can I resolve this?
I build the query with Java and iterate. I am using Neo4j 2.0.0.5 and Java 1.7.
I edited your cypher query because I think I understand it, but I can barely read the rest of your question. If you edit it with white spaces and punctuation it might be easier to understand what you are trying to do. Until then, here are some thoughts about your query being slow.
You bind all the nodes in the graph, that's typically pretty slow.
You bind all the nodes in the graph twice. First you bind universally in your start clause: names=node(*), and then you bind universally in your match clause: MATCH names, and only then do you limit your pattern. I don't quite know what the Cypher engine makes of this (possibly it gets a migraine and goes off to make a pot of coffee). It's unnecessary; you can at least drop the names=node(*) from your start clause. Or drop the match clause, I suppose that could work too, since you don't really do anything there, and you will still need a start clause for as long as you use legacy indexing.
You are using Neo4j 2.x, but you use legacy indexing instead of labels, at least in this query. Without knowing your data and model it's hard to know what the difference would be for performance, but it would certainly make it much easier to write (and read) your queries. So, that's a different kind of slow. It's likely that if you had labels and label indices, the query performance would improve.
So, first try removing one of the universal bindings of nodes, then use the 2.x schema tools to structure your data. You should be able to write queries like
MATCH (target:Target)
WHERE target.target_name="TARGET_1"
WITH target
MATCH (names:Name)
WHERE NOT (names)-[:contains]->()
AND HAS(names.age)
AND (names.qualification =~ ".*(?i)B.TECH.*$"
OR names.qualification =~ ".*(?i)B.E.*$")
CREATE UNIQUE (names)-[r:contains {type:"declared"}]->(target)
RETURN names.name, names, names.qualification
I have no idea if such a query would be fast on your data, however. If you put the "Name" label on all your nodes, then MATCH (names:Name) will still bind all nodes in the database, so it'll probably still be slow.
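For the :Target lookup to be fast you would also want a label index on the property you filter by, something like this (assuming the labels and properties from the sketch above):

CREATE INDEX ON :Target(target_name);
// Note: the regex filters on names.qualification won't use an index, so the :Name side will still be scanned.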
P.S. The relationships you create have a TYPE called contains, and you give them a property called type with value declared. Maybe you have a good reason, but that's potentially very confusing.
Edit:
Reading through your question and my answer again, I no longer think that I understand even your Cypher query. (Why are you returning both the bound nodes and properties of those nodes?) Please consider posting sample data on console.neo4j.org and explaining in more detail what your model looks like and what you are trying to do. Let me know whether my answer addresses your question at all, or I'll consider removing it.