Neo4j Traversal API vs. Cypher

When should I choose Neo4j’s traversal framework over Cypher?
For example, for a friend-of-a-friend query I would write a Cypher query as follows:
MATCH (p:Person {pid:'56'})-[:FRIEND*2..2]->(fof)
WHERE NOT (p)-[:FRIEND]->(fof)
RETURN fof.pid
And the corresponding Traversal implementation would, as I understand it, require two traversals, for friends_at_depth_1 and friends_at_depth_2 (or a Core API call to get the relationships), and then finding the difference of these two sets using plain Java constructs, outside of the traversal description. Correct me if I'm wrong here.
Any thoughts?

The key thing to remember about Cypher vs. the traversal API is that the traversal API is an imperative way of accessing a graph, and Cypher is a declarative way of accessing a graph. You can read more about that difference here, but the short version is that with imperative access, you're telling the database exactly how to go get the graph (e.g. I want to do a depth-first search, prune these branches, stop when I hit certain nodes, and so on). With declarative graph query, you're instead specifying what you want, and delegating all aspects of how to get it to the Cypher implementation.
In your query, I'd slightly revise it:
MATCH (p:Person {pid:'56'})-[:FRIEND*2..2]->(fof)
WHERE NOT (p)-[:FRIEND]->(fof) AND
p <> fof
RETURN fof.pid
(I added the check that p <> fof because friend links at depth 2 might loop back to the original person.)
To do this with a traverser, you wouldn't need two traversers, just one. You'd traverse only FRIEND relationships, stop at depth 2, and accumulate a set of results.
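As an illustration, here's a minimal sketch of that single-traverser approach using the embedded Java Traversal Framework (Neo4j 3.x-era API; it assumes an already-opened GraphDatabaseService named db):

import java.util.HashSet;
import java.util.Set;
import org.neo4j.graphdb.*;
import org.neo4j.graphdb.traversal.Evaluators;
import org.neo4j.graphdb.traversal.TraversalDescription;
import org.neo4j.graphdb.traversal.Uniqueness;

// One traverser: follow outgoing FRIEND relationships, stop at depth 2,
// and accumulate the nodes found there.
TraversalDescription fofTraversal = db.traversalDescription()
        .breadthFirst()
        .relationships(RelationshipType.withName("FRIEND"), Direction.OUTGOING)
        .evaluator(Evaluators.atDepth(2))
        .uniqueness(Uniqueness.NODE_GLOBAL);

try (Transaction tx = db.beginTx()) {
    Node person = db.findNode(Label.label("Person"), "pid", "56");

    // Direct friends, collected so we can mirror the WHERE NOT clause.
    Set<Node> directFriends = new HashSet<>();
    for (Relationship r : person.getRelationships(
            RelationshipType.withName("FRIEND"), Direction.OUTGOING)) {
        directFriends.add(r.getEndNode());
    }

    for (Path path : fofTraversal.traverse(person)) {
        Node fof = path.endNode();
        if (!fof.equals(person) && !directFriends.contains(fof)) {
            System.out.println(fof.getProperty("pid"));
        }
    }
    tx.success();
}

Note that with NODE_GLOBAL uniqueness the traverser never revisits a node it already saw at depth 1, so most direct friends are excluded automatically; the explicit set difference just mirrors the Cypher WHERE NOT clause.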
Now, I'm going to attempt to argue that you should almost always use Cypher, and never use the traversal API unless you have very specific circumstances. Here are my reasons:
Declarative query is very powerful, in that it frees you from thinking about the how. All you need to know is what you want. This means you spend more time focusing on what your code is supposed to do, and less time in implementation detail.
The Cypher query executor is getting better all the time (version 2.2 will have a cost-based planner), and of course a lot of effort goes into making sure Cypher exploits all available indexes. It's possible that for many queries, Cypher would do a better job of finding your data than your traversal, unless you were very careful in coding the traversal.
Cypher is just way less code than writing your own traversal, which will frequently require you to implement certain classes for specialized stop conditions and the like.
At present, Cypher can run against embedded databases or on the server. If you want to run a traversal, you can't send it remotely to a server to be executed; at best you could write a server extension that performs the traversal. So I think Cypher is more flexible at present.
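To give an idea of what that last option looks like, here's a hypothetical sketch of an unmanaged server extension exposing a hand-written traversal over HTTP (the /fof path and class name are made up for this illustration; registration happens through the server's unmanaged-extension configuration):

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.core.Context;
import javax.ws.rs.core.Response;
import org.neo4j.graphdb.GraphDatabaseService;

// Hypothetical JAX-RS resource wrapping a hand-written traversal.
@Path("/fof")
public class FriendOfFriendResource {
    private final GraphDatabaseService db;

    public FriendOfFriendResource(@Context GraphDatabaseService db) {
        this.db = db;
    }

    @GET
    @Path("/{pid}")
    public Response friendsOfFriends(@PathParam("pid") String pid) {
        // Run a traversal like the sketch above and serialize the pids found.
        return Response.ok().build();
    }
}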
OK, so when should you use the traversal API? Two key cases that I know of (others may suggest more):
Sometimes you need to execute a complex custom Java operation on everything you traverse. In this case, you're using the traverser as a "visitor function" of sorts, and sometimes traversals are more convenient to use than Cypher, depending on the nature of the Java you're running on the nodes.
Sometimes your performance requirements are so intense that you need to hand-traverse the graph, because there's some aspect of graph structure that you can exploit in the traverser to make it go faster in a way Cypher can't take advantage of. This does happen, but reaching for it first usually isn't a good idea.

An excerpt from the book
Core API, Traversal Framework or Cypher?
The Core API allows developers to fine-tune their queries so that they exhibit high affinity with the underlying graph. A well-written Core API query is often faster than any other approach. The downside is that such queries can be verbose, requiring considerable developer effort. Moreover, their high affinity with the underlying graph makes them tightly coupled to its structure. When the graph structure changes, they can often break. Cypher can be more tolerant of structural changes; things such as variable-length paths help mitigate variation and change.
The Traversal Framework is both more loosely coupled than the Core API (because it allows the developer to declare informational goals) and less verbose, and as a result a query written using the Traversal Framework typically requires less developer effort than the equivalent written using the Core API. Because it is a general-purpose framework, however, the Traversal Framework tends to perform marginally less well than a well-written Core API query.
If we find ourselves in the unusual situation of coding with the Core API or Traversal Framework (and thus eschewing Cypher and its affordances), it's because we are working on an edge case where we need to finely craft an algorithm that cannot be expressed effectively using Cypher's pattern matching. Choosing between the Core API and the Traversal Framework is a matter of deciding whether the higher abstraction/lower coupling of the Traversal Framework is sufficient, or whether the close-to-the-metal/higher coupling of the Core API is in fact necessary for implementing an algorithm correctly and in accordance with our performance requirements.
Ref: Graph Databases: New Opportunities for Connected Data, p. 161
What is cypher?
The developer documentation defines it as follows: Cypher is a declarative, SQL-inspired language for describing patterns in graphs visually using an ascii-art syntax.
You can find more about it here.
What is core API practically?
I found this page having following sentence:
Besides an object-oriented API to the graph database, working with Node, Relationship, and Path objects, it also offers highly customizable, high-speed traversal- and graph-algorithm implementations.
So practically speaking, the Core API deals with basic objects such as Node and Relationship, which belong to the org.neo4j.graphdb package.
You can find more at its developer guide.
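As a concrete taste, here's a minimal embedded sketch (assuming an already-opened GraphDatabaseService named db, Neo4j 3.x-era API) that works directly with Node and Relationship objects:

import org.neo4j.graphdb.*;

// Create two Person nodes and a FRIEND relationship between them.
try (Transaction tx = db.beginTx()) {
    Node alice = db.createNode(Label.label("Person"));
    alice.setProperty("pid", "56");

    Node bob = db.createNode(Label.label("Person"));
    bob.setProperty("pid", "57");

    alice.createRelationshipTo(bob, RelationshipType.withName("FRIEND"));
    tx.success();
}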
What is traversal API practically?
The Traversal API adds further interfaces on top of the Core API to help us perform traversals conveniently, instead of writing the whole traversal logic from scratch. These interfaces are contained in the org.neo4j.graphdb.traversal package.
You can find more at its developer guide.
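For example, a custom stop condition is just an Evaluator implementation. In this sketch, the traversal is pruned below any node whose blocked property is true (the blocked property is an assumption made up for this illustration):

import org.neo4j.graphdb.Path;
import org.neo4j.graphdb.traversal.Evaluation;
import org.neo4j.graphdb.traversal.Evaluator;

// Include unblocked nodes and keep going; prune below blocked ones.
Evaluator skipBlocked = (Path path) -> {
    boolean blocked = (boolean) path.endNode().getProperty("blocked", false);
    return blocked ? Evaluation.EXCLUDE_AND_PRUNE
                   : Evaluation.INCLUDE_AND_CONTINUE;
};

// Plugged in via: db.traversalDescription().evaluator(skipBlocked)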
The relation between all three
According to this answer:
The Traversal API is built on the Core API, and Cypher is built on the Traversal API; so anything you can do in Cypher can be done with the other two.
Same example done with all three
This tutorial from 2012 shows all three in action performing the same task, with the Core API being fastest. It includes a quote from Andres Taylor:
Cypher is just over a year old. Since we are very constrained on developers, we have had to be very picky about what we work on. The focus in this first phase has been to explore the language, to learn about how our users use the query language, and to expand the feature set to a reasonable level.
I believe that Cypher is our future API. I know you can very easily outperform Cypher by handwriting queries. Like every language ever created, in the beginning you can always do better than the compiler by writing by hand, but eventually the compiler catches up.
Article's conclusion:
So far I have only been using the Java Core API when working with neo4j, and I will continue to do so.
If you are in a high-speed scenario (I believe every web application is one) you should really think about switching to the neo4j Java Core API for writing your queries. It might not be as nice-looking as Cypher or the Traversal Framework, but the gain in speed pays off.
Also, I personally like the amount of control you have when doing the traversal over the core yourself.

Related

Does GraphQL negate the need for Graph Databases

Most of the reasons for using a graph database seem to be that relational databases are slow when making graph-like queries.
However, if I am using GraphQL with a data loader, all my queries are flattened and combined using the data loader, so you end up making simpler SELECT * FROM X type queries instead of doing any heavy joins. I might even be using a NoSQL database, which is usually pretty fast at these kinds of flat queries.
If this is the case, is there a use case for Graph databases anymore when combined with GraphQL? Neo4j seems to be promoting GraphQL. I'd like to understand the advantages if any.
GraphQL doesn't negate the need for graph databases at all; on the contrary, the combination is very powerful and makes GraphQL more performant.
You mentioned:
However, if I am using GraphQL with a data loader, all my queries are flattened and combined using the data loader, so you end up making simpler SELECT * FROM X type queries instead of doing any heavy joins.
This is a curious point, because if you do a lot of SELECT * FROM X and the data is connected by a graph loader, you're still doing the joins; you're just doing them in software outside of the database, at another layer, by another means. And if even that software layer isn't joining anything, then whatever you gain by not doing joins in the database you lose by executing many queries against the database, plus the overhead of the additional layer. Look into the performance profile of sequencing a series of those individual "easy selects". By not doing those joins, you may have lost 30 years' worth of computer science research... rather than letting the RDBMS optimize the query execution path, the software layer above it is forcing a particular path by choosing which selects to execute in which order, at which time.
It stands to reason that if you don't have to go through any layer of formalism transformation (relational -> graph) you're going to be in a better position. Because that formalism translation is a cost you must pay every time, every query, no exceptions. This is sort of equivalent to the obvious observation that XML databases are going to be better at executing XPath expressions than relational databases that have some XPath abstraction on top. The computer science of this is straightforward; purpose-built data structures for the task typically outperform generic data structures adapted to a new task.
I recommend Jim Webber's article on the motivations for a native graph database if you want to go deeper on why the storage format and query processing approach matters.
What if it's not a native graph database? If you have a graph abstraction on top of an RDBMS, and then you use GraphQL to do graph queries against that, then you've shifted where and how the graph traversal happens, but you still can't get around the fact that the underlying data structure (tables) isn't optimized for that, and you're incurring extra overhead in translation.
So for all of these reasons, a native graph database + GraphQL is going to be the most performant option, and as a result I'd conclude that GraphQL doesn't make graph databases unnecessary, it's the opposite, it shows where they shine.
They're like chocolate and peanut butter. Both great, but really fantastic together. :)
Yes, GraphQL allows you to make some kinds of graph queries: you can start from one entity, then explore its neighborhood, and so on.
But if you need performance in graph queries, you need a native graph database.
With GraphQL you give a lot of power to the end user, who can make arbitrarily deep GraphQL queries.
If you have an SQL database, you will have two choices:
compute a big SQL query with a lot of joins (a really bad idea)
make a lot of SQL queries to retrieve the neighborhood of the neighborhood, ...
If you have a native graph database, it will be just one query with good performance! It's a graph traversal, and native graph databases are made for this.
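For example, the whole neighborhood-of-the-neighborhood question collapses into a single variable-length query. Here's a sketch using the Neo4j Java driver (the bolt URL, credentials, labels and property names are assumptions for illustration):

import static org.neo4j.driver.v1.Values.parameters;
import org.neo4j.driver.v1.*;

// One variable-length query instead of N rounds of flat SELECTs.
try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
             AuthTokens.basic("neo4j", "password"));
     Session session = driver.session()) {
    StatementResult result = session.run(
            "MATCH (u:User {id: $id})-[:FRIEND*1..3]->(f) RETURN DISTINCT f.name",
            parameters("id", "42"));
    while (result.hasNext()) {
        System.out.println(result.next().get("f.name").asString());
    }
}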
Moreover, if you use GraphQL, you already think of your data model as a graph. So storing it as a graph seems obvious and gives you fewer headaches :)
I recommend reading this post: The Motivation for Native Graph Databases
Answer for Graph Loader
With a graph loader you will do a lot of small queries (the second choice in my answer above), but with batching and a cache on top.
Graph loaders just batch and cache.
For comparison:
you need to add another library and implement the logic (more code)
you need to manage the cache. There is a lot of documentation about this topic. (more memory and complexity)
due to the SELECT * in loaders, you will always get more data than needed. Example: I only want the id and name of a user, not his email, birthday, ... (less performant)
...
The answer from FrobberOfBits is very good. There are many reasons to add (or avoid) using GraphQL, whether or not a graph database is involved. I wanted to add a small consideration against putting GraphQL in front of a graph. Of course, this is just one of what ought to be many other considerations involved with making a decision.
If the starting point is a relational database, then GraphQL (in front of that database) can provide a lot of flexibility to the caller – great for apps, clients, etc. to interact with data. But in order to do that, GraphQL needs to be aligned closely with the database behind it, and specifically with the database schema. The database schema is sort of "projected out" to apps, clients, etc. in GraphQL.
However, if the starting point is a native graph database (Neo4j, etc.) there's a world of schema flexibility available to you because it's a graph. No more database migrations, schema updates, etc. If you have new things to model in the data, just go ahead and do it. This is a really, really powerful aspect of graphs. If you were to put GraphQL in front of a graph database, you also introduce the schema concept – GraphQL needs to be shown what is / isn't allowed in the data. While your graph database would allow you to continue evolving your data model as product needs change and evolve, your GraphQL interactions would need to be updated along the way to "know" about what new things are possible. So there's a cost of less flexibility, and something else to maintain over time.
It might be great to use a graph + GraphQL, or it might be great to just use a graph by itself. Of course, like all things, this is a question of trade-offs.

database solution for multiple isolated graphs

I have an interesting problem that I don't know how to solve.
I have collected a large dataset of 80 million graphs (they are CFGs, as in Control Flow Graphs, produced from programs I have analysed on GitHub) which I need to be able to search efficiently.
I looked into existing solutions like Neo4j, but they are all designed to store a single global graph.
In my case it is the opposite: all graphs are independent - like rows in a table - but I need to search through all of them efficiently.
For example, I want to find all CFGs that have a particular IF condition or a WHILE loop with a particular condition.
What's the best database for this use case?
I don't think that there's a reason not to simply store all those graphs in a single graph, whether it's Neo4j or a different graph database. It's not a problem to have many disparate graphs in a single graph where the disparate graphs are disconnected from one another.
As for searching them efficiently, you would either (1) identify properties in your CFGs that you want to search on and convert them to some indexed value of the graph or (2) introduce some graph structure (additional vertices/edges) between the CFGs that will allow you to do the searches you want via graph traversal.
Depending on what you need to search on, approach 1 may not be flexible enough for you, especially if what you intend to search on is not completely known at the time of loading the data. Also, it is important to note that with approach 2 you do not really lose the fact that you have 80 million distinct graphs just because you provided some connection between them. Those physical connections don't change that basic logical fact. You just need to consider those additional connections when you write traversals that you expect to stay within a single CFG.
I'm not sure what Neo4j supports in this area, but with Apache TinkerPop (an open source graph processing framework that lets you write vendor agnostic code over different graph databases, including Neo4j), you might consider doing some form of graph partitioning to help with approach 2. Or you might subgraph() the larger graph to only contain the CFG and then operate with that purely in memory when querying. Both of these approaches will help you to blind your query to just the individual CFG you want to traverse.
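To make approach 2 (or the subgraph idea) concrete, here's a rough Gremlin-Java sketch; the cfgId property and IfStatement label are assumptions made up for this example:

import java.util.List;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversalSource;
import org.apache.tinkerpop.gremlin.structure.Graph;
import org.apache.tinkerpop.gremlin.structure.Vertex;

GraphTraversalSource g = graph.traversal();

// Find all IF-condition vertices inside one particular CFG by filtering
// on the id of the CFG each vertex belongs to.
List<Vertex> ifs = g.V().has("cfgId", 42)
                    .hasLabel("IfStatement")
                    .toList();

// Or extract one CFG as an in-memory subgraph and query it in isolation
// (safe here because the CFGs are disconnected from one another).
Graph cfg = (Graph) g.V().has("cfgId", 42)
                     .bothE()
                     .subgraph("sg")
                     .cap("sg")
                     .next();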
Ultimately, however, I see this issue as a modelling problem. You will just need to make some choices on how to best establish the schema for your use case and virtually any graph database should be able to support that.

General Cypher performance

After taking part in a very interesting tutorial with a focus on Cypher, I was pleasantly surprised by the declarativeness of the Cypher query language. It's a very natural way of retrieving data from Neo4J in my opinion.
Before that, I had only used the native API. And while that is less declarative, you sort of get used to it after a while. The complex constructions are all very similar and vary only in the details for my specific project.
Still, Cypher looked more natural to me, and so I am contemplating building the second version of my application with mainly Cypher queries to interact with my database. But I encountered an issue.
There are numerous ways to convert my application into Cypher and after having tried several possible queries, all with the desired result, it appears even the fastest query is still about 20 times slower than the native API.
Now, I don't mind giving up some performance for declarativeness, but times 20 is a little bit too much for me in an application that's already struggling with performance. Is there a workaround for this issue, or do I just have to stick with the native API?
Your conclusion sounds very familiar to me. I've also had performance issues when I used Neo4j and Spring Data Neo4j together. In the parts where performance really mattered, I switched to the core Traversal API which right now is significantly faster than an average Cypher query. This has a lot to do with the fact that there's no processing of a query and the fact that you control every aspect of the traversal. Cypher can only guess what the most optimal strategy is. I'm convinced that it will gain speed in the (near) future, but if performance really matters, I'd say stick with the core API.
Also, if you are using Java and Spring Data Neo4j, consider using the advanced mapping mode (AspectJ), which is a lot faster than the simple mapping mode.

Neo4j - Cypher vs Gremlin query language

I'm starting to develop with Neo4j using the REST API.
I saw that there are two options for performing complex queries - Cypher (Neo4j's query language) and Gremlin (the general purpose graph query/traversal language).
Here's what I want to know - is there any query or operation that can be done using Gremlin but can't be done with Cypher, or vice versa?
Cypher seems much more clear to me than Gremlin, and in general it seems that the guys in Neo4j are going with Cypher.
But - if Cypher is limited compared to Gremlin - I would really like to know that in advance.
For general querying, Cypher is enough and is probably faster. The advantage of Gremlin over Cypher is when you get into higher-level traversals. In Gremlin, you can better define the exact traversal pattern (or your own algorithms), whereas in Cypher the engine tries to find the best traversal strategy itself.
I personally use Cypher because of its simplicity and, to date, I have not had any situations where I had to use Gremlin (except when working with Gremlin's graphML import/export functions). I expect, however, that even if I did need Gremlin, it would be for a specific query I'd find on the net and never come back to.
You can always learn Cypher really fast (in days) and then continue with the (longer-run) general-purpose Gremlin.
We have to traverse thousands of nodes in our queries, and Cypher was slow. The Neo4j team told us that implementing our algorithm directly against the Java API would be 100-200 times faster. We did so and easily got a factor of 60 out of it. As of now we have not a single Cypher query left in our system, due to lack of confidence. Easy Cypher queries are easy to write in Java; complex queries won't perform. The problem is that when you have multiple conditions in your query, there is no way in Cypher to specify in which order to perform the traversals. So your Cypher query may head off into the graph in the wrong direction first.
I have not done much with Gremlin, but I could imagine you get much more execution control with Gremlin.
The Neo4j team's efforts on Cypher have been really impressive, and it's come a long way. The Neo team typically pushes people toward it, and as Cypher matures, Gremlin will probably get less attention. Cypher is a good long-term choice.
That said- Gremlin is a Groovy DSL. Using it through its Neo4j REST endpoint allows full, unfettered access to the underlying Neo4j Java API. It (and other script plugins in the same category) cannot be matched in terms of low-level power. Plus, you can run Cypher from within the Gremlin plugin.
Either way, there's a sane upgrade path where you learn both. I'd go with the one that gets you up and running faster. In my projects, I typically use Gremlin and then call Cypher (from within Gremlin or not) when I need tabular results or expressive pattern matching- both are a pain in the Gremlin DSL.
I initially started using Gremlin. However, at the time, the REST interface was a little unstable, so I switched to Cypher. It is much better supported in Neo4j. However, there are some types of queries that are simply not possible with Cypher, or where Cypher can't quite optimize the way you can with Gremlin.
Gremlin is built over Groovy, so you can actually use it as a generic way to get Neo4j to execute 'Java' code and perform various tasks on the server, without having to take the HTTP hit from the REST interface. Among other things, Gremlin will let you modify data.
However, when all I want is to query data, I go with Cypher as it is more readable and easier to maintain. Gremlin is the fallback when a limitation is reached.
Gremlin queries can be generated programmatically.
(See http://docs.sqlalchemy.org/en/rel_0_7/core/tutorial.html#intro-to-generative-selects to know what I mean.)
This seems to be a bit more tricky with Cypher.
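For instance, a Gremlin-Java traversal is an ordinary object that can be composed step by step at runtime. A rough sketch, where g is a GraphTraversalSource and the filterByName flag and name property are assumptions for illustration:

import java.util.List;
import org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.GraphTraversal;
import org.apache.tinkerpop.gremlin.structure.Vertex;

// Build the query incrementally, the way SQLAlchemy builds generative selects.
GraphTraversal<Vertex, Vertex> t = g.V().hasLabel("Person");
if (filterByName) {
    t = t.has("name", name);
}
List<Vertex> results = t.limit(10).toList();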
Cypher only works for simple queries. When you start incorporating complex business logic into your graph traversals it becomes prohibitively slow or stops working altogether.
Neo4J clearly knows that Cypher isn't cutting it, because they also provide the APOC procedures which include an alternate path expander (apoc.path.expand, apoc.path.subgraphAll, etc).
Gremlin is harder to learn but it's more powerful than Cypher and APOC. You can implement any logic you can think of in Gremlin.
I really wish Neo4J shipped with a toggleable Gremlin server (from reading around, this used to be the case). You can get Gremlin running against a live Neo4J instance, but it involves jumping through a lot of hoops. My hope is that since Neo4J's competitors are allowing Gremlin as an option, Neo4J will follow suit.
Cypher is a declarative query language for querying graph databases. The term declarative is important because it represents a different way of programming from paradigms like imperative programming.
In a declarative query language like Cypher or SQL, we tell the underlying engine what data we want to fetch; we do not specify how the data should be fetched.
In Cypher, a user defines a subgraph of interest in the MATCH clause. The underlying engine then runs a pattern-matching algorithm to search for occurrences of that subgraph in the graph database.
Gremlin has both declarative and imperative features. It is a graph traversal language in which a user has to give explicit instructions as to how the graph is to be navigated.
One difference between the languages is that in Cypher we can use a Kleene star operator to find paths between any two given nodes in a graph database, whereas in Gremlin we have to define such paths explicitly. We can, however, use the repeat operator in Gremlin to find multiple occurrences of such explicit paths in a graph database. Conversely, iterating over explicit structures is not possible in Cypher.
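A rough Gremlin-Java sketch of that repeat operator, corresponding loosely to Cypher's variable-length pattern (a)-[:FRIEND*]->(b); the startId and endId values are illustrative:

import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.hasId;
import static org.apache.tinkerpop.gremlin.process.traversal.dsl.graph.__.out;
import java.util.List;
import org.apache.tinkerpop.gremlin.process.traversal.Path;

// Repeatedly follow FRIEND edges (avoiding cycles) until the target is
// reached, then emit the full paths that were walked.
List<Path> paths = g.V(startId)
                    .repeat(out("FRIEND").simplePath())
                    .until(hasId(endId))
                    .path()
                    .toList();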
Gremlin also allows you to migrate to a different graph database: since most graph databases support Gremlin traversals, it is a good idea to choose Gremlin.
Long answer short: use Cypher for queries and Gremlin for traversals. You will see the response times yourself.

BigData Vs Neo4J

I've been looking for a triple store for my project, in which I want to store my data according to certain ontologies (OWL).
From my research I ended up with two technologies, Neo4j and Bigdata, that seem to fit this case well.
I want to know if either of the two is more appropriate for use with RDF, RDFS, OWL and SPARQL queries.
Neo4j stores data in entity-relationship-entity form. In the case of big data, you should not upload your whole dataset into Neo4j, because it will become very heavy and processing will be slow. You should use a complementary database to store the actual data, and store IDs and a few parameters in Neo4j for graph traversal, so you can perform a sort of graph analytics. Graph analytics is where Neo4j's power mainly lies; alternatively, you would have to use a graph engine, e.g. GraphX (Spark).
You might want to try out the SPARQL plugin for Neo4j; see here for an HTTP-based test, and this Berlin Dataset Test for embedded usage.
Neo4J is a specific technology, while big data is more of a generic term. I think what you're asking about is OLAP vs. OLTP. As data gets bigger, there are differences between use cases for RDF-style graph databases, which are often used for OLAP (On-Line Analytical Processing) style analytics. In short, OLAP is designed for analytics that look across a big data set, while OLTP is more aimed at INSERTs/DELETEs (on potentially big data).
OLAP-based traversals tend to process the entire graph, while OLTP based traversals tend to process smaller data sets by starting with one or a handful of vertices and traversing from there.
For example, let’s say you wanted to calculate the average age of friends of one particular user. Great use case for OLTP, since the query data set is small. However, if you wanted to calculate the average age of everyone on the database, OLAP is the preferred technology.
OLAP is optimal for deep analysis of a lot of data, while OLTP is better suited for fast-running queries and a lot of INSERTs. If you're trying to meet an SLA where the analytics must complete within a certain timeframe, consider which type of analytics you need and which technology is better suited. Or maybe you need both.
