I have a Neo4j application that uses the legacy Lucene indexes on certain relationship properties. Whenever I query these I am looking for exact matches, and all of them. While doing some profiling I discovered that the application is spending a highly disproportionate amount of time retrieving these results as it is pulling them in chunks from a prioritized queue. Given that I do not care about the ordering and want all of the results, what can I do to change the underlying behavior?
From my own searching, I came across Lucene's Collector implementations and it seems like a custom one that collects everything and never bothers scoring could be the answer, but I do not know how I can inject one into Neo4j. I am not opposed to using reflection or other means if it is not actually supported by Neo4j.
The application accesses Neo4j via the embedded Java methods.
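For reference, the sort of collector I have in mind would be something along these lines (sketched against the Lucene 3.x-style Collector API that the legacy indexes bundle; how to actually get Neo4j to use it is exactly the part I'm missing):

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.lucene.index.IndexReader;
    import org.apache.lucene.search.Collector;
    import org.apache.lucene.search.Scorer;

    // Hypothetical collector that grabs every matching doc id and skips scoring entirely.
    public class CollectAllCollector extends Collector {

        private final List<Integer> docIds = new ArrayList<Integer>();
        private int docBase;

        @Override
        public void setScorer(Scorer scorer) throws IOException {
            // Deliberately ignored: we never look at scores.
        }

        @Override
        public void collect(int doc) throws IOException {
            docIds.add(docBase + doc);
        }

        @Override
        public void setNextReader(IndexReader reader, int docBase) throws IOException {
            this.docBase = docBase;
        }

        @Override
        public boolean acceptsDocsOutOfOrder() {
            return true; // ordering is irrelevant for my use case
        }

        public List<Integer> getDocIds() {
            return docIds;
        }
    }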
We're working on some of that as part of our upgrade to Lucene 5, where custom collectors for some of these use cases will be implemented. Hopefully we can make something available in the coming weeks.
I want to present release data complexity, which is associated with each node (e.g. epic, user story, etc.), in Grafana in the form of charts, but Grafana does not support the Neo4j database. Is there any way, directly or indirectly, to present Neo4j data in Grafana?
I'm having the same issue and found this question among others. From my research I cannot agree with this answer completely, so I felt I should point some things out here.
Just to clarify: a graph database may seem structurally different from a relational or time series database, but it is possible to build Cypher queries that return graph data as tables with proper columns, just as with any other supported data source. Therefore this sentence from the above-mentioned answer:
So what you want to do is just not possible.
is not absolutely true, I'd say.
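For example, something like the following already comes back as plain rows and columns; I'm running it here from Java with the official Bolt driver, but the tabular shape is the same from any client, and the labels and properties are of course just placeholders from an imagined model:

    import org.neo4j.driver.AuthTokens;
    import org.neo4j.driver.Driver;
    import org.neo4j.driver.GraphDatabase;
    import org.neo4j.driver.Record;
    import org.neo4j.driver.Result;
    import org.neo4j.driver.Session;

    public class TabularCypherExample {
        public static void main(String[] args) {
            // Placeholder connection details -- adjust to your instance.
            try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
                    AuthTokens.basic("neo4j", "secret"));
                 Session session = driver.session()) {

                // A Cypher query that returns ordinary rows and columns,
                // e.g. one row per release with an aggregated complexity value.
                Result result = session.run(
                    "MATCH (r:Release)-[:CONTAINS]->(i:Issue) " +
                    "RETURN r.name AS release, count(i) AS issues, sum(i.complexity) AS complexity " +
                    "ORDER BY r.name");

                while (result.hasNext()) {
                    Record row = result.next();
                    System.out.printf("%s\t%d\t%d%n",
                            row.get("release").asString(),
                            row.get("issues").asLong(),
                            row.get("complexity").asLong());
                }
            }
        }
    }

That result set is already the kind of rows-and-columns shape a table panel (or, with a timestamp column, a time series panel) expects.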
The actual problem is that there is no datasource plugin for Neo4j available at the moment. You would need to implement one on your own, which will be a lot of work (as far as I can see), but I suspect it is possible. For me at least, this is too much work, so I won't take any approach that reads data directly from Neo4j into Grafana.
As a (possibly dirty) workaround (in my case), a service will regularly copy relevant portions of the Neo4j graph into a relational database (or a time series database, if the data model is sufficiently simple for that), which Grafana is aware of (see datasource plugins), so I can query it from there. This is basically the replication idea also given in the above mentioned answer. In this case you obviously end up with at least 2 different database systems and an additional service, which is not so insanely great, but at the moment it seems to be the quickest way to resolve the problem with the missing datasource plugin. Maybe this is applicable in your case, too.
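A stripped-down version of such a copy job could look roughly like this (same invented model as in the earlier snippet; the target table and the choice of PostgreSQL are arbitrary examples):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    import org.neo4j.driver.AuthTokens;
    import org.neo4j.driver.Driver;
    import org.neo4j.driver.GraphDatabase;
    import org.neo4j.driver.Record;
    import org.neo4j.driver.Result;
    import org.neo4j.driver.Session;

    public class Neo4jToSqlSync {
        public static void main(String[] args) throws Exception {
            try (Driver neo4j = GraphDatabase.driver("bolt://localhost:7687",
                    AuthTokens.basic("neo4j", "secret"));
                 Session session = neo4j.session();
                 // Any relational store Grafana supports will do; PostgreSQL is just an example.
                 Connection sql = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/metrics", "grafana", "secret");
                 PreparedStatement insert = sql.prepareStatement(
                     "INSERT INTO release_complexity(release, complexity, captured_at) VALUES (?, ?, now())")) {

                Result rows = session.run(
                    "MATCH (r:Release)-[:CONTAINS]->(i:Issue) " +
                    "RETURN r.name AS release, sum(i.complexity) AS complexity");

                while (rows.hasNext()) {
                    Record row = rows.next();
                    insert.setString(1, row.get("release").asString());
                    insert.setLong(2, row.get("complexity").asLong());
                    insert.addBatch();
                }
                // Run this on a schedule (cron, a Spring @Scheduled task, etc.).
                insert.executeBatch();
            }
        }
    }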
Using Neo4j's Graphite metrics you can actually configure data to be sent to Grafana, and from there build whichever dashboards you like.
Up until recently, Graphite/Grafana wasn't supported, but it is now (in the recent 3.4 series releases), along with Prometheus and other options.
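For what it's worth, the relevant settings live in neo4j.conf and look roughly like this in the 3.4 series (double-check the exact option names against the operations manual for your version):

    # neo4j.conf -- push Neo4j's metrics to a Graphite/Carbon endpoint
    metrics.enabled=true
    metrics.graphite.enabled=true
    metrics.graphite.server=localhost:2003
    metrics.graphite.interval=30s
    # optional prefix so several instances can share one Graphite
    metrics.prefix=neo4j-prod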
Update July 2021
There is a new plugin called Node Graph Panel (currently in beta) that can visualise graph structures in Grafana. A prerequisite for displaying your graph is to make sure that you have an API that exposes two data frames, one for nodes and one for edges, and that you set frame.meta.preferredVisualisationType = 'nodeGraph' on both data frames. See the Data API specification for more information.
So, one option would be to set up an API around your Neo4j instance that returns the nodes and edges according to the specification above. Note that I haven't tried it myself (yet), but it seems like a viable way to get Neo4j data into Grafana.
Grafana supports these databases, but not Neo4j: Graphite, InfluxDB, OpenTSDB, Prometheus, Elasticsearch, CloudWatch.
So what you want to do is just not possible.
You can replicate your Neo4j data inside one of those databases, but the data model is really different... (time series vs. graph).
If you just want to have some charts, you can use Apache Zeppelin for that.
There have been a number of times when I wanted to create a custom PCollectionView. Is this possible? For now, the only workaround I have is to create a PTransform, return a PCollection, and then apply a PCollectionView.asSingleton() transform, but I've noticed (at least several months ago) that this is much slower than running a native PCollectionView transform, such as View.AsList(). And since I'll be calling this PCollectionView method millions of times, it makes a difference if it takes a few milliseconds vs say a second.
How do you want to view the contents of your PCollection? The answer to this question will determine how you should approach things.
Cloud Dataflow (more generally, any Apache Beam backend) has a few ways that it will materialize your PCollection to allow you to efficiently access it as a side input. So list, singleton, map, and multimap are each pretty efficient for their usual access patterns (iteration, key lookup, etc). The architecture of Dataflow (now Beam) is such that you can define custom views, but if it requires a new access pattern then it will require backend support to be efficient.
Also, you might care to know that after the first access to a singleton side input, the value will usually be cached.
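To make the access-pattern point concrete, here is roughly what the built-in map view looks like in use with the Beam Java SDK (the pipeline contents are obviously placeholders):

    import java.util.Map;

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;
    import org.apache.beam.sdk.transforms.View;
    import org.apache.beam.sdk.values.KV;
    import org.apache.beam.sdk.values.PCollection;
    import org.apache.beam.sdk.values.PCollectionView;

    public class SideInputExample {
      public static void main(String[] args) {
        Pipeline p = Pipeline.create(PipelineOptionsFactory.create());

        // The collection we want to look things up in, materialized as a map side input.
        PCollection<KV<String, Long>> lookups =
            p.apply("Lookups", Create.of(KV.of("a", 1L), KV.of("b", 2L)));
        final PCollectionView<Map<String, Long>> lookupView = lookups.apply(View.asMap());

        // The main input; each element does a key lookup against the side input.
        p.apply("Main", Create.of("a", "b", "a"))
         .apply(ParDo.of(new DoFn<String, Long>() {
            @ProcessElement
            public void processElement(ProcessContext c) {
              Map<String, Long> lookup = c.sideInput(lookupView);
              c.output(lookup.get(c.element()));
            }
          }).withSideInputs(lookupView));

        p.run().waitUntilFinish();
      }
    }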
What are the comparative advantages of querying a neo4j DB via
REST API
JDBC
as a Spring Data plugin
Performance will be better within Java using JDBC as opposed to a REST API. Here's a good explanation of why:
When you add complexity the code will run slower. Introducing a REST service if it's not required will slow the execution down as the system is doing more.
Abstracting the database is good practice. If you're worried about speed you could look into caching the data in memory so that the database doesn't need to be touched to handle the request.
Before optimizing performance though I'd look into what problem you're trying to solve and the architecture you're using, I'm struggling to think of a situation where the database options would be direct access vs REST.
Regarding using Neo4j as a Spring Data plugin: you can certainly do so, but I have to imagine the performance would not be as good as using JDBC.
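For reference, going through the Neo4j JDBC driver is just ordinary JDBC with Cypher as the query text, roughly like this (the URL format depends on which neo4j-jdbc release you pick up, and the Person/KNOWS model is invented, so treat it as a sketch):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;

    public class Neo4jJdbcExample {
        public static void main(String[] args) throws Exception {
            // The Bolt URL form is supported by recent neo4j-jdbc releases; check your driver's docs.
            try (Connection con = DriverManager.getConnection(
                    "jdbc:neo4j:bolt://localhost:7687", "neo4j", "secret");
                 PreparedStatement stmt = con.prepareStatement(
                    "MATCH (p:Person)-[:KNOWS]->(friend) WHERE p.name = ? RETURN friend.name AS name")) {

                stmt.setString(1, "Alice");
                try (ResultSet rs = stmt.executeQuery()) {
                    while (rs.next()) {
                        System.out.println(rs.getString("name"));
                    }
                }
            }
        }
    }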
From the book "Graph Databases" by Ian Robinson:
Queries run fastest when the portions of the graph needed to satisfy them reside in main memory (that is, in the filesystem cache and the object cache). A single graph database instance today can hold many billions of nodes, relationships, and properties, meaning that some graphs will be just too big to fit into main memory.
If you add another layer to the app, this will be reflected in performance: the more directly you can consume your data, the better the performance, but also the greater the complexity of the code and the effort needed to understand it.
I'm using Spring Data Neo4j in my project and I've noticed it takes too much time to save my node entity classes (>300 ms/node), which are actually pretty simple (they only contain one property, a simple long id). The relationships between nodes are also quite simple (I'm simply trying to represent a social network). For the rest, I use Cypher queries and the timings are much faster and acceptable (~3-30 ms).
This turns out to be a huge problem, since a basic part of my project is populating the graph and only then "triggering" queries. Any suggestions as to what the reason(s) could be? The Spring Data Neo4j version I'm using is 2.1.0.RELEASE and I'm using the repositories approach.
Thank you in advance!
It depends on the mapping mode you use: simple mapping is much slower, as it has to merge your object graph back into Neo4j; advanced mapping is much faster, as it is a thin layer on top of Neo4j (read and write through).
In any case, you should create the transaction at a higher level so that it spans a whole business operation.
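For example, wrapping the whole population step in one Spring-managed transaction instead of paying for one transaction per save() would look roughly like this (Person and PersonRepository are placeholders for your own entity and repository):

    import java.util.List;

    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.stereotype.Service;
    import org.springframework.transaction.annotation.Transactional;

    @Service
    public class GraphPopulationService {

        @Autowired
        private PersonRepository personRepository; // your Spring Data Neo4j repository

        // One transaction around the whole batch instead of one per save() call.
        @Transactional
        public void populate(List<Person> people) {
            for (Person person : people) {
                personRepository.save(person);
            }
        }
    }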
I'm hoping to hear from any of you who have architected and implemented a decent-sized Neo4j app (tens of millions of nodes/rels), and what your recommendations are, particularly w.r.t. modelling and the various APIs (vanilla Java/Groovy Neo4j vs Spring-Data-Neo4j vs Grails GORM/Neo4j).
I'm interested in whether it actually pays off to add the extra OGM (object-graph-mapping) layer and its associated abstractions.
Has anyone's experience been that it is best to stick to 'plain' graph-modelling with nodes+properties, relationships+properties, traversals and (e.g.) Cypher to model and store their data?
My concern is that 'forcing' a particular OGM abstraction onto a graph database will affect future flexibility in adapting/changing the domain model and/or flexibility in querying the data.
We're a Grails shop, and I have experimented with GORM/Neo4J and also with spring-data-neo4j.
The primary purpose for the dataset will be to model and query relationships amongst v.large numbers of people, their aliases, their associates and all sorts of criminal activity and history. There will be more than 50 main domain classes. There must be flexibility in the model (which will need to evolve rapidly in the early phases of the project) and in speed and flexibility of querying.
I have to confess, I'm struggling to find a compelling reason to use an OGM layer when I can use (e.g.) POJOs or POGOs, a little Groovy magic and some simple hand-rolled domain object <-> node/relationship mapping code. As far as I can tell, I think I would be happy just dealing with nodes & traversals & Cypher (aka KISS). But I would be very happy to hear others' experiences and recommendations.
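For concreteness, the kind of hand-rolled mapping I mean is nothing more than this (embedded API, Neo4j 2.x-style transactions, with Person standing in for one of my own POJOs):

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.RelationshipType;
    import org.neo4j.graphdb.Transaction;

    public class PersonMapper {

        // Classic enum-based relationship types, as in the Neo4j docs.
        private enum RelTypes implements RelationshipType { KNOWS }

        private final GraphDatabaseService graphDb;

        public PersonMapper(GraphDatabaseService graphDb) {
            this.graphDb = graphDb;
        }

        // POJO -> node: a handful of setProperty calls, nothing more.
        public long save(Person person) {
            try (Transaction tx = graphDb.beginTx()) {
                Node node = graphDb.createNode();
                node.setProperty("name", person.getName());
                node.setProperty("alias", person.getAlias());
                tx.success();
                return node.getId();
            }
        }

        // Node -> POJO for the reverse direction.
        public Person load(long nodeId) {
            try (Transaction tx = graphDb.beginTx()) {
                Node node = graphDb.getNodeById(nodeId);
                Person person = new Person(
                    (String) node.getProperty("name"),
                    (String) node.getProperty("alias"));
                tx.success();
                return person;
            }
        }
    }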
Thanks for your time & thoughts,
TP
Since I'm the author of the Grails Neo4j plugin, I might be biased. The main reason for creating the plugin was to bring the ease of Grails domain classes, with their powerful out-of-the-box scaffolding, to Neo4j for ~80% of the use cases. For the other ~20%, where requirements call for things like traversals etc., we use the Neo4j APIs directly (traversals/Cypher) and do not use the GORM API.
The current version of the Neo4j plugin suffers from a supernode issue, since each domain instance is connected to a subreference node. If multiple concurrent requests (aka threads) add new domain instances, there is a chance of getting a locking exception. I'm about to fix that, either with a sub-subreference approach or by using indexing.
Cypher can also be used in the Neo4j Grails plugin.
Spring-Data-Neo4j, on the other hand, is a more advanced approach with finer control over mapping details, but it requires the use of specific annotations. And I found no easy way to integrate it into Grails in a way that keeps scaffolding working.
We're using the predecessor version of the plugin in a production application with ~60k users and ~10^6 rels. Due to an NDA I cannot provide more details on that.
We do not use Grails, but we do use a hybrid plain Neo4j / Spring-Data-Neo4j solution. The reason is that some of our domain data has a fixed schema and some doesn't. SDN takes a lot of the burden away and can be mixed with plain Neo4j if the need arises.
We have classes that describe a data model; the objects of these classes are persisted using SDN with no additional tricks, just the SDN basics. Then we have classes that contain the data for the model that is not known beforehand. These are stored in nodes that contain special properties describing which model type the data refers to. When Neo4j 2 is released, we will probably move that info into labels. Between these nodes there can be relations, also described by the aforementioned data model managed by SDN. We also have relations from the generic nodes to SDN nodes, which works fine, as everything ends up being the same thing: nodes.
We have not encountered any issues yet using this approach. The thing we love most is that the data for which we do not know in advance how it will be modelled ends up stored the way you would have wanted to store it had you known the model up front, so the data actually matches the model chosen, which is very hard to achieve with any other type of (non-graph) database.
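A boiled-down sketch of the generic side might look like this (plain embedded API; the property and relationship names are just examples, values are assumed to be Neo4j-storable primitives/Strings, and looking the SDN-managed node up by id stands in for however you resolve it in your own code):

    import java.util.Map;

    import org.neo4j.graphdb.GraphDatabaseService;
    import org.neo4j.graphdb.Node;
    import org.neo4j.graphdb.RelationshipType;
    import org.neo4j.graphdb.Transaction;

    public class GenericNodeStore {

        private enum RelTypes implements RelationshipType { INSTANCE_OF }

        private final GraphDatabaseService graphDb;

        public GenericNodeStore(GraphDatabaseService graphDb) {
            this.graphDb = graphDb;
        }

        // Stores data whose shape is only known at runtime: every value becomes a property,
        // and a "modelType" property records which model definition the node belongs to.
        public long saveGeneric(String modelType, Map<String, Object> values, long modelDefinitionNodeId) {
            try (Transaction tx = graphDb.beginTx()) {
                Node node = graphDb.createNode();
                node.setProperty("modelType", modelType);
                for (Map.Entry<String, Object> entry : values.entrySet()) {
                    node.setProperty(entry.getKey(), entry.getValue());
                }
                // Link the generic node to the SDN-managed model definition node;
                // in the end they are all just nodes, so no special glue is needed.
                Node modelDefinition = graphDb.getNodeById(modelDefinitionNodeId);
                node.createRelationshipTo(modelDefinition, RelTypes.INSTANCE_OF);
                tx.success();
                return node.getId();
            }
        }
    }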