Explain plan in Gremlin TinkerPop3 (DSE Graph) - datastax-enterprise

I have written a query which is taking longer than expected.
g.V().hasLabel('Person').has('name','Person1').out('BELONGS').in('HAS').dedup().as('x').in('HAS').filter(__.in('HAS').has('name','App1')).store('y').select('x').dedup().in('HAS').hasLabel('Org').repeat(out()).until(outE().hasLabel('IS')).store('a').cap('y').unfold().in('HAS').hasLabel('Class').repeat(inE('IS').dedup().otherV()).until(inE('HAS')).where(within('a'))
Can we do an explain plan to find out what is making this query slow?
Regards
Varun Tahin

You have several tools at your disposal when picking apart a Gremlin traversal. You can use the explain() step and/or the profile() step. The explain() step will show how the traversal is composed and modified by Traversal Strategies which optimize its execution. The profile() step will provide statistics on the traversal execution itself.
I would also call the Gremlin Console itself a "tool". Debugging Gremlin sometimes takes me down a path of executing smaller chunks of a traversal so that I can determine what it is returning at any given point in the list of steps. The Gremlin Console, as it is a REPL, provides the ability to get immediate feedback on code execution, thus getting you out of the longer development cycle of your IDE.
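For example, a quick sketch using just the first few steps of the traversal from the question (labels and property values come from the question; the exact output varies by provider, and DSE Graph applies its own strategies):

// shows how strategies rewrite the traversal before it runs
g.V().hasLabel('Person').has('name','Person1').out('BELONGS').in('HAS').dedup().explain()

// runs the traversal and reports per-step timings and traverser counts
g.V().hasLabel('Person').has('name','Person1').out('BELONGS').in('HAS').dedup().profile()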

Related

How to check the number of times a Neo4j index was used?

So I received a dated schema that worked well at the beginning but is now experiencing some scaling issues.
Among them, the space used by the indexes has caught my attention, so I would like to know whether they are being used, how many times, etc.
Other than explaining/profiling queries, is there anything else I could use to get this kind of information?
The information you are looking for would fall under metrics monitoring, but index accesses are not among the available metrics Neo4j provides. (Neo4j supports Prometheus, but I don't know if Prometheus captures that info either.)
But there are some indirect ways you can get this data.
Assuming you have a test server that replicates production, with appropriate load tests, you can try removing the index and seeing how it affects the load tests. (This way is a bit cumbersome, but it probably gives the most accurate measure of how various DB changes affect performance, provided the load tests accurately reflect production use.)
Alternatively, for a more static analysis, you should only be executing pre-defined, parameterized Cypher queries. So you can PROFILE/EXPLAIN those queries against the DB at different scales, and compare those notes against your Cypher logs (captured either on the calling end or via Neo4j metrics monitoring) to get an idea of how often each one is called.
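As a rough sketch of that static approach (the label, property, and connection details below are placeholders, not taken from your schema), you can prefix one of your parameterized Cypher queries with PROFILE from the Java driver and inspect the per-operator db hits to see whether an index seek is actually used:

import org.neo4j.driver.v1.*;

try (Driver driver = GraphDatabase.driver("bolt://localhost:7687",
         AuthTokens.basic("neo4j", "password"));
     Session session = driver.session()) {
    StatementResult result = session.run(
        "PROFILE MATCH (p:Person {name: $name}) RETURN p",
        Values.parameters("name", "Alice"));
    // consume() exposes the profiled plan, including db hits per operator;
    // an index that is used shows up as an operator such as NodeIndexSeek.
    System.out.println(result.consume().profile());
}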

Simple inquiry about streaming data directly into Cloud SQL using Google DataFlow

So I am working on a little project that sets up a streaming pipeline using Google Dataflow and Apache Beam. I went through some tutorials and was able to get a pipeline up and running streaming into BigQuery, but now I want to stream into a full relational DB (i.e. Cloud SQL). I have searched through this site and through Google, and it seems that the best route would be to use JdbcIO. I am a bit confused because all the info I can find on how to do this refers to writing to Cloud SQL in batches, not full-on streaming.
My simple question is: can I stream data directly into Cloud SQL, or would I have to send it via batch instead?
Cheers!
You should use JdbcIO - it does what you want, and it makes no assumption about whether its input PCollection is bounded or unbounded, so you can use it in any pipeline and with any Beam runner; the Dataflow Streaming Runner is no exception to that.
In case your question is prompted by reading its source code and seeing the word "batching": it simply means that for efficiency, it writes multiple records per database call - the overloaded use of the word "batch" can be confusing, but here it simply means that it tries to avoid the overhead of doing an expensive database call for every single record.
In practice, the number of records written per call is at most 1000 by default, but in general depends on how the particular runner chooses to execute this particular pipeline on this particular data at this particular moment, and can be less than that.
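For illustration, a rough sketch of such a streaming write in the Beam Java SDK (the table, columns, credentials, and Cloud SQL JDBC URL below are placeholders, not a definitive setup):

import org.apache.beam.sdk.io.jdbc.JdbcIO;
import org.apache.beam.sdk.values.KV;
import org.apache.beam.sdk.values.PCollection;

PCollection<KV<Integer, String>> rows = ...; // unbounded PCollection from your streaming source

rows.apply(JdbcIO.<KV<Integer, String>>write()
    .withDataSourceConfiguration(JdbcIO.DataSourceConfiguration.create(
            "com.mysql.jdbc.Driver",
            "jdbc:mysql://google/mydb?cloudSqlInstance=my-project:us-central1:my-instance"
                + "&socketFactory=com.google.cloud.sql.mysql.SocketFactory")
        .withUsername("user")
        .withPassword("password"))
    .withStatement("INSERT INTO events (id, payload) VALUES (?, ?)")
    .withPreparedStatementSetter((element, statement) -> {
        // Map each streamed record onto the prepared statement's parameters.
        statement.setInt(1, element.getKey());
        statement.setString(2, element.getValue());
    }));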

Debugging slow reads from BigQuery on Google Cloud Dataflow

Background:
We have a really simple pipeline which reads some data from BigQuery (usually ~300 MB), filters/transforms it, and puts it back into BigQuery. In 99% of cases this pipeline finishes in 7-10 minutes and is then restarted to process a new batch.
Problem:
Recently, the job has started to take >3h once in a while, maybe 2 times in a month out of 2000 runs. When I look at the logs, I can't see any errors and in fact it's only the first step (read from BigQuery) that is taking so long.
Does anyone have a suggestion on how to approach debugging of such cases? Especially since it's really the read from BQ and not any of our transformation code. We are using Apache Beam SDK for Python 0.6.0 (maybe that's the reason!?)
Is it maybe possible to define a timeout for the job?
This is an issue on either the Dataflow side or the BigQuery side, depending on how one looks at it. When splitting the data for parallel processing, Dataflow relies on an estimate of the data size. The long runtime happens when BigQuery sporadically gives a severe under-estimate of the query result size, and Dataflow, as a consequence, severely over-splits the data and the runtime becomes bottlenecked by the overhead of reading lots and lots of tiny file chunks exported by BigQuery.
On one hand, this is the first time I've seen BigQuery produce such dramatically incorrect query result size estimates. However, as size estimates are inherently best-effort and can in general be arbitrarily off, Dataflow should control for that and prevent such oversplitting. We'll investigate and fix this.
The only workaround that comes to mind meanwhile is to use the Java SDK: it uses quite different code for reading from BigQuery that, as far as I recall, does not rely on query size estimates.
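For reference, a minimal sketch of the same read with the Java SDK (Beam-style API; the query and table names are placeholders, and exact method names vary across SDK versions):

import com.google.api.services.bigquery.model.TableRow;
import org.apache.beam.sdk.Pipeline;
import org.apache.beam.sdk.io.gcp.bigquery.BigQueryIO;
import org.apache.beam.sdk.options.PipelineOptionsFactory;
import org.apache.beam.sdk.values.PCollection;

Pipeline p = Pipeline.create(PipelineOptionsFactory.create());
// Per the note above, this read path does not rely on BigQuery's size estimate.
PCollection<TableRow> rows = p.apply(
    BigQueryIO.read().fromQuery("SELECT field_a, field_b FROM [my-project:my_dataset.my_table]"));
// ... filter/transform, then write back with BigQueryIO.writeTableRows() ...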

How do I make sure my Dataflow pipeline scales?

We've often seen people write Dataflow pipelines that don't scale well. This is frustrating since Dataflow is meant to scale transparently, but there still are some antipatterns in Dataflow pipelines that make it difficult to scale. What are some common antipatterns and tips for avoiding them?
Scaling Your Dataflow Pipeline
Hi, Reuven Lax here. I’m a member of the Dataflow engineering team, where I lead the design and implementation of our streaming runner. Prior to Dataflow I led the team that built MillWheel for a number of years. MillWheel was described in this VLDB 2013 paper, and is the basis for the streaming technology underlying Dataflow.
Dataflow usually removes the need for you to think too much about how to make a pipeline scale. A lot of work has gone into sophisticated algorithms that can automatically parallelize and tune your pipeline across many machines. However as with any such system, there are some anti-patterns that can bottleneck your pipeline at scale. In this post we will go over three of these anti-patterns, and discuss how to address them. It’s assumed that you are already familiar with the Dataflow programming model. If not, I recommend beginning with our Getting Started guide and Tyler Akidau’s Streaming 101 and Streaming 102 blog posts. You may also read the Dataflow model paper published in VLDB 2015.
Today we’re going to talk about scaling your pipeline - or more specifically, why your pipeline might not scale. When we say scalability, we mean the ability of the pipeline to operate efficiently as input size increases and key distribution changes. The scenario: you’ve written a cool new Dataflow pipeline, which the high-level operations we provide made easy to write. You’ve tested this pipeline locally on your machine using DirectPipelineRunner and everything looks fine. You’ve even tried deploying it on a small number of Compute VMs, and things still look rosy. Then you try and scale up to a larger data volume, and the picture becomes decidedly worse. For a batch pipeline, it takes far longer than expected for the pipeline to complete. For a streaming pipeline, the lag reported in the Dataflow UI keeps increasing as the pipeline falls further and further behind. We’re going to explain some reasons this might happen, and how to address them.
Expensive Per-Record Operations
One common problem we see is pipelines that perform needlessly expensive or slow operations for each record processed. Technically this isn’t a hard scaling bottleneck - given enough resources, Dataflow can still distribute this pipeline on enough machines to make it perform well. However when running over many millions or billions of records, the cost of these per-record operations adds up to an unexpectedly-large number. Usually these problems aren’t noticeable at all at lower scale.
Here’s an example of one such operation, taken from a real Dataflow pipeline.
import javax.json.Json;
...
PCollection<OutType> output = input.apply(ParDo.of(new DoFn<InType, OutType>() {
  public void processElement(ProcessContext c) {
    JsonReader reader = Json.createReader();
    // Perform some processing on entry.
    ...
  }
}));
At first glance it’s not obvious that anything is wrong with this code, yet when run at scale this pipeline ran extremely slowly.
Since the actual business logic of our code shouldn't have caused a slowdown, we suspected that something was adding per-record overhead to our pipeline. To get more information on this, we had to ssh to the VMs to get actual thread profiles from workers. After a bit of digging, we found threads were often stuck in the following stack trace:
java.util.zip.ZipFile.getEntry(ZipFile.java:308)
java.util.jar.JarFile.getEntry(JarFile.java:240)
java.util.jar.JarFile.getJarEntry(JarFile.java:223)
sun.misc.URLClassPath$JarLoader.getResource(URLClassPath.java:1005)
sun.misc.URLClassPath$JarLoader.findResource(URLClassPath.java:983)
sun.misc.URLClassPath$1.next(URLClassPath.java:240)
sun.misc.URLClassPath$1.hasMoreElements(URLClassPath.java:250)
java.net.URLClassLoader$3$1.run(URLClassLoader.java:601)
java.net.URLClassLoader$3$1.run(URLClassLoader.java:599)
java.security.AccessController.doPrivileged(Native Method)
java.net.URLClassLoader$3.next(URLClassLoader.java:598)
java.net.URLClassLoader$3.hasMoreElements(URLClassLoader.java:623)
sun.misc.CompoundEnumeration.next(CompoundEnumeration.java:45)
sun.misc.CompoundEnumeration.hasMoreElements(CompoundEnumeration.java:54)
java.util.ServiceLoader$LazyIterator.hasNextService(ServiceLoader.java:354)
java.util.ServiceLoader$LazyIterator.hasNext(ServiceLoader.java:393)
java.util.ServiceLoader$1.hasNext(ServiceLoader.java:474)
javax.json.spi.JsonProvider.provider(JsonProvider.java:89)
javax.json.Json.createReader(Json.java:208)
<.....>.processElement(<filename>.java:174)
Each call to Json.createReader was searching the classpath trying to find a registered JsonProvider. As you can see from the stack trace, this involves loading and unzipping JAR files. Doing this per record on a high-scale pipeline is not likely to perform very well!
The solution here was for the user to create a static JsonReaderFactory and use that to instantiate the individual reader objects. You might be tempted to create a JsonReaderFactory per bundle of records instead, inside Dataflow’s startBundle method. However, while this will work well for a batch pipeline, in streaming mode the bundles may be very small - sometimes just a few records. As a result, we don’t recommend doing expensive work per bundle either. Even if you believe your pipeline will only be used in batch mode, you may in the future want to run it as a streaming pipeline. So future-proof your pipelines by making sure they’ll work well in either mode!
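A rough sketch of that fix, assuming the input elements are JSON strings (the factory lives in the enclosing class, outside the anonymous DoFn):

import java.io.StringReader;
import javax.json.Json;
import javax.json.JsonReader;
import javax.json.JsonReaderFactory;

// In the enclosing class: the provider lookup (and the classpath scan above) now
// happens once per JVM instead of once per record.
private static final JsonReaderFactory JSON_FACTORY = Json.createReaderFactory(null);

PCollection<OutType> output = input.apply(ParDo.of(new DoFn<InType, OutType>() {
  public void processElement(ProcessContext c) {
    JsonReader reader = JSON_FACTORY.createReader(new StringReader(c.element().toString()));
    // Perform some processing on entry.
    ...
  }
}));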
Hot Keys
A fundamental primitive in Dataflow is GroupByKey. GroupByKey allows one to group a PCollection of key-value pairs so that all values for a specific key are grouped together to be processed as a unit. Most of Dataflow’s built-in aggregating transforms - Count, Top, Combine, etc. - use GroupByKey under the covers. You might have a hot key problem if a single worker is extremely busy (e.g. high CPU use, determined by looking at the set of GCE workers for the job) while other workers are idle, yet the pipeline falls farther and farther behind.
The DoFn that processes the result of a GroupByKey is given an input type of KV<KeyType, Iterable<ValueType>>. This means that the entire set of all values for that key (within the current window if using windowing) is modeled as a single Iterable element. In particular, this means that all values for that key must be processed on the same machine, in fact on the same thread. Performance problems can occur in the presence of hot keys - when one or more keys receive data faster than can be processed on a single CPU. For example, consider the following code snippet:
p.apply(Read.from(new UserWebEventSource()))
 .apply(new ExtractBrowserString())
 .apply(Window.<KV<String, Event>>into(FixedWindows.of(Duration.standardSeconds(1))))
 .apply(GroupByKey.<String, Event>create())
 .apply(ParDo.of(new ProcessEventsByBrowser()));
This code keys all user events by the user’s web browser, and then processes all events for each browser as a unit. However there is a small number of very popular browsers (such as Chrome, IE, Firefox, Safari), and those keys will be very hot - possibly too hot to process on one CPU. In addition to performance, this is also a scalability bottleneck. Adding more workers to the pipeline will not help if there are four hot keys, since those keys can be processed on at most four workers. You’ve structured your pipeline so that Dataflow can’t scale it up without violating the API contract.
One way to alleviate this is to structure the ProcessEventsByBrowser DoFn as a combiner. A combiner is a special type of user function that allows piecewise processing of the iterable. For example, if the goal was to count the number of events per browser per second, Count.perKey() can be used instead of a ParDo. Dataflow is able to lift part of the combining operation above the GroupByKey, which allows for more parallelism (for those of you coming from the Database world, this is similar to pushing a predicate down); some of the work can be done in a previous stage which hopefully is better distributed.
Unfortunately, while using a combiner often helps, it may not be enough - especially if the hot keys are very hot; this is especially true for streaming pipelines. You might also see this when using the global variants of combine (Combine.globally(), Count.globally(), Top.largest(), among others.). Under the covers these operations are performing a per-key combine on a single static key, and may not perform well if the volume to this key is too high. To address this we allow you to provide extra parallelism hints using the Combine.PerKey.withHotKeyFanout or Combine.Globally.withFanout. These operations will create an extra step in your pipeline to pre-aggregate the data on many machines before performing the final aggregation on the target machines. There's no magic number for these operations, but the general strategy would be to split any hot key into enough sub-shards so that any single shard is well under the per-worker throughput that your pipeline can sustain.
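As a rough sketch (events below stands for the keyed PCollection<KV<String, Event>> from the earlier snippet, and SumEventBytesFn is a made-up CombineFn standing in for your per-browser aggregation):

// (1) Express the per-key work as a combiner, e.g. a per-browser count per window:
PCollection<KV<String, Long>> counts =
    events.apply(Count.<String, Event>perKey());

// (2) If a few browser keys are still too hot, pre-aggregate across many workers
//     before the final per-key merge:
PCollection<KV<String, Long>> totals =
    events.apply(Combine.<String, Event, Long>perKey(new SumEventBytesFn())
        .withHotKeyFanout(16));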
Large Windows
Dataflow provides a sophisticated windowing facility for bucketing data according to time. This is most useful in streaming pipelines when processing unbounded data; however, it is fully supported for batch, bounded pipelines as well. When a windowing strategy has been attached to a PCollection, any subsequent grouping operation (most notably GroupByKey) performs a separate grouping per window. Unlike other systems that provide only globally-synchronized windows, Dataflow windows the data for each key separately. This is what allows us to provide flexible per-key windows such as sessions. For more information, I recommend that you read the windowing guide in the Dataflow documentation.
As a consequence of the fact that windows are per key, Dataflow buffers elements on the receiver side while waiting for each window to close. If using very-long windows - e.g. a 24-hour fixed window - this means that a lot of data has to be buffered, which can be a performance bottleneck for the pipeline. This can manifest as slowness (like for hot keys), or even as out of memory errors on the workers (visible in the logs). We again recommend using combiners to reduce the data size. The difference between writing this:
pcollection.apply(Window.into(FixedWindows.of(Duration.standardDays(1))))
    .apply(GroupByKey.<KeyType, ValueType>create())
    .apply(ParDo.of(new DoFn<KV<KeyType, Iterable<ValueType>>, Long>() {
      public void processElement(ProcessContext c) {
        // Count the values grouped under this key (Iterables is Guava's).
        c.output((long) Iterables.size(c.element().getValue()));
      }
    }));
… and this ...
pcollection.apply(Window.into(FixedWindows.of(Duration.standardDays(1))))
    .apply(Count.perKey());
… isn’t just brevity. In the latter snippet Dataflow knows that a count combiner is being applied, and so only needs to store the count so far for each key, no matter how long the window is. In contrast, Dataflow understands less about the first snippet of code and is forced to buffer an entire day’s worth of data on receivers, even though the two snippets are logically equivalent!
If it’s impossible to express your operation as a combiner, then we recommend looking at the triggers API. This will allow you to optimistically process portions of the window before the window closes, and so reduce the size of buffered data.
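As a hedged sketch of early firings on the 24-hour window from the snippets above (exact trigger classes and defaults vary a bit across SDK versions):

pcollection
    .apply(Window.<KV<KeyType, ValueType>>into(FixedWindows.of(Duration.standardDays(1)))
        // Emit a speculative pane at most once a minute, plus a final pane at window end.
        .triggering(AfterWatermark.pastEndOfWindow()
            .withEarlyFirings(AfterProcessingTime.pastFirstElementInPane()
                .plusDelayOf(Duration.standardMinutes(1))))
        .withAllowedLateness(Duration.ZERO)
        .accumulatingFiredPanes())
    .apply(Count.<KeyType, ValueType>perKey());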
Note that many of these limitations do not apply to the batch runner. However as mentioned above, you're always better off future proofing your pipeline and making sure it runs well in both modes.
We've talked about hot keys, large windows, and expensive per-record operations. Other guidance can be found in our documentation. Although this post has focused on challenges you may encounter with scaling your pipeline, there are many benefits to Dataflow that are largely transparent -- things like dynamic work rebalancing to minimize straggler effects, throughput-based autoscaling, and job resource management adapt to many different pipeline and data shapes without user intervention. We're always trying to make our system more adaptive, and plan to automatically incorporate some of the above strategies into the core execution engine over time. Thanks for reading, and happy Dataflowing!

Does neo4j have a "trigger" mechanism via Cypher? (similar to percolators in ElasticSearch)

I am looking for a method to store Cypher queries and, when adding nodes and relationships, be notified when they match a stored query. Can this be done currently? Something similar to ElasticSearch percolators would be great.
http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/search-percolate.html
Update
The answer below was accurate in 2014. It's mostly accurate in 2018.
But there is now a way of implementing triggers in Neo4j, provided by Max DeMarzi, which is pretty good and will get the job done.
Original answer below.
No, it doesn't.
You might be able to get something similar to what you want by using a TransactionEventHandler object, which basically lets you bind a piece of code (in java) to the processing of a transaction.
I'd be really careful with running Cypher in this context though. Depending on what kind of matching you want to do, you could really slaughter performance by running it each time new data is created in the graph. Usually triggers in an RDBMS are specific to inserts or updates on a particular table. In Neo4j, the closest equivalent you might have is creating/modifying a node with a certain label. If your app has any number of different node classes, it wouldn't make sense to run your trigger code whenever new relationships/nodes are created, because most of the time the node type probably wouldn't be relevant to the trigger code.
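A rough sketch of that approach in embedded Java (Neo4j 3.x API; graphDb, the Person label, and the reaction are placeholders for your own setup):

import org.neo4j.graphdb.Label;
import org.neo4j.graphdb.Node;
import org.neo4j.graphdb.event.TransactionData;
import org.neo4j.graphdb.event.TransactionEventHandler;

graphDb.registerTransactionEventHandler(new TransactionEventHandler.Adapter<Void>() {
  @Override
  public Void beforeCommit(TransactionData data) {
    for (Node created : data.createdNodes()) {
      // Keep the check narrow: only react to the labels your "trigger" cares about.
      if (created.hasLabel(Label.label("Person"))) {
        // run or enqueue your stored Cypher match here
      }
    }
    return null;
  }
});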
Related reading: Do graph databases support triggers? and a feature request for triggers in neo4j
Neo4j 3.5 supports triggers via the APOC procedures library.
To use this functionality, enable apoc.trigger.enabled=true in $NEO4J_HOME/conf/neo4j.conf.
You also have to add APOC to the server - it's not there by default.
In a trigger you register Cypher statements that are called when data in Neo4j is changed (created, updated, deleted). You can run them before or after commit.
Here is the help doc -
https://neo4j-contrib.github.io/neo4j-apoc-procedures/#triggers
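For example, a small sketch of registering a trigger through the Java driver (driver is a previously created Neo4j Java driver instance; the trigger name and body are placeholders, and the inner statement receives transaction data such as $createdNodes):

try (Session session = driver.session()) {
    // Stamp a created timestamp on every node created, after the transaction commits.
    session.run(
        "CALL apoc.trigger.add('setCreated', "
            + "\"UNWIND $createdNodes AS n SET n.created = timestamp()\", "
            + "{phase: 'after'})");
}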
