Complex join with Google Dataflow

I'm a newbie trying to understand how we might rewrite a batch ETL process using Google Dataflow. I've read some of the docs and run a few examples.
I'm proposing that the new ETL process would be driven by business events (i.e. a source PCollection). These would trigger the ETL process for that particular business entity. The ETL process would extract datasets from source systems and then pass those results (PCollections) onto the next processing stage. The processing stages would involve various types of joins (including cartesian and non-key joins, e.g. date-banded).
So a couple of questions here:
(1) Is the approach that I'm proposing valid and efficient? If not, what would be better? I haven't seen any presentations on real-world complex ETL processes using Google Dataflow, only simple scenarios.
Are there any "higher-level" ETL products that are a better fit? I've been keeping an eye on Spark and Flink for a while.
Our current ETL is moderately complex, though there are only about 30 core tables (classic EDW dimensions and facts), and ~1000 transformation steps. Source data is complex (roughly 150 Oracle tables).
(2) The complex non-key joins, how would these be handled?
I'm obviously attracted to Google Dataflow because it is first and foremost an API, and its parallel processing capabilities seem a very good fit (we are being asked to move from overnight batch to incremental processing).
A good worked example of Dataflow for this use case would really push adoption forward!
Thanks,
Mike S

It sounds like Dataflow would be a good fit. We allow you to write a pipeline that takes a PCollection of business events and performs the ETL. The pipeline could either be batch (executed periodically) or streaming (executed whenever input data arrives).
The various joins are, for the most part, straightforward to express in Dataflow. For the cartesian product, you can look at using side inputs to make the contents of one PCollection available as an input to the processing of each element of another PCollection.
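For instance, here is a minimal sketch of the side-input approach using the Apache Beam Python SDK (the dataset contents and transform labels are made up for illustration):

import apache_beam as beam
from apache_beam.pvalue import AsList

with beam.Pipeline() as p:
    # Hypothetical inputs: a large PCollection and a small lookup PCollection.
    orders = p | 'Orders' >> beam.Create([('o1', 'EUR'), ('o2', 'USD')])
    rates = p | 'Rates' >> beam.Create([('EUR', 1.1), ('USD', 1.0)])

    # Cartesian product: pair every order with every rate. The side input
    # materializes the full contents of 'rates' on each worker.
    crossed = orders | 'Cross' >> beam.FlatMap(
        lambda order, all_rates: [(order, rate) for rate in all_rates],
        all_rates=AsList(rates))

    crossed | 'Print' >> beam.Map(print)

Side inputs work well when one side of the join is small enough to fit in worker memory; for two large inputs a keyed join is usually the better fit.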
You can also look at using GroupByKey or CoGroupByKey to implement the joins. These bring all values with the same key together in one place (CoGroupByKey does this across multiple input PCollections). You can also use Combine.perKey to compute associative and commutative combinations of all the elements associated with a key (e.g., sum, min, max, average, etc.).
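As a sketch, a keyed join with CoGroupByKey in the Beam Python SDK might look like this (names and data are illustrative):

import apache_beam as beam

with beam.Pipeline() as p:
    emails = p | 'Emails' >> beam.Create([('amy', 'amy@example.com')])
    phones = p | 'Phones' >> beam.Create([('amy', '555-1234'),
                                          ('bob', '555-9876')])

    # Per key, gather all values from each input PCollection.
    # Output elements look like ('amy', {'emails': [...], 'phones': [...]}),
    # from which inner/outer join semantics can be derived in a ParDo.
    joined = ({'emails': emails, 'phones': phones}
              | 'Join' >> beam.CoGroupByKey())

    joined | 'Print' >> beam.Map(print)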
Date-banded joins sound like a good fit for windowing, which allows you to write a pipeline that consumes windows of data (e.g., hourly windows, daily windows, 7-day windows that slide every day, etc.).
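A sketch of windowed aggregation in the Beam Python SDK (the window sizes and the input shape are assumptions):

import apache_beam as beam
from apache_beam.transforms import window

def daily_sums(events):
    # 'events' is assumed to be a PCollection of timestamped (key, value) pairs.
    return (events
            # Assign each element to a fixed daily window; use
            # window.SlidingWindows(7 * 86400, 86400) for 7-day windows
            # that slide every day.
            | 'Daily' >> beam.WindowInto(window.FixedWindows(24 * 60 * 60))
            # Downstream grouping/combining now happens per window,
            # so this sums values per key per day.
            | 'SumPerKey' >> beam.CombinePerKey(sum))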

Related

Difference between Cloud Data Fusion and Dataflow on GCP

What is the difference between the GCP pipeline services Cloud Dataflow and Cloud Data Fusion, and when would you use which?
I did a high-level pricing comparison, taking 10 instances with the Basic edition in Data Fusion and a 10-instance cluster (n1-standard-8) in Dataflow. The pricing is more than double for Data Fusion.
What are the pros and cons of each over the other?
Cloud Dataflow is purpose-built for highly parallelized graph processing, and can be used for both batch and stream processing. It is also built to be fully managed, abstracting away the need to manage and understand underlying resource-scaling concepts, e.g. how to optimize shuffle performance or deal with key-imbalance issues. The user/developer is responsible for building the graph via code, creating N transforms and/or operations to achieve the desired goal. For example: read files from storage, process each line in the file, extract data from the line, cast the data to numeric, sum the data in groups of X, write the output to the data lake.
Cloud Data Fusion is focused on enabling data integration scenarios => reading from sources (via an extensible set of connectors) and writing to targets, e.g. BigQuery, storage, etc. It does have parallelization concepts, but they are not fully managed as in Cloud Dataflow. CDF rides on top of Cloud Dataproc, which is a managed service for Hadoop-based processing. Its sweet spot is visual graph development leveraging an extensible set of connectors and operators.
Your question is framed around cost. My advice is to take a step back and define what your processing/graph goals look like, then look at each product's value. If you want full control over processing semantics, with a greater focus on analytics, and need batch and/or streaming, focus on Dataflow. If you want point-and-click data movement, with less need for data analytics, and do not need streaming, then look at CDF.

Neo4j: Cypher Performance testing/Benchmarking

I created a Neo4j 3 database that includes some test data, and also a small application that sends HTTP Cypher requests to Neo4j. These requests are always of the same type; actually, it's a query template that differs only in some attributes. I am interested in the performance of these statements.
I know that I can use PROFILE to get some information in the browser. But I want to execute a set of statements, e.g. 10 example queries, several times and calculate the average performance. Is there an easy way or a tool to do this, or do I have to write, e.g., a Python script that collects these values? It does not have to be a big application; I just want to see some general performance metrics.
I don't think there is an out-of-the-box tool for benchmarking Neo4j yet, so your best option is to implement your own solution - but you have to be careful if you want results that are (to some degree) representative:
Check the docs on performance.
Give the Neo4j JVM sufficient time to warm up. This means you'll want to run a warmup phase with the queries and discard their execution times.
Instead of using a client-server architecture, you can also opt to use Neo4j in embedded mode, which will give you a better idea of raw query performance (without the overhead of the driver and serialization/deserialization). However, in this case you have to implement the benchmark on the JVM (in Java, or possibly Jython).
Run each query multiple times. Avoid the arithmetic mean, as it is sensitive to outliers (you can get high values for a number of reasons, e.g. if the OS scheduler starts some job in the background during a particular query execution). A minimal script along these lines is sketched at the end of this answer.
A good paper on the topic, How not to lie with statistics: the correct way to summarize benchmark results, argues that you should use the geometric mean.
It is also common practice in computer science performance experiments to use the median value. I tend to use this option - e.g. comparing the first execution and the median of 5 consecutive executions of two simple SPARQL queries on in-memory RDF engines (Jena and Sesame).
Note, however, that Neo4j employs various caching mechanisms, so if you run the same query multiple times, it will only need to compute the results on the first execution; subsequent executions will use the cache - unless the database is updated between the query executions.
In general, design the benchmark to resemble your actual workload as closely as possible - in many cases, application-specific macrobenchmarks make more sense than microbenchmarks. So if each query will only be evaluated once by the application, it is perfectly acceptable to benchmark only that first evaluation.
(Bonus.) Another good read on the topic is The Benchmark Handbook - chapter 1 discusses the most important criteria for domain-specific benchmarks (relevance, portability, scalability and simplicity). These are probably not required for your benchmark, but they are nice to know.
I worked on a cross-technology benchmark considering relational, graph and semantic databases, including Neo4j. You might find some useful ideas or code snippets in the repository: https://github.com/FTSRG/trainbenchmark
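To make the warmup-then-median advice concrete, here is a minimal sketch of such a script in Python, using the official neo4j driver (the URI, credentials and query template are placeholders):

import statistics
import time
from neo4j import GraphDatabase  # pip install neo4j

URI, AUTH = "bolt://localhost:7687", ("neo4j", "password")  # placeholders
QUERY = "MATCH (n:User) WHERE n.age > $age RETURN count(n)"  # hypothetical template
WARMUP, RUNS = 10, 50

def run_once(session, params):
    start = time.perf_counter()
    session.run(QUERY, params).consume()  # consume() drains the result stream
    return time.perf_counter() - start

with GraphDatabase.driver(URI, auth=AUTH) as driver:
    with driver.session() as session:
        for _ in range(WARMUP):  # warmup phase: run queries, discard timings
            run_once(session, {"age": 30})
        times = [run_once(session, {"age": 30}) for _ in range(RUNS)]

print("median: %.2f ms" % (statistics.median(times) * 1000))
print("geometric mean: %.2f ms" % (statistics.geometric_mean(times) * 1000))

Note that this measures client-observed latency including driver and network overhead, which is exactly what the embedded-mode point above is about.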

Neo4j partition

Is there a way to physically separate Neo4j partitions?
Meaning the following query will go to node1:
Match (a:User:Facebook)
While this query will go to another node (maybe hosted in Docker):
Match (b:User:Google)
This is the case:
I want to store data for several clients under Neo4j, hopefully lots of them.
Now, I'm not sure what the best design for that is, but it has to fulfill a few conditions:
no mixed data should be returned from a Cypher query (it's really hard to make sure that no developer will forget the ":Partition1" (for example) in a Cypher query)
the performance of one client shouldn't affect another; for example, if one client has lots of data and another has a small amount, or if a "heavy" query of one client is currently running, I don't want the "lite" queries of another client to suffer from slow performance
In other words, storing everything under one node will, I think, run into scalability problems at some point, as I get more clients.
BTW, is it common to have a few clusters?
Also, what's the advantage of partitioning over creating a different label for each client, for example Users_client_1, Users_client_2, etc.?
Short answer: no, there isn't.
Neo4j has high availability (HA) clusters where you can make a copy of your entire graph on many machines and then serve many requests against that copy quickly, but they don't partition a really huge graph so that some of it is stored here, other parts there, and the whole connected by one query mechanism.
More detailed answer: graph partitioning is a hard problem, subject to ongoing research. You can read more about it on Wikipedia, but the gist is that when you create partitions, you split your graph up across multiple locations and then need to deal with the complication of relationships that cross partitions. Crossing partitions is an expensive operation, so the real question when partitioning is: how do you partition such that the need to cross partitions in a query comes up as infrequently as possible?
That's a really hard question, since it depends not only on the data model but on the access patterns, which may change.
Here's how bad the situation is (quoting the Wikipedia article on graph partitioning):
Typically, graph partition problems fall under the category of NP-hard problems. Solutions to these problems are generally derived using heuristics and approximation algorithms.[3] However, uniform graph partitioning or a balanced graph partition problem can be shown to be NP-complete to approximate within any finite factor.[1] Even for special graph classes such as trees and grids, no reasonable approximation algorithms exist,[4] unless P=NP. Grids are a particularly interesting case since they model the graphs resulting from Finite Element Model (FEM) simulations. When not only the number of edges between the components is approximated, but also the sizes of the components, it can be shown that no reasonable fully polynomial algorithms exist for these graphs.
Not to leave you with too much doom and gloom: plenty of people have partitioned big graphs. Facebook and Twitter do it every day, so you can read about FlockDB on the Twitter side or avail yourself of relevant Facebook research. But to summarize and cut to the chase: it depends on your data, and most people who partition design a custom partitioning strategy; it's not something the software does for them.
Finally, other architectures (such as Apache Giraph) can auto-partition in some sense: if you store a graph on top of Hadoop, and Hadoop already automagically scales across a cluster, then technically this partitions your graph for you, automagically. Cool, right? Well... cool until you realize that you still have to execute graph traversal operations all over the place, which may perform very poorly because all of those partitions have to be traversed - exactly the performance situation you were trying to avoid by partitioning wisely in the first place.

Analyzing Sensor Data stored in cassandra and draw graphs

I'm collecting data from different sensors and writing it to a Cassandra database.
The sensor ID acts as the partition key, and the timestamp of the sensor reading as the clustering column. Additionally, the sensor's value is stored.
Each sensor collects roughly 30,000 to 60,000 values a day.
The simplest thing I want to do is draw a graph showing this data. This is not a problem for a few hours, but when showing a week or an even longer range, all the data has to be loaded into the backend (a Rails application) for further processing. This isn't really fast with my test dataset, and I don't expect it to be faster in production.
So my question is: how do I speed this up? I thought about pre-processing the data directly in the database, but it seems that Cassandra isn't able to do such things.
For a graph with a width of 1000px it isn't useful to draw tens of thousands of points - so it would be better to fetch only relevant, pre-aggregated data from the database.
For example, when showing the data for a whole day in a graph with a width of 1000px, it would be enough to take 1000 average values (one average per 86.4-second bucket: 60*60*24 / 1000).
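A minimal pure-Python sketch of that downsampling step (the function name and bucket count are illustrative):

def downsample(points, n_buckets=1000):
    # points: list of (unix_timestamp, value) pairs; returns one averaged
    # point per non-empty bucket, e.g. 86.4 s wide for one day / 1000.
    if not points:
        return []
    t_min = min(t for t, _ in points)
    t_max = max(t for t, _ in points)
    width = (t_max - t_min) / n_buckets or 1  # avoid zero-width buckets
    buckets = [[] for _ in range(n_buckets)]
    for t, v in points:
        i = min(int((t - t_min) / width), n_buckets - 1)
        buckets[i].append(v)
    return [(t_min + (i + 0.5) * width, sum(vs) / len(vs))
            for i, vs in enumerate(buckets) if vs]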
Is this a good approach? Or are there other techniques to speed this up? How would I handle this in the database? Create a second table and store some average values? But the resolution of the graph may change...
Other approaches would be drawing mean values by day, week, month and so on. Maybe for this a second table could do a good job!
Cassandra is all about letting you write and read your data quickly. Think of it as just a data store. It can't (really) do any processing on that data.
If you want to do operations on it, then you are going to need to pull the data into something else. Storm is quite popular for building computation clusters that process data from Cassandra, but without knowing exactly the scale you need to operate at, that may be overkill.
Another option which might suit you is to aggregate data on the way in, or in nightly jobs. This is how OLAP is often done with other technologies. This works if you know in advance what you need to aggregate: you could build your rollups into hourly, daily, whatever buckets, then pull a smaller amount of data into Rails for graphing (and possibly aggregate it even further there to exactly meet the desired graph resolution).
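A rough sketch of such a rollup in Python with the DataStax cassandra-driver (the keyspace, table and column names are hypothetical):

from cassandra.cluster import Cluster  # pip install cassandra-driver

cluster = Cluster(['127.0.0.1'])
session = cluster.connect('sensors')  # hypothetical keyspace

# Hypothetical rollup table:
# CREATE TABLE hourly_avg (sensor_id text, hour timestamp, avg_value double,
#                          PRIMARY KEY (sensor_id, hour));

def rollup_hour(sensor_id, hour_start, hour_end):
    # Read the raw values for one sensor and one hour...
    rows = session.execute(
        "SELECT value FROM raw_data WHERE sensor_id=%s AND ts>=%s AND ts<%s",
        (sensor_id, hour_start, hour_end))
    values = [row.value for row in rows]
    if values:
        # ...and store a single pre-aggregated point for graphing.
        session.execute(
            "INSERT INTO hourly_avg (sensor_id, hour, avg_value) "
            "VALUES (%s, %s, %s)",
            (sensor_id, hour_start, sum(values) / len(values)))

Run from a nightly job (or triggered on ingest), this keeps the graphing query down to at most a few thousand rows per sensor.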
For the purposes of storing, aggregating, and graphing your sensor data, you might consider RRDtool which does basically everything you describe. Its main limitation is it does not store raw data, but instead stores aggregated, interpolated values. (If you need the raw data, you can still use Cassandra for that.)
AndySavage is onto something here when it comes to precomputing aggregate values. This does require you to understand in advance the sorts of metrics you'd like to see from the sensor values generally.
You correctly identify the limitation of a graph in informing the viewer. Questions you need to ask really fall into areas such as:
When you aggregate are you interested in the mean, median, spread of the values?
What's the biggest aggregation that you're interested in?
What's the goal of the data visualisation - is it really necessary to be looking at a whole year of data?
Are outliers the important part of the dataset?
Each of these questions will lead you down a different path with visualisation and the application itself too.
Once you know what you want to do, an ETL process harnessing some form of analytical processing will be needed. This is where the Hadoop world would be worth investigating.
Regarding your decision to use Cassandra as your timeseries historian, how is that working for you? I'm looking at technical solutions for a similar requirement at the moment and it's one of the options on the table.

Neo4J Performance Benchmarking

I have created a basic implementation of a high-level client over Neo4j (https://github.com/impetus-opensource/Kundera/tree/trunk/kundera-neo4j) and want to compare its performance with the native Neo4j driver (and maybe Spring Data too). This way I would be able to determine the overhead my library adds over the native driver.
I plan to create an extension of YCSB for Neo4J.
My question is: what should be considered the basic unit of object to be written into Neo4j (a single node, or a couple of nodes joined by an edge)?
What's the current practice in the Neo4j world? How do people who benchmark Neo4j performance do it?
There's already been some work for benchmarking Neo4J with Gatling: http://maxdemarzi.com/2013/02/14/neo4j-and-gatling-sitting-in-a-tree-performance-t-e-s-t-ing/
You could maybe adapt it.
See graphdb-benchmarks
The graphdb-benchmarks project is a benchmark of popular graph databases. Currently the framework supports Titan, OrientDB, Neo4j and Sparksee. The purpose of this benchmark is to examine the performance of each graph database in terms of execution time. The benchmark is composed of four workloads: Clustering, Massive Insertion, Single Insertion and Query Workload. Every workload has been designed to simulate common operations in graph database systems.
Clustering Workload (CW): CW consists of a well-known community detection algorithm for modularity optimization, the Louvain Method. We adapt the algorithm on top of the benchmarked graph databases and employ cache techniques to take advantage of both graph database capabilities and in-memory execution speed. We measure the time the algorithm needs to converge.
Massive Insertion Workload (MIW): Create the graph database and configure it for massive loading, then we populate it with a particular dataset. We measure the time for the creation of the whole graph.
Single Insertion Workload (SIW): Create the graph database and load it with a particular dataset. Every object insertion (node or edge) is committed directly and the graph is constructed incrementally. We measure the insertion time per block, which consists of one thousand edges and the nodes that appear during the insertion of these edges.
Query Workload (QW): Execute three common queries:
FindNeighbours (FN): finds the neighbours of all nodes.
FindAdjacentNodes (FA): finds the adjacent nodes of all edges.
FindShortestPath (FS): finds the shortest path between the first node and 100 randomly picked nodes.
One way to performance-test is to use e.g. http://gatling-tool.org/. There is work underway to create benchmark frameworks at http://ldbc.eu. Otherwise, benchmarking is highly dependent on your domain, dataset, and the queries you are trying to run. Maybe you could start at https://github.com/neo4j/performance-benchmark and improve on it?
