We have several BigQuery tables that we're reading from through Dataflow. At the moment those tables are flattened and a lot of the data is repeated. In Dataflow, all operations must be idempotent: any output depends only on the input to the function, and no state is kept anywhere else. That is why it makes sense to first group together all the records that belong together, which in our case probably means creating complex objects.
An example of a complex object (there are many other types like this); we can obviously have millions of instances of each type:
Customer {
    customerId
    address {
        street
        zipcode
        region
        ...
    }
    first_name
    last_name
    ...
    contactInfo: {
        "phone1": { type, number, ... },
        "phone2": { type, number, ... }
    }
}
The examples we found for Dataflow only process very simple objects, demonstrating counting, summing and averaging.
In our case, we eventually want to use Dataflow to perform more complicated processing in accordance with sets of rules. Those rules apply to the full content of a customer, invoice or order, for example, and eventually produce a whole set of indicators, sums and other items.
We considered doing this 100% in BigQuery, but this gets very messy very quickly due to the rules that apply per entity.
At this time I'm still wondering whether Dataflow is really the right tool for this job. There are almost no examples for Dataflow that demonstrate how it's used for this type of more complex object with one or two collections. The closest I found was the use of a "LogMessage" object for log processing, but this didn't have any collections and therefore didn't do any hierarchical processing.
The biggest problem we're facing is hierarchical processing. We're reading data like this:
customerid ... street  zipcode  region ... phoneid  type  number
1              a       b        c          phone1   1     555-2424
1              a       b        c          phone2   1     555-8181
The first operation should be to group those rows together to construct a single entity, so that we can make our operations idempotent. What is the best way to do that in Dataflow? Or can you point us to an example that does this?
You can use any object as the elements in a Dataflow pipeline. The TrafficMaxLaneFlow example uses a complex object (although it doesn't have a collection).
In your example you would do a GroupByKey to group the elements. The result is a KV<K, Iterable<V>>. The KV here is just an object and has a collection-like value inside. You could then take that KV<K, Iterable<V>> and turn it into whatever kind of objects you wanted.
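For instance, here's a minimal sketch of that pattern in the Beam/Dataflow Python SDK. The table and field names are placeholders matching your schema, and I'm assuming each row arrives as a dict:

import apache_beam as beam

def to_customer(element):
    """Build one nested customer object from its group of flat rows."""
    customer_id, rows = element          # rows: iterable of flat row dicts
    rows = list(rows)
    first = rows[0]                      # per-customer fields repeat on every row
    return {
        'customerId': customer_id,
        'first_name': first['first_name'],
        'last_name': first['last_name'],
        'address': {
            'street': first['street'],
            'zipcode': first['zipcode'],
            'region': first['region'],
        },
        # the repeated part of the flattened data becomes a collection
        'contactInfo': {
            row['phoneid']: {'type': row['type'], 'number': row['number']}
            for row in rows
        },
    }

with beam.Pipeline() as p:
    customers = (
        p
        | 'Read' >> beam.io.ReadFromBigQuery(table='project:dataset.customers')
        | 'KeyById' >> beam.Map(lambda row: (row['customerid'], row))
        | 'GroupById' >> beam.GroupByKey()
        | 'Assemble' >> beam.Map(to_customer)
    )

Each output element is now one self-contained customer, so the downstream rule-processing DoFns stay idempotent.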
The only thing to be aware of is that if you have very few elements that are really big you may run into some parallelism limits. Specifically, each element needs to be small enough to be processed on a single machine.
You may also be interested in withoutResultFlattening on BigQueryIO. It only supports reading from a query (rather than a table), but it should provide the results without flattening.
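If you're on the Python SDK instead, I believe the equivalent knob is the flatten_results flag on the BigQuery read (worth double-checking against your SDK version); note it only matters for legacy SQL queries, since standard SQL never flattens:

import apache_beam as beam

with beam.Pipeline() as p:
    # Query-based read that asks BigQuery not to flatten nested/repeated
    # fields; the query text is only a placeholder.
    rows = p | beam.io.ReadFromBigQuery(
        query='SELECT * FROM [project:dataset.customers]',
        flatten_results=False,
    )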
I have a complex data model consisting of around hundred tables containing business data. Some tables are very wide, up to four hundred columns. Columns can have various data types - integers, decimals, text, dates etc. I'm looking for a way to identify relevant / important information stored in these tables.
I fully understand that business knowledge is essential to correctly process a data model. What I'm looking for are some strategies to pre-process tables and identify columns that should be taken to a later stage where analysts will actually look into them. For example, I could use data profiling and statistics to find and exclude columns that don't have any data at all, or where all records have the same value. This way I could potentially eliminate 30% of fields. However, I'm interested in exploring how AI and Machine Learning techniques could be used to identify important columns, hoping I could identify around 80% of relevant data. I'm aware that relevant information will depend on the questions I want to ask. But even then, I hope I could narrow down the columns to simplify the manual assessment taking place in the next stage.
Could anyone provide some guidance on how to use AI and Machine Learning to identify relevant columns in such wide tables? What strategies and techniques can be used to pre-process tables and identify columns that should be taken to the next stage?
Any help or guidance would be greatly appreciated. Thank you.
F.
The most common approach I've seen to evaluate the analytical utility of columns is the correlation method. This tells you whether there is a relationship (positive or negative) between specific column pairs. In my experience you'll be able to build analysis outputs more easily when columns are correlated, although these analyses may not always be the most accurate.
However, before you even do that, as you indicate, you would probably need to narrow down your list of columns using much simpler methods. For example, you could surely eliminate a whole bunch of columns based on data type and basic count statistics.
Less common analytic data types (IDs, blobs, binary, etc.) can probably be excluded first, followed by running a simple COUNT(DISTINCT ColName) and COUNT(*) WHERE ColName IS NULL. This helps eliminate unique IDs, keys, and similar fields: if all the rows are distinct, the field is not a good candidate for analysis. The same process applies to NULLs: if the percentage of NULLs is greater than some threshold, you can eliminate those columns as well.
To automate this, depending on your database, you could create a fairly simple stored procedure or function that loops through all the tables and columns and runs a data type, distinct count, and null percentage analysis on each field.
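If it's easier to do the looping outside the database, a rough pandas sketch of that profiling pass could look like this (the thresholds are arbitrary placeholders you'd tune):

import pandas as pd

def profile_columns(df: pd.DataFrame,
                    max_distinct_ratio: float = 0.95,
                    max_null_ratio: float = 0.5) -> pd.DataFrame:
    """Flag columns that are unlikely to be useful for analysis."""
    n = len(df)
    stats = pd.DataFrame({
        'dtype': df.dtypes.astype(str),
        'distinct_ratio': df.nunique(dropna=True) / n,
        'null_ratio': df.isna().mean(),
    })
    # Near-unique columns behave like IDs; near-empty columns carry no signal.
    stats['keep'] = ((stats['distinct_ratio'] < max_distinct_ratio)
                     & (stats['null_ratio'] < max_null_ratio))
    return stats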
Once you've narrowed down the list of columns, you can use a .corr() function to run the analysis against all the remaining columns in something like a Python script.
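Continuing the sketch above, once the obviously useless columns are gone, pandas will give you the full pairwise correlation matrix in one call; load_wide_table() below is just a stand-in for however you read the data:

import pandas as pd

# load_wide_table() stands in for however you read the table
# (pd.read_sql, a BigQuery client, a CSV export, ...).
df = load_wide_table()
kept = df.loc[:, profile_columns(df)['keep']]   # profile_columns: sketch above

# Pairwise correlation across all remaining numeric columns at once.
corr_matrix = kept.corr(numeric_only=True)

# Flag strongly correlated pairs (0.7 is an arbitrary cut-off).
strong_pairs = corr_matrix.abs().gt(0.7) & corr_matrix.abs().lt(1.0)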
If you wanted to keep everything in the database, Postgres also supports a corr() aggregate function, but you'll only be able to run this on 2 columns at a time, like this:
SELECT corr(column1,column2) FROM table;
so you'll need to build a procedure that evaluates multiple columns at once.
I've thought about this technical challenge for some time. In general it's an AI-solvable problem, since there are easy features to extract such as unique values, clustering, and distributions.
We want to bake this ability into https://columns.ai. Obviously we haven't gotten there yet; the first step we have taken is to collect stats for all columns when a data connection is made, identify columns that have a similar range of unique values, and generate a set of query templates for users to explore their dataset.
If you're interested, please take a look; as we keep advancing this part, it will get closer to an AI model that finds relevant columns. Cheers!
I'm struggling to find a real-world example of how to use Google Cloud Dataflow combiners to run a common ETL task which aggregates records on multiple keys (e.g. Date, Location) and sums values over different measures (e.g. GrossValue, NetValue, Quantity). I can only find examples with a typical Key/Value (e.g. Day/Value) aggregation. Any hints on how this is done with the Python SDK would be appreciated.
I'm not 100% sure I understand your question. Do you have separate elements you are trying to join the data together for, in which case you may wish to use CoGroupByKey? Or does a single element have multiple fields?
Hope some of this info helps,
I would suggest looking at windowing, which allows you to subdivide a PCollection according to the timestamps of its individual elements. If you want to see all the events for a particular day this may be useful: you could window across a day's worth of data. The Python SDK has examples of windowing, and they are also useful for understanding how you can use GroupByKey in different ways.
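A minimal sketch of daily windowing in the Python SDK, assuming each record carries a Unix timestamp in a field I'll call event_time (a made-up name):

import apache_beam as beam
from apache_beam import window

with beam.Pipeline() as p:
    daily = (
        p
        | 'Read' >> beam.Create([
            {'event_time': 1_700_000_000, 'location': 'NYC', 'value': 10.0},
        ])
        # Attach each record's own event time as the element timestamp.
        | 'Timestamp' >> beam.Map(
            lambda rec: window.TimestampedValue(rec, rec['event_time']))
        # Fixed one-day windows; any GroupByKey/Combine downstream now
        # operates on one day's worth of data at a time.
        | 'DailyWindows' >> beam.WindowInto(window.FixedWindows(24 * 60 * 60))
    )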
Another option is to determine which date your elements belong to, and use a GroupByKey to key them with "[location][date][other]". You may need to do something like this if you want to join the data based on multiple fields.
See this GroupByKey example, but change the key to use your multiple fields concatenated, as sketched below.
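Something like this (untested) Python sketch, where the field names are placeholders matching your example measures:

import apache_beam as beam

def sum_measures(element):
    """Sum each measure position-wise within one (date, location) group."""
    (date, location), measures = element
    gross, net, qty = map(sum, zip(*measures))
    return {'date': date, 'location': location,
            'gross_value': gross, 'net_value': net, 'quantity': qty}

with beam.Pipeline() as p:
    aggregated = (
        p
        | 'Read' >> beam.Create([
            {'date': '2016-01-01', 'location': 'NYC',
             'gross_value': 10.0, 'net_value': 8.0, 'quantity': 1},
            {'date': '2016-01-01', 'location': 'NYC',
             'gross_value': 5.0, 'net_value': 4.0, 'quantity': 2},
        ])
        # The key is simply a tuple of all the grouping fields.
        | 'KeyByDims' >> beam.Map(lambda r: (
            (r['date'], r['location']),
            (r['gross_value'], r['net_value'], r['quantity'])))
        | 'Group' >> beam.GroupByKey()
        | 'Sum' >> beam.Map(sum_measures)
    )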
Here is an example of reducing with a custom combiner. You can add logic there to do a custom aggregation over multiple different measurements.
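And here's a sketch of the custom-combiner route with the same placeholder fields. A CombinePerKey lets the runner pre-aggregate on each worker before the shuffle, which is usually cheaper than a bare GroupByKey:

import apache_beam as beam

class SumMeasuresFn(beam.CombineFn):
    """Sums three measures in a single pass over each key's records."""

    def create_accumulator(self):
        return (0.0, 0.0, 0)                      # gross, net, quantity

    def add_input(self, acc, rec):
        return (acc[0] + rec['gross_value'],
                acc[1] + rec['net_value'],
                acc[2] + rec['quantity'])

    def merge_accumulators(self, accumulators):
        # Position-wise sum of the partial accumulators from each worker.
        return tuple(map(sum, zip(*accumulators)))

    def extract_output(self, acc):
        return {'gross_value': acc[0], 'net_value': acc[1], 'quantity': acc[2]}

with beam.Pipeline() as p:
    aggregated = (
        p
        | 'Read' >> beam.Create([
            {'date': '2016-01-01', 'location': 'NYC',
             'gross_value': 10.0, 'net_value': 8.0, 'quantity': 1},
        ])
        | 'KeyByDims' >> beam.Map(lambda r: ((r['date'], r['location']), r))
        | 'Combine' >> beam.CombinePerKey(SumMeasuresFn())
    )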
Google Cloud Dataflow supports what I would call a SQL-like "full outer join" statement through its "CoGroupByKey" method. However, is there any way to implement in Dataflow what in SQL would be a "range join"? For example, suppose I had a table called "people" with a floating point field called "age", and I wanted all the pairs of people whose ages are within, say, five years of each other. I could write the following statement:
select p1.name, p1.age, p2.name, p2.age
from people p1, people p2
where p1.age between (p2.age - 5.0) and (p2.age + 5.0);
I couldn't determine whether there was a way to accomplish this in Dataflow. (Again, if I wanted a strict equality, then I could use a CoGroupByKey, but in this case it's not a strict equality condition.)
For my particular use case, the "people" table is not too large – maybe 500,000 rows and approximately 50 MB of RAM required. So I could, I think, simply use asList() to create a single object that sits in a single computer's RAM, sort the people by age, and then write some sort of routine that walks through the list from the lowest age to the highest and, while walking, outputs those people whose ages are within five years of each other. This would work, but it would be single-threaded, etc. I was wondering if there was a "better" way of doing it using the Dataflow architecture. (Other developers may also need to find a "Dataflow" way of doing this operation if the objects they are dealing with do not fit nicely into the memory of one single computer, e.g. a people table of maybe 1 billion rows.)
The trick to making this work efficiently at scale is to partition your data into sets of potential matches. In your case, you could assign each person to two different keys: age rounded up to a multiple of 5, and age rounded down to a multiple of 5. Then do a GroupByKey on these buckets, and emit all the pairs within each bucket that are actually close enough in age. You'll need to eliminate duplicates, since it's possible for two records to both end up in the same two buckets.
With this solution, the entire data does not need to fit in memory, just each subset of the data.
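A rough Python SDK sketch of that bucketing scheme, with the five-year radius from your question; the final Distinct is the de-duplication step mentioned above:

import apache_beam as beam

RADIUS = 5.0  # bucket width equals the join radius

def to_buckets(person):
    """Emit each person under the two buckets that can contain a match."""
    low = int(person['age'] // RADIUS)
    yield low, person
    yield low + 1, person

def close_pairs(element):
    """Within one bucket, emit every pair whose ages are close enough."""
    _, people = element
    people = sorted(people, key=lambda p: p['name'])
    for i, p1 in enumerate(people):
        for p2 in people[i + 1:]:
            if abs(p1['age'] - p2['age']) <= RADIUS:
                yield (p1['name'], p2['name'])   # canonical order

with beam.Pipeline() as p:
    pairs = (
        p
        | 'Read' >> beam.Create([
            {'name': 'alice', 'age': 33.0},
            {'name': 'bob', 'age': 36.5},
        ])
        | 'Bucket' >> beam.FlatMap(to_buckets)
        | 'Group' >> beam.GroupByKey()
        | 'Pair' >> beam.FlatMap(close_pairs)
        # A pair can appear in two shared buckets, so drop the duplicates.
        | 'Dedup' >> beam.Distinct()
    )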
There are several possible ways I can think of to store and then query temporal data in Neo4j. Looking at an example of being able to search for recurring events and any exceptions, I can see two possibilities:
One easy option would be to create a node for each occurrence of the event. Whilst it would be easy to construct a Cypher query to find all events on a day, in a range, etc., this could create a lot of unnecessary nodes. On the other hand, it would make it very easy to change individual events' times, locations, etc., because there is already a node with the basic information.
The second option is to store the recurrence temporal pattern as a property of the event node. This would greatly reduce the number of nodes within the graph. When searching for events on a specific date or within a range, all nodes that meet the start/end date (plus any other) criteria could be returned to the client. It then boils down to iterating through the results to pluck out the subset whose temporal pattern gives a date within the search range, then comparing that against any exceptions and merging (or ignoring) the results as necessary (this could probably be partially achieved as part of the initial query when pulling the result set).
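For illustration, the client-side second pass could look something like this Python sketch, assuming (purely hypothetically) the pattern is stored as an iCalendar-style RRULE string and the exceptions as a list of datetimes:

from datetime import datetime
from dateutil.rrule import rrulestr  # pip install python-dateutil

def occurrences_in_range(event, range_start, range_end):
    """Expand an event node's recurrence property, dropping its exceptions."""
    rule = rrulestr(event['rrule'], dtstart=event['start'])
    exceptions = set(event.get('exceptions', []))
    return [occ for occ in rule.between(range_start, range_end, inc=True)
            if occ not in exceptions]

event = {
    'start': datetime(2014, 1, 6, 9, 0),
    'rrule': 'FREQ=WEEKLY;BYDAY=MO',              # every Monday at 09:00
    'exceptions': [datetime(2014, 1, 20, 9, 0)],  # one cancelled occurrence
}
print(occurrences_in_range(event,
                           datetime(2014, 1, 1), datetime(2014, 2, 1)))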
Whilst the second option is the one I would choose currently, it seems quite inefficient in that it processes the data twice, albeit a smaller subset the second time. Even a plugin to Neo4j would probably result in two passes through the data, but the processing would be done on the database server rather than the requesting client.
What I would like to know is whether it is possible to use Cypher or Neo4j to do this processing as part of the initial query?
Whilst I'm not 100% sure I understand your requirement, I'd have a look at this blog post; perhaps you'll find a bit of inspiration there: http://graphaware.com/neo4j/2014/08/20/graphaware-neo4j-timetree.html
Mnesia has four methods of reading from the database: read, match_object, select, and qlc (besides their dirty counterparts, of course). Each of them is more expressive than the previous ones.
Which of them use indices?
Given a query in one of these methods, will the same query in the more expressive methods be less efficient in time/memory usage? By how much?
UPDATE:
As user "I GIVE CRAP ANSWERS" mentioned, read is just a key-value lookup, but after a while of exploration I also found the functions index_read and index_write, which work in the same manner but use indices instead of the primary key.
One at a time, though from memory:
read always uses a Key-lookup on the keypos. It is basically the key-value lookup.
match_object and select will optimize the query if it can on the keypos key. That is, it only uses that key for optimization. It never utilizes further index types.
qlc has a query-compiler and will attempt to use additional indexes if possible, but it all depends on the query planner and if it triggers. erl -man qlc has the details and you can ask it to output its plan.
Mnesia tables are basically key-value maps from terms to terms. Usually, this means that if the key part is something the query can latch onto and use, then it is used. Otherwise, you will be looking at a full-table scan. This may be expensive, but do note that the scan is in-memory and thus usually fairly fast.
Also, take note of the table type: set is a hash-table and can't utilize a partial key match. ordered_set is a tree and can do a partial match:
Example: if we have a key {Id, Timestamp}, querying on {Id, '_'} as the key is reasonably fast on an ordered_set because the lexicographic ordering means we can utilize the tree for a fast walk. This is the equivalent of specifying a composite INDEX/PRIMARY KEY in a traditional RDBMS.
If you can arrange the data such that you can do simple queries without additional indexes, then that representation is preferred. Also note that additional indexes are implemented as bags, so if you have many matches for an index value, it is very inefficient. In other words, you should probably not index on a position in the tuples where there are few distinct values. It is better to index on things with many (mostly) distinct values, like an e-mail address column for users, for instance.