I'm trying to find an efficient way to transform a DataFrame into a bunch of persisted Series (columns) in Dask.
Consider a scenario where the data size is much larger than the sum of worker memory and most operations will be wrapped by read-from-disk / spill-to-disk. For algorithms which operate only on individual columns (or pairs of columns), reading-in the entire DataFrame from disk for every column operation is inefficient. In such a case, it would be nice to locally switch from a (possibly persisted) DataFrame to persisted columns instead. Implemented naively:
persisted_columns = {}
for column in subset_of_columns_to_persist:
    persisted_columns[column] = df[column].persist()
This works, but it is very inefficient because df[column] will re-read the entire DataFrame N = len(subset_of_columns_to_persist) times from disk. Is it possible to extract and persist multiple columns individually based on a single read-from-disk deserialization operation?
Note: len(subset_of_columns_to_persist) is >> 1, i.e., simply projecting the DataFrame to df[subset_of_columns_to_persist] is not the solution I'm looking for, because it still has a significant I/O overhead over persisting individual columns.
You can persist many collections at the same time with the dask.persist function. This will share intermediates.
columns = [df[column] for column in df.columns]
persisted_columns = dask.persist(*columns)
d = dict(zip(df.columns, persisted_columns))
I want to set the index for a dask dataframe (from_delayed) using already-known divisions. However, dask complains that the divisions are required to be unique. This constraint causes trouble for me, since the partitions would turn out to be about 5GB in size, which is a bit too much for my taste.
Is there a way to work around this constraint or loosen it for certain operations?
You should view the divisions as an optimisation, which allows dask to know which data is expected in which partition for some operations (groupby, fetch particular index row, etc.).
If your data is not organised in a way that makes the divisions on the index unique, you have a simple option: do not provide divisions at all. You will then lose out on those particular optimisations, which are not appropriate to your case anyway. Alternatively, you could decide to reorganise your data, either within dask or before passing it to dask.
As part of a data workflow I need to modify values in a subset of dask dataframe columns and pass the results on for further computation. In particular, I'm interested in 2 cases: mapping columns and mapping partitions. What is the recommended safe & performant way to act on the data? I'm running in a distributed setup on a cluster with multiple worker processes on each host.
Case1.
I want to run:
res = dataframe.column.map(func, ...)
this returns a data series, so I assume the original dataframe is not modified. Is it safe to assign a column back to the dataframe, e.g. dataframe['column'] = res? Probably not. Should I make a copy with .copy() and then assign the result to it, like:
dataframe2 = dataframe.copy()
dataframe2['column'] = dataframe.column.map(func, ...)
Any other recommended way to do it?
Case2
I need to map partitions of the dataframe:
df.map_partitions(mapping_func, meta=df)
Inside mapping_func() I want to modify values in chosen columns, either by using partition[column].map or simply by a list comprehension. Again, how do I modify the partition safely and return it from the mapping function?
The partition received by the mapping function is a Pandas dataframe (a copy of the original data?), but when modifying data in-place I'm seeing some crashes (no exception/error messages, though). The same goes for calling partition.copy(deep=False); it doesn't work. Should the partition be deep-copied and then modified in-place? Or should I always construct a new dataframe out of the new/mapped column data plus the original/unmodified series/columns?
You can safely modify a dask.dataframe
Operations like the following are supported and safe
df['col'] = df['col'].map(func)
This modifies the task graph in place but does not modify the data in place (assuming that the function func creates a new series).
You can not safely modify a partition
Your second case, where you map_partitions a function that modifies a pandas dataframe in place, is not safe. Dask expects to be able to reuse data, call functions twice if necessary, etc. If you have such a function then you should create a copy of the Pandas dataframe first within that function.
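A minimal sketch of that pattern (column and function names invented): copy the partition inside the mapped function before mutating anything, so the object Dask handed you, and may reuse or replay, stays untouched:

```python
import pandas as pd

def mapping_func(partition: pd.DataFrame) -> pd.DataFrame:
    # Copy first: Dask may hold on to this partition or call the
    # function again, so the received object must not be mutated
    partition = partition.copy()
    partition["col"] = partition["col"].map(lambda x: x * 2)
    return partition

# With a dask DataFrame df this would be applied as:
# df.map_partitions(mapping_func, meta=df)
```

The function itself is plain pandas, so it can be unit-tested without a cluster; the copy is cheap relative to the mapping work and removes the in-place-mutation hazard entirely.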
I'm new to alasql (which is amazing). While the documentation shows you how, it doesn't provide a lot of information on best practices.
To date I have simply been running queries against an array of arrays (of js objects). I haven't created a database object or table objects.
Are there performance (speed, memory, other) benefits of using database and table objects over an array of arrays?
Here is a real world example. I have 2 sets of data that I am loading: Employees (10 columns) and Employee Sales (5 columns), which are joined on an EmployeeID column. Employees will be relatively small (say, 100 rows), whereas Employee Sales will have 10,000 records. My current approach is to simply run a query joining those 2 sets of data together, ending up with one big result set: 10,000 rows of data with 14 columns per row (repeating every column in the Employee data set), which I then pull data from using dynamic filters, interactivity, etc.
This big data set is stored in memory the whole time, but this has the advantage that I don't need to rerun that query over and over. Alternatively, I could simply run the join against the 2 data sets each time I need it, then remove it from memory after.
Also, if I am joining together multiple tables, can I create indexes on the join columns to speed up performance? I see examples where indexes are created, but there is nothing else in the documentation. (Nothing on this page: https://github.com/agershun/alasql/wiki/Sql). What is the memory impact of indexes? What are the performance impacts of insertions?
Primary keys are supported, but there is no documentation. Does this create an index?
Are there performance (speed, memory, other) benefits of using database and table objects over an array of arrays?
If you put indexes on your tables then - Yes - you get performance benefits. How much depends on your data.
if I am joining together multiple tables, can I create indexes on the join columns to speed up performance?
Yes. And the same goes for any other columns you put into a "where" condition.
I have to develop a system for tracking/monitoring performance in a cellular network.
The domain includes a set of hierarchical elements, and each one has an associated set of counters that are reported periodically (every 15 minutes). The system should collect these counter values (available as large XML files) and periodically aggregate them on two dimensions: time (from 15 minutes to hour and from hour to day) and hierarchy (lower-level to higher-level elements). The aggregation is most often a simple SUM but sometimes requires average/min/max etc. Of course, for the element-dimension aggregation it needs to group by the hierarchy (group all children into one parent record). The user should be able to define and view KPIs (Key Performance Indicators) - that is, some calculations on the various counters. A KPI could be required for just one element, for several elements (producing a data-series for each), or as an aggregation over several elements (resulting in one data-series of aggregated data).
There will be about 10-15 users to the system with probably 20-30 queries an hour. The query response time should be a few seconds (up to 10-15 for very large reports including many elements and long time period).
In high level, this is the flow:
Parse and Input Counter Data - there is a set of XML files which contains a periodical update of counters data for the elements. The size of all files is about 4GB / 15 minutes (so roughly 400GB/day).
Hourly Aggregation - once an hour all the collected counters, for all the elements should be aggregated - every 4 records related to an element are aggregated into one hourly record which should be stored.
Daily Aggregation - once a day, all collected counters, for all elements, should be aggregated - every 24 records related to an element are aggregated into one daily record.
Element Aggregation - with each one of the time-dimension aggregation it is possibly required to aggregate along the hierarchy of the elements - all records of child elements are aggregated into one record for the parent element.
KPI Definitions - there should be some way for the user to define a KPI. The KPI is a definition of a calculation based on counters from the same granularity (time dimension). The calculation could (and will) involve more than one element level (e.g. p1.counter1 + sum(c1.counter1), where p1 is a parent of one or more records in c1).
User Interaction - the user can select one or more elements and one or more counters/KPIs, the granularity to use, the time period to view and whether or not to aggregate the selected data.
In case of aggregation, the result is one data-series that includes the "added up" values for all the selected elements at each relevant point in time. In "SQL":
SELECT p1.time, SUM(p1.counter1) / SUM(p1.counter2) * SUM(c1.counter1)
FROM p1_hour p1, c1_hour c1
WHERE p1.time > :minTime AND p1.time < :maxTime AND p1.id IN :id_list AND <join condition>
GROUP BY p1.time
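The shape of that aggregated query can be sketched in pandas (tiny made-up tables; column names follow the SQL above, and the child table is renamed so the join keys line up):

```python
import pandas as pd

# Tiny made-up hourly tables: p1 is the parent level, c1 its children
p1 = pd.DataFrame({"time": [1, 1, 2], "id": [10, 11, 10],
                   "counter1": [4, 6, 8], "counter2": [2, 2, 4]})
c1 = pd.DataFrame({"time": [1, 1, 2], "parent_id": [10, 11, 10],
                   "counter1": [1, 3, 5]})

# Join children to parents on (time, id), then aggregate per point in time
joined = p1.merge(c1.rename(columns={"parent_id": "id"}),
                  on=["time", "id"], suffixes=("_p", "_c"))
g = joined.groupby("time")
kpi = g["counter1_p"].sum() / g["counter2"].sum() * g["counter1_c"].sum()
```

This is only to show the computation's structure; at the stated data volumes the same join and group-by would run inside whichever store is chosen, not in client-side pandas.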
In case there is no aggregation, we need to keep the identifiers from p1 and get a data-series for each selected element:
SELECT p1.time, p1.id, SUM(p1.counter1) / SUM(p1.counter2) * SUM(c1.counter1)
FROM p1_hour p1, c1_hour c1
WHERE p1.time > :minTime AND p1.time < :maxTime AND p1.id IN :id_list AND <join condition>
GROUP BY p1.time, p1.id
The system has to keep data for 10, 100 and 1000 days for the 15-min, hourly and daily records respectively. Following is a size estimate, assuming integer-only columns at 4 bytes of storage, with 400 counters for elements of type P, 50 for elements of type C and 400 for type GP:
As it adds up, I estimate, based on the DDL (in reality, DBs optimize storage), 3.5-4 TB of data, plus probably about 20-30% extra required for indexes. The child "tables" can get close to 2 billion records per table.
It is worth noting that from time to time I would like to add counters (maybe every 2-3 months) as the network evolves.
I once implemented a very similar system (though probably with less data) using Oracle. This time around I may not use a commercial DB and must resort to open-source solutions. Also, with the increasing popularity of NoSQL and dedicated time-series DBs, maybe relational is not the way to go?
How would you approach such development? What are the products that could be used?
From a few days of research, I came up with the following
Use MySQL / PostGres
InfluxDB (or a similar product)
Cassandra + Spark
Others?
How would each solution be used, and what would be the advantages/disadvantages of each approach? If you can, also elaborate on or suggest the overall (hardware) architecture to support this kind of development.
Comments and suggestions are welcome - preferably from people with hands on experience with similar project.
Going with Open Source RDBMS:
Using MySQL or Postgres
The table structure would be (imaginary SQL):
CREATE TABLE LEVEL_GRANULARITY (
    TIMESTAMP DATE,
    PARENT_ID INT,
    ELEMENT_ID INT,
    COUNTER_1 INT,
    ...
    COUNTER_N INT,
    PRIMARY KEY (TIMESTAMP, PARENT_ID, ELEMENT_ID)
)
For example we will have P1_HOUR, GP_HOUR, P_DAY, GP_DAY etc.
The tables could be partitioned by date to improve query time and ease data management (whole partitions can be removed).
To facilitate fast load, use loaders provided with the DB - these loaders are usually faster and insert data in bulks.
Aggregation could be done quite easily with a `SELECT ... INTO ...` query (since the scope of the aggregation is limited, I don't think it will be a problem).
Queries are straightforward, as aggregation, grouping and joining are built in. I am not sure about query performance considering how large the tables are.
Since the workload is write-intensive, I don't think clustering would help here.
Pros:
Simple configuration (assuming no clusters etc).
SQL query capabilities - flexible
Cons:
Query performance - will it work?
Management overhead
Rigid Schema
Scaling?
Using InfluxDB (or something like that):
I have not used this DB; I am writing based on playing around with it a little.
The model would be to create a time-series for every element in every level and granularity.
The data series name will include the identifiers of the element and the granularity.
For example P.P_ElementID.G.15MIN or P.P_ElementID.C.C1_ELEMENT_ID.G.60MIN
The data series will contain all the counters relevant for that level.
The ingest step has to parse the XML and build the data-series name before inserting the new data points.
InfluxDB has an SQL-like query language and allows specifying the calculation in an SQL-like manner. It also supports grouping. Grouping by element would be possible using a regular expression, e.g. SELECT counter1/counter2 FROM /^P\.P_ElementID\.C1\..*G\.15MIN/ to get all children of ElementID.
There is also a notion of grouping by time; in general, the DB is made for this kind of data.
Pros:
Should be fast
Supports queries in a manner very similar to SQL
Supports deleting by date (but you have to do it for every series...)
Flexible Schema
Cons:
Currently, it does not seem to support clusters very easily
Clusters = more maintenance
Can it support millions of data-series (and still work fast)?
Less common, less documented (currently)
I am reading a big file (more than a billion records) and joining it with three other files. I was wondering if there is any way the process can be made more efficient, to avoid multiple reads of the big table. The small tables may not fit in memory.
A = join smalltable1 by (f1,f2) RIGHT OUTER,massive by (f1,f2) ;
B = join smalltable2 by (f3) RIGHT OUTER, A by (f3) ;
C = join smalltable3 by (f4) ,B by (f4) ;
The alternative I was thinking of is to write a UDF and replace values in one read, but I am not sure a UDF would be efficient, since the small files won't fit in memory. The implementation could be like:
A = LOAD massive
B = generate f1,udfToTranslateF1(f1),f2,udfToTranslateF2(f2),f3,udfToTranslateF3(f3)
Appreciate your thoughts...
Pig 0.10 introduced integration with Bloom Filters http://search-hadoop.com/c/Pig:/src/org/apache/pig/builtin/Bloom.java%7C%7C+%2522done+%2522exec+Tuple%2522
You can train a Bloom filter on the 3 smaller files and use it to filter the big file, hopefully resulting in a smaller file. After that, perform standard joins to get 100% precision.
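To illustrate the idea (a toy pure-Python sketch, not the Pig builtin; the bit-array size, hash count, and key values are made up): train a Bloom filter on the small table's join keys, then cheaply discard big-table rows whose key definitely cannot join:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions per key over a fixed bit array."""

    def __init__(self, size_bits=1 << 16, num_hashes=4):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key):
        # Derive k independent positions by salting the key with the hash index
        for i in range(self.num_hashes):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def might_contain(self, key):
        # False means definitely absent; True may rarely be a false positive
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))

# Train on the small table's join keys, then pre-filter the big table
small_keys = ["k1", "k2", "k3"]
bf = BloomFilter()
for k in small_keys:
    bf.add(k)

big_rows = [("k1", 1), ("kx", 2), ("k3", 3)]
candidates = [row for row in big_rows if bf.might_contain(row[0])]
```

Because a Bloom filter never produces false negatives, no joinable row is lost; the occasional false positive just survives until the exact join removes it.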
UPDATE 1
You would actually need a separate Bloom filter for each of the small tables, as you join on different keys.
UPDATE 2
It was mentioned in the comments that the outer join is used for augmenting data.
In this case Bloom filters might not be the best fit: they are good for filtering, not for adding data in outer joins, since you want to keep the non-matched rows. A better approach would be to partition all the small tables on the respective fields (f1, f2, f3, f4) and store each partition in a separate file small enough to load into memory. Then GROUP the massive table BY f1, f2, f3, f4 and, in a FOREACH, pass each group key (f1, f2, f3, f4) with its associated bag to a custom function written in Java, which loads the respective partitions of the small files into RAM and performs the augmentation.