I wonder why dd.from_bcolz() starts doing significant processing immediately when called (and the work grows a lot as the number of columns goes up, especially when there are string-typed columns).
dd.read_hdf(), by contrast, does very little processing when called; it only reads and processes the HDF5 file chunk by chunk once the dask.dataframe is actually used.
I like how read_hdf works now. The only problems are that an HDF5 table cannot have more than ~1200 columns, and the dataframe does not support columns of arrays. And HDF5 is not a column-based format, after all...
In [1]: import dask.dataframe as dd
In [2]: import pandas as pd
In [3]: import bcolz, random
In [4]: import numpy as np
In [5]: N = int(1e7)
In [6]: int_col = np.linspace(0, 1, N)
In [7]: ct_disk = bcolz.fromiter(((i,i) for i in range(N)), dtype="i8,i8",\
...: count=N, rootdir=r'/mnt/nfs/ct_.bcolz')
In [8]: for i in range(10): ct_disk.addcol(int_col)
In [9]: import dask.dataframe as dd
In [10]: %time dd.from_bcolz(r'/mnt/nfs/ct_.bcolz', chunksize=1000000, lock=False)
CPU times: user 8 ms, sys: 16 ms, total: 24 ms
Wall time: 32.6 ms
Out[10]: dd.DataFrame<from_bc..., npartitions=10, divisions=(0, 1000000, 2000000, ..., 9000000, 9999999)>
In [11]: str_col= [''.join(random.choice('ABCD1234') for _ in range(5)) for i in range(int(N/10))]*10
In [12]: ct_disk.addcol(str_col, dtype='S5')
In [13]: %time dd.from_bcolz(r'/mnt/nfs/ct_.bcolz', chunksize=1000000, lock=False)
CPU times: user 2.36 s, sys: 56 ms, total: 2.42 s
Wall time: 2.44 s
Out[13]: dd.DataFrame<from_bc..., npartitions=10, divisions=(0, 1000000, 2000000, ..., 9000000, 9999999)>
In [14]: for i in range(10): ct_disk.addcol(str_col, dtype='S5')
In [15]: %time dd.from_bcolz(r'/mnt/nfs/ct_.bcolz', chunksize=1000000, lock=False)
CPU times: user 25.3 s, sys: 511 ms, total: 25.8 s
Wall time: 25.9 s
Out[15]: dd.DataFrame<from_bc..., npartitions=10, divisions=(0, 1000000, 2000000, ..., 9000000, 9999999)>
And it gets even worse as N (the number of rows) grows.
It looks like from_bcolz, as written today, automatically categorizes object-dtype columns. So it does a full read of all object-dtype columns and calls unique on them. You can turn this off by passing categorize=False.
Please raise a github issue if you think that this behavior should be changed.
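For example, a minimal sketch of skipping the categorization step, reusing the bcolz directory from the session above:

import dask.dataframe as dd
# categorize=False skips the up-front full read of the string/object columns
ddf = dd.from_bcolz(r'/mnt/nfs/ct_.bcolz', chunksize=1000000, lock=False, categorize=False)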
Related
I want to create features (additional columns) from a dataframe, and I have the following structure for many functions.
Following this documentation https://docs.dask.org/en/stable/delayed-best-practices.html I have come up with the code below.
However, I get the error concurrent.futures._base.CancelledError, and I often get the warning: distributed.utils_perf - WARNING - full garbage collections took 10% CPU time recently (threshold: 10%)
I understand that the object I am appending to delay is very large (it works fine when I use the commented-out df), which is why the program crashes, but is there a better way of doing it?
import pandas as pd
from dask.distributed import Client, LocalCluster
import dask.dataframe as dd
import numpy as np
import dask
def main():
    #df = pd.DataFrame({"col1": np.random.randint(1, 100, 100000), "col2": np.random.randint(101, 200, 100000), "col3": np.random.uniform(0, 4, 100000)})
    df = pd.DataFrame({"col1": np.random.randint(1, 100, 100000000), "col2": np.random.randint(101, 200, 100000000), "col3": np.random.uniform(0, 4, 100000000)})
    ddf = dd.from_pandas(df, npartitions=100)
    ddf = ddf.set_index("col1")
    delay = []

    def create_col_sth():
        group = ddf.groupby("col1")["col3"]

        @dask.delayed
        def small_fun(lag):
            return f"col_{lag}", group.transform(lambda x: x.shift(lag), meta=('x', 'float64')).apply(lambda x: np.log(x), meta=('x', 'float64'))

        for lag in range(5):
            x = small_fun(lag)
            delay.append(x)

    create_col_sth()
    delayed = dask.compute(*delay)
    for data in delayed:
        ddf[data[0]] = data[1]

    ddf.to_parquet("test", engine="fastparquet")


if __name__ == "__main__":
    cluster = LocalCluster(n_workers=6,
                           threads_per_worker=2,
                           memory_limit='8GB')
    client = Client(cluster)
    main()
Not sure if this will resolve all of your issues, but generally you don't need to (and shouldn't) mix delayed and dask.dataframe operations like this. Additionally, you shouldn't pass large data objects into delayed functions through closures, like group in your example. Instead, include them as explicit arguments, or in this case, don't use delayed at all and use dask.dataframe native operations or in-memory operations with dask.dataframe.map_partitions.
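As a small aside, here is a minimal sketch (the names are illustrative, not from the question) of passing data into a delayed function as an explicit argument rather than through a closure:

import numpy as np
import pandas as pd
import dask

@dask.delayed
def shifted_log(series, lag):
    # the pandas Series arrives as an explicit argument, so Dask can track and move it
    return np.log(series.shift(lag))

s = pd.Series(np.random.uniform(1, 4, 1000))
results = dask.compute(*[shifted_log(s, lag) for lag in range(5)])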
Implementing these, I would rewrite your main function as follows:
df = pd.DataFrame({
    "col1": np.random.randint(1, 100, 100000000),
    "col2": np.random.randint(101, 200, 100000000),
    "col3": np.random.uniform(0, 4, 100000000),
})
ddf = dd.from_pandas(df, npartitions=100)
ddf = ddf.set_index("col1")

group = ddf.groupby("col1")["col3"]

# directly assign the dataframe operations as columns
for lag in range(5):
    ddf[f"col_{lag}"] = (
        group
        .transform(lambda x: x.shift(lag), meta=('x', 'float64'))
        .apply(lambda x: np.log(x), meta=('x', 'float64'))
    )

# this triggers the operation implicitly - no need to call compute
ddf.to_parquet("test", engine="fastparquet")
After long periods of frustration with Dask, I think I've hacked the holy grail of refactoring pandas transformations wrapped with Dask.
Learning points:
Index intelligently. If you are grouping by or merging, consider indexing the columns you use for those operations.
Partition and repartition intelligently. If you have a dataframe of 10k rows and another of 1m rows, they should naturally have different partitions.
Don't use Dask dataframe transformation methods, with a few exceptions such as merge. The rest should be plain pandas code wrapped in map_partitions.
Don't accumulate overly large task graphs, so consider saving intermediate results, for example after indexing or after a complex transformation.
If possible, filter the dataframe and work with a smaller subset; you can always merge it back into the bigger dataset.
If you are working on your local machine, set the memory limits within the boundaries of your system specification. This point is very important. In the example below I create one million rows of 3 columns: one is int64 and two are float64, each 8 bytes, so 24 bytes per row, which gives 24 million bytes in total.
import gc  # needed for the gc.collect() call below
import pandas as pd
from dask.distributed import Client, LocalCluster
import dask.dataframe as dd
import numpy as np
import dask
# https://stackoverflow.com/questions/52642966/repartition-dask-dataframe-to-get-even-partitions
def _rebalance_ddf(ddf):
    """Repartition dask dataframe to ensure that partitions are roughly equal size.

    Assumes `ddf.index` is already sorted.
    """
    if not ddf.known_divisions:  # e.g. for read_parquet(..., infer_divisions=False)
        ddf = ddf.reset_index().set_index(ddf.index.name, sorted=True)
    index_counts = ddf.map_partitions(lambda _df: _df.index.value_counts().sort_index()).compute()
    index = np.repeat(index_counts.index, index_counts.values)
    divisions, _ = dd.io.io.sorted_division_locations(index, npartitions=ddf.npartitions)
    return ddf.repartition(divisions=divisions)


def main(client):
    size = 1000000
    df = pd.DataFrame({"col1": np.random.randint(1, 10000, size), "col2": np.random.randint(101, 20000, size), "col3": np.random.uniform(0, 100, size)})

    # Select appropriate partitions
    ddf = dd.from_pandas(df, npartitions=500)
    del df
    gc.collect()

    # If you want to group by a certain column, it is always best if that column is indexed
    ddf = ddf.set_index("col1")
    ddf = _rebalance_ddf(ddf)
    print(ddf.memory_usage_per_partition(index=True, deep=False).compute())
    print(ddf.memory_usage(deep=True).sum().compute())

    # Always persist your data to prevent big task graphs; in fact, if you omit this step, processing will fail
    ddf.to_parquet("test", engine="fastparquet")
    ddf = dd.read_parquet("test")

    # Dummy code to create a dataframe to be merged based on col1
    ddf2 = ddf[["col2", "col3"]]
    ddf2["col2/col3"] = ddf["col2"] / ddf["col3"]
    ddf2 = ddf2.drop(columns=["col2", "col3"])

    # Repartition the data
    ddf2 = _rebalance_ddf(ddf2)
    print(ddf2.memory_usage_per_partition(index=True, deep=False).compute())
    print(ddf2.memory_usage(deep=True).sum().compute())

    def mapped_fun(data):
        for lag in range(5):
            data[f"col_{lag}"] = data.groupby("col1")["col3"].transform(lambda x: x.shift(lag)).apply(lambda x: np.log(x))
        return data

    # Do the groupby transformation in pandas, wrapped with Dask; if you use the Dask functions
    # directly for this, you will have a variety of issues.
    ddf = ddf.map_partitions(mapped_fun)

    # Additionally, you can merge ddf with ddf2, but do it on an indexed column, otherwise you run into a variety of issues
    ddf = ddf.merge(ddf2, on=['col1'], how="left")
    ddf.to_parquet("final", engine="fastparquet")


if __name__ == "__main__":
    cluster = LocalCluster(n_workers=6,
                           threads_per_worker=2,
                           memory_limit='8GB')
    client = Client(cluster)
    main(client)
Is there a way to do this but with the threshold as a delayed number?
import dask
import pandas as pd
import dask.dataframe as dd
threshold = 3
df = pd.DataFrame({'something': [1,2,3,4]})
ddf = dd.from_pandas(df, npartitions=2)
ddf[ddf['something'] >= threshold]
What if threshold is:
threshold = dask.delayed(3)
At the moment it gives me:
TypeError('Truth of Delayed objects is not supported')
I want to keep the ddf as a Dask dataframe, not turn it into a pandas dataframe. I am wondering if there are combinator forms that also take delayed values.
Dask has no way to know that the concrete value in that Delayed object is an integer, so there's no way to know what to do with it in the operation (align, broadcast, etc.).
If you use something like a zero-dimensional dask array (a lazy scalar) instead, things seem OK:
In [32]: df = dd.from_pandas(pd.DataFrame({"A": [1, 2, 3, 4]}), 2)
In [33]: threshold = da.from_array(np.array([3]))[0]
In [34]: df.A > threshold
Out[34]:
Dask Series Structure:
npartitions=2
0 bool
2 ...
3 ...
Name: A, dtype: bool
Dask Name: gt, 8 tasks
In [35]: df[df.A > threshold].compute()
Out[35]:
A
3 4
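If the threshold genuinely starts out as a Delayed object, one workaround (a sketch along the same lines as the example above, not from the original answer) is to wrap it in a zero-dimensional dask array with da.from_delayed so the comparison stays lazy:

import numpy as np
import pandas as pd
import dask
import dask.array as da
import dask.dataframe as dd

ddf = dd.from_pandas(pd.DataFrame({"something": [1, 2, 3, 4]}), npartitions=2)

delayed_threshold = dask.delayed(np.array(3))  # the delayed value should evaluate to an array-like scalar
threshold = da.from_delayed(delayed_threshold, shape=(), dtype=int)

print(ddf[ddf["something"] >= threshold].compute())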
I am new to Dask and am having some trouble with it.
I am using a machine (4GB RAM, 2 cores) to analyse two CSV files (key.csv: ~2 million rows, about 300MB; sig.csv: ~12 million rows, about 600MB). With this data pandas can't fit everything in memory, so I switched to Dask.dataframe. What I expect is that Dask will process things in small chunks that fit in memory (it can be slower, I don't mind at all as long as it works); however, somehow Dask still uses up all of the memory.
My code is below:
key=dd.read_csv("key.csv")
sig=dd.read_csv("sig.csv")
merge=dd.merge(key, sig, left_on=["tag","name"],
right_on=["key_tag","query_name"], how="inner")
merge.to_csv("test2903_*.csv")
# store results into a hard disk since it can't be fit in memory
Did I make any mistakes? Any help is appreciated.
Big CSV files generally aren't the best for distributed compute engines like Dask. In this example, the CSVs are 600MB and 300MB, which aren't huge. As specified in the comments, you can set the blocksize when reading the CSVs to make sure the CSVs are read into Dask DataFrames with the right number of partitions.
Distributed compute joins are always going to run faster when you can broadcast the small DataFrame before running the join. Your machine has 4GB of RAM and the small DataFrame is 300MB, so it's small enough to be broadcasted. Dask automagically broadcasts Pandas DataFrames. You can convert a Dask DataFrame to a Pandas DataFrame with compute().
key is the small DataFrame in your example. Column pruning the small DataFrame and making it even smaller before broadcasting is even better.
key=dd.read_csv("key.csv")
sig=dd.read_csv("sig.csv", blocksize="100 MiB")
key_pdf = key.compute()
merge=dd.merge(key_pdf, sig, left_on=["tag","name"],
right_on=["key_tag","query_name"], how="inner")
merge.to_csv("test2903_*.csv")
Here's an MVCE:
import dask.dataframe as dd
import pandas as pd

df = pd.DataFrame(
    {
        "id": [1, 2, 3, 4],
        "cities": ["Medellín", "Rio", "Bogotá", "Buenos Aires"],
    }
)
large_ddf = dd.from_pandas(df, npartitions=2)

small_df = pd.DataFrame(
    {
        "id": [1, 2, 3, 4],
        "population": [2.6, 6.7, 7.2, 15.2],
    }
)

merged_ddf = dd.merge(
    large_ddf,
    small_df,
    left_on=["id"],
    right_on=["id"],
    how="inner",
)

print(merged_ddf.compute())
id cities population
0 1 Medellín 2.6
1 2 Rio 6.7
0 3 Bogotá 7.2
1 4 Buenos Aires 15.2
I'm trying to use dask bag for a word count on 30GB of JSON files, strictly following the tutorial from the official site: http://dask.pydata.org/en/latest/examples/bag-word-count-hdfs.html
But it still doesn't work. My single machine has 32GB of memory and an 8-core CPU.
My code is below. It doesn't even work on a 10GB file: it runs for a couple of hours without any notification and then Jupyter collapses. I tried on both Ubuntu and Windows, and both systems show the same problem. So I suspect: can dask bag process data that doesn't fit in memory, or is my code incorrect?
The test data are from http://files.pushshift.io/reddit/comments/
import dask.bag as db
import json
b = db.read_text('D:\RC_2015-01\RC_2012-04')
records = b.map(json.loads)
result = b.str.split().concat().frequencies().topk(10, lambda x: x[1])
%time f = result.compute()
f
Try setting a blocksize in the 10MB range when reading from the single file to break it up a bit.
In [1]: import dask.bag as db
In [2]: b = db.read_text('RC_2012-04', blocksize=10000000)
In [3]: %time b.count().compute()
CPU times: user 1.22 s, sys: 56 ms, total: 1.27 s
Wall time: 20.4 s
Out[3]: 19044534
Also, as a warning, you create a bag records but then don't do anything with it. You might want to remove that line.
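For reference, here's a sketch of the word count itself with the blocksize applied, counting words in the comment bodies rather than in the raw JSON lines (it assumes each record has a 'body' field, as in the reddit comment dumps):

import json
import dask.bag as db

b = db.read_text('RC_2012-04', blocksize=10000000)
records = b.map(json.loads)
# pull out the comment text, then split into words and count them
result = (records.map(lambda r: r.get('body', ''))
                 .str.split()
                 .concat()
                 .frequencies()
                 .topk(10, lambda x: x[1]))
print(result.compute())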
I am trying to use Dask instead of pandas since I have a 2.6 GB CSV file.
I load it and I want to drop a column, but it seems that neither the drop method
df.drop('column') nor slicing df[:, :-1]
is implemented yet. Is this the case, or am I just missing something?
We implemented the drop method in this PR. This is available as of dask 0.7.0.
In [1]: import pandas as pd
In [2]: df = pd.DataFrame({'x': [1, 2, 3], 'y': [3, 2, 1]})
In [3]: import dask.dataframe as dd
In [4]: ddf = dd.from_pandas(df, npartitions=2)
In [5]: ddf.drop('y', axis=1).compute()
Out[5]:
x
0 1
1 2
2 3
Previously one could also have used slicing with column names; though of course this can be less attractive if you have many columns.
In [6]: ddf[['x']].compute()
Out[6]:
x
0 1
1 2
2 3
This should work:
print(ddf.shape)
# columns here is a list of the column labels you want to drop, e.g. ['y']
ddf = ddf.drop(columns, axis=1)
print(ddf.shape)