How can I remove task dependencies to eliminate part of the task graph in Dask distributed?

I have run into an issue with eliminating the completed part of the task graph while handling an iterative problem on a large matrix. The minimal example code and the corresponding task graph are shown below:
from dask.distributed import Client
client = Client()
import numpy as np
import dask.array as da
x = np.array([1, 1, 2, 3, 3, 3, 2, 1, 1])
x = da.from_array(x, chunks=5)
def derivative(x):
    return x - np.roll(x, 1)
for i in range(10):
    y = x.map_overlap(derivative, depth=1, boundary='periodic')
    x = y.persist()
[image: corresponding task graph]
The task graph grows during the iterative process, and rebuilding the data from the initial array is not practical in this case once intermediate data is gone. I want to eliminate the completed tasks from the graph and keep only the task graph of the ongoing loop iteration.
I tried to clear the dependencies of x inside the for loop, but it did not work:
dsk = x.__dask_graph__()
dsk.dependencies={}
How exactly should I break the dependencies to cut away the unwanted part of the graph?
Thanks in advance!
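A minimal sketch of one possible workaround (an assumption on my part, not an answer from the original thread): materialize the persisted result as a concrete NumPy array and rebuild the dask array from it, so each iteration starts from a fresh graph with no history. This only helps if the intermediate result fits in client memory.
# hedged sketch: rebuild the collection each iteration so the graph does not accumulate
# (assumes the intermediate array fits on the client)
for i in range(10):
    y = x.map_overlap(derivative, depth=1, boundary='periodic')
    x = da.from_array(y.compute(), chunks=5)  # fresh graph, no old dependencies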

Related

Reasons why swifter/dask/ray only use one core for an apply task?

I have this function that I would like to apply to a large dataframe in parallel:
from rdkit import Chem
from rdkit.Chem.MolStandardize import rdMolStandardize
from rdkit import RDLogger
RDLogger.DisableLog('rdApp.*')
def standardize_smiles(smiles):
    if smiles is None:
        return None
    try:
        mol = Chem.MolFromSmiles(smiles)
        # removeHs, disconnect metal atoms, normalize the molecule, reionize the molecule
        clean_mol = rdMolStandardize.Cleanup(mol)
        # if many fragments, get the "parent" (the actual mol we are interested in)
        parent_clean_mol = rdMolStandardize.FragmentParent(clean_mol)
        # try to neutralize molecule
        uncharger = rdMolStandardize.Uncharger()  # annoying, but necessary as no convenience method exists
        uncharged_parent_clean_mol = uncharger.uncharge(parent_clean_mol)
        # note that no attempt is made at reionization at this step
        # nor at ionization at some pH (rdkit has no pKa calculator)
        # the main aim is to represent all molecules from different sources
        # in a (single) standard way, for use in ML, catalogue, etc.
        te = rdMolStandardize.TautomerEnumerator()  # idem
        taut_uncharged_parent_clean_mol = te.Canonicalize(uncharged_parent_clean_mol)
        return Chem.MolToSmiles(taut_uncharged_parent_clean_mol)
    except Exception:
        return False
standardize_smiles('CCC')
'CCC'
However, neither Dask, nor Swifter, nor Ray can do the job. All frameworks use a single CPU for some reason.
Native Pandas
import pandas as pd
N = 1000
smiles_test = pd.DataFrame({'smiles': ['CCC']*N})
smiles_test.smiles.apply(standardize_smiles)
CPU times: user 3.58 s, sys: 0 ns, total: 3.58 s
Wall time: 3.58 s
Swifter 1.3.4
smiles_test['standardized_smiles'] = smiles_test.smiles.swifter.allow_dask_on_strings(True).apply(standardize_smiles)
CPU times: user 892 ms, sys: 31.4 ms, total: 923 ms
Wall time: 5.14 s
While this works with the dummy data, it does not with the real data, where the SMILES strings are a bit more complicated than the ones in the dummy data.
It seems Swifter first needs some time to prepare the parallel execution and only uses one core, but then uses more cores. However, for the real data it only ever uses 3 out of 8 cores.
I have the same issue with other frameworks such as dask, ray, modin, swifter.
Is there something I am missing here? Is there a problem when the dataframe contains strings? Why does the parallel execution take so much time even on a single computer (with multiple cores)? Or is there an issue with the RDKit library I am using that makes it difficult to parallelize the above function?
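No answer is shown here; as a hedged sketch (my assumption, not from the original post), one thing to try is to partition the DataFrame explicitly with dask.dataframe and force the process-based scheduler, since the RDKit calls are CPU-bound and may not release the GIL, which would keep thread-based backends effectively on one core:
# hedged sketch, not from the original thread: explicit dask.dataframe partitioning
# with the process-based scheduler, since RDKit work is CPU-bound and threads
# may be limited by the GIL
import dask.dataframe as dd
ddf = dd.from_pandas(smiles_test, npartitions=8)  # roughly one partition per core
standardized = ddf['smiles'].map_partitions(
    lambda s: s.apply(standardize_smiles), meta=('smiles', object)
)
result = standardized.compute(scheduler='processes')  # sidestep the GIL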

Difference between two datasets with Dask and Xarray

I need to compute the difference between two datasets (two daily variables resampled on a monthly basis) with Dask and Xarray. Here is my code:
def diff(path_1, path_2):
    import xarray as xr
    max_v = xr.open_mfdataset(path_1, combine='by_coords', concat_dim="time", parallel=True)['variable_1'].resample({'time': '1M'}).max()
    min_v = xr.open_mfdataset(path_2, combine='by_coords', concat_dim="time", parallel=True)['variable_2'].resample({'time': '1M'}).min()
    return (max_v - min_v).compute()
future = client.submit(diff, path_1, path_2)
diff = client.gather(future)
I also tried this:
%%time
def max_var(path):
    import xarray as xr
    multi_file_dataset = xr.open_mfdataset(path, combine='by_coords', concat_dim="time", parallel=True)
    max_v = multi_file_dataset['variable_1'].resample(time='1M').max(dim='time')
    return max_v.compute()
def min_var(path):
    import xarray as xr
    multi_file_dataset = xr.open_mfdataset(path, combine='by_coords', concat_dim="time", parallel=True)
    min_v = multi_file_dataset['variable_2'].resample(time='1M').min(dim='time')
    return min_v.compute()
futures = []
future = client.submit(max_var, path1)
futures.append(future)
future = client.submit(min_var, path2)
futures.append(future)
results = client.gather(futures)
diff = results[0] - results[1]
But I noticed that the computation becomes very slow in the final getitem-nanmax and getitem-nanmin steps (for example, 1974 out of 1980 tasks).
Here is the cluster configuration:
cluster = SLURMCluster(walltime='1:00:00',cores=5,memory='5GB')
cluster.scale(jobs=10)
Each dataset consists of several files (total size: 7 GB).
Is there a better way to implement this computation?
Thanks
Not 100% sure this works in your case, but without an MWE it's difficult to do much better. My suspicion is that the .compute() used by xarray might conflict with client.submit, because the computation now happens on a worker and I'm not sure it can correctly distribute the work among its peers (but this is only a suspicion). One way out is to move the computations into the main script, since xarray will integrate with dask in the background, so perhaps this will work:
import xarray as xr
max_v=xr.open_mfdataset(path_1, combine='by_coords', concat_dim="time", parallel=True, chunks={'time': 10})['variable_1'].resample({'time': '1M'}).max()
min_v=xr.open_mfdataset(path_2, combine='by_coords', concat_dim="time", parallel=True, chunks={'time': 10})['variable_2'].resample({'time': '1M'}).min()
diff_result = (max_v-min_v).compute()
Below is an MWE on a different dataset:
import xarray as xr
# chunks option will create dask array
ds = xr.tutorial.open_dataset('rasm', decode_times=True, chunks={'time': 10})
# these are lazy calculations
max_v = ds['Tair'].resample({'time': '1M'}).max()
min_v = ds['Tair'].resample({'time': '1M'}).min()
# this will use dask scheduler in the background
diff_result = (max_v-min_v).compute()
# since the data refers to the same variable, all the results will be either 0 or `nan` (if the variable was not available in that time/x/y combination)

I'm using Dask to apply LabelingFunction using Snorkel on multiple datasets but it seems to take forever. Is this normal?

My problem is as follows:
I have several datasets (900K, 1.7M and 1.7M entries) in CSV format, which I load into multiple Dask DataFrames.
Then I concatenate them all into one Dask DataFrame and feed it to my Snorkel applier, which applies a bunch of labeling functions to each row of the DataFrame and returns a numpy array with as many rows as there are in the DataFrame and as many columns as there are labeling functions.
The call to the Snorkel applier seems to take forever when I do that with the 3 datasets (more than 2 days...). However, if I run the code with only the first dataset (skipping the concatenation step, of course), the call takes around 2 hours.
So I was wondering how this can be. Should I change the number of partitions in the concatenated DataFrame? Or maybe I'm using Dask badly in the first place?
Here is the code I'm using:
from snorkel.labeling.apply.dask import DaskLFApplier
import dask.dataframe as dd
import numpy as np
import os
import time
from datetime import timedelta

start = time.time()
applier = DaskLFApplier(lfs)  # lfs are the labeling functions to apply; one of them featurizes a column of my DataFrame and applies a sklearn classifier (I set n_jobs to None when loading the model)

# If I have only one CSV to read
if isinstance(PATH_TO_CSV, str):
    training_data = dd.read_csv(PATH_TO_CSV, lineterminator=os.linesep, na_filter=False, dtype={'size': 'int32'})
    slices = None

# If I have several CSVs
elif isinstance(PATH_TO_CSV, list):
    training_data_list = [dd.read_csv(path, lineterminator=os.linesep, na_filter=False, dtype={'size': 'int32'}) for path in PATH_TO_CSV]
    training_data = dd.concat(training_data_list, axis=0)
    # bookkeeping so I know where to slice the final result and can assign each part to its dataset
    df_sizes = [len(df) for df in training_data_list]
    cut_idx = np.insert(np.cumsum(df_sizes), 0, 0)
    slices = list(zip(cut_idx[:-1], cut_idx[1:]))

# The call that lasts forever: I tested all the code above without this line on my 3 datasets and it runs perfectly fine
L_train = applier.apply(training_data)

end = time.time()
print('Time elapsed: {}'.format(timedelta(seconds=end - start)))
If you need more info, I will try to provide as much as I can.
Thanks in advance for your help :)
It seems that by default the applier function uses the local 'processes' scheduler, so it does not benefit from the additional distributed workers you might have available:
# add this to the beginning of your code
from dask.distributed import Client
client = Client()
# you can see the address of the client by typing `client` and opening the dashboard
# skipping your other code
# you need to pass the client explicitly to the applier
# after launching this open the dashboard and watch the workers work :)
L_train = applier.apply(training_data, scheduler=client)
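As a separate hedged note (my addition, not part of the original answer), the question also asks about the number of partitions: dd.concat keeps every partition of its inputs, so it can be worth repartitioning the concatenated DataFrame to match the parallelism you actually have, for example:
# hedged sketch: rebalance the concatenated DataFrame
# (npartitions=64 is an arbitrary illustrative value; tune it to your cluster)
training_data = dd.concat(training_data_list, axis=0).repartition(npartitions=64)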

Dask Dataframe Greater than a Delayed Number

Is there a way to do this but with the threshold as a delayed number?
import dask
import pandas as pd
import dask.dataframe as dd
threshold = 3
df = pd.DataFrame({'something': [1,2,3,4]})
ddf = dd.from_pandas(df, npartitions=2)
ddf[ddf['something'] >= threshold]
What if threshold is:
threshold = dask.delayed(3)
Atm it gives me:
TypeError('Truth of Delayed objects is not supported')
I want to keep ddf as a Dask DataFrame and not turn it into a pandas DataFrame. I'm wondering whether there are combinator forms that also accept delayed values.
Dask has no way to know that the concrete value in that Delayed object is an integer, so there's no way to know what to do with it in the operation (align, broadcast, etc.)
If you use something like a zero-dimensional (scalar) dask array instead, things seem OK:
In [31]: import numpy as np; import dask.array as da
In [32]: df = dd.from_pandas(pd.DataFrame({"A": [1, 2, 3, 4]}), 2)
In [33]: threshold = da.from_array(np.array([3]))[0]
In [34]: df.A > threshold
Out[34]:
Dask Series Structure:
npartitions=2
0 bool
2 ...
3 ...
Name: A, dtype: bool
Dask Name: gt, 8 tasks
In [35]: df[df.A > threshold].compute()
Out[35]:
A
3 4
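A further hedged sketch (my addition, not from the original answer): dask.dataframe's map_partitions documents that its extra arguments may be Delayed objects, so another option that keeps the result a Dask DataFrame is to pass the delayed threshold through map_partitions:
# hedged alternative: pass the Delayed threshold as a map_partitions argument
import dask
import pandas as pd
import dask.dataframe as dd

threshold = dask.delayed(3)
ddf = dd.from_pandas(pd.DataFrame({'something': [1, 2, 3, 4]}), npartitions=2)
filtered = ddf.map_partitions(
    lambda part, t: part[part['something'] >= t],
    threshold,
    meta=ddf._meta,  # same schema as the input
)
print(filtered.compute())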

How to enable automatic resampling on zoom in datashader with holoviews together with a (live) data stream through a pipe into a DynamicMap?

I have data flowing through a pipe into a holoviews.DynamicMap containing a Curve, on which holoviews.operation.datashader.datashade() is applied. When I use the zoom tool, the view does not resample (as it does for static data), leading to a very pixelated visualization of my data. What do I need to do to enable this resampling?
I run the whole thing in a Jupyter notebook with Python 3.
When I set up my holoviews.DynamicMap with just static data and no pipe running, it works properly.
When I start to fill the pipe, the resampling no longer takes place, even though I never actually use the pipe in the plot.
Problem Scenario:
(3 cells in jupyter notebook)
(1) Import
import time
import numpy as np
import holoviews as hv
from holoviews.operation.datashader import datashade
from holoviews import opts
from holoviews.streams import Pipe
hv.extension('bokeh')
(2) Setup Pipe and Plot
#no of samples
N=100000
pipe2 = Pipe(data=[])
data_dmap = hv.DynamicMap(hv.Curve, streams=[pipe2])
data_dmap_opt = datashade(data_dmap, streams=[hv.streams.RangeXY])
data_dmap_opt.opts(width=900,xlim=(0, N),ylim=(0, 1))
(3) Generate Data Stream
def makeBigData(N):
    x = np.arange(N)
    y = np.random.rand(N)
    while True:
        time.sleep(1)
        y = np.random.rand(N)
        pipe2.send((x, y))
Debugging Scenarios:
alternative to cell (2)
(alternative 2) Setup Pipe and Plot with static Plot
#default Data
N=100000
x = np.arange(N)
y = np.random.rand(N)
pipe2 = Pipe(data=[])
data_dmap = hv.DynamicMap(hv.Curve((x,y)))
data_dmap_opt = datashade(data_dmap, streams=[hv.streams.RangeXY])
data_dmap_opt.opts(width=900,xlim=(0, 100000),ylim=(0, 1))
(this works as long as cell (3) is not executed; once it is, this alternative stops working too)
Expected result:
continuously updating plot with noise (at a later stage with real data); the curve is rasterized into an image, and when zooming in the sampling should be adjusted to the current view.
Actual result:
zooming in does not trigger adjustment of the sampling into the image.
The problem you are having is that if you trigger the updates from a while loop the kernel will be permanently busy, which means that it never gets freed up to respond to the events arriving from JS which tell it to resample. You need to schedule the events on the pipe asynchronously in some form. In the notebook you can do this using a tornado PeriodicCallback, e.g.:
from tornado.ioloop import PeriodicCallback
from tornado import gen

N = 100
x = np.arange(N)

@gen.coroutine
def f():
    y = np.random.rand(N)
    pipe2.send((x, y))

cb = PeriodicCallback(f, 1000)  # callback period in milliseconds
cb.start()
