Computing in-place with dask

Short version
I have a dask array whose graph is ultimately backed by a bunch of numpy arrays at the bottom, and which applies elementwise operations to them. Is it safe to use da.store to compute the array and store the results back into the original backing numpy arrays, making the whole thing an in-place operation?
If you're thinking "you're using dask wrong" then see the long version below for why I feel the need to do this.
Long version
I'm using dask for an application where the original data is sourced from in-memory numpy arrays that contain data collected from a scientific instrument. The goal is to fill most of the RAM (say 75%+) with the original data, which means that there isn't enough memory left to make a full in-memory copy. That makes it semantically a bit like an out-of-core problem, in that any derived value can only be realised in memory in chunks rather than all at once.
Dask is well-suited to this, except for one wrinkle. I'm simplifying a lot, but on most of the data (call it X), we need to apply an element-wise operation f, compute some summary statistics s(f(X)), and use that to compute another result over the data, say t(s(f(X)), f(X)). While all the functions are dask-friendly (they can be applied on a per-chunk basis), naively running this dask graph would cause all of f(X) to be held in memory at once, because every chunk is needed for the second pass. An alternative is to explicitly compute s before asking for t (as suggested by https://github.com/dask/dask/issues/874), and thus pay to compute f(X) twice, but f is a somewhat expensive operation, so I'd like to avoid that.
However, once f has been applied, the original data are no longer needed. So I'd like to run da.store(f(X)) and have it store the results in the original backing numpy arrays. Technically I think I know how to set that up, and as long as I can be sure that each piece of data is fully consumed before it is overwritten, there are no race conditions. But I'm worried that I may be breaking an API contract by mutating data underneath dask, and that it might go wrong in some way. Is there any way to guarantee that it is safe?
One way I can immediately see this going wrong is if several of the input arrays have the same contents and hence get given the same name in dask, causing them to be unified in the graph. I'm using name=False in da.from_array, though, so that shouldn't be an issue.
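For concreteness, a minimal sketch of the setup I have in mind (np.sqrt stands in for f; the block sizes and chunking are made up):

import numpy as np
import dask.array as da

# Stand-ins for the instrument data: several in-memory numpy blocks.
blocks = [np.random.rand(1_000_000) for _ in range(4)]

# name=False gives each dask array a unique name, so blocks with
# identical contents are not unified into a single graph node.
sources = [da.sqrt(da.from_array(b, chunks=250_000, name=False))
           for b in blocks]

# Store each result back into its own backing array, in place.
# This assumes each input chunk is fully read before its target
# region is overwritten, as discussed above.
da.store(sources, blocks)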

Related

Does xarray.Dataset.to_array() load the array into memory, and how can I efficiently sample mini-batches from an xarray?

I am currently trying to load a big multi-dimensional array (>5 GB) into a Python script. Since I use the array as training data for a machine learning model, it is important to load the data efficiently in mini-batches while avoiding loading the whole data set into memory at once.
My idea was to use the xarray library.
I load the data set with X=xarray.open_dataset("Test_file.nc"). To the best of my knowledge, this command does not load the data set into memory - so far, so good. However, I want to convert X to an array with the command X=X.to_array().
My first question is: Does X=X.to_array() load it into memory or not?
If it does, I wonder how best to load mini-batches into memory. The shape of the array is (variable, datetime, x1_position, x2_position). I want to load mini-batches per datetime, which would lead to:
ind = np.random.randint(low=0, high=n_times, size=BATCH_SIZE)
mini_batch = X[:, ind]
The other approach would be to transpose the array first with X = X.transpose("datetime", "variable", "x1_position", "x2_position") and then sample via:
ind = np.random.randint(low=0, high=n_times, size=BATCH_SIZE)
mini_batch = X[ind, :]
My second question is:
Does transposing an xarray affect the efficiency of indexing? More specifically, does X[ind,:] take as long as X[:,ind]?
My first question is: Does X=X.to_array() load it into memory or not?
xarray uses dask to chunk (lazily load) parts of the data into memory. You can compare the two ways of opening X:
X = xarray.open_dataset("Test_file.nc")
# or
X = xarray.open_dataset("Test_file.nc",
                        chunks={'datetime': 1, 'x1_position': x1_count, 'x2_position': x2_count})
and inspect the differences between the loaded datasets with print(X), specifying the chunks accordingly.
The latter loads only one datetime slice at a time into memory. I don't think you need X=X.to_array(), but you can also compare the results after to_array(). My experience is that to_array() does not change the actual chunking (loading), just the view of the data.
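A minimal sketch putting this together, using the dimension names from the question (the batch size is made up, and isel is used for clarity instead of positional indexing):

import numpy as np
import xarray as xr

# Open lazily, one datetime slice per chunk.
X = xr.open_dataset("Test_file.nc", chunks={'datetime': 1})
Xa = X.to_array()   # still lazy; adds a "variable" dimension

ind = np.random.randint(low=0, high=Xa.sizes['datetime'], size=32)
mini_batch = Xa.isel(datetime=ind).compute()  # loads only the sampled slices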
My second question is: Does transposing an xarray affect the efficiency of indexing? More specifically, does X[ind,:] take as long as X[:,ind]?
I think one goal of xarray is to let users forget the details of the underlying implementation (based on numpy). Transposing may only modify the view rather than the underlying layout of the data. There are certainly some efficiency differences between the two indexing styles, depending on which one accesses data along contiguous memory, but any such difference is unlikely to be significant overhead. Feel free to use both.
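As a quick sanity check of the view claim at the numpy level (the shapes here are made up):

import numpy as np

a = np.zeros((3, 1000, 10, 10))
b = np.transpose(a, (1, 0, 2, 3))  # a view: no data is copied
print(b.base is a)                 # True: same underlying buffer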

Can Dask computational graphs keep intermediate data so re-compute is not necessary?

I am very impressed with Dask and I am trying to determine if it is the right tool for my problem. I am building a project for interactive data exploration where users can interactively change parameters of a figure. Sometimes these changes require re-computing the entire pipeline to make the graph (e.g. "show data from a different time interval"), but sometimes not. For instance, "change the smoothing parameter" should not require the system to reload the raw unsmoothed data, because the underlying data is the same; only the processing changes. The system should instead use the existing raw data that has already been loaded. I would like my system to be able to keep around the intermediate data objects and intelligently determine which tasks in the graph need to be re-run based on which parameters of the data visualization have been changed. It looks like the caching system in Dask is close to what I need, but it was designed with a somewhat different use-case in mind. I see there is a persist method, but I'm not sure if that would work either. Is there an easy way to accomplish this in Dask, or is there another project that would be more appropriate?
"change the smoothing parameter" should not require the system to reload the raw unsmoothed data
Two options:
The builtin functools.lru_cache will cache every unique input. Memory use is bounded by the maxsize parameter, which controls how many input/output pairs are stored (see the sketch after this list).
Using persist in the right places will compute that object, as mentioned at https://distributed.dask.org/en/latest/manage-computation.html#client-persist. Later computations will not need to re-run the tasks that produced it; functionally, it's similar to lru_cache.
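A minimal sketch of the first option, with a made-up smoothing function standing in for the real pipeline step:

import numpy as np
from functools import lru_cache

raw = np.random.rand(1_000_000)  # stand-in for the loaded raw data

@lru_cache(maxsize=8)            # keep at most 8 input/output pairs
def smoothed(window):
    # Hypothetical smoothing step; calling again with the same window
    # returns the cached result instead of recomputing.
    kernel = np.ones(window) / window
    return np.convolve(raw, kernel, mode="same")

a = smoothed(5)  # computed
b = smoothed(5)  # served from the cache; a is b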
For example, this code will read from disk twice:
>>> import dask.dataframe as dd
>>> df = dd.read_csv(...)
>>> # df = df.persist() # uncommenting this line → only read from disk once
>>> df[df.x > 0].mean().compute()
24.9
>>> df[df.y > 0].mean().compute()
0.1
Uncommenting that line means this code only reads from disk once, because the task graph for the CSV is computed once and the value is stored in memory. For your application, it sounds like you should use persist intelligently: https://docs.dask.org/en/latest/best-practices.html#persist-when-you-can
What if you want to visualize two smoothing parameters? In that case, I'd avoid calling compute repeatedly: https://docs.dask.org/en/latest/best-practices.html#avoid-calling-compute-repeatedly
lower, upper = client.compute([df.x.min(), df.x.max()])
This shares the task graph for min and max, so no unnecessary computation is performed.
I would like my system to be able to keep around the intermediate data objects and intelligently determine what tasks in the graph need to be re-run based on what parameters of the data visualization have been changed.
Dask has an opportunistic caching mechanism: https://docs.dask.org/en/latest/caching.html#automatic-opportunistic-caching. Part of the documentation says
Another approach is to watch all intermediate computations, and guess which ones might be valuable to keep for the future. Dask has an opportunistic caching mechanism that stores intermediate tasks that show the following characteristics:
Expensive to compute
Cheap to store
Frequently used
I think this is what you're looking for; it'll store values depending on those attributes.
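Enabling it is a one-off registration; a minimal sketch following the linked docs (the 2 GB budget is arbitrary):

from dask.cache import Cache

cache = Cache(2e9)  # hold up to ~2 GB of intermediate results
cache.register()    # applies to all subsequent computations in this session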

Abaqus: efficiently export large xy data set from Abaqus

I am trying to export XY data objects from sets on the order of 20-40k elements, but Abaqus slows down considerably and even crashes. In fact, when I create the xy data, Abaqus gives me a warning saying that "the number of xyDataObjects is very large, and might cause performance issues". And so it does.
My usual procedure is to create the xy data and then export it in rpt format. Can someone suggest another method less prone to crashing? Would it be more efficient to divide the output element set into two or more subsets, and concatenate them after exporting?
The method recommended by @hgazibara in the comments is certainly sufficient, but it is laborious.
An easier method, I found, is a package called Abaqus2Matlab, which scrapes any variable you want from the odb. See here: http://www.abaqus2matlab.com/

Lazy repartitioning of dask dataframe

After several stages of lazy dataframe processing, I need to repartition my dataframe before saving it. However, the .repartition() method requires me to know the number of partitions (as opposed to the size of partitions), and that depends on the size of the data after processing, which is as yet unknown.
I think I can compute the size lazily with df.memory_usage().sum(), but repartition() does not seem to accept a lazy scalar as an argument.
Is there a way to do this kind of adaptative (data-size-based) lazy repartitioning?
PS. Since this is the (almost) last step in my pipeline, I can probably work around this by converting to delayed and repartitioning "manually" (I don't need to go back to dataframe), but I'm looking for a simpler way to do this.
PS. Repartitioning by partition size would also be a very useful feature
Unfortunately, Dask's task-graph construction happens immediately, and there is no way to partition (or do any operation) in a way where the number of partitions is not immediately known or is lazily computed.
You could, as you suggest, switch to lower-level systems like delayed. In this case I would switch to using futures and track the size of results as they came in, triggering appropriate merging of partitions on the fly. This is probably far more complex than is desired though.
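One pragmatic workaround is to give up laziness for a single small computation: evaluate the size eagerly, then derive the partition count from it. A sketch, with a made-up input file and target partition size:

import dask.dataframe as dd

df = dd.read_csv("data-*.csv")                       # hypothetical input
nbytes = df.memory_usage(deep=True).sum().compute()  # eager, but a tiny result
target = 100_000_000                                 # aim for ~100 MB partitions
df = df.repartition(npartitions=max(1, int(nbytes // target)))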

Does autodiff in tensorflow work if I have a for loop involved in constructing my graph?

I have a situation where I have a batch of images, and in each image I have to perform some operation over a tiny patch of that image. The problem is that the patch size varies per image in the batch, which means I cannot vectorize it. I could vectorize by considering the entire range of pixels in each image, but my patch is a really small fraction of the image, and I don't want to waste memory by performing the operation and storing the results for all the pixels of each image.
So in short, I need to use a loop. Now I see that TensorFlow has only a while loop defined and no for loop. So my question is: if I use a plain Python-style for loop to perform operations over my tensor, will autodiff fail to calculate gradients in my graph?
TensorFlow does not know (and thus does not care) how the graph has been constructed; you can even write each node by hand, as long as you use the proper functions to do so. So in particular, a Python for loop has nothing to do with TF. tf.while_loop, on the other hand, gives you the ability to express dynamic computation inside the graph: if you want to process data in a sequence and only need the current element in memory, only a while loop can achieve that. If you create a huge graph by hand (through a Python loop), it will always be executed in full, with everything stored in memory; as long as this fits on your machine, you should be fine. The other issue is dynamic length: if you sometimes need to run a loop 10 times and sometimes 1000, you have to use tf.while_loop; you cannot do this with a for loop (unless you create separate graphs for each possible length).
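To illustrate the first point, a TF1-style (graph mode) sketch: the Python for loop just unrolls a fixed number of ops at construction time, and gradients flow through them normally. (In TF2's eager mode you would use tf.GradientTape or tf.compat.v1 instead.)

import tensorflow as tf  # TF1-style graph mode, matching the question's era

x = tf.constant([1.0, 2.0, 3.0])
y = x
for _ in range(3):          # plain Python loop: adds three ops to the graph
    y = y * 2.0             # y ends up as 8 * x
grad = tf.gradients(y, x)   # well-defined: dy/dx = 8 for each element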
