Processing distributed dask collections with external code - dask

I have input data stored as a single large file on S3.
I want Dask to chop the file automatically, distribute it to workers, and manage the data flow. Hence the idea of using a distributed collection, e.g. a bag.
On each worker I have a command-line tool (Java) that reads the data from file(s). Therefore I'd like to write a whole chunk of data to a file, call the external CLI/code to process it, and then read the results from an output file. This looks like processing batches of data instead of record-at-a-time.
What would be the best approach to this problem? Is it possible to write partition to disk on a worker and process it as a whole?
PS. It is not necessary, but desirable, to stay in the distributed-collection model, because other operations on the data might be simpler Python functions that process the data record by record.

You probably want the read_bytes function. This breaks the file into many chunks, cleanly split on a delimiter (like a newline). It gives you back a list of dask.delayed objects that point to those blocks of bytes.
There is more information on this documentation page: http://dask.pydata.org/en/latest/bytes.html
Here is an example from the docstring:
>>> sample, blocks = read_bytes('s3://bucket/2015-*-*.csv', delimiter=b'\n')
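Once you have the delayed blocks, each one can be handled with a write-to-file / run-CLI / read-output pattern. Here is a minimal, self-contained sketch of such a per-partition function; a tiny Python one-liner stands in for your Java tool (an assumption for illustration):

```python
import os
import subprocess
import sys
import tempfile

def process_block(data: bytes) -> bytes:
    # Write the partition to a temporary file, run the external tool on it,
    # then read the tool's output file back.  The Python one-liner below is
    # a stand-in for the real Java CLI.
    with tempfile.TemporaryDirectory() as tmp:
        in_path = os.path.join(tmp, "in.txt")
        out_path = os.path.join(tmp, "out.txt")
        with open(in_path, "wb") as f:
            f.write(data)
        subprocess.run(
            [sys.executable, "-c",
             "import sys; "
             "data = open(sys.argv[1], 'rb').read(); "
             "open(sys.argv[2], 'wb').write(data.upper())",
             in_path, out_path],
            check=True)
        with open(out_path, "rb") as f:
            return f.read()

result = process_block(b"hello\nworld\n")
print(result)  # b'HELLO\nWORLD\n'
```

With the blocks from read_bytes you would then map this over the partitions, e.g. `dask.delayed(process_block)(block)` for each block, or use `bag.map_partitions` if you stay in the bag model.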

Related

How to load a huge model on Dask with limited RAM?

I want to load a model (an ANNOY model) on Dask. The size of the model is 60GB and the Dask workers have only 2GB of RAM. Is there a way to load the model in a distributed manner?
If by "load" you mean: "store in memory", then obviously there is no way to do this. If you need access to the whole dataset in memory at once, you'll need a machine that can handle this.
However, you very probably meant that you want to do some processing to the data and get a result (prediction, statistical score...) which does fit in memory.
Since I don't know what ANNOY is (array? dataframe? something else?), I can only give you general rules. For dask to work, it needs to be able to split a job into tasks. For data IO, this commonly means that the input is in multiple files, or that the files have some natural internal structure such that they can be loaded chunk-wise. For example, zarr (for arrays) stores each chunk of a logical dataset as a separate file, parquet (for dataframes) chunks up data into pages within columns within groups within files, and even CSV can be loaded chunkwise by looking for newline characters.
I suspect annoy ( https://github.com/spotify/annoy ?) has a complex internal storage structure, and you may need to raise an issue on their repo asking about dask support.
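To illustrate the CSV case mentioned above, here is a minimal sketch (plain Python, no dask) of the newline-based splitting idea: each block boundary is advanced to the next newline, so no record straddles two chunks and each chunk can be parsed independently:

```python
def chunk_at_newlines(data: bytes, blocksize: int) -> list:
    """Split `data` into chunks of roughly `blocksize` bytes, extending
    each boundary forward to the next newline so no line is cut in half."""
    boundaries = [0]
    pos = blocksize
    while pos < len(data):
        nl = data.find(b"\n", pos)
        if nl == -1:
            break
        boundaries.append(nl + 1)
        pos = nl + 1 + blocksize
    if boundaries[-1] != len(data):
        boundaries.append(len(data))
    return [data[a:b] for a, b in zip(boundaries, boundaries[1:])]

csv = b"a,1\nb,22\nc,333\nd,4444\n"
chunks = chunk_at_newlines(csv, 8)
# every chunk ends on a line boundary, and nothing is lost:
assert b"".join(chunks) == csv
```

This is only the splitting idea, not dask's actual implementation; dask additionally does the scan against remote byte ranges so the whole file never has to be downloaded by one machine.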

How do Dask bag partitions and workers correlate?

I'm using a vanilla Dask-Kubernetes setup with two workers and one scheduler to iterate over the lines of some JSON file (and apply some functions which don't appear here for simplicity). I see only one worker ever working, where I'd expect to see two of them, instead.
Hoping that repartitioning would help I've experimented with different values for bag.repartition(num) which return different numbers of lines, but they don't change anything about the worker imbalance and memory consumption concentrating only on one worker.
I think I don't understand the correlation between partitions and workers, and I could not find anything in the Dask documentation about it. Any help or pointers are highly welcome!
import dask.bag as db

def grep_buildings():
    base = "https://usbuildingdata.blob.core.windows.net/usbuildings-v1-1/"
    b = db.text.read_text(f"{base}/Alabama.zip")
    # b = b.repartition(2)
    lines = b.take(3_000_000)
    return lines

len(grep_buildings())
In your example, you are opening one file, and it is compressed:
db.text.read_text(f"{base}/Alabama.zip")
Dask is able to open and process multiple files in parallel, with at least one partition per file. Dask is also able to split a single file into chunks (the blocksize parameter); but this only works for uncompressed data. The reason is that, for whole-file compression methods, the only way to reach a given point in the uncompressed stream is to read from the start, so every partition would end up reading most of the data.
Finally, repartition doesn't help you when you start with a single partition: you need to read that whole file before splitting the data into pieces for downstream tasks.
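The "read from the start" point can be demonstrated with plain gzip from the standard library: even GzipFile.seek() can only reach an offset by decompressing everything before it, which is exactly why dask cannot hand out independent partitions of a whole-file-compressed input:

```python
import gzip
import io

text = b"line\n" * 1000
compressed = gzip.compress(text)

# There is no way to jump straight to byte 2500 of the uncompressed
# stream: GzipFile.seek() works by decompressing bytes 0..2499 first.
f = gzip.GzipFile(fileobj=io.BytesIO(compressed))
f.seek(2500)
chunk = f.read(5)
assert chunk == b"line\n"
```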

Defining a dask array from already blocked data on gcs

I want to create a 3d dask array from data that I have that is already chunked. My data consists of 216 blocks containing 1024x1024x1024 uint8 voxels each, each stored as a compressed hdf5 file with one key called data. Compressed, my data is only a few megabytes per block, but decompressed, it takes 1GB per block. Furthermore, my data is currently stored in Google Cloud storage (gcs), although I could potentially mirror it locally inside a container.
I thought the easiest way would be to use zarr, following these instructions (https://pangeo.io/data.html). Would xarray have to decompress my data before saving to zarr format? Would it have to shuffle data and try to communicate across blocks? Is there a lower level way of assembling a zarr from hdf5 blocks?
There are a few questions there, so I will try to be brief and hope that some edits can flesh out details I may have omitted.
You do not need to do anything in order to view your data as a single dask array, since you can reference the individual chunks as arrays (see here) and then use the stack/concatenate functions to build up a single array. That does mean opening every file in the client in order to read the metadata, though.
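As a concrete sketch of the stack/concatenate approach (assuming dask and numpy are available): the `load_block` function below fabricates small in-memory blocks so the example is self-contained, where your code would instead read one hdf5 file's `data` key, e.g. `h5py.File(path)["data"][:]`:

```python
import numpy as np
import dask
import dask.array as da

def load_block(i, j, k):
    # Stand-in for reading one hdf5 file; fabricated so the sketch runs.
    return np.full((4, 4, 4), i * 100 + j * 10 + k, dtype=np.uint8)

# One lazy dask array per block; shape and dtype are declared up front,
# so no file has to be opened until compute time.
blocks = [[[da.from_delayed(dask.delayed(load_block)(i, j, k),
                            shape=(4, 4, 4), dtype=np.uint8)
            for k in range(2)] for j in range(2)] for i in range(2)]

arr = da.block(blocks)   # a single 8x8x8 lazy array
print(arr.shape)         # (8, 8, 8)
```

With your real data the nested lists would be 6x6x6 (216 blocks of 1024^3 each), and slicing `arr` only ever loads the blocks the slice touches.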
Similarly, xarray has some functions for reading sets of files, where you should be able to assume consistency of dtype and dimensionality - please see their docs.
As far as zarr is concerned, you could use dask to create the set of files for you on GCS or not, and choose to use the same chunking scheme as the input - then there will be no shuffling. Since zarr is very simple to set up and understand, you could even create the zarr dataset yourself and write the chunks one-by-one without having to create the dask array up front from the zarr files. That would normally be via the zarr API, and writing a chunk of data does not require any change to the metadata file, so can be done in parallel. In theory, you could simply copy a block in, if you understood the low-level data representation (e.g., int64 in C-array layout); however, I don't know how likely it is that the exact same compression mechanism will be available in both the original hdf and zarr (see here).

Control the tie-breaking choice for loading chunks in the scheduler?

I have some large files in a local binary format, which contains many 3D (or 4D) arrays as a series of 2D chunks. The order of the chunks in the files is random (could have chunk 17 of variable A, followed by chunk 6 of variable B, etc.). I don't have control over the file generation, I'm just using the results. Fortunately the files contain a table of contents, so I know where all the chunks are without having to read the entire file.
I have a simple interface to lazily load this data into dask, and re-construct the chunks as Array objects. This works fine - I can slice and dice the array, do calculations on them, and when I finally compute() the final result the chunks get loaded from file appropriately.
However, the order that the chunks are loaded is not optimal for these files. If I understand correctly, for tasks where there is no difference of cost (in terms of # of dependencies?), the local threaded scheduler will use the task keynames as a tie-breaker. This seems to cause the chunks to be loaded in their logical order within the Array. Unfortunately my files do not follow the logical order, so this results in many seeks through the data (e.g. seek halfway through the file to get chunk (0,0,0) of variable A, then go back near the beginning to get chunk (0,0,1) of variable A, etc.). What I would like to do is somehow control the order that these chunks get read, so they follow the order in the file.
I found a kludge that works for simple cases, by creating a callback function on the start_state. It scans through the tasks in the 'ready' state, looking for any references to these data chunks, then re-orders those tasks based on the order of the data on disk. Using this kludge, I was able to speed up my processing by a factor of 3. I'm guessing the OS is doing some kind of read-ahead when the file is being read sequentially, and the chunks are small enough that several get picked up in a single disk read. This kludge is sufficient for my current usage, however, it's ugly and brittle. It will probably work against dask's optimization algorithm for complex calculations. Is there a better way in dask to control which tasks win in a tie-breaker, in particular for loading chunks from disk? I.e., is there a way to tell dask, "all things being equal, here's the relative order I'd like you to process this group of chunks?"
Your assessment is correct. As of 2018-06-16 there is no way to add a final tie-breaker. In the distributed scheduler (which works fine on a single machine) you can provide explicit priorities with the priority= keyword, but these take precedence over all other considerations.
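For reference, the priority= keyword in the distributed scheduler looks like this (a sketch using an in-process client; higher numbers are scheduled first, but, as noted, this is an absolute priority rather than a tie-breaker):

```python
from dask.distributed import Client

# An in-process "cluster" is enough to demonstrate the keyword.
client = Client(processes=False, n_workers=1, threads_per_worker=1)

def load_chunk(i):
    return i * 2

# Tasks submitted with a higher priority are picked first whenever the
# scheduler has a free choice among runnable tasks.
futures = [client.submit(load_chunk, i, priority=-i) for i in range(5)]
results = client.gather(futures)
print(results)   # [0, 2, 4, 6, 8]
client.close()
```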

importing and processing data from a CSV File in Delphi

I had a pre-interview task, which I completed and the solution works, however I was marked down and did not get an interview due to having used a TADODataset. I basically imported a CSV file which populated the dataset; the data had to be processed in a specific way, so I used Filtering and Sorting of the dataset to make sure the data was ordered the way I wanted, and then I did the logic processing in a while loop. The feedback I received said this was bad, as it would be very slow for large files.
My main question here is if using an in memory dataset is slow for processing large files, what would have been better way to access the information from the csv file. Should I have used String Lists or something like that?
It really depends on how "big" the file is and the resources (in this case RAM) available for the task.
"The feedback that was received said that this was bad as it would be very slow for large files."
CSV files are usually used for moving data around (in most cases that I've encountered, files are ~1MB up to ~10MB, but that's not to say that others would not dump more data in CSV format) without worrying too much (if at all) about import/export, since the format is extremely simplistic.
Suppose you have an 80MB CSV file; that's a file you want to process in chunks, otherwise (depending on your processing) you can eat hundreds of MB of RAM. In that case what I would do is:
while dataToProcess do
begin
  // step 1: read <X> lines from the file, where <X> is the maximum number
  // of lines you read in one go; if fewer lines remain (i.e. you're down
  // to 50 lines and X is 100), read those
  // step 2: process the information
  // step 3: generate output, database inserts, etc.
end;
In the above case, you're not loading 80MB of data into RAM, but only a few hundred KB, and the rest you use for processing, i.e. linked lists, dynamic insert queries (batch inserts), etc.
"...however I was marked down and did not get an interview due to having used a TADODataset."
I'm not surprised; they were probably looking to see whether you're capable of creating algorithms and providing simple solutions on the spot, without using "ready-made" solutions.
They were probably expecting to see you use dynamic arrays and create one (or more) sorting algorithms.
"Should I have used String Lists or something like that?"
The response might have been the same, again, I think they wanted to see how you "work".
The interviewer was quite right.
The correct, scalable, and fastest solution for any medium-sized file upwards is to use an 'external sort'.
An 'External Sort' is a 2 stage process, the first stage being to split each file into manageable and sorted smaller files. The second stage is to merge these files back into a single sorted file which can then be processed line by line.
It is extremely efficient on any CSV file with over, say, 200,000 lines. The amount of memory the process runs in can be controlled, and thus the danger of running out of memory can be eliminated.
I have implemented many such sort processes and in Delphi would recommend a combination of TStringList, TList and TQueue classes.
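The two stages translate directly into code. Since this digest's other examples are Python, here is a hedged stdlib-only sketch of the same idea (sorted runs on disk, then a heapq.merge k-way merge); a Delphi version with TStringList/TQueue would follow the same shape:

```python
import heapq
import itertools
import os
import tempfile

def external_sort(in_path, out_path, max_lines=100_000):
    """Two-stage external sort: (1) split the input into sorted runs that
    each fit in memory, (2) k-way merge the runs into one sorted file."""
    run_dir = tempfile.mkdtemp()
    run_paths = []
    with open(in_path) as f:
        while True:
            chunk = list(itertools.islice(f, max_lines))
            if not chunk:
                break
            chunk.sort()
            run_path = os.path.join(run_dir, f"run{len(run_paths)}.txt")
            with open(run_path, "w") as run:
                run.writelines(chunk)
            run_paths.append(run_path)
    # The merge holds only one line per run in memory at any time.
    runs = [open(p) for p in run_paths]
    try:
        with open(out_path, "w") as out:
            out.writelines(heapq.merge(*runs))
    finally:
        for r in runs:
            r.close()

# Tiny demonstration: 50 reversed lines, sorted using 7-line runs.
demo_dir = tempfile.mkdtemp()
src = os.path.join(demo_dir, "in.txt")
dst = os.path.join(demo_dir, "out.txt")
lines = [f"{i:04d}\n" for i in range(50)]
with open(src, "w") as f:
    f.writelines(reversed(lines))
external_sort(src, dst, max_lines=7)
with open(dst) as f:
    result = f.read()
assert result == "".join(lines)
```

Peak memory is bounded by max_lines (during run creation) plus one buffered line per run (during the merge), which is exactly the property the interviewer was after.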
Good Luck
