dask dataframe to_parquet creates a 10 KB file first and then creates the rest of the files - dask

When Dask DataFrame's to_parquet is called, a small parquet file of about 10 KB is created immediately, and then the other files of roughly 100 MB each are created. I am not sure how this small file was created; my intent is to create files of uniform size.
I am also missing one row when comparing counts between the source CSV and the output parquet files.
import os
import uuid
import dask.dataframe as dd

def ProcessPQ(aDfChunk, output_pqfolder):
    # Write each partition to its own uniquely named parquet file
    fPQfile = os.path.join(output_pqfolder, "{}.parquet".format(str(uuid.uuid1())))
    aDfChunk.to_parquet(fPQfile, compression="snappy")

def ConvertTxttoPQ(aLocation, aFilePattern, aColumns, aDTypes):
    files = os.path.join(aLocation, aFilePattern)
    df = dd.read_csv(files, dtype=aDTypes, header=None, sep='|', names=aColumns)
    df2 = df.repartition(npartitions=2)
    df2.map_partitions(ProcessPQ, aObjFileInfo.outputfilelocation).compute(num_workers=3)
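One possible explanation (an assumption, not confirmed above) is that map_partitions, when no meta argument is given, also runs ProcessPQ on the small dummy DataFrame Dask builds to infer output metadata, and that call writes one tiny extra parquet file. Two ways to sketch around it, reusing the names above (output_pqfolder stands in for the real output folder):
import dask
import dask.dataframe as dd

# Sketch 1: let Dask write one part file per partition itself; no helper
# function is involved, so no metadata-inference call can write a stray file.
df2.to_parquet(output_pqfolder, compression="snappy")

# Sketch 2: if the uuid-named files are required, map ProcessPQ over delayed
# pandas partitions instead; to_delayed() never calls the function on a dummy frame.
tasks = [dask.delayed(ProcessPQ)(part, output_pqfolder) for part in df2.to_delayed()]
dask.compute(*tasks, num_workers=3)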

Related

Performance of multiple chunked datasets in the same HDF5 file?

Suppose (I am adding a code example below) that I create multiple chunked datasets in the same HDF5 file, and I start appending data to each dataset in random order. Since HDF5 does not know in advance what size to allocate for each dataset, I would think that each append operation (or perhaps a dataset buffer, once filled) is appended directly to the HDF5 file. If so, the data of each dataset would be interleaved with the data from the other datasets and spread out in chunks over the entire HDF5 file.
My question is: if the above description is more or less accurate, would this not adversely affect the performance of later read operations from that file, and perhaps also the file size if more metadata records are required? And (corollary), if the option exists to store each dataset in a separate file, would it not be better to do so from the viewpoint of read performance?
Here is an example of how the HDF5 file that I describe above could be created:
import h5py, numpy as np

dtype1 = np.dtype([('t', 'f8'), ('T', 'f8')])
dtype2 = np.dtype([('q', 'i2'), ('Q', 'f8'), ('R', 'f8')])
dtype3 = np.dtype([('p', 'f8'), ('P', 'i8')])

with h5py.File('foo.hdf5', 'w') as f:
    # One resizable, variable-length dataset per record type
    dset1 = f.create_dataset('dset1', (1,), maxshape=(None,), dtype=h5py.vlen_dtype(dtype1))
    dset2 = f.create_dataset('dset2', (1,), maxshape=(None,), dtype=h5py.vlen_dtype(dtype2))
    dset3 = f.create_dataset('dset3', (1,), maxshape=(None,), dtype=h5py.vlen_dtype(dtype3))
    for _ in range(10):
        random_lengths = np.random.randint(low=1, high=10, size=3)

        # Fill the last (empty) slot of each dataset, then grow it by one
        d1 = np.ones((random_lengths[0],), dtype=dtype1)
        dset1[-1] = d1
        dset1.resize((dset1.shape[0] + 1,))

        d2 = np.ones((random_lengths[1],), dtype=dtype2)
        dset2[-1] = d2
        dset2.resize((dset2.shape[0] + 1,))

        d3 = np.ones((random_lengths[2],), dtype=dtype3)
        dset3[-1] = d3
        dset3.resize((dset3.shape[0] + 1,))
I know I could try it both ways (single file with multiple datasets, or multiple files with a single dataset each) and time it, but the result might depend on the specifics of the example data used, and I would rather have a more general answer to this question, and possibly some insight into how HDF5/h5py work internally in this case.
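For what it's worth, h5py also lets you request an explicit chunk shape instead of its automatic guess, and inspect what was actually used, which is one way to experiment with the layout. A minimal sketch, assuming the same dtype as above; the chunk size here is an arbitrary illustration, not a recommendation:
import h5py, numpy as np

dtype1 = np.dtype([('t', 'f8'), ('T', 'f8')])

with h5py.File('foo_chunked.hdf5', 'w') as f:
    # Ask for 1024 variable-length elements per chunk instead of the automatic guess
    dset1 = f.create_dataset('dset1', (1,), maxshape=(None,),
                             dtype=h5py.vlen_dtype(dtype1), chunks=(1024,))
    print(dset1.chunks)  # the chunk shape HDF5 will allocate storage in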

Python parallelization for code to combine multiple images

I am new to Python and am trying to parallelize a program that I pieced together from the internet. The program reads all image files (usually multiple series of images such as abc001, abc002, ..., abc015 and xyz001, xyz002, ..., xyz015) in a specific folder and then combines images in a specified range. Most times the number of files exceeds 10,000, and my latest case requires me to combine 24,000 images. Could someone help me with:
Taking two sets of images from different directories. Currently I have to move these images into one directory and then work in that directory.
Reading only specified files. Currently my program reads all files, saves their names in a dictionary, and then uses only the images required to combine. If I specify a range of files, it still checks against all files in the directory and takes a lot of time.
Parallel processing. I usually work with 10k files or sometimes more. These are images saved from the fluid simulations that I run at specific times. Currently, I save about 2k files at a time in separate folders and run the program to combine those 2,000 files in one go, then copy all the output files to a separate folder to keep them together. It would be great if I could use all 16 cores on the processor to combine all files in one go.
Image series 1 is like so:
Consider it to be a series of photos of the cat walking towards the camera. Each frame is suffixed with 001, 002, ..., n.
Image series 2 is like so:
Consider it to be a series of photos of the cat's expression changing with each frame. Each frame is suffixed with 001, 002, ..., n.
The code currently combines each frame from set1 and set2 to provide output.png as shown in the link here.
import sys
import os
from PIL import Image

keywords = input('Enter initial characters of image series 1 [Ex:Scalar_ , VoF_Scene_]:\n')
keywords2 = input('Enter initial characters of image series 2 [Ex:Scalar_ , VoF_Scene_]:\n')
directory = input('Enter correct folder name where images are present :\n')  # FOLDER WHERE IMAGES ARE LOCATED

result1 = {}
result2 = {}
name_count1 = 0
name_count2 = 0
for filename in os.listdir(directory):
    if keywords in filename:
        name_count1 += 1
        result1[name_count1] = os.path.join(directory, filename)
    if keywords2 in filename:
        name_count2 += 1
        result2[name_count2] = os.path.join(directory, filename)

num1 = input('Enter initial number of series:\n')
num2 = input('Enter final number of series:\n')
num1 = int(num1)
num2 = int(num2)

if name_count1 == (num2 - num1 + 1):
    a1 = 1
    a2 = name_count1
elif name_count2 == (num2 - num1 + 1):
    a1 = 1
    a2 = name_count2
else:
    a1 = num1
    a2 = num2 + 1

for x in range(a1, a2):
    # '05' signifies the number of digits in the series of the file name,
    # e.g. Scalar_scene_1_00345.png -> 5 digits, Temperature_section_2_951.jpg -> 3 digits. Change accordingly.
    y = format(x, '05')
    for comparison_name1 in result1:
        for comparison_name2 in result2:
            test1 = result1[comparison_name1]
            test2 = result2[comparison_name2]
            if y in test1 and y in test2:
                test = [test1, test2]
                images = [Image.open(fn) for fn in test]
                widths, heights = zip(*(i.size for i in images))
                total_width = sum(widths)
                max_height = max(heights)
                new_im = Image.new('RGB', (total_width, max_height))
                x_offset = 0
                for im in images:
                    new_im.paste(im, (x_offset, 0))
                    x_offset += im.size[0]
                output_name = 'output' + y + '.png'
                new_im.save(os.path.join(directory, output_name))
I did a Python version as well; it's not quite as fast, but it is maybe closer to your heart :-)
#!/usr/bin/env python3
import cv2
import numpy as np
from multiprocessing import Pool

def doOne(params):
    """Append the two input images side-by-side to output the third."""
    imA = cv2.imread(params[0], cv2.IMREAD_UNCHANGED)
    imB = cv2.imread(params[1], cv2.IMREAD_UNCHANGED)
    res = np.hstack((imA, imB))
    cv2.imwrite(params[2], res)

if __name__ == '__main__':
    # Build the list of jobs - each entry is a tuple with 2 input filenames and an output filename
    jobList = []
    for i in range(1000):
        # Horizontally append a-XXXXX.png to b-XXXXX.png to make c-XXXXX.png
        jobList.append((f'a-{i:05d}.png', f'b-{i:05d}.png', f'c-{i:05d}.png'))

    # Make a pool of processes - 1 per CPU core
    with Pool() as pool:
        # Map the list of jobs to the pool of processes
        pool.map(doOne, jobList)
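The job list above assumes consecutive, zero-padded names in the current directory. A sketch of how it might instead be built from two separate source directories and a restricted index range, which covers the first two points in the question (the directory names, prefixes, and patterns are assumptions); the result can be passed to pool.map(doOne, jobList) exactly as above:
import os
from glob import glob

def build_job_list(dir1, dir2, prefix1, prefix2, out_dir, start, end, digits=5):
    """Pair frames start..end from two directories and assign an output name to each pair."""
    jobs = []
    for i in range(start, end + 1):
        tag = f'{i:0{digits}d}'
        # Find the file in each directory whose name contains this zero-padded frame number
        m1 = glob(os.path.join(dir1, f'{prefix1}*{tag}*'))
        m2 = glob(os.path.join(dir2, f'{prefix2}*{tag}*'))
        if m1 and m2:
            jobs.append((m1[0], m2[0], os.path.join(out_dir, f'output{tag}.png')))
    return jobs

# Hypothetical example: frames 1..2000 from two folders, written into 'combined'
jobList = build_job_list('series1', 'series2', 'Scalar_', 'VoF_Scene_', 'combined', 1, 2000)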
You can do this a little quicker with libvips. To join two images left-right, enter:
vips join left.png right.png result.png horizontal
To test, I made 200 pairs of 1200x800 PNGs like this:
for i in {1..200}; do cp x.png left$i.png; cp x.png right$i.png; done
Then tried a benchmark:
time parallel vips join left{}.png right{}.png result{}.png horizontal ::: {1..200}
real 0m42.662s
user 2m35.983s
sys 0m6.446s
With imagemagick on the same laptop I see:
time parallel convert left{}.png right{}.png +append result{}.png ::: {1..200}
real 0m55.088s
user 3m24.556s
sys 0m6.400s
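If you would rather stay in Python, the same join is available through the pyvips bindings; a minimal sketch, assuming pyvips is installed and the file names are placeholders:
import pyvips

# Sequential access lets libvips stream the images instead of decoding them fully up front
left = pyvips.Image.new_from_file('left.png', access='sequential')
right = pyvips.Image.new_from_file('right.png', access='sequential')

# Join side-by-side and write the result
left.join(right, 'horizontal').write_to_file('result.png')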
You can do that much faster without Python, using multi-processing with ImageMagick or libvips.
The first part is all setup:
Make 20 images, called a-000.png ... a-019.png that go from red to blue:
convert -size 64x64 xc:red xc:blue -morph 18 a-%03d.png
Make 20 images, called b-000.png ... b-019.png that go from yellow to magenta:
convert -size 64x64 xc:yellow xc:magenta -morph 18 b-%03d.png
Now append them side-by-side into c-000.png ... c-019.png
for ((f=0;f<20;f++))
do
z=$(printf "%03d" $f)
convert a-${z}.png b-${z}.png +append c-${z}.png
done
Those images look like this:
If that looks good, you can do them all in parallel with GNU Parallel:
seq -f "%03g" 0 19 | parallel convert a-{}.png b-{}.png +append c-{}.png
Benchmark
I did a quick benchmark and made 20,000 images a-00000.png...a-19999.png and another 20,000 images b-00000.png...b-19999.png, with each image 1200x800 pixels. Then I ran the following command to append each pair horizontally and write 20,000 output images c-00000.png...c-19999.png:
seq -f "%05g" 0 19999 | parallel --eta convert a-{}.png b-{}.png +append c-{}.png
and that takes 16 minutes on my MacBook Pro with all 12 CPU cores pegged at 100% throughout. Note that you can:
add spacers between the images,
write annotation onto the images,
add borders,
resize
if you wish and do lots of other processing - this is just a simple example.
Note also that you can get even quicker times - in the region of 10-12 minutes if you accept JPEG instead of PNG as the output format.

Large csv to parquet using Dask - OOM

I have 7 CSV files of 8 GB each and need to convert them to parquet.
Memory usage went up to 100 GB and I had to kill the process.
I tried with distributed Dask as well. The memory was limited to 12 GB, but no output was produced for a long time.
FYI, with traditional pandas using chunking and a producer/consumer pattern I was able to convert them in 30 minutes.
What am I missing in the Dask processing?
def ProcessChunk(df, ...):
    df.to_parquet()

for factfile in fArrFileList:
    df = dd.read_csv(factfile, blocksize="100MB",
                     dtype=fColTypes, header=None, sep='|', names=fCSVCols)
    result = ProcessChunk(df, output_parquet_file, chunksize, fPQ_Schema, fCSVCols, fColTypes)
Thanks all for the suggestions. map_partitions worked:
df = dd.read_csv(filename, blocksize="500MB",
                 dtype=fColTypes, header=None, sep='|', names=fCSVCols)
df.map_partitions(DoWork, output_parquet_file, chunksize, Schema, CSVCols, fColTypes).compute(num_workers=2)
But the same approach didn't work well with a Dask distributed local cluster; it only worked in local-cluster mode when the CSV size was < 100 MB.
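For reference, the same conversion can also be written without a side-effect helper by letting Dask write the partitions itself; a minimal sketch reusing the column and dtype variables above (output_parquet_folder is a placeholder), with one part file written per partition so nothing has to fit in memory at once:
import dask.dataframe as dd

df = dd.read_csv(filename, blocksize="500MB",
                 dtype=fColTypes, header=None, sep='|', names=fCSVCols)
# Writes one parquet part file per 500 MB partition into the output directory
df.to_parquet(output_parquet_folder, compression="snappy")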
I had a similar problem and found that using Dask to split into many small parquet files is very slow and eventually fails. If you have access to a Linux terminal you can use parallel or split; for an example of their usage, check the answers here.
My workflow assumes your files are called file1.csv, ..., file7.csv and are stored in data/raw. I'm also assuming you are running the terminal commands from your notebook, which is why I'm adding the %%bash magic.
create folders data/raw_parts/part1/, ..., data/raw_parts/part7/
%%bash
for i in {1..7}
do
    mkdir -p data/raw_parts/part${i}
done
for each file run (in case you want to use parallel)
%%bash
cat data/raw/file1.csv | parallel --header : --pipe -N1000000 'cat > data/raw_parts/part1/file_{#}.csv'
convert files to parquet
first create output folders
%%bash
for i in {1..7}
do
    mkdir -p data/processed/part${i}
done
define a function that converts one CSV file to parquet
import pandas as pd
import os
from dask import delayed, compute

# this can run in parallel
@delayed
def convert2parquet(fn, fldr_in, fldr_out):
    fn_out = fn.replace(fldr_in, fldr_out)\
               .replace(".csv", ".parquet")
    df = pd.read_csv(fn)
    df.to_parquet(fn_out, index=False)
get all files you want to convert
jobs = []
fldr_in = "data/raw_parts/"
fldr_out = "data/processed/"  # assumed output root, matching the folders created above
for (dirpath, dirnames, filenames) in os.walk(fldr_in):
    if len(filenames) > 0:
        jobs += [os.path.join(dirpath, fn) for fn in filenames]
process all in parallel
%%time
to_process = [convert2parquet(job, fldr_in, fldr_out) for job in jobs]
out = compute(to_process)
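If you want visible progress, or want each CSV converted in its own process rather than a thread, the compute call can be combined with a progress bar and the multiprocessing scheduler; a sketch (the worker count is an arbitrary example):
from dask.diagnostics import ProgressBar

with ProgressBar():
    # Run the delayed conversions a few at a time in separate processes
    out = compute(to_process, scheduler="processes", num_workers=4)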

Writing Dask/XArray to NetCDF - Parallel IO

I am using Dask/Xarray with a ~150 GB dataset on a distributed cluster on an HPC system. The computation component is complete and takes about ~30 minutes, but writing the final result to a NetCDF4 file is quite slow (~3 hrs) and does not seem to run in parallel. It is unclear to me whether Xarray's to_netcdf function is supposed to support parallel writes. Currently my approach is to write an empty NetCDF file with netCDF4 and then append the data from the Xarray dataset:
import netCDF4
import rasterio

f_mosaic = 't1.nc'
meta = {'width': dat_f.shape[1],
        'height': dat_f.shape[2],
        'crs': rasterio.crs.CRS(init='epsg:' + fi['CPER']['Reflectance']['Metadata']['Coordinate_System']['EPSG Code'].value.decode("utf-8")),
        'transform': aff_final,
        'count': dat_f.shape[0]}

with netCDF4.Dataset(f_mosaic, mode='w', format="NETCDF4") as t1:
    # Create spatial dimensions
    y = t1.createDimension('y', meta['width'])
    x = t1.createDimension('x', meta['height'])
    wl_dim = t1.createDimension('wl', meta['count'])

    reflectance = t1.createVariable("reflectance", "int16", ("wl", "y", "x",),
                                    fill_value=null_val, zlib=True)
    reflectance.setncattr('grid_mapping', 'crs')

    crs = t1.createVariable('crs', 'c')
    crs.spatial_ref = meta['crs'].wkt
    crs.epsg_code = meta['crs'].to_string()
    crs.GeoTransform = " ".join(str(x) for x in meta['transform'].to_gdal())

dat_f.to_netcdf(path=f_mosaic, mode='a', format='NETCDF4',
                encoding={'reflectance': {'zlib': True}})
Overall, the question is: how can I write this data to a NetCDF4 file quickly? Do Dask/Xarray support parallel writes to NetCDF4? If so, what am I doing incorrectly?
Thanks!
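For context, Xarray can at least defer the write and hand the whole task graph to the scheduler, and a Zarr store is usually the simpler route to genuinely parallel output. A hedged sketch, assuming dat_f is an xarray Dataset containing the reflectance variable; the file names are placeholders, and whether the NetCDF path actually writes in parallel depends on the backend and how HDF5 was built:
# dat_f is the existing xarray Dataset from the question

# Option 1: build the NetCDF write lazily and let the scheduler execute it
write_job = dat_f.to_netcdf('t1.nc', engine='netcdf4',
                            encoding={'reflectance': {'zlib': True}},
                            compute=False)
write_job.compute()

# Option 2: Zarr stores accept chunk-by-chunk writes from many workers natively
dat_f.to_zarr('t1_output.zarr', mode='w')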

dask.bag processing data out-of-memory

I'm trying to use dask.bag to word-count 30 GB of JSON files, following the tutorial from the official site: http://dask.pydata.org/en/latest/examples/bag-word-count-hdfs.html
It still does not work. My single machine has 32 GB of memory and an 8-core CPU.
My code is below; it does not work even on a 10 GB file. It runs for a couple of hours without any notification and then the Jupyter notebook collapses. I tried on both Ubuntu and Windows and hit the same problem. So I suspect: can dask.bag process data out of core, or is my code incorrect?
The test data is from http://files.pushshift.io/reddit/comments/
import dask.bag as db
import json
b = db.read_text('D:\RC_2015-01\RC_2012-04')
records = b.map(json.loads)
result = b.str.split().concat().frequencies().topk(10, lambda x: x[1])
%time f = result.compute()
f
Try setting a blocksize in the 10MB range when reading from the single file to break it up a bit.
In [1]: import dask.bag as db
In [2]: b = db.read_text('RC_2012-04', blocksize=10000000)
In [3]: %time b.count().compute()
CPU times: user 1.22 s, sys: 56 ms, total: 1.27 s
Wall time: 20.4 s
Out[3]: 19044534
Also, as a warning, you create a bag records but then don't do anything with it. You might want to remove that line.
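Putting both points together, a sketch of the word count using a blocksize and the parsed records (this assumes each JSON record has a 'body' field, as in the Reddit comments data the tutorial uses):
import json
import dask.bag as db

b = db.read_text(r'D:\RC_2015-01\RC_2012-04', blocksize=10_000_000)
records = b.map(json.loads)

# Count words in the comment text rather than in the raw JSON lines
result = (records.pluck('body')
                 .str.split()
                 .flatten()
                 .frequencies()
                 .topk(10, key=lambda x: x[1]))
top10 = result.compute()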
