CUDA stream synchronization issue with CuPy and TensorRT

I am working with TensorRT and CuPy. The following code does not wait for the CUDA calls to finish if I create the stream with cp.cuda.Stream(non_blocking=True), while it works perfectly with non_blocking=False.
Why shouldn't it work with non_blocking=True? I checked the input data and it is fine, yet the model returns random detections (random data), which points to a synchronization issue.
# Select stream
stream.use()
# Copy cupy array to the buffer
input_images = cp.array(batch_input_image)
cp.copyto(cuda_inputs[0], input_images)
# Run inference.
context.execute_async(bindings=bindings, stream_handle=stream.ptr, batch_size=len(batch_input_image))
# Copy results from the buffer
output_images = cuda_outputs[0].copy()
# Split results into batch
list_output = cp.split(output_images, indices_or_sections=len(batch_input_image), axis=0)
# Squeeze output arrays to remove axis of length one
list_output = [cp.squeeze(array) for array in list_output]
# Synchronize the stream
stream.synchronize()

After receiving some support from NVIDIA, I can confirm this was not a CuPy issue. It seems to be a problem with the C++ code of the TensorRT model, as discussed here: github.com/cupy/cupy/issues/6104.
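For reference, a minimal sketch of the pattern in question. It reuses batch_input_image, cuda_inputs, cuda_outputs, context and bindings from the snippet above; the device-wide synchronize is a defensive workaround for a library kernel launched on the default stream, not an official NVIDIA recommendation:
import cupy as cp

stream = cp.cuda.Stream(non_blocking=True)

with stream:
    # Enqueue the H2D copy and the inference on the non-blocking stream
    cp.copyto(cuda_inputs[0], cp.array(batch_input_image))
    context.execute_async(bindings=bindings,
                          stream_handle=stream.ptr,
                          batch_size=len(batch_input_image))

# Defensive: wait for all outstanding work, in case the library enqueued
# kernels on the default stream instead of the stream passed in.
cp.cuda.Device().synchronize()

output_images = cuda_outputs[0].copy()
list_output = [cp.squeeze(a) for a in
               cp.split(output_images, len(batch_input_image), axis=0)]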

Related

Does GPU accelerate data preprocessing in ML tasks?

I am doing a machine learning (value prediction) task, and preprocessing the data takes a very long time. I have a CSV file with around 640,000 rows, and I am trying to subtract the dates of consecutive rows and calculate the time duration. The CSV file looks as attached below. For example, 2011-08-17 to 2011-08-19 takes 2 days, and I would like to write 2 to the "time duration" column. I've used Python's datetime to do this, and it costs a lot of time.
import pandas as pd
from datetime import datetime

data = pd.read_csv(f'{proj_dir}/raw data/measures.csv', encoding="cp1252")
file = data[['ID', 'date', 'value1', 'value2', 'duration']]

def time_subtraction(date, prev_date):
    diff = datetime.strptime(date, '%Y-%m-%d') - datetime.strptime(prev_date, '%Y-%m-%d')
    diff_days = diff.days
    return diff_days

def calculate_time_duration(dataframe, set_0_indices):
    for i in range(dataframe.shape[0]):
        # For each patient, set "time duration" at the first measurement to 0
        if i in set_0_indices.values:
            dataframe.iloc[i, 4] = 0  # beginning of this patient
        else:  # time subtraction
            dataframe.iloc[i, 4] = time_subtraction(date=dataframe.iloc[i, 1],
                                                    prev_date=dataframe.iloc[i - 1, 1])
    return dataframe

# I am running on Google Colab. This line takes very long.
result = calculate_time_duration(dataframe=file, set_0_indices=set_time_0_indices)
I wonder if there are any ways to accelerate this process. Does using a GPU help? I have access to a remote GPU, but I don't know if using a GPU helps with data preprocessing. By the way, under what scenario can GPUs really make things faster? Thanks in advance!
[Screenshot: what my data looks like]
Regarding updating your data in a faster fashion, please see this post.
Regarding speed improvements using the GPU: you only benefit from the GPU if there are optimized operations that can actually run on the GPU, and preprocessing like yours is normally not in that scope. You must also consider that you would need to transfer the data to the GPU before computing anything and then transfer the results back; in your case this would take much longer than the actual speedup, especially since your operation on the data is quite simple. I'm sure using the correct pandas syntax will give you the desired speedup in preprocessing; a vectorized sketch follows below.
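As an illustration of that last point, here is a minimal vectorized sketch. It assumes the column names ID, date and duration and the %Y-%m-%d date format from the question's snippet, with the file path shortened for the example:
import pandas as pd

data = pd.read_csv("measures.csv", encoding="cp1252")  # path shortened for the example

# Parse all dates once, take row-to-row differences within each patient (ID),
# and convert to whole days; the first row of each patient becomes NaT -> 0.
data["duration"] = (
    pd.to_datetime(data["date"], format="%Y-%m-%d")
    .groupby(data["ID"])
    .diff()
    .dt.days
    .fillna(0)
    .astype(int)
)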

How do I get xarray.interp() to work in parallel?

I'm using xarray.interp on a large 3D DataArray (weather data: lat, lon, time) to map the values (wind speed) to new values based on a discrete mapping function f.
The interpolation method seems to utilise only one core for the computation, making the process horribly inefficient, and I cannot figure out how to make xarray use more than one core for this task.
I monitored the computation of xarray.interp via htop and a dask dashboard.
htop shows only one core in use, and the dashboard doesn't show any activity in any of the workers. The only dask activity I can observe comes from loading the netCDF data file from disk; if I preload the data using .load(), this dask activity is gone.
I also tried using a scipy.interpolate.interp1d function with xarray.apply_ufunc() to achieve the equivalent result, but did not observe any parallel utilisation (htop) or activity (dask dashboard) either.
The fastest approach for me right now is using numpy.interp and then recasting the result back to an xr.DataArray with the coordinates of the original DataArray, but that is also not parallelised and only a few percent faster.
In the following MWE I don't see any dask activity after the da.load() statement in block 4.
Edit:
The code has to be run as the separate blocks 1-4 when evaluating with e.g. htop. Because load() causes multi-core activity and happens either explicitly (block 2) or implicitly (triggered by block 4), it's easy to misattribute the multi-core activity to .interp() when it is actually caused by data loading, if you run the script as a whole.
# 1: For the dask dashboard
from dask.distributed import Client
client = Client()
display(client)
import xarray as xr
import numpy as np
da = xr.tutorial.open_dataset("air_temperature", chunks={})['air']
# 2: Preload data into memory
da.load()
# 3: Dummy interpolation function
xp = np.linspace(0,400,21)
fp = -1*(xp-300)**2
xr_interp_da = xr.DataArray(fp, [('xp', xp)], name='interpolation function')
# 4: I expect this to run in parallel but it does not
f = xr_interp_da.interp({'xp':da})
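No answer is recorded here, but for reference, one way the apply_ufunc route mentioned above could be written so that each dask chunk is mapped independently. This is a sketch under the assumption that da is chunked (the chunk size of 100 along time is arbitrary) and it reuses da, xp and fp from the blocks above:
import numpy as np
import xarray as xr

def map_values(block, xp, fp):
    # np.interp applies the piecewise-linear mapping element-wise to one chunk
    return np.interp(block, xp, fp)

da_chunked = da.chunk({"time": 100})

f = xr.apply_ufunc(
    map_values,
    da_chunked,
    kwargs={"xp": xp, "fp": fp},
    dask="parallelized",       # run the function once per chunk on the workers
    output_dtypes=[da.dtype],
)
f.compute()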

How to save dask array as .png files slice by slice?

I'm running a machine learning pipeline for segmentation of very large 3D images. I would like to store the results (dask arrays) as .png files, with each file corresponding to one slice of the dask array. Do you have any suggestions on how to implement this?
I have been trying to save the results by building a parallel for loop using the joblib dask parallel backend and then looping through the results slice by slice. This works fine until a certain point, at which my pipeline gets stuck for no apparent reason (no memory issue, not too many open file descriptors, etc.).
array_to_save has been persisted in memory with client.persist()
import joblib
from PIL import Image

def png_sav(j, stack_height, prediction):
    img = Image.fromarray(prediction.result().astype('uint32'), 'I')  # mode 'I': integer pixels
    img.save(png_pn + str(j) + '_slice_prediction.png', "PNG")
    img.close()

with joblib.parallel_backend('dask'):
    joblib.Parallel(verbose=100)(
        joblib.delayed(png_sav)(j, stack_height, client.compute(array_to_save[j]))
        for j in range(stack_height))
You might consider one of the following:
1) Use the map_blocks method to call a function on every block of your data. Your function can take a block_info= keyword argument if it wants to know where it is in the stack.
2) Convert your array to a list of delayed arrays, maybe something like this (untested; you should read the docs here):
x = x.rechunk((1, None, None)) # many chunks along the first axis
slices = x.to_delayed().flatten()
saves = [dask.delayed(numpy_array_to_png)(slc, filename='...') for slc in slices]
dask.compute(*saves)
Finally, check in with the dask-image project; I suspect they have something for this: https://github.com/dask/dask-image
Thanks a lot for your hints.
I'm trying to understand how to use .map_blocks(), and in particular block_info=, but I don't understand how to use the information it provides. I would like to save each chunk separately but don't know how to do this. Any hints? Thanks a lot!
da.map_blocks(png_sav(stack_height, prediction, block_info=True), dtype='uint16')

def png_sav(stack_height, prediction, block_info=True):
    # I don't get how I can save each chunk separately
    img = Image.fromarray("prediction_chunk".astype('uint32'), 'I')  # I to save as 16 bit binary image
    img.save(png_pn + str(j) + '_slice_prediction.png', "PNG")
    img.close()
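No further reply is recorded here, but as a hedged sketch of how block_info could be used for this (it reuses png_pn and array_to_save from the earlier snippets and assumes the array is rechunked to one slice per chunk):
from PIL import Image

def save_block(block, block_info=None):
    # block_info[None]['chunk-location'] is the chunk index along each axis of
    # the output; with one chunk per slice, its first entry identifies the slice.
    slice_index = block_info[None]['chunk-location'][0]
    img = Image.fromarray(block[0].astype('uint32'), 'I')
    img.save(png_pn + str(slice_index) + '_slice_prediction.png', "PNG")
    return block  # map_blocks must return an array

x = array_to_save.rechunk((1, None, None))  # one chunk per slice
x.map_blocks(save_block, dtype=x.dtype).compute()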

How to use TensorFlow inference models to generate DeepDream-like images

I am using a custom image set to train a neural network with the TensorFlow API. After a successful training run I get checkpoint files containing the values of the training variables. I now want to get an inference model from these checkpoint files; I found this script which does that, so that I can then generate DeepDream-like images as explained in this tutorial. The problem is that when I load my model using:
import numpy as np
import tensorflow as tf

model_fn = 'export'
graph = tf.Graph()
sess = tf.InteractiveSession(graph=graph)
with tf.gfile.FastGFile(model_fn, 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
t_input = tf.placeholder(np.float32, name='input')
imagenet_mean = 117.0
t_preprocessed = tf.expand_dims(t_input - imagenet_mean, 0)
tf.import_graph_def(graph_def, {'input': t_preprocessed})
I get this error:
graph_def.ParseFromString(f.read())
self.MergeFromString(serialized)
raise message_mod.DecodeError('Unexpected end-group tag.')
google.protobuf.message.DecodeError: Unexpected end-group tag.
The script expects a protocol buffer file, and I am not sure whether the script I am using to generate the inference model is giving me protocol buffer files or not.
Can someone please suggest what I am doing wrong, or a better way to achieve this? I simply want to convert checkpoint files generated by TensorFlow into a protocol buffer.
Thanks
The link to the script you ran is broken, but in any case the recommended thing is not to try to generate an inference model from a checkpoint, but rather to embed code at the end of your training program that will emit a "SavedModel" export (which is not the same thing as a checkpoint).
Please see [1], and in particular the heading "Building a Saved Model". Note that a SavedModel consists of multiple files, one of which is indeed a protocol buffer (which I hope directly answers your question); the others are variable files and (optional) asset files. A rough sketch of such an export follows the link below.
[1] https://www.tensorflow.org/programmers_guide/saved_model
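As a rough sketch only (using the TF 1.x API the question is based on; input_tensor, output_tensor and the export directory are placeholders, not names from the question), such an export could look like:
import tensorflow as tf

# ... at the end of training, with `sess`, `input_tensor`, `output_tensor` in scope ...
builder = tf.saved_model.builder.SavedModelBuilder("./saved_model_export")
builder.add_meta_graph_and_variables(
    sess,
    [tf.saved_model.tag_constants.SERVING],
    signature_def_map={
        "serving_default": tf.saved_model.signature_def_utils.predict_signature_def(
            inputs={"input": input_tensor},
            outputs={"output": output_tensor},
        )
    },
)
builder.save()  # writes saved_model.pb (the protocol buffer) plus a variables/ directory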

Retrieve sequence alignment score produced by EMBOSS in Biopython

I'm trying to retrieve the alignment score of two sequences compared using EMBOSS in Biopython. The only way I know is to retrieve it from the output text file produced by EMBOSS. The problem is that there will be hundreds of these files to iterate over. Is there an easier/cleaner method to retrieve the alignment score, without resorting to that? This is the main part of the code that I'm using.
from Bio.Emboss.Applications import StretcherCommandline

needle_cline = StretcherCommandline(asequence=..., bsequence=..., gapopen=..., gapextend=..., outfile=...)
stdout, stderr = needle_cline()
I had the same problem, and after some time spent searching for a neat solution I put up the white flag.
However, to significantly speed up the processing of the output files I did two things:
1) I used Python's re module (regular expressions) to extract all the data needed; a sketch is shown after this list.
2) I created a ramdisk for the output files. The ramdisk allows all the data to be processed and exchanged in RAM (much faster than writing and reading the output files on a hard drive, and it also spares your HDD when processing a massive number of alignments).
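For illustration, a minimal sketch of the regex step in 1); the "# Score:" line is what EMBOSS pairwise tools such as stretcher typically write, so treat the exact pattern and the file name as assumptions to check against your own output:
import re

def alignment_score(output_text):
    """Extract the score from the text of an EMBOSS pairwise output file."""
    match = re.search(r"#\s*Score:\s*(-?\d+(?:\.\d+)?)", output_text)
    return float(match.group(1)) if match else None

with open("alignment.stretcher") as handle:  # hypothetical output file name
    print(alignment_score(handle.read()))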
I don't know if there is a Biopython parser specifically for your command's output. For Primer3CommandLine, for instance, there is Bio.Emboss.Primer3, which makes your life much easier with something like:
from Bio.Emboss import Primer3

inputFile = "./wherever/your/outputfileis.out"
with open(inputFile) as fileHandle:
    record = Primer3.parse(fileHandle)
    # XXX check that there is at least one record
    primers = next(record).primers
    numPrimers = len(primers)
    # You should have access to each primer; use a for loop
    # to check how to access the data you care about. For example:
I would also check http://biopython.org/wiki/SeqIO#Sequence_Input