Relationship between .cpp and .cu of a layer in CAFFE

I found that a layer can be implemented in a CPU (.cpp) or a GPU (.cu) version. For example, the batch normalization layer has two source files: batch_norm_layer.cpp and batch_norm_layer.cu.
When I compile the code, I select CUDA mode by uncommenting USE_CUDNN := 1. Then, when I use the batch normalization layer, Caffe will perform everything in the .cu file without using anything from the .cpp file. Am I right?
I am asking because I am not sure my CPU implementation is correct, but I am sure the GPU implementation is. Hence, I don't know whether a bug in the CPU code can affect my GPU code or not. Thanks

LayerSetUp is performed in the .cpp code, so you need to get that part right. Otherwise, you are right (given you selected solver_mode: GPU): your .cu code should run for the forward and backward passes.
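To make the split concrete, here is a small self-contained sketch (an illustrative analogue, not actual Caffe code; ToyLayer and Mode are made-up names) of how the shared setup lives in the .cpp file while the CPU and GPU forward passes are separate methods, with the solver mode deciding which one runs:

#include <iostream>
#include <vector>

// Illustrative analogue of Caffe's layer dispatch (not actual Caffe code).
// In Caffe, LayerSetUp/Reshape live in the .cpp file and always run;
// Forward_gpu/Backward_gpu live in the .cu file and are used when
// solver_mode is GPU.
enum class Mode { CPU, GPU };

class ToyLayer {
 public:
  // Shared setup: runs in both CPU and GPU mode (the .cpp part in Caffe).
  void LayerSetUp(const std::vector<float>& params) { weights_ = params; }

  void Forward(Mode mode, const std::vector<float>& bottom) {
    if (mode == Mode::GPU) {
      Forward_gpu(bottom);  // in Caffe this lives in the .cu file
    } else {
      Forward_cpu(bottom);  // in Caffe this lives in the .cpp file
    }
  }

 private:
  void Forward_cpu(const std::vector<float>& bottom) {
    std::cout << "CPU forward on " << bottom.size() << " values\n";
  }
  void Forward_gpu(const std::vector<float>& bottom) {
    std::cout << "GPU forward on " << bottom.size() << " values\n";
  }
  std::vector<float> weights_;
};

int main() {
  ToyLayer layer;
  layer.LayerSetUp({1.0f, 2.0f});           // shared .cpp-side setup
  layer.Forward(Mode::GPU, {0.5f, 0.25f});  // GPU path, like solver_mode: GPU
}

So a bug in Forward_cpu/Backward_cpu will not change what the GPU path computes, but a bug in LayerSetUp (or Reshape) affects both.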

Related

Use .tflite with iOS and GPU

I have created a new tflite model based on MobilenetV2. It works well without quantization using CPU on iOS. I should say that TensorFlow team did a great job, many thanks.
Unfortunately there is a problem with latency. I use an iPhone 5s to test my model, so I have the following results for CPU:
500ms for MobilenetV2 with 224*224 input image.
250-300ms for MobilenetV2 with 160*160 input image.
I used the following pod: 'TensorFlowLite', '~> 1.13.1'
This is not fast enough, so I read the TF documentation on optimization (post-training quantization). I suppose I need to use Float16 or UInt8 quantization and the GPU Delegate (see https://www.tensorflow.org/lite/performance/post_training_quantization).
I used Tensorflow v2.1.0 to train and quantize my models.
Float16 quantization of weights (I used a MobilenetV2 model after Float16 quantization)
https://github.com/tensorflow/examples/tree/master/lite/examples/image_segmentation/ios
pod 'TensorFlowLiteSwift', '0.0.1-nightly'
No errors, but the model doesn't work
pod 'TensorFlowLiteSwift', '2.1.0'
2020-05-01 21:36:13.578369+0300 TFL Segmentation[6367:330410] Initialized TensorFlow Lite runtime.
2020-05-01 21:36:20.877393+0300 TFL Segmentation[6367:330397] Execution of the command buffer was aborted due to an error during execution. Caused GPU Hang Error (IOAF code 3)
Full integer quantization of weights and activations
pod 'TensorFlowLiteGpuExperimental'
Code sample: https://github.com/makeml-app/MakeML-Nails/tree/master/Segmentation%20Nails
I used a MobilenetV2 model after uint8 quantization.
GpuDelegateOptions options;
options.allow_precision_loss = true;
options.wait_type = GpuDelegateOptions::WaitType::kActive;
//delegate = NewGpuDelegate(nullptr);  // default options
delegate = NewGpuDelegate(&options);
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    // the delegate could not be applied to the graph
}
Segmentation Live[6411:331887] [DYMTLInitPlatform] platform initialization successful
Loaded model 1
resolved reporter
Didn't find op for builtin opcode 'PAD' version '2'
Is it possible to use a quantized MobilenetV2 model on iOS somehow? Hopefully I made some mistake :) and it is possible.
Best regards,
Dmitriy
This is a link to the GitHub issue with answers: https://github.com/tensorflow/tensorflow/issues/39101
Sorry for the outdated documentation - the GPU delegate should be included in TensorFlowLiteSwift 2.1.0. However, it looks like you're using the C API, so depending on TensorFlowLiteC would be sufficient.
MobileNetV2 does work with the TFLite runtime on iOS, and if I recall correctly it doesn't have a PAD op. Can you attach your model file? With the information provided it's a bit hard to see what's causing the error. As a sanity check, you can get quantized/non-quantized versions of MobileNetV2 from here: https://www.tensorflow.org/lite/guide/hosted_models
For the int8 quantized model - AFAIK the GPU delegate only works for FP32 and (possibly) FP16 inputs.
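For reference, a minimal sketch of attaching the Metal GPU delegate through the C API that ships with TensorFlowLiteC (names follow the metal_delegate.h header in the TFLite 2.x sources; treat the exact option fields and enum values as assumptions and check the header in the pod you actually depend on):

#include "tensorflow/lite/delegates/gpu/metal_delegate.h"
#include "tensorflow/lite/interpreter.h"

// Hedged sketch: create the Metal GPU delegate and attach it to an
// existing tflite::Interpreter. Field and enum names are taken from
// metal_delegate.h in TFLite 2.x and may differ between versions.
bool AttachGpuDelegate(tflite::Interpreter* interpreter) {
  TFLGpuDelegateOptions options = {};                 // zero-initialize all fields
  options.allow_precision_loss = true;                // allow FP16 arithmetic
  options.wait_type = TFLGpuDelegateWaitTypeActive;   // busy-wait for completion
  TfLiteDelegate* delegate = TFLGpuDelegateCreate(&options);
  if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    TFLGpuDelegateDelete(delegate);                   // clean up, fall back to CPU
    return false;
  }
  return true;
}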

The implementation in source code of Backpropagation in TensorFlow (Conv2DBackpropFilter and Conv2DBackpropInput)

Since the two operations Conv2DBackpropFilter and Conv2DBackpropInput account for most of the execution time in lots of applications (AlexNet/VGG/GAN/Inception, etc.), I am analyzing the complexity of these two (back-propagation) operations in TensorFlow, and I found out that there are three implementation versions (custom, fast and slow) for Conv2DBackpropFilter (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/conv_grad_filter_ops.cc) and Conv2DBackpropInput (https://github.com/tensorflow/tensorflow/blob/master/tensorflow/core/kernels/conv_grad_input_ops.cc). When I profile, all computations are dispatched to the "custom" version instead of "fast" or "slow", which directly call the Eigen function SpatialConvolutionBackwardInput.
The issue is:
Conv2DBackpropFilter uses Eigen's "TensorMap.contract" to do the tensor contraction and Conv2DBackpropInput uses Eigen's "MatrixMap.transpose" to do the matrix transposition in the Compute() function. Besides these two functions, I didn't see any convolutional operations, which are theoretically needed for back-propagation. Besides convolutions, what else would run inside these two operations during back-propagation? Does anyone know how to analyze the computational complexity of the "back propagation" operation in TensorFlow?
I am looking for any advice/suggestions. Thank you!
In addition to the transposition and contraction, the gradient op for the filter and the gradient op for the input must transform their input using Im2Col and Col2Im respectively. Approximately speaking, these transformations enable the convolution operation to be implemented using tensor contraction. For more information, see the CS231n page on Convolutional Networks (specifically, the paragraphs titled "Implementation as Matrix Multiplication" and "Backpropagation").
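To make "implementation as matrix multiplication" concrete, here is a small illustrative sketch (not the TensorFlow kernel code; im2col and conv_as_gemm are made-up names) of im2col turning a single-channel, stride-1, unpadded convolution into one GEMM. The gradient ops reuse the same machinery: the filter gradient is a GEMM of the output gradient against the im2col matrix, and the input gradient goes through a transposed GEMM followed by col2im.

#include <vector>
#include <cstddef>

// Unroll each K x K patch of an H x W input into one column of a
// (K*K) x (H_out*W_out) matrix (stride 1, no padding).
std::vector<float> im2col(const std::vector<float>& in, int H, int W, int K) {
    const int H_out = H - K + 1, W_out = W - K + 1;
    std::vector<float> cols(static_cast<size_t>(K) * K * H_out * W_out);
    for (int ky = 0; ky < K; ++ky)
        for (int kx = 0; kx < K; ++kx)
            for (int y = 0; y < H_out; ++y)
                for (int x = 0; x < W_out; ++x)
                    cols[((ky * K + kx) * H_out + y) * W_out + x] =
                        in[(y + ky) * W + (x + kx)];
    return cols;
}

// Convolution as a matrix multiplication: a (1 x K*K) filter row times the
// (K*K x H_out*W_out) column matrix gives the 1 x (H_out*W_out) output.
std::vector<float> conv_as_gemm(const std::vector<float>& filter,  // K*K weights
                                const std::vector<float>& cols,
                                int K, int out_size) {
    std::vector<float> out(out_size, 0.0f);
    for (int i = 0; i < out_size; ++i)
        for (int k = 0; k < K * K; ++k)
            out[i] += filter[k] * cols[k * out_size + i];
    return out;
}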
mrry, I got it. It means that Conv2D, Conv2DBackpropFilter and Conv2DBackpropInput all work the same way, implementing the convolution as a GEMM via Im2Col/Col2Im. Another issue is that when I profile a GAN in TensorFlow, the execution time of Conv2DBackpropInput and Conv2DBackpropFilter is around 4-6 times longer than that of Conv2D with the same input size. Why?

How does one calculate the GPU memory required to run a model in TensorFlow?

Is there a straightforward way to find the GPU memory consumed by, say, an inception-resnet-v2 model that is initialized in TensorFlow? This includes the memory required for inference and for backprop.
You can explicitly calculate the memory needed to store parameters, but I am afraid it would be difficult to compute the size of all buffers needed for training. Probably, a cleverer way would be to make TF do it for you. Set the gpu_options.allow_growth config option to True and see how much it consumes. Another option is to try smaller values of gpu_options.per_process_gpu_memory_fraction until it fails with out of memory.
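As a rough worked example of the parameter part (assuming Inception-ResNet-v2's commonly quoted figure of roughly 55M parameters): 55M float32 weights take about 55M * 4 bytes ≈ 220 MB, and a training step typically needs several times that again for gradients, optimizer slots and stored activations, which is the part that is hard to compute by hand.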
Since using gpu_options.allow_growth and gpu_options.per_process_gpu_memory_fraction for model size estimation is currently a trial-and-error and tedious solution, I suggest using tf.RunMetadata() in combination with TensorBoard.
Example:
run_options = tf.RunOptions(trace_level=tf.RunOptions.FULL_TRACE)  # trace memory and timing
run_metadata = tf.RunMetadata()
sess.run(train_step, feed_dict, options=run_options, run_metadata=run_metadata)  # one traced training step
train_writer.add_run_metadata(run_metadata, 'step%d' % i)  # makes node stats visible in TensorBoard
Run your model and TensorBoard, navigate to the desired part of your graph, and read the node statistics.
Source: https://www.tensorflow.org/get_started/graph_viz

Error while using opencv_train cascade

I am trying to create my own Haar cascade classifier for hand gesture recognition. After generating the sample images [positive and negative] and generating the .vec file, when I try to execute the opencv_traincascade executable, I get the following error:
"Train dataset for temp stage can not be filled. Branch training terminated."
Can anyone help me in this regard?
Thanks in advance
Try the opencv_haartraining tool.
You might want to try aligning the positive and negative samples to the same size - see the linked reference below.
Code-wise, it looks like the error is only thrown when attempting to get the negative and positive images, so you might also want to make sure you are telling the classifier executable the correct number of positive and negative training samples.
http://nmarkou.blogspot.com/2012/01/using-train-cascades.html
Check the number of negative samples vs. the number of entries you have for them in the .txt file.
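For reference, a typical invocation looks roughly like this (paths and counts here are made up for illustration; a common rule of thumb is to pass a -numPos somewhat smaller than the number of entries in the .vec file, since some positives are consumed at each stage):

opencv_traincascade -data classifier/ -vec positives.vec -bg negatives.txt -numPos 900 -numNeg 500 -numStages 15 -w 24 -h 24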

vDSP: Do the FFT functions include windowing?

I am working on implementing an algorithm using vDSP.
1) take FFT
2) take log of square of absolute value (can be done with lookup table)
3) take another FFT
4) take absolute value
I'm not sure if it is up to me to throw the incoming data through a windowing function before I run the FFT on it.
vDSP_fft_zrip(setupReal, &A, stride, log2n, direction);
That is my FFT function.
Do I need to throw the data through vDSP_hamm_window(...) first?
The iOS Accelerate library function vDSP_fft_zrip() does not include applying a window function (unless you count the implied rectangular window due to the finite length parameter).
So you need to apply your chosen window function (there are many different ones) first.
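A minimal sketch of doing that with vDSP, assuming a frame of n real samples, an FFTSetup already created for log2n, and a DSPSplitComplex whose realp/imagp buffers each hold n/2 floats (all buffer management omitted; the windowed_fft name is made up):

#include <Accelerate/Accelerate.h>

// Hedged sketch: window a real frame with a Hamming window, then run the
// in-place real FFT. Assumes n <= 4096 and that setup was created with
// vDSP_create_fftsetup(log2n, kFFTRadix2).
void windowed_fft(float *samples, vDSP_Length n, vDSP_Length log2n,
                  FFTSetup setup, DSPSplitComplex *split) {
    float window[4096];
    vDSP_hamm_window(window, n, 0);                    // build the Hamming window
    vDSP_vmul(samples, 1, window, 1, samples, 1, n);   // apply it in place

    // Pack the real signal into split-complex form and run the real FFT.
    vDSP_ctoz((DSPComplex *)samples, 2, split, 1, n / 2);
    vDSP_fft_zrip(setup, split, 1, log2n, FFT_FORWARD);
}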
It sounds like you're doing cepstral analysis and yes, you do need a window function prior to the first FFT. I would suggest a simple Hann or Hamming window.
I don't have any experience with your particular library, but in every other FFT library I know of it's up to you to window the data first. If nothing else, the library can't know what window you wish to use, and sometimes you don't want to use a window (if you're using the FFT for overlap-add filtering, or if you know the signal is exactly periodic in the transform block).
Also, just offhand, it seems like if you're doing 2 FFTs, the overhead of calling a logarithm function is relatively minor.
