Passing OpenCV Mat to OpenCL kernel for execution

I am trying to combine OpenCV with OpenCL: create an image buffer and pass it to the GPU.
I have an i.MX6 with a Vivante GPU core, which does not support the OCL module of OpenCV.
I am using OpenCV to read an image into a cv::Mat, and I then want to convert it to a float array and pass it to a kernel for execution.
But I'm getting a segmentation fault while running the CL program.
I am probably not converting the cv::Mat to cl_float2 correctly; please help.
Snippet of code:
/* Load image using OpenCV */
cv::Mat shore_mask = cv::imread("1.jpg", 1);     // 8-bit, 3-channel BGR (CV_8UC3)
cl_float2 *im = (cl_float2*)shore_mask.data;     // reinterprets uchar pixels as cl_float2
cl_int h = shore_mask.rows;
cl_int w = shore_mask.cols;
/* Transfer data to memory buffer */
ret = clEnqueueWriteBuffer(queue, inimg, CL_TRUE, 0, h*w*sizeof(cl_float2), im, 0, NULL, &ev_writ);
How do I convert the Mat to a float matrix and pass it to an OpenCL kernel for execution?

cv::Mat shore_mask = cv::imread("1.jpg", 1);
This gives you an 8-bit BGR image, i.e. width*height*3 bytes of uchar data. Now, you do
ret = clEnqueueWriteBuffer(queue, inimg, CL_TRUE, 0, h*w*sizeof(cl_float2), im, 0, NULL, &ev_writ);
The write size is width*height*sizeof(cl_float2) = width*height*8 bytes, and the uchar data is reinterpreted as floats, so you read past the end of the array and feed garbage to the kernel. Convert your cv::Mat to the proper type, or change the transfer size.
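For example, a minimal sketch of the conversion, assuming the kernel actually consumes one float per pixel (grayscale); if it really needs cl_float2 per pixel, build a CV_32FC2 Mat instead and keep sizeof(cl_float2). queue, inimg, and ev_writ are the names from the question:
/* Sketch: convert the 8-bit BGR Mat to packed 32-bit float data before the transfer */
cv::Mat shore_mask = cv::imread("1.jpg", 1);         // 8-bit BGR
cv::Mat gray, gray_f;
cv::cvtColor(shore_mask, gray, cv::COLOR_BGR2GRAY);  // 8-bit, single channel
gray.convertTo(gray_f, CV_32F, 1.0 / 255.0);         // 32-bit float, values in [0,1]
cl_int h = gray_f.rows;
cl_int w = gray_f.cols;
CV_Assert(gray_f.isContinuous());                    // freshly allocated Mats are continuous
ret = clEnqueueWriteBuffer(queue, inimg, CL_TRUE, 0,
                           h * w * sizeof(cl_float), // size now matches the element type
                           gray_f.ptr<float>(), 0, NULL, &ev_writ);
Now the transfer size, the element type, and the kernel's argument type all agree, which is what prevents the out-of-bounds read.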

Related

OpenCV C++ inverse Fourier transformation does not give same image

I have a BGR image and convert it to Lab channels.
I tried to check whether the IDFT of the DFT of the L-channel image matches the original.
// MARK: Split LAB Channel each
cv::Mat lab_resized_host_image;
cv::cvtColor(resized_host_image, lab_resized_host_image, cv::COLOR_BGR2Lab);
imshow("lab_resized_host_image", lab_resized_host_image);
cv::Mat channel_L_host_image, channel_A_host_image, channel_B_host_image;
std::vector<cv::Mat> channel_LAB_host_image(3);
cv::split(lab_resized_host_image, channel_LAB_host_image);
// MARK: DFT the channel_L host image.
channel_L_host_image = channel_LAB_host_image[0];
imshow("channel_L_host_image", channel_L_host_image);
cv::Mat padded_L;
int rows_L = getOptimalDFTSize(channel_L_host_image.rows);
int cols_L = getOptimalDFTSize(channel_L_host_image.cols);
copyMakeBorder(channel_L_host_image, padded_L, 0, rows_L - channel_L_host_image.rows, 0, cols_L - channel_L_host_image.cols, BORDER_CONSTANT, Scalar::all(0));
Mat planes_L[] = {Mat_<float>(padded_L), Mat::zeros(padded_L.size(), CV_32F)};
Mat complexI_L;
merge(planes_L, 2, complexI_L);
dft(complexI_L, complexI_L);
// MARK: iDFT Channel_L.
Mat complexI_channel_L = complexI_L;
Mat complexI_channel_L_idft;
cv::dft(complexI_L, complexI_channel_L_idft, cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT);
normalize(complexI_channel_L_idft, complexI_channel_L_idft, 0, 1, NORM_MINMAX);
imshow("complexI_channel_L_idft", complexI_channel_L_idft);
Each imshow gives me a different image... I think the normalization may be the error.
What is wrong? Help!
(images in the original post: original image, idft result)
OpenCV’s FFT is not normalized by default. One of the forward/backward transform pair must be normalized for the pair to reproduce the input values. Simply add cv::DFT_SCALE to the options:
cv::dft(complexI_mid_frequency_into_channel_A, iDFT_mid_frequency_into_channel_A, cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT|cv::DFT_SCALE);
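With the question's variable names, the call becomes:
cv::dft(complexI_L, complexI_channel_L_idft, cv::DFT_INVERSE|cv::DFT_REAL_OUTPUT|cv::DFT_SCALE);
After that, the min-max normalize can be replaced by a convertTo(..., CV_8U) for display, since the scaled inverse already reproduces the original 0..255 value range.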

Efficiently copy Matrix (such as opencv cv::Mat, realsense rs2::frame) to Eigen::Matrix

I am working on converting data from OpenCV cv::Mat or RealSense rs2::frame to Eigen::Matrix.
Is there an efficient way to do this conversion?
For example:
//For an OpenCV Mat:
cv::Mat cvMat = cv::Mat::zeros(100, 100, CV_8UC1);
unsigned char* cv_data = cvMat.data;   // .data is a member pointer, not a method
Eigen::Matrix<unsigned char, 100, 100> eigenMat;
unsigned char* eigen_data = eigenMat.data();
//use memcpy
memcpy(eigen_data, cv_data, 100*100*sizeof(unsigned char));
However, Eigen::Matrix has its own memory layout and alignment. Is it correct to use memcpy directly?
Or do I have to use a for loop to assign each element of cvMat to eigenMat?
Thanks!
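One pitfall worth noting here: cv::Mat stores pixels row-major, while Eigen::Matrix defaults to column-major, so a raw memcpy between the two effectively transposes the data even when the sizes match. A minimal sketch that sidesteps this by mapping the OpenCV buffer as a row-major Eigen type (no copy at all; the names are illustrative):
#include <opencv2/core/core.hpp>
#include <Eigen/Dense>

cv::Mat cvMat = cv::Mat::zeros(100, 100, CV_8UC1);
CV_Assert(cvMat.isContinuous());  // the map below assumes one contiguous block

// Zero-copy view of the OpenCV data, valid while cvMat is alive.
Eigen::Map<Eigen::Matrix<unsigned char, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor>>
    view(cvMat.data, cvMat.rows, cvMat.cols);

// If an owning copy is needed, assigning the map copies element-wise;
// memcpy is also safe here because both sides are now row-major.
Eigen::Matrix<unsigned char, Eigen::Dynamic, Eigen::Dynamic, Eigen::RowMajor> eigenMat = view;
OpenCV also ships cv::cv2eigen / cv::eigen2cv in opencv2/core/eigen.hpp, which handle the storage-order difference for you.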

OpenCV 2.4 : C API for converting cvMat to IplImage

I have a YUYV image buffer in a CvMat object (snippet shown below). I had to convert this CvMat object to an IplImage for color conversion.
CvMat cvmat = cvMat(480, 640, CV_8UC2, yuyv_buff);
I tried the options below to convert this CvMat object to an IplImage object (src: https://medium.com/@zixuan.wang/mat-cvmat-iplimage-2f9603b43909).
//cvGetImage()
CvMat M;
IplImage* img = cvCreateImageHeader(M.size(), M.depth(), M.channels());
cvGetImage(&M, img); //Deep Copy
//Or
CvMat M;
IplImage* img = cvGetImage(&M, cvCreateImageHeader(M.size(), M.depth(), M.channels()));
//cvConvert()
CvMat M;
IplImage* img = cvCreateImage(M.size(), M.depth(), M.channels());
cvConvert(&M, img); //Deep Copy
But nothing worked. cvGetImage() and cvConvert() expect a CvArr* as input; passing &cvmat to them throws an exception.
Is there any other way to convert a CvMat object to an IplImage object in OpenCV 2.4?
Note: I cannot use the C++ API or any other version of OpenCV. I am limited to OpenCV 2.4.
Edit 1: My objective is to convert this YUYV buffer to an RGB image object.
Instead of creating a CvMat, I was able to create an IplImage directly from the YUYV image buffer like below:
IplImage* frame = cvCreateImage(cvSize(640,480), IPL_DEPTH_8U, 2);
frame->imageData=(char*)yuyv_buffer;
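From there, the stated YUYV-to-RGB objective can be finished with cvCvtColor. A sketch, assuming the 2.4 build includes the packed-YUV conversion codes it normally ships (CV_YUV2BGR_YUYV; use CV_YUV2RGB_YUYV for RGB channel order):
/* Wrap the raw YUYV buffer (2 bytes per pixel) without copying */
IplImage* frame = cvCreateImageHeader(cvSize(640, 480), IPL_DEPTH_8U, 2);
cvSetData(frame, yuyv_buffer, 640 * 2);   /* step = bytes per row */

/* Convert packed YUYV to a 3-channel BGR image */
IplImage* bgr = cvCreateImage(cvSize(640, 480), IPL_DEPTH_8U, 3);
cvCvtColor(frame, bgr, CV_YUV2BGR_YUYV);

/* ... use bgr ... */
cvReleaseImage(&bgr);
cvReleaseImageHeader(&frame);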

MedianBlur issues with OpenCV

I'm using OpenCV for Windows Phone 8.1 (Windows Runtime) in C++ with the release from MS Open Tech: https://github.com/MSOpenTech/opencv.
This version is based on OpenCV 3, and the medianBlur function seems to have a problem.
When I use a square image, medianBlur works perfectly, but when I use a rectangular image, medianBlur produces strange effects...
Here is the result: http://fff.azurewebsites.net/opencv.png
The code that I use:
// get the pixels from the WriteableBitmap
byte* pPixels = GetPointerToPixelData(m_bitmap->PixelBuffer);
int height = m_bitmap->PixelHeight;
int width = m_bitmap->PixelWidth;
// create a matrix the size and type of the image
cv::Mat mat(width, height, CV_8UC4);
memcpy(mat.data, pPixels, 4 * height*width);
cv::Mat timg(mat);
cv::medianBlur(mat, timg, 9);
cv::Mat gray0(timg.size(), CV_8U), gray;
// copy processed image back to the WriteableBitmap
memcpy(pPixels, timg.data, 4 * height*width);
// update the WriteableBitmap
m_bitmap->Invalidate();
I didn't find where the problem is... Is it a bug in my code? A bug in OpenCV 3? In the code from MS Open Tech?
Thanks for your help!
You are inverting the height and the width when creating the cv::Mat.
OpenCV doc on Mat
According to the doc, you should create it like this:
Mat img(height, width, CV_8UC3);
When you use cv::Size, however, you give the width first:
Mat img(Size(width, height), CV_8UC3);
It is a bit confusing, but there is certainly a reason.
Try this code; a few lines are changed:
// get the pixels from the WriteableBitmap
byte* pPixels = GetPointerToPixelData(m_bitmap->PixelBuffer);
int height = m_bitmap->PixelHeight;
int width = m_bitmap->PixelWidth;
// create a matrix the size and type of the image (cv::Size takes width first)
cv::Mat mat(cv::Size(width, height), CV_8UC4);
memcpy(mat.data, pPixels, 4 * height * width);   // 4 bytes per BGRA pixel
cv::Mat timg(mat.size(), CV_8UC4);
cv::medianBlur(mat, timg, 9);
// copy processed image back to the WriteableBitmap
memcpy(pPixels, timg.data, 4 * height * width);
// update the WriteableBitmap
m_bitmap->Invalidate();

How to feed output of GPUImage rawDataOutput to OpenCV findContours function

Currently I have the following:
[cameraFeed] -> [gaussianBlur] -> [sobel] -> [luminanceThreshold] -> [rawDataOutput]
I would like to pass the rawDataOutput to the OpenCV findContours function. Unfortunately, I can't figure out the right way to do this. The following is the callback block that gets the rawDataOutput and should pass it to the OpenCV function, but it does not work. I am assuming there are a few steps involved, such as converting the BGRA image given by GPUImage to CV_8UC1 (single channel), but I am not able to figure them out. Some help would be much appreciated, thanks!
// Callback on raw data output
__weak GPUImageRawDataOutput *weakOutput = rawDataOutput;
[rawDataOutput setNewFrameAvailableBlock:^{
    [weakOutput lockFramebufferForReading];
    GLubyte *outputBytes = [weakOutput rawBytesForImage];
    NSInteger bytesPerRow = [weakOutput bytesPerRowInOutput];

    // OpenCV stuff
    int width = videoSize.width;
    int height = videoSize.height;
    size_t step = bytesPerRow;
    cv::Mat mat(height, width, CV_8UC1, outputBytes, step); // outputBytes should be converted to type CV_8UC1
    cv::Mat workingCopy = mat.clone();
    // PASS mat TO OPENCV FUNCTION!!!

    [weakOutput unlockFramebufferAfterReading];

    // Update rawDataInput if we want to display the result
    [rawDataInput updateDataFromBytes:outputBytes size:videoSize];
    [rawDataInput processData];
}];
Try changing CV_8UC1 to CV_8UC4, then convert to gray.
In the code, replace the lines
cv::Mat mat(height, width, CV_8UC1, outputBytes, step);
cv::Mat workingCopy = mat.clone();
with
cv::Mat mat(height, width, CV_8UC4, outputBytes, step);
cv::Mat workingCopy;
cv::cvtColor(mat, workingCopy, CV_RGBA2GRAY);
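From there, the gray copy can go straight into the function the question asks about; a minimal sketch (the luminanceThreshold filter already binarizes the image, so no extra threshold is needed; names follow the snippet above):
// workingCopy is the single-channel image produced above.
std::vector<std::vector<cv::Point>> contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(workingCopy, contours, hierarchy,
                 CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// contours now holds one point list per detected outline.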
