How to convert ROI from/to QImage to/from cv::Mat? - opencv

I cannot properly convert and/or display an ROI using a QRect on a QImage and then create a cv::Mat image out of that QImage.
The problem is symmetric, i.e., I also cannot properly get the ROI by using a cv::Rect on a cv::Mat and creating a QImage out of the Mat. Surprisingly, everything works fine whenever the width and the height of the cv::Rect or the QRect are equal.
In what follows, my full-size image is the cv::Mat matImage. It is of type CV_8U and has a square size of 2048x2048.
int x = 614;
int y = 1156;
// buggy:
int width = 234;
int height = 278;
// working:
// int width = 400;
// int height = 400;
QRect ROI(x, y, width, height);
QImage imageInit(matImage.data, matImage.cols, matImage.rows, QImage::Format_Grayscale8);
QImage imageROI = imageInit.copy(ROI);
createNewImage(imageROI);
unsigned char* dataBuffer = imageROI.bits();
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
cv::namedWindow( "openCV imshow() from a cv::Mat image", cv::WINDOW_AUTOSIZE );
cv::imshow( "openCV imshow() from a cv::Mat image", tempImage);
The screenshot below illustrates the issue.
(Left) The full-size cv::Mat matImage.
(Middle) The expected result from the QImage and the QRect (which roughly corresponds to the green rectangle drawn by hand).
(Right) The messed-up result from the cv::Mat matImageROI.

By exploring other issues regarding conversion between cv::Mat and QImage, it seems the stride becomes "non-standard" for some ROI sizes: QImage pads each scanline to a multiple of 4 bytes, so for an 8-bit image whose width is not a multiple of 4, bytesPerLine() is larger than the width (which is why 400x400 works but 234x278 does not). For example, from this post, I found out one just needs to change cv::Mat::AUTO_STEP to imageROI.bytesPerLine() in
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
so I end up instead with:
cv::Mat matImageROI(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, imageROI.bytesPerLine());
For the other way round, i.e., creating a QImage from an ROI of a cv::Mat, one would use the cv::Mat::step property.
For example:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, matImageROI.step, QImage::Format_Grayscale8);
instead of:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, QImage::Format_Grayscale8);
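Putting both directions together, here is a minimal sketch of the corrected round trip (my own summary, assuming the 8-bit grayscale matImage from above; note that both constructors wrap the underlying buffer without copying, so the source must outlive the view):
// QImage -> cv::Mat: pass bytesPerLine() so padded scanlines are handled
QImage imageROI = imageInit.copy(ROI);
cv::Mat matROI(imageROI.height(), imageROI.width(), CV_8UC1,
               imageROI.bits(), imageROI.bytesPerLine());
// cv::Mat -> QImage: pass step so non-continuous rows are handled
QImage back(matROI.data, matROI.cols, matROI.rows,
            static_cast<int>(matROI.step), QImage::Format_Grayscale8);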
Update
For odd ROI sizes, although this is not a problem when using either imshow() with a cv::Mat or a QImage in a QGraphicsScene, it becomes an issue when using OpenGL (with QOpenGLWidget). I guess the simplest workaround is just to constrain ROIs to have even sizes.
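If the OpenGL path is the only problem, an alternative to even-sized ROIs might be to tell OpenGL not to assume 4-byte row alignment before uploading the texture (this is an assumption about the texture-upload code, not something I verified in this setup):
// default GL_UNPACK_ALIGNMENT is 4, which breaks 8-bit rows of odd width
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);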

Related

Opencv imshow Y from YUV

I'm trying to extract and display the Y channel from a YUV-converted image. My code is as follows:
Mat src, src_resized, src_gray;
src = imread("11.jpg", 1);
resize(src, src_resized, Size(400, 320));
cvtColor(src_resized, src_resized, cv::COLOR_BGR2RGB);
/*
I've tried both with and without the conversion above
(mentioned as a bug here:
http://stackoverflow.com/questions/7954416/converting-yuv-into-bgr-or-rgb-in-opencv
for OpenCV 2.4.* versions - mine is 2.4.10)
*/
cvtColor(src_resized, src_gray, CV_RGB2YUV); // YCrCb
vector<Mat> yuv_planes(3);
split(src_gray, yuv_planes);
Mat g, fin_img;
g = Mat::zeros(Size(src_gray.cols, src_gray.rows), 0);
// same result with g = Mat::zeros(Size(src_gray.cols, src_gray.rows), CV_8UC1);
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(g);
channels.push_back(g);
merge(channels, fin_img);
imshow("Y", fin_img);
waitKey(0);
return 0;
As a result I was expecting a gray image showing the luminance.
Instead I get a blue/green/red image depending on whether
channels.push_back(yuv_planes[0]);
is in the first/second/third position, as shown here:
What am I missing? (I plan to use the luminance to sum rows/columns and extract the license plate later using the data obtained.)
The problem was that I was displaying the luminance in only one channel instead of filling all three channels with it.
If anyone else hits the same problem just change
Mat g, fin_img;
g = Mat::zeros(Size(src_gray.cols, src_gray.rows),0);
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(g);
channels.push_back(g);
to this, which fills all three channels with the desired plane:
Mat fin_img;
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
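As a side note, if the goal is only to display the Y plane, imshow() renders a single-channel 8-bit Mat as grayscale directly, so merging three planes is optional:
imshow("Y", yuv_planes[0]); // single-channel Mats are displayed as grayscale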

Inpainting depth map, still a black image border

I'm trying to inpaint missing depth values of a depth map using the method described here. To summarize the method:
1. Downsize the depth map to 20% of the original size
2. Inpaint all black (unknown) pixels in the downsized image
3. Upsize to the original size
4. Replace all black pixels in the original image with the corresponding values from the upsized image
Super simple and everything works well. A video showing the results can be found here.
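For reference, a minimal sketch of the four steps above (my reconstruction from the summary, not the exact code from the linked method; depth stands for the 8-bit depth map where 0 marks unknown pixels):
cv::Mat small, smallInpainted, upsized, result;
cv::resize(depth, small, cv::Size(), 0.2, 0.2, cv::INTER_NEAREST);         // 1. downsize to 20%
cv::inpaint(small, small == 0, smallInpainted, 5.0, cv::INPAINT_TELEA);    // 2. inpaint unknown pixels
cv::resize(smallInpainted, upsized, depth.size(), 0, 0, cv::INTER_LINEAR); // 3. upsize to the original size
result = depth.clone();
upsized.copyTo(result, depth == 0);                                        // 4. replace only the unknown pixels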
However, I wonder why the left and top image borders are still black although they should be inpainted (this can be seen in the video). My first thought was that this could have something to do with the border interpolation (black pixels outside the image boundary), but then I would expect this to also happen on the other image borders. My second thought was that it is something specific to the inpainting method used (the method by Alexandru Telea), but changing it to the Navier-Stokes based method didn't change the results.
Can somebody explain to me why this happens and how to tell OpenCV to also inpaint these regions, if possible?
Thanks in advance.
After being asked by @theodore in http://answers.opencv.org/question/86569/inpainting-depth-map-still-black-image-borders/?comment=86587#comment-86587 I used the sample images to test the inpainting behaviour. It looks like cv::inpaint does not handle the image border correctly, so adding a border with cv::copyMakeBorder first can be used as a workaround.
Here's the extended version with some kind of unit testing:
int main(int argc, char* argv[])
{
    cv::Mat input = cv::imread("C:/StackOverflow/Input/depthInpaint.png");
    cv::Mat img;
    cv::cvtColor(input, img, CV_BGR2GRAY);
    cv::Mat inpainted;
    const unsigned char noDepth = 0; // change to 255 if "no depth" uses the max value, or use a mask image
    //cv::inpaint(img, (img == noDepth), inpainted, 5.0, cv::INPAINT_TELEA); // img is the 8-bit input image (depth map with blank spots)
    double inpaintRadius = 5;
    int makeBorder = 1;
    cv::Mat borderimg;
    cv::copyMakeBorder(img, borderimg, makeBorder, makeBorder, makeBorder, makeBorder, cv::BORDER_REPLICATE);
    cv::imshow("border", borderimg);
    cv::inpaint(borderimg, (borderimg == noDepth), inpainted, inpaintRadius, cv::INPAINT_TELEA);
    // extract the original area without the border:
    cv::Mat originalEmbedded = borderimg(cv::Rect(makeBorder, makeBorder, img.cols, img.rows));
    cv::Mat inpaintedEmbedded = inpainted(cv::Rect(makeBorder, makeBorder, img.cols, img.rows));
    // verify that the embedded region matches the original image
    cv::Mat diffImage;
    cv::absdiff(img, originalEmbedded, diffImage);
    cv::imshow("embedding correct?", diffImage > 0);
    cv::Mat mask = img == noDepth;
    cv::imshow("mask", mask);
    cv::imshow("input", input);
    cv::imshow("inpainted", inpainted);
    cv::imshow("inpainted from border", inpaintedEmbedded);
    cv::waitKey(0);
    return 0;
}
Here's the reduced version if you believe it to be correct:
int main(int argc, char* argv[])
{
    cv::Mat input = cv::imread("C:/StackOverflow/Input/depthInpaint.png");
    cv::Mat img;
    cv::cvtColor(input, img, CV_BGR2GRAY);
    cv::Mat inpainted;
    const unsigned char noDepth = 0; // change to 255 if "no depth" uses the max value, or use a mask image
    double inpaintRadius = 5;
    int makeBorderSize = 1;
    cv::Mat borderimg;
    cv::copyMakeBorder(img, borderimg, makeBorderSize, makeBorderSize, makeBorderSize, makeBorderSize, cv::BORDER_REPLICATE);
    cv::inpaint(borderimg, (borderimg == noDepth), inpainted, inpaintRadius, cv::INPAINT_TELEA);
    // extract the original area without the border:
    cv::Mat inpaintedEmbedded = inpainted(cv::Rect(makeBorderSize, makeBorderSize, img.cols, img.rows));
    cv::imshow("input", input);
    cv::imshow("inpainted from border", inpaintedEmbedded);
    cv::waitKey(0);
    return 0;
}
Here's the input:
Here's the input with a border (border size 5, to show the effect better):
Here's the output:

MedianBlur issues with OpenCV

I'm using OpenCV for Windows Phone 8.1 (Windows Runtime) in C++ with the release from MS Open Tech: https://github.com/MSOpenTech/opencv.
This version is based on OpenCV 3, and the medianBlur function seems to have a problem.
When I use a square image, medianBlur works perfectly, but when I use a rectangular image, medianBlur produces strange effects...
Here's the result: http://fff.azurewebsites.net/opencv.png
The code that I use:
// get the pixels from the WriteableBitmap
byte* pPixels = GetPointerToPixelData(m_bitmap->PixelBuffer);
int height = m_bitmap->PixelHeight;
int width = m_bitmap->PixelWidth;
// create a matrix the size and type of the image
cv::Mat mat(width, height, CV_8UC4);
memcpy(mat.data, pPixels, 4 * height*width);
cv::Mat timg(mat);
cv::medianBlur(mat, timg, 9);
cv::Mat gray0(timg.size(), CV_8U), gray;
// copy processed image back to the WriteableBitmap
memcpy(pPixels, timg.data, 4 * height*width);
// update the WriteableBitmap
m_bitmap->Invalidate();
I didn't find where the problem is... Is it a bug in my code? A bug in OpenCV 3? Or in the code from MS Open Tech?
Thanks for your help!
You are inverting the height and the width when creating the cv::Mat.
OpenCV doc on Mat
According to the doc, you should create it like this:
Mat img(height, width, CV_8UC3);
When you use cv::Size, however, you give the width first:
Mat img(Size(width,height),CV_8UC3);
It is a bit confusing, but there is certainly a reason.
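A quick sanity check of the two conventions (a minimal sketch, not part of the original answer):
cv::Mat a(480, 640, CV_8UC4);           // rows, cols
cv::Mat b(cv::Size(640, 480), CV_8UC4); // width, height
CV_Assert(a.rows == b.rows && a.cols == b.cols); // both are 480 rows x 640 cols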
Try this code; it changes a few lines:
// get the pixels from the WriteableBitmap
byte* pPixels = GetPointerToPixelData(m_bitmap->PixelBuffer);
int height = m_bitmap->PixelHeight;
int width = m_bitmap->PixelWidth;
// create a matrix with the size and type of the image (height and width no longer swapped;
// the WriteableBitmap buffer is BGRA, i.e. 4 bytes per pixel)
cv::Mat mat(cv::Size(width, height), CV_8UC4);
memcpy(mat.data, pPixels, 4 * height * width);
cv::Mat timg(mat.size(), CV_8UC4);
cv::medianBlur(mat, timg, 9);
// copy the processed image back to the WriteableBitmap
memcpy(pPixels, timg.data, 4 * height * width);
// update the WriteableBitmap
m_bitmap->Invalidate();

Image created does not display correctly

Hi, I am using the following code to create an image:
Mat im(584, 565, CV_8UC1, imgg);
imwrite("Output_Image.tif", im);
but the problem is that when I display the image "Output_Image.tif", the right-hand side portion is overlapped onto the left-hand side portion. I am not able to understand what is happening. Please explain, as I am a beginner with OpenCV. Thanks.
Are you sure that the image is CV_8UC1 (colorspace grayscale)?
It looks like the image is a BGR image (blue, green, red), and when you use a CV_8UC3 image as a CV_8UC1 image, you get exactly this kind of effect.
Change this line:
Mat im(584, 565, CV_8UC1, imgg);
To this line:
Mat im(584, 565, CV_8UC3, imgg);
EDIT for mixChannels() (see comments):
C++: void mixChannels(const Mat* src, size_t nsrcs, Mat* dst, size_t ndsts, const int* fromTo, size_t npairs)
The Mat* dst is an array of Mats which must be allocated with the right size and depth before calling mixChannels. In that array each of your cv::Mats will really have 1 channel.
Code Example:
cv::Mat rgb(500, 500, CV_8UC3, cv::Scalar(255,255,255));
cv::Mat redChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat greenChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat blueChannel(rgb.rows, rgb.cols, CV_8UC1);
cv::Mat outArray[] = { redChannel, greenChannel, blueChannel };
int from_to[] = { 0,0, 1,1, 2,2 }; // copy input channel i to output plane i (valid indices are 0..2 here)
cv::mixChannels(&rgb, 1, outArray, 3, from_to, 3);
It's a little bit more complex than the split() function, so here's a link to the documentation; the from_to array in particular is hard to understand at first.
Link to the documentation:
http://docs.opencv.org/modules/core/doc/operations_on_arrays.html#mixchannels
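For comparison, the same three-way extraction with split(), which allocates the single-channel planes itself (a minimal sketch reusing the rgb Mat from above):
std::vector<cv::Mat> planes;
cv::split(rgb, planes); // planes[0], planes[1], planes[2] are 1-channel copies of the three channels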

How to feed output of GPUImage rawDataOutput to OpenCV findContours function

Currently I have the following:
[cameraFeed] -> [gaussianBlur] -> [sobel] -> [luminanceThreshold] -> [rawDataOutput]
The rawDataOutput I would like to pass to the OpenCV findContours function. Unfortunately, I can't figure out the right way to do this. The following is the callback block that gets the rawDataOutput and should pass it to the OpenCV function, but it does not work. I am assuming there are a few things involved, such as converting the BGRA image given by GPUImage to CV_8UC1 (single channel), but I am not able to figure them out. Some help would be much appreciated, thanks!
// Callback on raw data output
__weak GPUImageRawDataOutput *weakOutput = rawDataOutput;
[rawDataOutput setNewFrameAvailableBlock:^{
    [weakOutput lockFramebufferForReading];
    GLubyte *outputBytes = [weakOutput rawBytesForImage];
    NSInteger bytesPerRow = [weakOutput bytesPerRowInOutput];
    // OpenCV stuff
    int width = videoSize.width;
    int height = videoSize.height;
    size_t step = bytesPerRow;
    cv::Mat mat(height, width, CV_8UC1, outputBytes, step); // outputBytes should be converted to type CV_8UC1
    cv::Mat workingCopy = mat.clone();
    // PASS mat TO OPENCV FUNCTION!!!
    [weakOutput unlockFramebufferAfterReading];
    // Update rawDataInput if we want to display the result
    [rawDataInput updateDataFromBytes:outputBytes size:videoSize];
    [rawDataInput processData];
}];
Try changing CV_8UC1 to CV_8UC4 and then converting to gray.
In your code, replace the lines
cv::Mat mat(height, width, CV_8UC1, outputBytes, step);
cv::Mat workingCopy = mat.clone();
with
cv::Mat mat(height, width, CV_8UC4, outputBytes, step);
cv::Mat workingCopy;
cv::cvtColor(mat, workingCopy, CV_RGBA2GRAY);
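From there, a minimal sketch of feeding the gray copy into findContours (my assumption about the intended use; since the pipeline already ends in luminanceThreshold the image should be near-binary, so the extra threshold is just a safety net, and note that findContours may modify its input):
cv::Mat binary;
cv::threshold(workingCopy, binary, 128, 255, cv::THRESH_BINARY);
std::vector<std::vector<cv::Point> > contours;
cv::findContours(binary, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);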
