OpenCV imshow Y from YUV

I'm trying to extract and display the Y channel from a YUV-converted image.
My code is as follows:
Mat src, src_resized, src_gray;
src = imread("11.jpg", 1);
resize(src, src_resized, Size(400, 320));
cvtColor(src_resized, src_resized, cv::COLOR_BGR2RGB);
/*
I've tried both with and without the conversion above
(mentioned as a bug here:
http://stackoverflow.com/questions/7954416/converting-yuv-into-bgr-or-rgb-in-opencv
for OpenCV 2.4.* versions - mine is 2.4.10)
*/
cvtColor(src_resized, src_gray, CV_RGB2YUV); //YCrCb
vector<Mat> yuv_planes(3);
split(src_gray,yuv_planes);
Mat g, fin_img;
g = Mat::zeros(Size(src_gray.cols, src_gray.rows),0);
// same result with g = Mat::zeros(Size(src_gray.cols, src_gray.rows), CV_8UC1);
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(g);
channels.push_back(g);
merge(channels, fin_img);
imshow("Y ", fin_img);
waitKey(0);
return 0;
As a result I was expecting a gray image showing the luminance.
Instead I get a B/G/R-tinted image depending on the position (first/second/third) of
channels.push_back(yuv_planes[0]);
What am I missing? (I plan to use the luminance to compute row/column sums and extract the license plate later using the data obtained.)

The problem was that the luminance was placed in only one channel instead of filling all channels with it.
If anyone else hits the same problem, just change
Mat g, fin_img;
g = Mat::zeros(Size(src_gray.cols, src_gray.rows),0);
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(g);
channels.push_back(g);
to
(fill all channels with the desired channel)
Mat fin_img;
vector<Mat> channels;
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
channels.push_back(yuv_planes[0]);
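Alternatively, since imshow renders a single-channel Mat as a grayscale image, the merge step can be skipped entirely and the Y plane displayed directly:
imshow("Y", yuv_planes[0]); // a single-channel Mat is displayed as grayscale
waitKey(0);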

Related

How to convert any image to CV_32FC1

I want to convert any image I use as input to CV_32FC1 type. My code is below:
char fname[MAX_PATH];
while (openFileDlg(fname))
{
Mat img = imread(fname, CV_LOAD_IMAGE_UNCHANGED);
Mat src(img.rows, img.cols, CV_32FC1);
Mat dst(img.rows, img.cols, CV_32FC1);
if (img.type() == 0) // CV_8UC1, i.e. an 8-bit single-channel image
{
img.convertTo(img, CV_32FC1, 1.0f / 255.0f);
src = img.clone();
}
else
{
img.convertTo(img, CV_32FC1);
}
std::cout << img.type() << ' ' << src.type();
}
For grayscale images it works, but when I use a color image the conversion doesn't work. For example: for CV_32FC1, type() returns 5. When I load a color image it gives me the value 16, and after conversion it is 21. Any ideas?
You see inconsistent results for gray and RGB images due to the difference in channel counts in the two cases. 32FC1 can be broken down as:
32F - 32-bit floating-point value
C1 - single channel
But RGB or BGRA images have 3 and 4 channels respectively, so we can't use C1 for them; we need 32FC3 for a 3-channel image and 32FC4 for a 4-channel image.
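As a sketch of one way to guarantee a CV_32FC1 result for any input (assuming 1-, 3-, or 4-channel 8-bit inputs and the 2.4-era constants used above; the function name is mine), collapse color images to one channel first, then convert:
cv::Mat toFloatGray(const cv::Mat& img)
{
    cv::Mat gray;
    if (img.channels() == 4)
        cv::cvtColor(img, gray, CV_BGRA2GRAY); // 4 channels -> 1
    else if (img.channels() == 3)
        cv::cvtColor(img, gray, CV_BGR2GRAY);  // 3 channels -> 1
    else
        gray = img;                            // already single channel
    cv::Mat dst;
    gray.convertTo(dst, CV_32FC1, 1.0 / 255.0); // scale 8-bit values into [0, 1]
    return dst;                                 // dst.type() == 5 == CV_32FC1
}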

(opencv) imread with CV_LOAD_IMAGE_GRAYSCALE yields a 4 channels Mat

The following code reads an image from a file into a cv::Mat object.
#include <string>
#include <opencv2/opencv.hpp>
cv::Mat load_image(std::string img_path)
{
cv::Mat img = cv::imread(img_path, CV_LOAD_IMAGE_GRAYSCALE);
cv::Scalar intensity = img.at<uchar>(0, 0);
std::cout << intensity << std::endl;
return img;
}
I would expect the cv::Mat to have only one channel (namely, the intensity of the image) but it has 4.
$ ./test_load_image
[164, 0, 0, 0]
I also tried converting the image with
cv::Mat gray(img.size(), CV_8UC1);
img.convertTo(gray, CV_8UC1);
but the gray matrix is also a 4-channel one.
I'd like to know if it's possible to have a single channel cv::Mat. Intuitively, that's what I would expect to have when dealing with a grayscale (thus, single channel) image.
The matrix is single channel. You're just reading the values in the wrong way.
Scalar is a struct with 4 values. Constructing a Scalar from a single value results in a Scalar with the first value set and the remaining ones at zero.
In your case, only the first value is meaningful; the zeros are just Scalar defaults.
However, you don't need to use a Scalar:
uchar intensity = img.at<uchar>(0, 0);
std::cout << int(intensity) << std::endl; // Print the value, not the ASCII character
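To double-check that the loaded matrix really is single channel, channels() reports it directly:
std::cout << img.channels() << std::endl;          // prints 1 for a grayscale load
std::cout << (img.type() == CV_8UC1) << std::endl; // prints 1 (true); type 0 == CV_8UC1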

How to convert ROI from/to QImage to/from cv::Mat?

I cannot properly convert and/or display an ROI using a QRect on a QImage and then create a cv::Mat image out of the QImage.
The problem is symmetric, i.e., I cannot properly get the ROI by using a cv::Rect on a cv::Mat and creating a QImage out of the Mat. Surprisingly, everything works fine whenever the width and the height of the cv::Rect or QRect are equal.
In what follows, my full-size image is the cv::Mat matImage. It is of type CV_8U and has a square size of 2048x2048.
int x = 614;
int y = 1156;
// buggy
int width = 234;
int height = 278;
//working
// int width = 400;
// int height = 400;
QRect ROI(x, y, width, height);
QImage imageInit(matImage.data, matImage.cols, matImage.rows, QImage::Format_Grayscale8);
QImage imageROI = imageInit.copy(ROI);
createNewImage(imageROI);
unsigned char* dataBuffer = imageROI.bits();
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
cv::namedWindow( "openCV imshow() from a cv::Mat image", cv::WINDOW_AUTOSIZE );
cv::imshow( "openCV imshow() from a cv::Mat image", tempImage);
The screenshot below illustrates the issue.
(Left) The full-size cv::Mat matImage.
(Middle) the expected result from the QImage and the QRect (which roughly corresponds to the green rectangle drawn by hand).
(Right) the messed-up result from the cv::Mat matImageROI
By exploring other issues regarding conversion between cv::Mat and QImage, it seems the stride becomes "non-standard" in some particular ROI sizes. For example, from this post, I found out one just needs to change cv::Mat::AUTO_STEP to imageROI.bytesPerLine() in
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
so I end up instead with:
cv::Mat matImageROI(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, imageROI.bytesPerLine());
For the other way round, i.e, creating the QImage from a ROI of a cv::Mat, one would use the property cv::Mat::step.
For example:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, matImageROI.step, QImage::Format_Grayscale8);
instead of:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, QImage::Format_Grayscale8);
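Putting both directions together, a minimal sketch of stride-aware conversion helpers for 8-bit grayscale images (the function names are mine, not a standard API):
#include <opencv2/opencv.hpp>
#include <QImage>

// cv::Mat (CV_8UC1) -> QImage, passing the Mat's stride explicitly
QImage matToQImageGray(const cv::Mat& mat)
{
    // copy() detaches the QImage from the Mat's buffer so it owns its data
    return QImage(mat.data, mat.cols, mat.rows,
                  static_cast<int>(mat.step), QImage::Format_Grayscale8).copy();
}

// QImage (Format_Grayscale8) -> cv::Mat, passing bytesPerLine() as the step
cv::Mat qImageToMatGray(const QImage& img)
{
    cv::Mat view(img.height(), img.width(), CV_8UC1,
                 const_cast<uchar*>(img.bits()), img.bytesPerLine());
    return view.clone(); // clone so the Mat owns its data
}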
Update
For odd ROI sizes, although it is not a problem when using either imshow() with a cv::Mat or a QImage in a QGraphicsScene, it becomes an issue when using OpenGL (with a QOpenGLWidget). I guess the simplest workaround is just to constrain ROIs to have even sizes.
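A simple way to enforce that even-size constraint is to clear the least significant bit of the requested dimensions before building the QRect (requestedWidth/requestedHeight are placeholder names):
int width  = requestedWidth  & ~1; // round down to an even value
int height = requestedHeight & ~1;
QRect ROI(x, y, width, height);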

Inpainting depth map, still a black image border

I'm trying to inpaint missing depth values of a depth map using the method described here. To summarize the method:
1. Downsize the depth map to 20% of the original size
2. Inpaint all black (unknown) pixels in the downsized image
3. Upsize to the original size
4. Replace all black pixels in the original image with the corresponding values from the upsized image
Super simple and everything works well. A video showing the results can be found here.
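For reference, a minimal sketch of those four steps (assuming an 8-bit depth map where 0 marks unknown pixels; the function name is mine) could look like this:
cv::Mat inpaintDepth(const cv::Mat& depth)
{
    cv::Mat small, smallInpainted, up, result = depth.clone();
    // 1. downsize to 20% of the original size
    cv::resize(depth, small, cv::Size(), 0.2, 0.2, cv::INTER_NEAREST);
    // 2. inpaint the unknown (zero) pixels in the small image
    cv::inpaint(small, small == 0, smallInpainted, 5.0, cv::INPAINT_TELEA);
    // 3. upsize back to the original resolution
    cv::resize(smallInpainted, up, depth.size());
    // 4. replace only the unknown pixels of the original
    up.copyTo(result, depth == 0);
    return result;
}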
However, I wonder why the left and top image borders are still black although they should be inpainted (this can be seen in the video). My first thought was that this could have something to do with the border interpolation (black pixels outside the image boundary), but then I would expect this to happen on the other image borders too. My second thought was that it is something specific to the inpainting method used (the method by Alexandru Telea), but changing it to the Navier-Stokes based method didn't change the results.
Can somebody explain to me why this happens and how to tell OpenCV to also inpaint these regions, if possible?
Thanks in advance.
After being asked by @theodore in http://answers.opencv.org/question/86569/inpainting-depth-map-still-black-image-borders/?comment=86587#comment-86587, I used the sample images to test the inpainting behaviour. It looks like inpaint does not handle the image border correctly, so adding a border with cv::copyMakeBorder can be used as a workaround.
Here's the extended version with some kind of unit testing:
int main(int argc, char* argv[])
{
cv::Mat input = cv::imread("C:/StackOverflow/Input/depthInpaint.png");
cv::Mat img;
cv::cvtColor(input, img, CV_BGR2GRAY);
cv::Mat inpainted;
const unsigned char noDepth = 0; // change to 255 if "no depth" is encoded as the max value, or use a mask image
//cv::inpaint(img, (img == noDepth), depth, 5.0, cv::INPAINT_TELEA); // img is the 8-bit input image (depth map with blank spots)
double inpaintRadius = 5;
int makeBorder = 1;
cv::Mat borderimg;
cv::copyMakeBorder(img, borderimg, makeBorder, makeBorder, makeBorder, makeBorder, cv::BORDER_REPLICATE);
cv::imshow("border", borderimg);
cv::inpaint(borderimg, (borderimg == noDepth), inpainted, inpaintRadius, cv::INPAINT_TELEA); // img is the 8-bit input image (depth map with blank spots)
cv::Mat originalEmbedded = borderimg(cv::Rect(makeBorder, makeBorder, img.cols, img.rows));
cv::Mat inpaintedEmbedded = inpainted(cv::Rect(makeBorder, makeBorder, img.cols, img.rows));
cv::Mat diffImage;
cv::absdiff(img, originalEmbedded, diffImage);
cv::imshow("embedding correct?", diffImage > 0);
cv::Mat mask = img == noDepth;
cv::imshow("mask", mask);
cv::imshow("input", input);
cv::imshow("inpainted", inpainted);
cv::imshow("inpainted from border", inpaintedEmbedded);
cv::waitKey(0);
return 0;
}
Here's the reduced version if you believe it to be correct:
int main(int argc, char* argv[])
{
cv::Mat input = cv::imread("C:/StackOverflow/Input/depthInpaint.png");
cv::Mat img;
cv::cvtColor(input, img, CV_BGR2GRAY);
cv::Mat inpainted;
const unsigned char noDepth = 0; // change to 255 if "no depth" is encoded as the max value, or use a mask image
//cv::inpaint(img, (img == noDepth), depth, 5.0, cv::INPAINT_TELEA); // img is the 8-bit input image (depth map with blank spots)
double inpaintRadius = 5;
int makeBorderSize = 1;
cv::Mat borderimg;
cv::copyMakeBorder(img, borderimg, makeBorderSize, makeBorderSize, makeBorderSize, makeBorderSize, cv::BORDER_REPLICATE);
//cv::imshow("border", borderimg);
cv::inpaint(borderimg, (borderimg == noDepth), inpainted, inpaintRadius, cv::INPAINT_TELEA); // img is the 8-bit input image (depth map with blank spots)
// extract the original area without border:
cv::Mat inpaintedEmbedded = inpainted(cv::Rect(makeBorderSize, makeBorderSize, img.cols, img.rows));
cv::imshow("input", input);
cv::imshow("inpainted from border", inpaintedEmbedded);
cv::waitKey(0);
return 0;
}
Here's Input:
Here's the input with border (bordersize 5 to visualize the effect better):
Here's the output:

OpenCV - Image Stitching

I am using the following code to stitch two input images. For an unknown reason the output result is crap!
It seems that the homography matrix is wrong (or is applied wrongly), because the transformed image looks like an "exploded star"!
I have commented the part that I guess is the source of the problem,
but I cannot figure it out.
Any help or pointer is appreciated!
Have a nice day,
Ali
void Stitch2Image(IplImage *mImage1, IplImage *mImage2)
{
// Convert input images to gray
IplImage* gray1 = cvCreateImage(cvSize(mImage1->width, mImage1->height), 8, 1);
cvCvtColor(mImage1, gray1, CV_BGR2GRAY);
IplImage* gray2 = cvCreateImage(cvSize(mImage2->width, mImage2->height), 8, 1);
cvCvtColor(mImage2, gray2, CV_BGR2GRAY);
// Convert gray images to Mat
Mat img1(gray1);
Mat img2(gray2);
// Detect FAST keypoints and BRIEF features in the first image
FastFeatureDetector detector(50);
BriefDescriptorExtractor descriptorExtractor;
BruteForceMatcher<L1<uchar> > descriptorMatcher;
vector<KeyPoint> keypoints1;
detector.detect( img1, keypoints1 );
Mat descriptors1;
descriptorExtractor.compute( img1, keypoints1, descriptors1 );
/* Detect FAST keypoints and BRIEF features in the second image*/
vector<KeyPoint> keypoints2;
detector.detect( img2, keypoints2 );
Mat descriptors2;
descriptorExtractor.compute( img2, keypoints2, descriptors2 );
vector<DMatch> matches;
descriptorMatcher.match(descriptors1, descriptors2, matches);
if (matches.size()==0)
return;
vector<Point2f> points1, points2;
for(size_t q = 0; q < matches.size(); q++)
{
points1.push_back(keypoints1[matches[q].queryIdx].pt);
points2.push_back(keypoints2[matches[q].trainIdx].pt);
}
// Create the result image
result = cvCreateImage(cvSize(mImage2->width * 2, mImage2->height), 8, 3);
cvZero(result);
// Copy the second image in the result image
cvSetImageROI(result, cvRect(mImage2->width, 0, mImage2->width, mImage2->height));
cvCopy(mImage2, result);
cvResetImageROI(result);
// Create warp image
IplImage* warpImage = cvCloneImage(result);
cvZero(warpImage);
/************************** Is there anything wrong here!? *******************/
// Find homography matrix
Mat H = findHomography(Mat(points1), Mat(points2), 8, 3.0);
CvMat HH = H; // Is this line converted correctly?
// Transform warp image
cvWarpPerspective(mImage1, warpImage, &HH);
// Blend
blend(result, warpImage);
/*******************************************************************************/
cvReleaseImage(&gray1);
cvReleaseImage(&gray2);
cvReleaseImage(&warpImage);
}
This is what I would suggest you try, in this order:
1) Use the CV_RANSAC option for the homography. Refer to http://opencv.willowgarage.com/documentation/cpp/calib3d_camera_calibration_and_3d_reconstruction.html
2) Try other descriptors, particularly SIFT or SURF which ship with OpenCV. For some images FAST or BRIEF descriptors are not discriminating enough. EDIT (Aug '12): The ORB descriptors, which are based on BRIEF, are quite good and fast!
3) Try to look at the Homography matrix (step through in debug mode or print it) and see if it is consistent.
4) If the above does not give you a clue, look at the matches that are formed. Is one point in one image being matched to a number of points in the other image? If so, the problem again should be with the descriptors or the detector.
My hunch is that it is the descriptors (so 1) or 2) should fix it).
Also switch to Hamming distance instead of L1 distance in BruteForceMatcher. BRIEF descriptors are supposed to be compared using Hamming distance.
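Concretely, suggestion 1) and the Hamming-distance switch might look like this against the 2.4-era API used above:
// Match binary BRIEF descriptors with Hamming distance
cv::BFMatcher matcher(cv::NORM_HAMMING);
std::vector<cv::DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);
// ... build points1/points2 from the matches as before ...
// Robust homography estimation with RANSAC outlier rejection
cv::Mat H = cv::findHomography(cv::Mat(points1), cv::Mat(points2), CV_RANSAC, 3.0);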
Your homography might be calculated based on wrong matches and thus represent a bad alignment.
I suggest passing the matrix through an additional check of the interdependency between its rows.
You can use the following code:
bool cvExtCheckTransformValid(const Mat& T){
// Check the shape of the matrix
if (T.empty())
return false;
if (T.rows != 3)
return false;
if (T.cols != 3)
return false;
// Check for linear dependency between the first two rows.
Mat tmp;
T.row(0).copyTo(tmp);
tmp /= T.row(1); // per-element ratio of row 0 to row 1
Scalar mean;
Scalar stddev;
meanStdDev(tmp, mean, stddev);
// If the ratio is nearly constant, the rows are close to linearly dependent.
double X = std::abs(stddev[0] / mean[0]);
printf("std of H: %g\n", X);
if (X < 0.8)
    return false;
return true;
}
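For example, one could guard the warp with it:
if (!cvExtCheckTransformValid(H))
    return; // reject a degenerate homography instead of warping with it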
