Inpainting depth map, still a black image border - opencv

I'm trying to inpaint missing depth values of a depth map using the method described here. To summarize the method:
Downsize depth map to 20% of the original size
Inpaint all black (unknown) pixels in the downsized image
Upsize to original size
Replace all black pixels in the original image with corresponding values from the upsized image
Super simple and everything works well. A video showing the results can be found here.
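For reference, a minimal sketch of that pipeline could look like the following (assuming an 8-bit depth map where 0 marks unknown pixels; the file name is a placeholder):
#include <opencv2/opencv.hpp>

int main()
{
    // hypothetical input: 8-bit depth map, 0 = unknown depth
    cv::Mat depth = cv::imread("depth.png", CV_LOAD_IMAGE_GRAYSCALE);

    // 1. downsize the depth map to 20% of the original size
    cv::Mat small;
    cv::resize(depth, small, cv::Size(), 0.2, 0.2, cv::INTER_NEAREST);

    // 2. inpaint all black (unknown) pixels in the downsized image
    cv::Mat smallInpainted;
    cv::inpaint(small, (small == 0), smallInpainted, 5.0, cv::INPAINT_TELEA);

    // 3. upsize to the original size
    cv::Mat filled;
    cv::resize(smallInpainted, filled, depth.size());

    // 4. replace only the black pixels of the original with the upsized values
    cv::Mat unknown = (depth == 0);
    filled.copyTo(depth, unknown);

    cv::imshow("inpainted depth", depth);
    cv::waitKey(0);
    return 0;
}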
However, I wonder why the left and top image borders are still black although they should be inpainted (this can be seen in the video). My first thought was that this could have something to do with the border interpolation (black pixels outside the image boundary), but then I would expect this to happen on the other image borders as well. My second thought was that it is something specific to the inpainting method used (the method by Alexandru Telea), but changing it to the Navier-Stokes based method didn't change the results.
Can somebody explain to me why this happens and how to tell OpenCV to also inpaint these regions, if possible?
Thanks in advance.

After being asked by #theodore in http://answers.opencv.org/question/86569/inpainting-depth-map-still-black-image-borders/?comment=86587#comment-86587 I've used the sample images to test the inpaint behaviour. It looks like cv::inpaint does not handle the image border correctly, so adding a border with cv::copyMakeBorder before inpainting works around the problem.
Here's the extended version with some kind of unit testing:
#include <opencv2/opencv.hpp>

int main(int argc, char* argv[])
{
    cv::Mat input = cv::imread("C:/StackOverflow/Input/depthInpaint.png");
    cv::Mat img;
    cv::cvtColor(input, img, CV_BGR2GRAY);
    cv::Mat inpainted;
    const unsigned char noDepth = 0; // change to 255 if "no depth" uses the max value, or use a mask image
    //cv::inpaint(img, (img == noDepth), inpainted, 5.0, cv::INPAINT_TELEA); // img is the 8-bit input image (depth map with blank spots)
    double inpaintRadius = 5;
    int makeBorder = 1;
    cv::Mat borderimg;
    cv::copyMakeBorder(img, borderimg, makeBorder, makeBorder, makeBorder, makeBorder, cv::BORDER_REPLICATE);
    cv::imshow("border", borderimg);
    cv::inpaint(borderimg, (borderimg == noDepth), inpainted, inpaintRadius, cv::INPAINT_TELEA);
    cv::Mat originalEmbedded = borderimg(cv::Rect(makeBorder, makeBorder, img.cols, img.rows));
    cv::Mat inpaintedEmbedded = inpainted(cv::Rect(makeBorder, makeBorder, img.cols, img.rows));
    cv::Mat diffImage;
    cv::absdiff(img, originalEmbedded, diffImage);
    cv::imshow("embedding correct?", diffImage > 0);
    cv::Mat mask = img == noDepth;
    cv::imshow("mask", mask);
    cv::imshow("input", input);
    cv::imshow("inpainted", inpainted);
    cv::imshow("inpainted from border", inpaintedEmbedded);
    cv::waitKey(0);
    return 0;
}
Here's the reduced version if you believe it to be correct:
#include <opencv2/opencv.hpp>

int main(int argc, char* argv[])
{
    cv::Mat input = cv::imread("C:/StackOverflow/Input/depthInpaint.png");
    cv::Mat img;
    cv::cvtColor(input, img, CV_BGR2GRAY);
    cv::Mat inpainted;
    const unsigned char noDepth = 0; // change to 255 if "no depth" uses the max value, or use a mask image
    double inpaintRadius = 5;
    int makeBorderSize = 1;
    cv::Mat borderimg;
    cv::copyMakeBorder(img, borderimg, makeBorderSize, makeBorderSize, makeBorderSize, makeBorderSize, cv::BORDER_REPLICATE);
    cv::inpaint(borderimg, (borderimg == noDepth), inpainted, inpaintRadius, cv::INPAINT_TELEA);
    // extract the original area without the border:
    cv::Mat inpaintedEmbedded = inpainted(cv::Rect(makeBorderSize, makeBorderSize, img.cols, img.rows));
    cv::imshow("input", input);
    cv::imshow("inpainted from border", inpaintedEmbedded);
    cv::waitKey(0);
    return 0;
}
Here's the input:
Here's the input with border (bordersize 5 to visualize the effect better):
Here's the output:

Related

OpenCV camera calibration with chessboard of different colours

A doubt came to my mind this morning: does the findChessboardCorners OpenCV function work with a chessboard of different colours, for example blue?
If not, do you think that a fairly straightforward thresholding would do the trick?
You can't pass coloured images to findChessboardCorners because it only takes a greyscale image, as #api55 pointed out in his comment.
It might be worth taking a look at the checkChessboardBinary code provided here:
// does a fast check if a chessboard is in the input image. This is a workaround to
// a problem of cvFindChessboardCorners being slow on images with no chessboard
// - src: input binary image
// - size: chessboard size
// Returns 1 if a chessboard can be in this image and findChessboardCorners should be called,
// 0 if there is no chessboard, -1 in case of error
int checkChessboardBinary(const cv::Mat & img, const cv::Size & size)
{
    CV_Assert(img.channels() == 1 && img.depth() == CV_8U);
    Mat white = img.clone();
    Mat black = img.clone();
    int result = 0;
    for (int erosion_count = 0; erosion_count <= 3; erosion_count++)
    {
        if (1 == result)
            break;
        if (0 != erosion_count) // first iteration keeps original images
        {
            erode(white, white, Mat(), Point(-1, -1), 1);
            dilate(black, black, Mat(), Point(-1, -1), 1);
        }
        vector<pair<float, int> > quads;
        fillQuads(white, black, 128, 128, quads);
        if (checkQuads(quads, size))
            result = 1;
    }
    return result;
}
The main implementation of this method is:
CV_IMPL
int cvFindChessboardCorners( const void* arr, CvSize pattern_size,
                             CvPoint2D32f* out_corners, int* out_corner_count,
                             int flags )
In here they:
Use cvCheckChessboard to determine if a chessboard is in the image
Convert to binary (B&W) and dilate to split the corners apart
Use icvGenerateQuads to find the squares
So, in answer to your question: as long as there is sufficient contrast in your image after you convert it to greyscale, it will likely work. I would imagine a greyscaled blue-and-white image would be good enough; if it were a light aqua or yellow or something similar, you might struggle without more processing.
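As a quick experiment, a sketch along these lines might look like the following (the file name and pattern size are assumptions, not from the question):
#include <opencv2/opencv.hpp>
#include <vector>

int main()
{
    // hypothetical image of a blue-and-white chessboard
    cv::Mat board = cv::imread("blue_chessboard.png");

    // greyscale the image; blue against white should keep enough contrast
    cv::Mat grey;
    cv::cvtColor(board, grey, CV_BGR2GRAY);

    // optional fallback: force a binary image if the contrast is too low
    // cv::threshold(grey, grey, 128, 255, CV_THRESH_BINARY);

    std::vector<cv::Point2f> corners;
    cv::Size patternSize(9, 6); // inner corners of the assumed board layout
    bool found = cv::findChessboardCorners(grey, patternSize, corners);

    if (found)
        cv::drawChessboardCorners(board, patternSize, corners, found);
    cv::imshow("corners", board);
    cv::waitKey(0);
    return 0;
}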

Convert color image into grey in opencv without CV_RGB2GRAY

I want to convert a color BGR image into greyscale in OpenCV without using the direct command CV_RGB2GRAY. Here I uploaded my code, which gives me a bluish version of the image instead of a proper grey output image. Please check the code below and tell me where I'm going wrong, or give me another solution to convert the color image into a grey output image without CV_RGB2GRAY.
Thanks in advance.
Mat image = imread("Desktop\\Sample input\\ip1.png");
Mat grey(image.rows, image.cols, CV_8UC3);
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        int blue = image.at<Vec3b>(i,j)[0];
        int green = image.at<Vec3b>(i,j)[1];
        int red = image.at<Vec3b>(i,j)[2];
        grey.at<Vec3b>(i,j) = 0.114*blue + 0.587*green + 0.299*red;
    }
}
imshow("grey image", grey);
If you intend to convert the image you are reading with imread(), you can load it as a grayscale image directly by
Mat image = imread("Desktop\\Sample input\\ip1.png",CV_LOAD_IMAGE_GRAYSCALE);
or, by
Mat image = imread("Desktop\\Sample input\\ip1.png",0);
This works because CV_LOAD_IMAGE_GRAYSCALE corresponds to the constant 0, and when imread() gets this argument, it loads the image as a single-channel grayscale image.
And if you want to convert any image to grayscale yourself, the output image should be declared like
Mat grey = Mat::zeros(src_image.rows, src_image.cols, CV_8UC1);
as a grayscale image has only one channel; then you can convert the image like this:
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        int blue = image.at<Vec3b>(i,j)[0];
        int green = image.at<Vec3b>(i,j)[1];
        int red = image.at<Vec3b>(i,j)[2];
        grey.at<uchar>(i, j) = (uchar) (0.114*blue + 0.587*green + 0.299*red);
    }
}
It will give you the grayscale image.
In your code, the grey Mat has 3 channels. For a grayscale image you only need 1 channel (8UC1).
Also, when you are writing the values in the grayscale image, you need to use uchar instead of Vec3b because each pixel in the grayscale image is only made up of one unsigned char value, not a vector of 3 values.
So, you need to replace these lines:
Mat grey(image.rows, image.cols, CV_8UC1);
and
grey.at<uchar>(i, j) = 0.114*blue + 0.587*green + 0.299*red;
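Putting both fixes together, a complete corrected version might look like this (a sketch, using the same input path as the question):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat image = imread("Desktop\\Sample input\\ip1.png");
    Mat grey = Mat::zeros(image.rows, image.cols, CV_8UC1); // one channel

    for (int i = 0; i < image.rows; i++)
    {
        for (int j = 0; j < image.cols; j++)
        {
            int blue  = image.at<Vec3b>(i, j)[0];
            int green = image.at<Vec3b>(i, j)[1];
            int red   = image.at<Vec3b>(i, j)[2];
            // standard luminance weights, one uchar per pixel
            grey.at<uchar>(i, j) = (uchar)(0.114*blue + 0.587*green + 0.299*red);
        }
    }

    imshow("grey image", grey);
    waitKey(0);
    return 0;
}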

How to convert ROI from/to QImage to/from cv::Mat?

I cannot properly convert and/or display an ROI using a QRect in a QImage and create a cv::Mat image out of the QImage.
The problem is symmetric, i.e., I also cannot properly get the ROI by using a cv::Rect in a cv::Mat and creating a QImage out of the Mat. Surprisingly, everything works fine whenever the width and the height of the cv::Rect or the QRect are equal.
In what follows, my full-size image is the cv::Mat matImage. It is of type CV_8U and has a square size of 2048x2048
int x = 614;
int y = 1156;
// buggy
int width = 234;
int height = 278;
//working
// int width = 400;
// int height = 400;
QRect ROI(x, y, width, height);
QImage imageInit(matImage.data, matImage.cols, matImage.rows, QImage::Format_Grayscale8);
QImage imageROI = imageInit.copy(ROI);
createNewImage(imageROI);
unsigned char* dataBuffer = imageROI.bits();
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
cv::namedWindow( "openCV imshow() from a cv::Mat image", cv::WINDOW_AUTOSIZE );
cv::imshow( "openCV imshow() from a cv::Mat image", tempImage);
The screenshot below illustrates the issue.
(Left) The full-size cv::Mat matImage.
(Middle) the expected result from the QImage and the QRect (which roughly corresponds to the green rectangle drawn by hand).
(Right) the messed-up result from the cv::Mat matImageROI
By exploring other issues regarding conversion between cv::Mat and QImage, it seems the stride becomes "non-standard" in some particular ROI sizes. For example, from this post, I found out one just needs to change cv::Mat::AUTO_STEP to imageROI.bytesPerLine() in
cv::Mat tempImage(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, cv::Mat::AUTO_STEP);
so I end up instead with:
cv::Mat matImageROI(cv::Size(imageROI.width(), imageROI.height()), CV_8UC1, dataBuffer, imageROI.bytesPerLine());
For the other way round, i.e., creating the QImage from an ROI of a cv::Mat, one would use the property cv::Mat::step.
For example:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, matImageROI.step, QImage::Format_Grayscale8);
instead of:
QImage imageROI(matImageROI.data, matImageROI.cols, matImageROI.rows, QImage::Format_Grayscale8);
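To keep the stride handling in one place, one could wrap both directions in small helpers like these (a sketch; the deep copies detach the result from the source buffer, which avoids dangling pointers when the source goes out of scope):
#include <opencv2/opencv.hpp>
#include <QImage>

// QImage (Format_Grayscale8) -> cv::Mat, honouring bytesPerLine()
cv::Mat qimageToMat(const QImage& image)
{
    cv::Mat view(image.height(), image.width(), CV_8UC1,
                 const_cast<uchar*>(image.constBits()),
                 image.bytesPerLine());
    return view.clone(); // deep copy so the Mat owns its data
}

// cv::Mat (CV_8UC1) -> QImage, honouring cv::Mat::step
QImage matToQImage(const cv::Mat& mat)
{
    QImage view(mat.data, mat.cols, mat.rows,
                static_cast<int>(mat.step),
                QImage::Format_Grayscale8);
    return view.copy(); // deep copy, detached from the Mat buffer
}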
Update
For the case of odd ROI sizes, although it is not a problem when using either imshow() with cv::Mat or QImage in QGraphicsScene, it becomes an issue when using openGL (with QOpenGLWidget). I guess the simplest workaround is just to constrain ROIs to have even sizes.
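A tiny helper for that workaround might look like this (a sketch; the name is hypothetical):
#include <QRect>

// round the ROI dimensions down to the nearest even number
QRect makeEvenROI(const QRect& roi)
{
    return QRect(roi.x(), roi.y(), roi.width() & ~1, roi.height() & ~1);
}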

how to reduce light intensity of an image

I took an example image from OpenCV (cat.jpg) and want to reduce the brightness in a particular area. Here is the link to the image:
http://tinypic.com/view.php?pic=2lnfx46&s=5
Here is one possible solution. The bright spots are detected using a simple threshold operation, and are then darkened using a gamma transformation, which maps each channel value v to 255 * (v/255)^gamma; with gamma = 3, bright values are pulled down much more strongly than dark ones. The result looks slightly better, but unfortunately, if the pixels in the image are exactly white, all the pixel information is lost and you will not be able to recover it.
#include <opencv2/opencv.hpp>
#include <iostream>
#include <cfloat>

int threshold = 200;
double gammav = 3;

int main(int argc, char** argv)
{
    cv::Mat image, gray_image, bin_image;
    // read image
    cv::imread(argv[1]).convertTo(image, CV_32FC3);
    // find bright spots with thresholding
    cv::cvtColor(image, gray_image, CV_RGB2GRAY);
    cv::threshold(gray_image, bin_image, threshold, 255, 0);
    // blur mask to smooth transitions
    cv::GaussianBlur(bin_image, bin_image, cv::Size(21,21), 5);
    // create 3 channel mask
    std::vector<cv::Mat> channels;
    channels.push_back(bin_image);
    channels.push_back(bin_image);
    channels.push_back(bin_image);
    cv::Mat bin_image3;
    cv::merge(channels, bin_image3);
    // create darker version of the image using gamma correction
    cv::Mat dark_image = image.clone();
    for (int y = 0; y < dark_image.rows; y++)
        for (int x = 0; x < dark_image.cols; x++)
            for (int c = 0; c < 3; c++)
                dark_image.at<cv::Vec3f>(y,x)[c] = 255.0 * pow(dark_image.at<cv::Vec3f>(y,x)[c] / 255.0, gammav);
    // create final image: keep the original in dark regions, use the darkened version in bright regions
    cv::Mat res_image = image.mul((255 - bin_image3) / 255.0) + dark_image.mul(bin_image3 / 255.0);
    cv::imshow("orig", image / 255);
    cv::imshow("dark", dark_image / 255);
    cv::imshow("bin", bin_image / 255);
    cv::imshow("res", res_image / 255);
    cv::waitKey(0);
    return 0;
}

Subtract blue background from image by OpenCV C++

I am a beginner in OpenCV and C++, but now I have to find a solution for this problem:
I have an image of a person with blue background, now I have to subtract background from image then replace it by another image.
Now I think there are 2 ways to resolve this problem, but I don't know which is better:
Solution 1:
Convert image to B&W
Use it as a mask to subtract background.
Solution 2:
Use contours to find the background,
and then subtract it.
I have already implemented solution 1, but the result is not what I expect.
Do you know of a better solution, or has somebody already implemented it as source code?
I would appreciate your help.
I've updated my source code here; please give me some comments:
//Get the image with the person
cv::Mat imgRBG = imread("test.jpg");
//Convert this image to grayscale
cv::Mat imgGray = imread("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
//Get the background image
cv::Mat background = imread("paris.jpg");
cv::Mat imgB, imgW;
//Image with black background, but some areas inside the person are black too
threshold(imgGray, imgB, 200, 255, CV_THRESH_BINARY_INV);
cv::Mat imgTemp;
cv::Mat maskB, maskW;
cv::Mat imgDisplayB, imgDisplayW;
cv::Mat imgDisplay1, imgDisplay2, imgResult;
//Copy the image with black background, overriding the original image
//Now imgTemp has a black background wrapping the person, and any white areas inside the person have been replaced by black
imgRBG.copyTo(imgTemp, imgB);
//Now replace the black background with white, flood-filling from the four corners
cv::floodFill(imgTemp, cv::Point(imgTemp.cols - 10, 10), cv::Scalar(255.0, 255.0, 255.0));
cv::floodFill(imgTemp, cv::Point(10, 10), cv::Scalar(255.0, 255.0, 255.0));
cv::floodFill(imgTemp, cv::Point(10, imgTemp.rows - 10), cv::Scalar(255.0, 255.0, 255.0));
cv::floodFill(imgTemp, cv::Point(imgTemp.cols - 10, imgTemp.rows - 10), cv::Scalar(255.0, 255.0, 255.0));
//Convert to grayscale
cvtColor(imgTemp, imgGray, CV_RGB2GRAY);
//Convert to B&W image; now the background is black, everything else is white
threshold(imgGray, maskB, 200, 255, CV_THRESH_BINARY_INV);
//Convert to B&W image; now the background is white, everything else is black
threshold(imgGray, maskW, 200, 255, CV_THRESH_BINARY);
//Keep only the person, on a black background
imgRBG.copyTo(imgDisplayB, maskB);
//Clone the background image
cv::Mat overlay = background.clone();
//Create ROI
cv::Mat overlayROI = overlay(cv::Rect(0, 0, imgDisplayB.cols, imgDisplayB.rows));
//Copy the background into the result everywhere except the person area
overlayROI.copyTo(imgResult, maskW);
//Add the person image
cv::addWeighted(imgResult, 1, imgDisplayB, 1, 0.0, imgResult);
imshow("Image Result", imgResult);
waitKey();
return 0;
Check this project
https://sourceforge.net/projects/cvchromakey
// red_l/red_h, green_l/green_h and blue_l/blue_h are the keying thresholds,
// defined elsewhere in the project
void chromakey(const Mat under, const Mat over, Mat *dst, const Scalar& color) {
    // Create the destination matrix
    *dst = Mat(under.rows, under.cols, CV_8UC3);
    for (int y = 0; y < under.rows; y++) {
        for (int x = 0; x < under.cols; x++) {
            if (over.at<Vec3b>(y,x)[0] >= red_l && over.at<Vec3b>(y,x)[0] <= red_h &&
                over.at<Vec3b>(y,x)[1] >= green_l && over.at<Vec3b>(y,x)[1] <= green_h &&
                over.at<Vec3b>(y,x)[2] >= blue_l && over.at<Vec3b>(y,x)[2] <= blue_h)
            {
                dst->at<Vec3b>(y,x)[0] = under.at<Vec3b>(y,x)[0];
                dst->at<Vec3b>(y,x)[1] = under.at<Vec3b>(y,x)[1];
                dst->at<Vec3b>(y,x)[2] = under.at<Vec3b>(y,x)[2];
            }
            else {
                dst->at<Vec3b>(y,x)[0] = over.at<Vec3b>(y,x)[0];
                dst->at<Vec3b>(y,x)[1] = over.at<Vec3b>(y,x)[1];
                dst->at<Vec3b>(y,x)[2] = over.at<Vec3b>(y,x)[2];
            }
        }
    }
}
If you know that the background is blue, you are losing valuable information by converting the image to B&W.
If the person is not wearing blue (at least nothing very close to the background colour), you don't have to use contours: just replace the blue pixels with the pixels from the other image. You can use the CvScalar data type with the cvGet2D and cvSet2D functions to achieve this.
Edit:
Your code looks a lot more complicated than the original problem you stated. Having a blue background (also called "blue screen" and "chroma key") is a common method used by TV channels to change backgrounds of news readers. The reason for selecting blue was that the human skin has less dominance in the blue component.
Assuming that the person is not wearing blue, the following code should work. Let me know if you need something different.
//Read the image with the person
IplImage* imgPerson = cvLoadImage("person.jpg");
//Read the image with the background
IplImage* imgBackground = cvLoadImage("paris.jpg");
// assume that the blue background is quite even
// here is a possible range of pixel values
// note that I did not use all of them :-)
unsigned char backgroundRedMin = 0;
unsigned char backgroundRedMax = 10;
unsigned char backgroundGreenMin = 0;
unsigned char backgroundGreenMax = 10;
unsigned char backgroundBlueMin = 245;
unsigned char backgroundBlueMax = 255;
// for simplicity, I assume that both images are of the same resolution
// run a loop to replace pixels
for (int i = 0; i < imgPerson->width; i++)
{
    for (int j = 0; j < imgPerson->height; j++)
    {
        CvScalar currentPixel = cvGet2D(imgPerson, j, i);
        // compare the BGR values of the pixel with the background range
        if (currentPixel.val[0] > backgroundBlueMin &&
            currentPixel.val[1] < backgroundGreenMax &&
            currentPixel.val[2] < backgroundRedMax)
        {
            // copy the corresponding pixel from the background
            CvScalar currentBackgroundPixel = cvGet2D(imgBackground, j, i);
            cvSet2D(imgPerson, j, i, currentBackgroundPixel);
        }
    }
}
cvShowImage("Image Result", imgPerson);
cvWaitKey(0);
return 0;
