Subtract blue background from image by OpenCV C++ - opencv

I am a beginner in OpenCV and C++, but I have to find a solution for this problem:
I have an image of a person on a blue background; I have to subtract the background from the image and then replace it with another image.
I can think of two ways to solve this problem, but I don't know which is better:
Solution 1:
Convert the image to B&W.
Use it as a mask to subtract the background.
Solution 2:
Use contours to find the background,
and then subtract it.
I have already implemented solution 1, but the result is not what I expected.
Do you know of a better solution, or has somebody already implemented one as source code?
I would appreciate your help.
I have updated my source code here; please give me some comments.
//Get the image with the person
cv::Mat imgRGB = imread("test.jpg");
//Load the same image as grayscale
cv::Mat imgGray = imread("test.jpg", CV_LOAD_IMAGE_GRAYSCALE);
//Get the background image
cv::Mat background = imread("paris.jpg");
cv::Mat imgB;
//Mask with a black background, but some areas inside the person are black too
threshold(imgGray, imgB, 200, 255, CV_THRESH_BINARY_INV);
cv::Mat imgTemp;
cv::Mat maskB, maskW;
cv::Mat imgDisplayB, imgResult;
//Copy the image through the mask; imgTemp now has a black background wrapping
//the person, and any white areas inside the person are replaced by black
imgRGB.copyTo(imgTemp, imgB);
//Now replace the black background with white by flood-filling from the four corners
cv::floodFill(imgTemp, cv::Point(imgTemp.cols - 10, 10), cv::Scalar(255, 255, 255));
cv::floodFill(imgTemp, cv::Point(10, 10), cv::Scalar(255, 255, 255));
cv::floodFill(imgTemp, cv::Point(10, imgTemp.rows - 10), cv::Scalar(255, 255, 255));
cv::floodFill(imgTemp, cv::Point(imgTemp.cols - 10, imgTemp.rows - 10), cv::Scalar(255, 255, 255));
//Convert to grayscale (imread loads BGR, so use CV_BGR2GRAY)
cvtColor(imgTemp, imgGray, CV_BGR2GRAY);
//Convert to a B&W image: background black, everything else white
threshold(imgGray, maskB, 200, 255, CV_THRESH_BINARY_INV);
//Convert to a B&W image: background white, everything else black
threshold(imgGray, maskW, 200, 255, CV_THRESH_BINARY);
//Extract the person onto a black background
imgRGB.copyTo(imgDisplayB, maskB);
//Clone the background image
cv::Mat overlay = background.clone();
//Create an ROI matching the person image
cv::Mat overlayROI = overlay(cv::Rect(0, 0, imgDisplayB.cols, imgDisplayB.rows));
//Copy the background everywhere except where the person will be
overlayROI.copyTo(imgResult, maskW);
//Add the person image
cv::addWeighted(imgResult, 1, imgDisplayB, 1, 0.0, imgResult);
imshow("Image Result", imgResult);
waitKey();
return 0;

Check out this project:
https://sourceforge.net/projects/cvchromakey
void chromakey(const Mat under, const Mat over, Mat *dst, const Scalar& color) {
    // Create the destination matrix
    *dst = Mat(under.rows, under.cols, CV_8UC3);
    for (int y = 0; y < under.rows; y++) {
        for (int x = 0; x < under.cols; x++) {
            // the *_l/*_h limits are globals defined elsewhere in the project
            if (over.at<Vec3b>(y, x)[0] >= red_l && over.at<Vec3b>(y, x)[0] <= red_h &&
                over.at<Vec3b>(y, x)[1] >= green_l && over.at<Vec3b>(y, x)[1] <= green_h &&
                over.at<Vec3b>(y, x)[2] >= blue_l && over.at<Vec3b>(y, x)[2] <= blue_h)
            {
                dst->at<Vec3b>(y, x)[0] = under.at<Vec3b>(y, x)[0];
                dst->at<Vec3b>(y, x)[1] = under.at<Vec3b>(y, x)[1];
                dst->at<Vec3b>(y, x)[2] = under.at<Vec3b>(y, x)[2];
            }
            else {
                dst->at<Vec3b>(y, x)[0] = over.at<Vec3b>(y, x)[0];
                dst->at<Vec3b>(y, x)[1] = over.at<Vec3b>(y, x)[1];
                dst->at<Vec3b>(y, x)[2] = over.at<Vec3b>(y, x)[2];
            }
        }
    }
}

If you know that the background is blue, you are losing valuable information by converting the image to B&W.
If the person is not wearing blue (at least not a shade very close to the background colour), you don't have to use contours: just replace the blue pixels with the pixels from the other image. You can use the CvScalar data type with the cvGet2D and cvSet2D functions to achieve this.
Edit:
Your code looks a lot more complicated than the original problem you stated. Having a blue background (also called a "blue screen" or "chroma key") is a common method used by TV channels to change the backgrounds of news readers. Blue was chosen because human skin has less dominance in the blue component.
Assuming that the person is not wearing blue, the following code should work. Let me know if you need something different.
//Read the image with the person
IplImage* imgPerson = cvLoadImage("person.jpg");
//Read the background image
IplImage* imgBackground = cvLoadImage("paris.jpg");
// assume that the blue background is quite even
// here is a possible range of pixel values
// note that I did not use all of them :-)
unsigned char backgroundRedMin = 0;
unsigned char backgroundRedMax = 10;
unsigned char backgroundGreenMin = 0;
unsigned char backgroundGreenMax = 10;
unsigned char backgroundBlueMin = 245;
unsigned char backgroundBlueMax = 255;
// for simplicity, I assume that both images have the same resolution
// run a loop to replace pixels
for (int i = 0; i < imgPerson->width; i++)
{
    for (int j = 0; j < imgPerson->height; j++)
    {
        CvScalar currentPixel = cvGet2D(imgPerson, j, i);
        // compare the BGR values of the pixel with the range
        if (currentPixel.val[0] > backgroundBlueMin &&
            currentPixel.val[1] < backgroundGreenMax &&
            currentPixel.val[2] < backgroundRedMax)
        {
            // copy the corresponding pixel from the background
            CvScalar currentBackgroundPixel = cvGet2D(imgBackground, j, i);
            cvSet2D(imgPerson, j, i, currentBackgroundPixel);
        }
    }
}
cvShowImage("Image Result", imgPerson);
cvWaitKey();
return 0;
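For reference, the same pixel replacement can be written much more compactly with the C++ API. This is a minimal sketch, assuming BGR images of equal size and the same blue range as above:
cv::Mat person = cv::imread("person.jpg");
cv::Mat background = cv::imread("paris.jpg");
cv::Mat mask;
// mask is 255 where the pixel falls inside the blue range (BGR channel order)
cv::inRange(person, cv::Scalar(245, 0, 0), cv::Scalar(255, 10, 10), mask);
// copy the background over the person image, but only where the mask is set
background.copyTo(person, mask);
cv::imshow("Image Result", person);
cv::waitKey();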

Related

OpenCV camera calibration with chessboard of different colours

A doubt came to my mind this morning: does the findChessboardCorners OpenCV function work with a chessboard of different colours, for example blue?
If it doesn't, do you think that a fairly straightforward thresholding would do the trick?
You can't pass coloured images to findChessboardCorners because it only takes a greyscale image, as @api55 pointed out in his comment.
It might be worth taking a look at the checkChessboardBinary code provided here
// does a fast check if a chessboard is in the input image. This is a workaround to
// a problem of cvFindChessboardCorners being slow on images with no chessboard
// - src: input binary image
// - size: chessboard size
// Returns 1 if a chessboard can be in this image and findChessboardCorners should be called,
// 0 if there is no chessboard, -1 in case of error
int checkChessboardBinary(const cv::Mat & img, const cv::Size & size)
{
    CV_Assert(img.channels() == 1 && img.depth() == CV_8U);
    Mat white = img.clone();
    Mat black = img.clone();
    int result = 0;
    for (int erosion_count = 0; erosion_count <= 3; erosion_count++)
    {
        if (1 == result)
            break;
        if (0 != erosion_count) // first iteration keeps original images
        {
            erode(white, white, Mat(), Point(-1, -1), 1);
            dilate(black, black, Mat(), Point(-1, -1), 1);
        }
        vector<pair<float, int> > quads;
        fillQuads(white, black, 128, 128, quads);
        if (checkQuads(quads, size))
            result = 1;
    }
    return result;
}
With the main loop being:
CV_IMPL
int cvFindChessboardCorners( const void* arr, CvSize pattern_size,
                             CvPoint2D32f* out_corners, int* out_corner_count,
                             int flags )
This is the main implementation of the method. In here they:
1) use cvCheckChessboard to determine if a chessboard is in the image
2) convert to binary (B&W) and dilate to split the corners apart
3) use icvGenerateQuads to find the squares.
So in answer to your question: as long as there is sufficient contrast in your image after you convert it to greyscale, it will likely work. I would imagine a greyscaled blue-and-white image would be good enough; if it were a light aqua or yellow or something similar, you might struggle without more processing.
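As an illustration, here is a minimal sketch of that idea using OpenCV 3 naming (the file name and pattern size are assumptions): greyscale the blue-and-white board, optionally boost the contrast with Otsu thresholding, and then run the detector:
#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::Mat img = cv::imread("blue_chessboard.png"); // hypothetical input
    cv::Mat grey;
    cv::cvtColor(img, grey, cv::COLOR_BGR2GRAY);
    // optional: if the greyscaled colours end up too close, threshold to restore contrast
    cv::threshold(grey, grey, 0, 255, cv::THRESH_BINARY | cv::THRESH_OTSU);
    std::vector<cv::Point2f> corners;
    cv::Size patternSize(9, 6); // inner corners per row/column, assumed
    bool found = cv::findChessboardCorners(grey, patternSize, corners);
    std::cout << (found ? "chessboard found" : "chessboard not found") << std::endl;
    return 0;
}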

Convert color image into grey in opencv without CV_RGB2GRAY

I want to convert a colour BGR image into greyscale in OpenCV without using the direct command CV_RGB2GRAY. I have uploaded my code below, which gives me a bluish version of the image rather than a proper grey output image. Please check the code and tell me where I'm going wrong, or give me another solution to convert the colour image into a grey output image without CV_RGB2GRAY.
Thanks in advance.
Mat image = imread("Desktop\\Sample input\\ip1.png");
Mat grey(image.rows, image.cols, CV_8UC3);
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        int blue = image.at<Vec3b>(i,j)[0];
        int green = image.at<Vec3b>(i,j)[1];
        int red = image.at<Vec3b>(i,j)[2];
        grey.at<Vec3b>(i,j) = 0.114*blue + 0.587*green + 0.299*red;
    }
}
imshow("grey image", grey);
If you intend to convert the image which you are reading with the imread() function, you can load it as a grayscale image directly:
Mat image = imread("Desktop\\Sample input\\ip1.png", CV_LOAD_IMAGE_GRAYSCALE);
or, equivalently:
Mat image = imread("Desktop\\Sample input\\ip1.png", 0);
This works because CV_LOAD_IMAGE_GRAYSCALE corresponds to the constant 0; when imread() gets this argument, it loads the image as a single-channel intensity image.
And if you want to convert an already-loaded image to grayscale, the output image should be created like this:
Mat grey = Mat::zeros(src_image.rows, src_image.cols, CV_8UC1);
since a grayscale image has only one channel. Then you can convert the image like this:
for (int i = 0; i < image.rows; i++)
{
    for (int j = 0; j < image.cols; j++)
    {
        int blue = image.at<Vec3b>(i,j)[0];
        int green = image.at<Vec3b>(i,j)[1];
        int red = image.at<Vec3b>(i,j)[2];
        grey.at<uchar>(i, j) = (uchar) (0.114*blue + 0.587*green + 0.299*red);
    }
}
It will give you the grayscale image.
In your code, the grey Mat has 3 channels. For a grayscale image you only need 1 channel (CV_8UC1).
Also, when you are writing the values into the grayscale image, you need to use uchar instead of Vec3b, because each pixel in a grayscale image is a single unsigned char value, not a vector of 3 values.
So, you need to use these lines instead:
Mat grey(image.rows, image.cols, CV_8UC1);
and
grey.at<uchar>(i, j) = 0.114*blue + 0.587*green + 0.299*red;

Inpainting depth map, still a black image border

I'm trying to inpaint missing depth values of a depth map using the method described here. To summarize the method:
Downsize depth map to 20% of the original size
Inpaint all black (unknown) pixels in the downsized image
Upsize to original size
Replace all black pixels in the original image with corresponding values from the upsized image
Super simple and everything works well. A video showing the results can be found here.
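For reference, a minimal sketch of those four steps in OpenCV C++ (the file name, scale factor, and inpaint radius are assumptions, and unknown depth is assumed to be encoded as 0; constants follow OpenCV 3 naming):
#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>

int main()
{
    cv::Mat depth = cv::imread("depth.png", cv::IMREAD_GRAYSCALE); // hypothetical 8-bit depth map
    // 1) downsize the depth map to 20% of the original size
    //    (nearest-neighbour keeps unknown pixels as exact zeros)
    cv::Mat smallDepth;
    cv::resize(depth, smallDepth, cv::Size(), 0.2, 0.2, cv::INTER_NEAREST);
    // 2) inpaint all black (unknown) pixels in the downsized image
    cv::Mat smallInpainted;
    cv::inpaint(smallDepth, smallDepth == 0, smallInpainted, 5.0, cv::INPAINT_TELEA);
    // 3) upsize back to the original size
    cv::Mat upsized;
    cv::resize(smallInpainted, upsized, depth.size());
    // 4) replace only the originally unknown pixels with the upsized values
    upsized.copyTo(depth, depth == 0);
    cv::imshow("inpainted depth", depth);
    cv::waitKey(0);
    return 0;
}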
However, I wonder why the left and top image borders are still black although they should be inpainted (this can be seen in the video). My first thought was that this could have something to do with the border interpolation (black pixels outside the image boundary), but then I would expect this to happen on the other image borders as well. My second thought was that it is something specific to the inpainting method used (the method by Alexandru Telea), but changing it to the Navier-Stokes based method didn't change the results.
Can somebody explain to me why this happens and how to tell OpenCV to also inpaint these regions, if possible?
Thanks in advance.
After being asked by @theodore in http://answers.opencv.org/question/86569/inpainting-depth-map-still-black-image-borders/?comment=86587#comment-86587 I used the sample images to test the inpainting behaviour. It looks like it does not handle the border correctly, so creating a border with cv::copyMakeBorder can be used as a workaround.
Here's the extended version with some kind of unit testing:
int main(int argc, char* argv[])
{
    cv::Mat input = cv::imread("C:/StackOverflow/Input/depthInpaint.png");
    cv::Mat img;
    cv::cvtColor(input, img, CV_BGR2GRAY);
    cv::Mat inpainted;
    const unsigned char noDepth = 0; // change to 255 if "no depth" is encoded as the max value, or use a mask image
    //cv::inpaint(img, (img == noDepth), depth, 5.0, cv::INPAINT_TELEA); // img is the 8-bit input image (depth map with blank spots)
    double inpaintRadius = 5;
    int makeBorder = 1;
    cv::Mat borderimg;
    cv::copyMakeBorder(img, borderimg, makeBorder, makeBorder, makeBorder, makeBorder, cv::BORDER_REPLICATE);
    cv::imshow("border", borderimg);
    cv::inpaint(borderimg, (borderimg == noDepth), inpainted, inpaintRadius, cv::INPAINT_TELEA);
    cv::Mat originalEmbedded = borderimg(cv::Rect(makeBorder, makeBorder, img.cols, img.rows));
    cv::Mat inpaintedEmbedded = inpainted(cv::Rect(makeBorder, makeBorder, img.cols, img.rows));
    cv::Mat diffImage;
    cv::absdiff(img, originalEmbedded, diffImage);
    cv::imshow("embedding correct?", diffImage > 0);
    cv::Mat mask = img == noDepth;
    cv::imshow("mask", mask);
    cv::imshow("input", input);
    cv::imshow("inpainted", inpainted);
    cv::imshow("inpainted from border", inpaintedEmbedded);
    cv::waitKey(0);
    return 0;
}
Here's the reduced version if you believe it to be correct:
int main(int argc, char* argv[])
{
    cv::Mat input = cv::imread("C:/StackOverflow/Input/depthInpaint.png");
    cv::Mat img;
    cv::cvtColor(input, img, CV_BGR2GRAY);
    cv::Mat inpainted;
    const unsigned char noDepth = 0; // change to 255 if "no depth" is encoded as the max value, or use a mask image
    double inpaintRadius = 5;
    int makeBorderSize = 1;
    cv::Mat borderimg;
    cv::copyMakeBorder(img, borderimg, makeBorderSize, makeBorderSize, makeBorderSize, makeBorderSize, cv::BORDER_REPLICATE);
    cv::inpaint(borderimg, (borderimg == noDepth), inpainted, inpaintRadius, cv::INPAINT_TELEA);
    // extract the original area without the border:
    cv::Mat inpaintedEmbedded = inpainted(cv::Rect(makeBorderSize, makeBorderSize, img.cols, img.rows));
    cv::imshow("input", input);
    cv::imshow("inpainted from border", inpaintedEmbedded);
    cv::waitKey(0);
    return 0;
}
Here are the input, the input with a border (border size 5, to visualize the effect better), and the output (images omitted).

How to check if an image is B&W in iOS

I have a UIImage that shows a photo downloaded from the net.
I would like to know a way to programmatically discover whether the image is B&W or colour.
If you don't mind a compute-intensive task and you want the job done, check the image pixel by pixel.
The idea is to check whether all R, G, and B channels of each single pixel are similar. For example, a pixel with RGB 45-45-45 is a grey, and so is 43-42-44, because all channels are close to each other. I check that every channel has a similar value (I am using a threshold of 10, but that's just arbitrary; you have to do some tests).
As soon as you have enough pixels above your threshold, you can break the loop and flag the image as coloured.
The code is not tested; it's just an idea, and hopefully without leaks.
// load image
CGImageRef imageRef = yourUIImage.CGImage;
CFDataRef cfData = CGDataProviderCopyData(CGImageGetDataProvider(imageRef));
NSData *data = (NSData *)cfData;
char *pixels = (char *)[data bytes];
const int threshold = 10; // define a grey threshold
// assumes a 32-bit bitmap, i.e. 4 bytes per pixel
for (int i = 0; i < [data length]; i += 4)
{
    Byte red = pixels[i];
    Byte green = pixels[i+1];
    Byte blue = pixels[i+2];
    // check if a single channel is too far from the average value;
    // greys have RGB values very close to each other
    int average = (red + green + blue) / 3;
    if (abs(average - red) >= threshold ||
        abs(average - green) >= threshold ||
        abs(average - blue) >= threshold)
    {
        // possibly it's a coloured pixel.. !!
    }
}
CFRelease(cfData);
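For comparison, the same per-pixel idea expressed with OpenCV in C++ might look like this (a sketch; the threshold is again arbitrary):
#include <opencv2/opencv.hpp>

bool isGreyscale(const cv::Mat& bgr, int threshold = 10)
{
    std::vector<cv::Mat> ch;
    cv::split(bgr, ch); // ch[0]=B, ch[1]=G, ch[2]=R
    cv::Mat diffBG, diffBR, diffGR;
    cv::absdiff(ch[0], ch[1], diffBG);
    cv::absdiff(ch[0], ch[2], diffBR);
    cv::absdiff(ch[1], ch[2], diffGR);
    // a pixel is "coloured" if any pair of channels differs by more than the threshold
    cv::Mat coloured = (diffBG > threshold) | (diffBR > threshold) | (diffGR > threshold);
    return cv::countNonZero(coloured) == 0;
}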

Thresholding for a colour in opencv

I am trying to set up my programme to threshold for a colour (in BGR format). I have not yet decided which colour I will be looking for. I would also like the program to record how many pixels of that colour it has detected. My code so far is below, but it is not working.
#include "cv.h"
#include "highgui.h"
int main()
{
// Initialize capturing live feed from the camera
CvCapture* capture = 0;
capture = cvCaptureFromCAM(0);
// Couldn't get a device? Throw an error and quit
if(!capture)
{
printf("Could not initialize capturing...\n");
return -1;
}
// The two windows we'll be using
cvNamedWindow("video");
cvNamedWindow("thresh");
// An infinite loop
while(true)
{
// Will hold a frame captured from the camera
IplImage* frame = 0;
frame = cvQueryFrame(capture);
// If we couldn't grab a frame... quit
if(!frame)
break;
//create image where threshloded image will be stored
IplImage* imgThreshed = cvCreateImage(cvGetSize(frame), 8, 1);
//i want to keep it BGR format. Im not sure what colour i will be looking for yet. this can be easily changed
cvInRangeS(frame, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
//show the original feed and thresholded feed
cvShowImage("thresh", imgThreshed);
cvShowImage("video", frame);
// Wait for a keypress
int c = cvWaitKey(10);
if(c!=-1)
{
// If pressed, break out of the loop
break;
}
cvReleaseImage(&imgThreshed);
}
cvReleaseCapture(&capture);
return 0;
}
To threshold for a colour:
1) Convert the image to HSV.
2) Then apply cvInRangeS.
3) Once you have the thresholded image, you can count the number of white pixels in it.
Try this tutorial to track yellow colour: Tracking colored objects in OpenCV
I can tell you how to do it in both Python and C++, both with and without converting to HSV.
C++ Version (Converting to HSV)
Convert the image into an HSV image:
// Convert the image into an HSV image
IplImage* imgHSV = cvCreateImage(cvGetSize(img), 8, 3);
cvCvtColor(img, imgHSV, CV_BGR2HSV);
Create a new image that will hold the thresholded image:
IplImage* imgThreshed = cvCreateImage(cvGetSize(img), 8, 1);
Do the actual thresholding using cvInRangeS:
cvInRangeS(imgHSV, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
Here, imgHSV is the reference image. And the two cvScalars represent the lower and upper bound of values that are yellowish in colour. (These bounds should work in almost all conditions. If they don't, try experimenting with the last two values).
Consider any pixel. If all three values of that pixel (H, S and V, in that order) lie within the stated ranges, imgThreshed gets a value of 255 at that corresponding pixel. This is repeated for all pixels. So what you finally get is a thresholded image.
Use countNonZero to count the number of white pixels in the thresholded image.
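With the C-style API used above, that count is a one-liner (a sketch):
// count the white (non-zero) pixels in the thresholded image
int whitePixels = cvCountNonZero(imgThreshed);
printf("Number of yellow pixels: %d\n", whitePixels);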
Python Version (Without converting to HSV):
1. Create the lower and upper boundaries of the range you are interested in, in Numpy array format (note: you need to use import numpy as np):
lower = np.array((a,b,c), dtype = "uint8")
upper = np.array((x,y,z), dtype = "uint8")
In the above (a,b,c) is the lower bound and (x,y,z) is the upper bound.
2. Get the mask for the pixels that satisfy the range:
mask = cv2.inRange(image, lower, upper)
In the above, image is the image on which you want to work.
3. Count the number of white pixels that are present in the mask using countNonZero:
yellowpixels = cv2.countNonZero(mask)
print "Number of Yellow pixels are %d" % (yellowpixels)
Sources:
http://srikanthvidyasagar.blogspot.com/2016/01/tracking-colored-objects-in-opencv.html
http://www.pyimagesearch.com/2014/08/04/opencv-python-color-detection/