OpenCV inRange masking

I'm using OpenCV 3.0 to get only the colored objects in an image. To do that I create and use a mask.
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;
int main()
{
namedWindow("Display", CV_WINDOW_AUTOSIZE);
namedWindow("Orignial", CV_WINDOW_AUTOSIZE);
namedWindow("Mask", CV_WINDOW_AUTOSIZE);
// First load your image
Mat mSrc = imread("IMG_0005_AUSZUG2.png", CV_LOAD_IMAGE_COLOR);
Mat mGray = Mat::zeros(mSrc.size(), mSrc.type());
cvtColor(mSrc, mGray, CV_BGR2GRAY);
// define your mask
Mat mask = Mat::zeros(mSrc.size(), mSrc.type());
// define destination image
Mat dstImg = Mat::zeros(mSrc.size(), mSrc.type());
//finding mask
inRange(mSrc, Scalar(90, 90, 90), Scalar(180, 180, 180), mask);
// combination of mask and Source image
dilate(mask, mask, Mat(), Point(-1, -1));
bitwise_not(mask, mask);
//cvtColor(mask, mask, CV_GRAY2BGR);
mSrc.copyTo(dstImg, mask);
//bitwise_and(mSrc, mSrc, dstImg, mask);
imshow("Mask", mask);
imshow("Orignial", mSrc);
imshow("Display", dstImg);
waitKey(0);
return 0;
}
As you can see, the result image is not the intended one. Only the colored objects should remain, because they are marked white in the mask, but it looks as though the result is a combination of the source and the mask.
Does anybody know how to fix this?
Source:
Mask:
Result:

To restate your requirement: you have an image with some coloured objects on a white background, and you essentially want a result image containing the same coloured objects on a black background instead.
If that's the case, inRange will not help, because you've kept the threshold between grey values 90 and 180, so your code discards dark objects as well.
To ensure that you obtain a mask that is black only in the white background regions, I would suggest using the threshold function instead, as shown:
//finding mask
//inRange(mSrc, Scalar(90, 90, 90), Scalar(180, 180, 180), mask);
threshold(mGray, mask, 220, 255, THRESH_BINARY_INV);
With THRESH_BINARY_INV, any pixel whose greyscale value is above 220 is set to 0 in the binary mask, and every other pixel is set to 255, so the white background becomes black and the objects become white.
To superimpose the binary mask over the source image, you can use the subtract method twice, as shown below: the first subtraction zeroes the background and inverts the object pixels, and the second restores the original object values while leaving the background black:
cvtColor(mask,mask,CV_GRAY2BGR);//change thresh to a 3 channel image
Mat mResult = Mat::zeros(mSrc.size(), mSrc.type());
subtract(mask,mSrc,mResult);
subtract(mask,mResult,mResult);
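Alternatively (a minimal sketch, not part of the subtract approach above, assuming the background really is close to pure white), the single-channel mask produced by threshold can be used directly as a copy mask, just as in your original code:
// reset the destination and copy only the object pixels onto black
dstImg = Mat::zeros(mSrc.size(), mSrc.type());
threshold(mGray, mask, 220, 255, THRESH_BINARY_INV); // objects -> 255, background -> 0
mSrc.copyTo(dstImg, mask);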

Related

Estimate white background

I have an image with a white, uneven background (due to lighting). I'm trying to estimate the background colour and transform the image into one with a true white background. For this I estimated the white colour for each 15x15 pixel block based on its luminosity, which gives the following map (on the right):
Now I want to interpolate the colour so that the transition from one 15x15 block to its neighbours is smoother, and I also want to eliminate outliers (the pink dots on the left-hand side). Could anyone suggest a good technique/algorithm for this? (Ideally within the OpenCV library, but that is not a requirement.)
Starting from this image:
You could find the text on the whiteboard as the parts of your image that have a high gradient, and apply a little dilation to deal with the thick parts of the text. You'll get a mask that separates the background from the foreground pretty well:
Background:
Foreground:
You can then apply inpainting using the computed mask on the original image (you need the OpenCV photo module):
Just to show that this works independently of the text color, I tried on a different image:
Resulting in:
Code:
#include <opencv2/opencv.hpp>
#include <opencv2/photo.hpp>
using namespace cv;
void findText(const Mat3b& src, Mat1b& mask)
{
// Convert to grayscale
Mat1b gray;
cvtColor(src, gray, COLOR_BGR2GRAY);
// Compute gradient magnitude
Mat1f dx, dy, mag;
Sobel(gray, dx, CV_32F, 1, 0);
Sobel(gray, dy, CV_32F, 0, 1);
magnitude(dx, dy, mag);
// Remove low magnitude, keep only text
mask = mag > 10;
// Apply a dilation to deal with thick text
Mat1b K = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
dilate(mask, mask, K);
}
int main(int argc, const char * argv[])
{
Mat3b img = imread("path_to_image");
// Segment white
Mat1b mask;
findText(img, mask);
// Show intermediate images
Mat3b background = img.clone();
background.setTo(0, mask);
Mat3b foreground = img.clone();
foreground.setTo(0, ~mask);
// Apply inpainting
Mat3b inpainted;
inpaint(img, mask, inpainted, 21, CV_INPAINT_TELEA);
imshow("Original", img);
imshow("Foreground", foreground);
imshow("Background", background);
imshow("Inpainted", inpainted);
waitKey();
return 0;
}

How can I do image processing operations only in ROI part of original image directly?

Is it possible, using OpenCV, to do some image processing operations only on the ROI part of the original image?
I searched for some articles on the Internet. Most of the code looks like this:
int main(int argc, char** argv) {
cv::Mat image;
image = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);
cv::Rect roi( 100, 100,200, 200);
//do some operations on roi
cv::waitKey(0);
return 0;
}
Actually, this creates a new image called roi and then does some operations on that newly created image. I want to do the operations on the original image directly. For example, I want to apply a Gaussian blur only to the ROI of the original image and not blur any other part of the image.
The newly created roi image loses the information it had in the original image (such as its coordinates), and I want to keep that information.
Is it possible to do this in OpenCV? If so, how?
You can get the sub-image using either a Rect or two Ranges (see the OpenCV doc).
Mat3b img = imread("path_to_image");
img:
Rect r(100,100,200,200);
Mat3b roi3b(img(r));
As long as you don't change image type you can work on roi3b. All changes will be reflected in the original image img:
GaussianBlur(roi3b, roi3b, Size(), 10);
img after blur:
If you change type (e.g. from CV_8UC3 to CV_8UC1), you need to work on a deep copy, since a Mat can't have mixed types.
Mat1b roiGray;
cvtColor(roi3b, roiGray, COLOR_BGR2GRAY);
threshold(roiGray, roiGray, 200, 255, THRESH_BINARY);
You can always copy the results on the original image, taking care to correct the type:
Mat3b roiGray3b;
cvtColor(roiGray, roiGray3b, COLOR_GRAY2BGR);
roiGray3b.copyTo(roi3b);
img after threshold:
Full code for reference:
#include <opencv2/opencv.hpp>
using namespace cv;
int main(void)
{
Mat3b img = imread("path_to_image");
imshow("Original", img);
waitKey();
Rect r(100,100,200,200);
Mat3b roi3b(img(r));
GaussianBlur(roi3b, roi3b, Size(), 10);
imshow("After Blur", img);
waitKey();
Mat1b roiGray;
cvtColor(roi3b, roiGray, COLOR_BGR2GRAY);
threshold(roiGray, roiGray, 200, 255, THRESH_BINARY);
Mat3b roiGray3b;
cvtColor(roiGray, roiGray3b, COLOR_GRAY2BGR);
roiGray3b.copyTo(roi3b);
imshow("After Threshold", img);
waitKey();
return 0;
}
To blur only the required region, follow these steps:
cv::Rect roi(x, y, w, h);
cv::GaussianBlur(image(roi), image(roi), Size(0, 0), 4);
Follow this link for more information on these Mat::operator() overloads: http://docs.opencv.org/modules/core/doc/basic_structures.html#id6
Mat::operator()(Range rowRange, Range colRange)
Mat::operator()(const Rect& roi)
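For illustration, here is a minimal sketch of the Range-based overload (the row/column values are hypothetical); it selects a sub-header that shares data with the original, so the blur is applied to the original image in place, just like with a Rect:
// Range(100, 300) selects rows/columns 100..299; roi shares data with image
cv::Mat roi = image(cv::Range(100, 300), cv::Range(100, 300));
cv::GaussianBlur(roi, roi, cv::Size(0, 0), 4); // modifies image directly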
Here I have blurred the region of interest and segmented the blurred region; you can perform image processing operations on the blurred region of the original image, or on the segmented region.
#include <opencv2/opencv.hpp>
using namespace cv;
int main() {
Mat image;
image=imread("Light.jpg",1);
// image = cv::imread(argv[1], CV_LOAD_IMAGE_COLOR);
Rect roi( 100, 100,200, 200);
Mat blur;
GaussianBlur(image(roi), blur, Size(0, 0), 4);
imshow("blurred region",blur);
//do some operations on roi
imshow("aaaa",image);
waitKey(0);
return 0;
}

Extracting transparent background of an image with opencv

I have a mask calculated by grabCut (which segments the foreground). I want to extract only the background, leaving the foreground transparent. I managed to extract the foreground (with the background transparent) using the following code. How can I do the opposite?
int border = 20;
int border2 = border + border;
cv::Rect rectangle(border,border,image.cols-border2,image.rows-border2);
cv::Mat result; // segmentation result (4 possible values)
cv::Mat bgModel,fgModel; // models
cv::grabCut(image, // input image
result, // segmentation result
rectangle,// rectangle containing foreground
bgModel,fgModel, // models
1, // number of iterations
cv::GC_INIT_WITH_RECT); // use rectangle
cv::compare(result,cv::GC_PR_FGD,result,cv::CMP_EQ);
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result); // bg pixels not copied
cv::rectangle(image, rectangle, cv::Scalar(255,255,255),1);
cv::imwrite(argv[2], foreground);
cv::imwrite(argv[3], image);
Mat dst;//(src.rows,src.cols,CV_8UC4);
Mat tmp,alpha;
cvtColor(foreground,tmp,CV_BGR2GRAY);
threshold(tmp,alpha,100,255,THRESH_BINARY);
Mat rgb[3];
split(foreground,rgb);
Mat rgba[4]={rgb[0],rgb[1],rgb[2],alpha};
merge(rgba,4,dst);
imwrite("dst.png",dst);
Basically I think I've got to change these lines:
cv::Mat foreground(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(foreground,result); // bg pixels not copied
How is it possible to select the rest of the image, i.e. the opposite of result?
Just invert your mask as in:
cv::Mat background(image.size(),CV_8UC3,cv::Scalar(255,255,255));
image.copyTo(background, ~result); // fg pixels not copied
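If you also need the foreground to be transparent in a saved PNG (a sketch mirroring the alpha-merge code from your question; the output file name is just a placeholder), you can reuse the inverted mask as the alpha channel:
// use the inverted grabCut mask as alpha: background opaque, foreground transparent
cv::Mat alpha;
cv::bitwise_not(result, alpha);
cv::Mat bgr[3];
cv::split(background, bgr);
cv::Mat bgra[4] = { bgr[0], bgr[1], bgr[2], alpha };
cv::Mat dstBg;
cv::merge(bgra, 4, dstBg);
cv::imwrite("background.png", dstBg);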

In openCV, how to replace an RGB ROI in image

I have an RGB large-image, and an RGB small-image.
What is the fastest way to replace a region in the larger image with the smaller one?
Can I define a multi-channel ROI and then use copyTo? Or must I split each image into channels, replace the ROI, and then recombine them again into one?
Yes. A multi-channel ROI and copyTo will work. Something like:
#include <opencv2/opencv.hpp>
int main(int argc, char** argv)
{
cv::Mat src = cv::imread("c:/src.jpg");
//create a canvas with a 10-pixel border on each side (20 extra pixels per dimension). Set all pixels to yellow.
cv::Mat canvas(src.rows + 20, src.cols + 20, CV_8UC3, cv::Scalar(0, 255, 255));
//create an ROI that will map to the location we want to copy the image into
cv::Rect roi(10, 10, src.cols, src.rows);
//initialize the ROI in the canvas. canvasROI now points to the location we want to copy to.
cv::Mat canvasROI(canvas(roi));
//perform the copy.
src.copyTo(canvasROI);
cv::namedWindow("original", 256);
cv::namedWindow("canvas", 256);
cv::imshow("original", src);
cv::imshow("canvas", canvas);
cv::waitKey();
return 0;
}

Thresholding for a colour in opencv

I am trying to set up my programme to threshold for a colour (in BGR format). I have not fully decided which colour I will be looking for yet. I would also like the program to record how many pixels it has detected of that colour. My code so far is below but it is not working.
#include "cv.h"
#include "highgui.h"
int main()
{
// Initialize capturing live feed from the camera
CvCapture* capture = 0;
capture = cvCaptureFromCAM(0);
// Couldn't get a device? Throw an error and quit
if(!capture)
{
printf("Could not initialize capturing...\n");
return -1;
}
// The two windows we'll be using
cvNamedWindow("video");
cvNamedWindow("thresh");
// An infinite loop
while(true)
{
// Will hold a frame captured from the camera
IplImage* frame = 0;
frame = cvQueryFrame(capture);
// If we couldn't grab a frame... quit
if(!frame)
break;
//create the image where the thresholded image will be stored
IplImage* imgThreshed = cvCreateImage(cvGetSize(frame), 8, 1);
// I want to keep it in BGR format. I'm not sure what colour I will be looking for yet; this can be changed easily.
cvInRangeS(frame, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
//show the original feed and thresholded feed
cvShowImage("thresh", imgThreshed);
cvShowImage("video", frame);
// Wait for a keypress
int c = cvWaitKey(10);
if(c!=-1)
{
// If pressed, break out of the loop
break;
}
cvReleaseImage(&imgThreshed);
}
cvReleaseCapture(&capture);
return 0;
}
To threshold for a colour:
1) Convert the image to HSV.
2) Apply cvInRangeS.
3) Once you have the thresholded image, count the number of white pixels in it (see the sketch below).
Try this tutorial to track yellow color: Tracking colored objects in OpenCV
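For illustration, a minimal sketch of these steps applied inside the question's capture loop (reusing the bounds from the question, which correspond to yellow once the frame is converted to HSV):
// 1) convert the BGR frame to HSV
IplImage* imgHSV = cvCreateImage(cvGetSize(frame), 8, 3);
cvCvtColor(frame, imgHSV, CV_BGR2HSV);
// 2) threshold the colour range in HSV space
cvInRangeS(imgHSV, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
// 3) count the white pixels of the detected colour
int pixelCount = cvCountNonZero(imgThreshed);
printf("Detected %d pixels of the target colour\n", pixelCount);
cvReleaseImage(&imgHSV);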
I can tell how to do it in both Python and C++ and both with and without converting to HSV.
C++ Version (Converting to HSV)
Convert the image into an HSV image:
// Convert the image into an HSV image
IplImage* imgHSV = cvCreateImage(cvGetSize(img), 8, 3);
cvCvtColor(img, imgHSV, CV_BGR2HSV);
Create a new image that will hold the thresholded image:
IplImage* imgThreshed = cvCreateImage(cvGetSize(img), 8, 1);
Do the actual thresholding using cvInRangeS:
cvInRangeS(imgHSV, cvScalar(20, 100, 100), cvScalar(30, 255, 255), imgThreshed);
Here, imgHSV is the reference image, and the two cvScalars represent the lower and upper bounds of values that are yellowish in colour. (These bounds should work in almost all conditions; if they don't, try experimenting with the last two values.)
Consider any pixel. If all three values of that pixel (H, S and V, in that order) lie within the stated ranges, imgThreshed gets a value of 255 at that corresponding pixel. This is repeated for all pixels. So what you finally get is a thresholded image.
Use countNonZero to count the number of white pixels in the thresholded image.
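For example, with the C API used above (the variable name is illustrative):
// count the white (non-zero) pixels in the thresholded image
int yellowPixels = cvCountNonZero(imgThreshed);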
Python Version (Without converting to HSV):
1. Create the lower and upper boundaries of the range you are interested in, in NumPy array format (note: you need import numpy as np):
lower = np.array((a,b,c), dtype = "uint8")
upper = np.array((x,y,z), dtype = "uint8")
In the above (a,b,c) is the lower bound and (x,y,z) is the upper bound.
2. Get the mask for the pixels that satisfy the range:
mask = cv2.inRange(image, lower, upper)
In the above, image is the image on which you want to work.
3. Count the number of white pixels present in the mask using countNonZero:
yellowpixels = cv2.countNonZero(mask)
print "Number of Yellow pixels are %d" % (yellowpixels)
Sources:
http://srikanthvidyasagar.blogspot.com/2016/01/tracking-colored-objects-in-opencv.html
http://www.pyimagesearch.com/2014/08/04/opencv-python-color-detection/
