SurfFeatureDetector and creating an empty mask with Mat() - opencv

I would like to use SurfFeatureDetector to detect keypoints in a specified area of a picture:
Train_pic & Source_pic
Detect Train_pic keypoint_1 using SurfFeatureDetector.
Detect Source_pic keypoint_2 using SurfFeatureDetector in a specified area.
Compute and match.
The OpenCV documentation describes FeatureDetector::detect as below.
void FeatureDetector::detect(const Mat& image, vector<KeyPoint>& keypoints, const Mat& mask=Mat())
mask – Mask specifying where to look for keypoints (optional). Must be a char matrix with non-zero values in the region of interest.
Can anyone help explain how to create the mask=Mat() for Source_pic?
Thanks
Jay

You don't technically have to specify the empty matrix to use the detect function as it is the default parameter.
You can call detect like this:
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keyPoints;
detector->detect(anImage, keyPoints);
Or, by explicitly creating the empty matrix:
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keyPoints;
detector->detect(anImage, keyPoints, Mat());
If you want to create a mask in a region of interest, you could create one like this:
Assuming Source_pic is of type CV_8UC3, the mask should still be a single-channel 8-bit matrix (the "char matrix" the documentation asks for) of the same size:
Mat mask = Mat::zeros(Source_pic.size(), CV_8UC1);
// select a ROI (this header shares data with mask,
// so writing to roi modifies mask itself)
Mat roi(mask, Rect(10, 10, 100, 100));
// fill the ROI with 255 (non-zero = "look for keypoints here")
roi = Scalar(255);
EDIT: Had a copy-paste error in there. Set the ROI on the mask, and then pass the mask to the detect function.
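To tie the question's three steps together, here is a minimal sketch using the 2.4-style factory API (image loading and the mask construction above are assumed; note that SURF lives in the nonfree module, so cv::initModule_nonfree() must be called first in OpenCV 2.4):
Ptr<FeatureDetector> detector = FeatureDetector::create("SURF");
vector<KeyPoint> keypoints1, keypoints2;
detector->detect(Train_pic, keypoints1);        // step 1: whole training image
detector->detect(Source_pic, keypoints2, mask); // step 2: only inside the mask
Ptr<DescriptorExtractor> extractor = DescriptorExtractor::create("SURF");
Mat descriptors1, descriptors2;
extractor->compute(Train_pic, keypoints1, descriptors1);
extractor->compute(Source_pic, keypoints2, descriptors2);
BFMatcher matcher(NORM_L2);                     // step 3: compute and match
vector<DMatch> matches;
matcher.match(descriptors1, descriptors2, matches);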
Hope that clears things up!

Related

OpenCV contour evolution

I have a contour that I would like to "snap" to edges in an image. That is, something like Intelligent Scissors, but for the whole contour at the same time. A user has provided a rough sketch of the outline of an object, and I'd like to clean it up by "pushing" each point on the contour to the nearest point in an edge image.
Does something like this exist in OpenCV?
You can mimic active contours using cv::grabCut as suggested. You choose the radius of attraction (how far from its original position the curve can evolve), and, by using dilated and eroded copies of the mask, you define the unknown region around the contour.
// cv::Mat img, mask; // contour drawn on mask as a filled polygon
if ( mask.size() != img.size() )
    CV_Error(CV_StsError, "ERROR");
int R = 32; // radius of attraction
cv::Mat strel = cv::getStructuringElement( cv::MORPH_ELLIPSE, cv::Size(2*R+1, 2*R+1) );
cv::Mat gc( mask.size(), CV_8UC1, cv::Scalar(cv::GC_BGD) );
cv::Mat t;
cv::dilate( mask, t, strel );
gc.setTo( cv::GC_PR_BGD, t );    // within R of the contour: probably background
gc.setTo( cv::GC_PR_FGD, mask ); // GC_PR_FGD == 3
cv::erode( mask, t, strel );
gc.setTo( cv::GC_FGD, t );       // GC_FGD == 1: definitely foreground
cv::Mat bgdModel, fgdModel;
cv::grabCut( img, gc, cv::Rect(), bgdModel, fgdModel, 2, cv::GC_INIT_WITH_MASK );
gc &= 0x1; // keep foreground or probably-foreground (both have bit 0 set)
gc *= 255; // so that you see it
What you may lose is the topology of the contour; some processing is required there. Also, you cannot control the curvature or smoothness of the contour, and it's not really contour evolution in the strict sense.
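If you need the result back as a contour rather than a label image, a minimal sketch (reusing gc from above, after the masking step) could be:
cv::Mat bin = gc.clone(); // findContours modifies its input
std::vector<std::vector<cv::Point> > contours;
cv::findContours( bin, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE );
// contours now holds the evolved outline(s); restoring the original topology
// (a single closed curve with consistent point ordering) is up to you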
If you are interested, ITK's geodesic active contour might be what you are looking for: http://www.itk.org/Doxygen/html/classitk_1_1GeodesicActiveContourLevelSetImageFilter.html

Detecting object regions in an image with OpenCV

We're currently trying to detect the object regions in medical instrument images using the methods available in OpenCV (C++ version). An example image is shown below:
Here are the steps we're following:
Convert the image to grayscale
Apply a median filter
Find edges using a Sobel filter
Convert the result to a binary image using a threshold of 25
Skeletonize the image to make sure we have neat edges
Find the X largest connected components
This approach works perfectly for image 1, and here is the result:
The yellow borders are the connected components detected.
The rectangles are just to highlight the presence of a connected component.
To get understandable results, we just removed the connected components that are completely inside another one, so the end result is something like this:
So far everything was fine, but another sample image, shown below, complicated our work.
Having a small light-green towel under the objects results in this image:
After filtering the regions as we did earlier, we got this:
Obviously, it is not what we need; we're expecting something like this:
I'm thinking about clustering the closest connected components found (somehow!) so we can minimize the impact of the towel's presence, but I don't know yet if that's doable or whether someone has tried something like this before. Also, does anyone have a better idea for overcoming this kind of problem?
Thanks in advance.
Here's what I tried.
In the images, the background is mostly greenish and the area of the background is considerably larger than that of the foreground. So, if you take a color histogram of the image, the greenish bins will have higher values. Threshold this histogram so that bins having smaller values are set to zero. This way we'll most probably retain the greenish (higher value) bins and discard other colors. Then backproject this histogram. The backprojection will highlight these greenish regions in the image.
Backprojection:
Then threshold this backprojection. This gives us the background.
Background (after some morphological filtering):
Invert the background to get foreground.
Foreground (after some morphological filtering):
Then find the contours of the foreground.
I think this gives a reasonable segmentation, and using this as a mask you may be able to use a segmentation algorithm like GrabCut to refine the boundaries (I haven't tried this yet).
EDIT:
I tried the GrabCut approach and it indeed refines the boundaries. I've added the code for GrabCut segmentation.
Contours:
GrabCut segmentation using the foreground as mask:
I'm using the OpenCV C API for the histogram processing part.
// load the color image
IplImage* im = cvLoadImage("bFly6.jpg");
// get the color histogram
IplImage* im32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 3);
cvConvertScale(im, im32f);
int channels[] = {0, 1, 2};
int histSize[] = {32, 32, 32};
float rgbRange[] = {0, 256};
float* ranges[] = {rgbRange, rgbRange, rgbRange};
CvHistogram* hist = cvCreateHist(3, histSize, CV_HIST_ARRAY, ranges);
IplImage* b = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* g = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* r = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* backproject32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 1);
IplImage* backproject8u = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplImage* bw = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplConvKernel* kernel = cvCreateStructuringElementEx(3, 3, 1, 1, CV_SHAPE_ELLIPSE);
cvSplit(im32f, b, g, r, NULL);
IplImage* planes[] = {b, g, r};
cvCalcHist(planes, hist);
// find min and max values of histogram bins
float minval, maxval;
cvGetMinMaxHistValue(hist, &minval, &maxval);
// threshold the histogram. this sets the bin values that are below the threshold to zero
cvThreshHist(hist, maxval/32);
// backproject the thresholded histogram. backprojection should contain higher values for the
// background and lower values for the foreground
cvCalcBackProject(planes, backproject32f, hist);
// convert to 8u type
double min, max;
cvMinMaxLoc(backproject32f, &min, &max);
cvConvertScale(backproject32f, backproject8u, 255.0 / max);
// threshold backprojected image. this gives us the background
cvThreshold(backproject8u, bw, 10, 255, CV_THRESH_BINARY);
// some morphology on background
cvDilate(bw, bw, kernel, 1);
cvMorphologyEx(bw, bw, NULL, kernel, CV_MOP_CLOSE, 2);
// get the foreground
cvSubRS(bw, cvScalar(255, 255, 255), bw);
cvMorphologyEx(bw, bw, NULL, kernel, CV_MOP_OPEN, 2);
cvErode(bw, bw, kernel, 1);
// find contours of the foreground
//CvMemStorage* storage = cvCreateMemStorage(0);
//CvSeq* contours = 0;
//cvFindContours(bw, storage, &contours);
//cvDrawContours(im, contours, CV_RGB(255, 0, 0), CV_RGB(0, 0, 255), 1, 2);
// grabcut
Mat color(im);
Mat fg(bw);
Mat mask(bw->height, bw->width, CV_8U);
mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, fg);
Mat bgdModel, fgdModel;
grabCut(color, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);
Mat gcfg = mask == GC_PR_FGD;
vector<vector<cv::Point>> contours;
vector<Vec4i> hierarchy;
findContours(gcfg, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
for(int idx = 0; idx < contours.size(); idx++)
{
drawContours(color, contours, idx, Scalar(0, 0, 255), 2);
}
// cleanup ...
UPDATE: We can do the above using the C++ interface as shown below.
const int channels[] = {0, 1, 2};
const int histSize[] = {32, 32, 32};
const float rgbRange[] = {0, 256};
const float* ranges[] = {rgbRange, rgbRange, rgbRange};
Mat hist;
Mat im32fc3, backpr32f, backpr8u, backprBw, kernel;
Mat im = imread("bFly6.jpg");
im.convertTo(im32fc3, CV_32FC3);
calcHist(&im32fc3, 1, channels, Mat(), hist, 3, histSize, ranges, true, false);
calcBackProject(&im32fc3, 1, channels, hist, backpr32f, ranges);
double minval, maxval;
minMaxIdx(backpr32f, &minval, &maxval);
threshold(backpr32f, backpr32f, maxval/32, 255, THRESH_TOZERO);
backpr32f.convertTo(backpr8u, CV_8U, 255.0/maxval);
threshold(backpr8u, backprBw, 10, 255, THRESH_BINARY);
kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
dilate(backprBw, backprBw, kernel);
morphologyEx(backprBw, backprBw, MORPH_CLOSE, kernel, Point(-1, -1), 2);
backprBw = 255 - backprBw;
morphologyEx(backprBw, backprBw, MORPH_OPEN, kernel, Point(-1, -1), 2);
erode(backprBw, backprBw, kernel);
Mat mask(backpr8u.rows, backpr8u.cols, CV_8U);
mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, backprBw);
Mat bgdModel, fgdModel;
grabCut(im, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);
Mat fg = mask == GC_PR_FGD;
I would consider a few options. My assumption is that the camera does not move. I haven't used the images or written any code, so this is mostly from experience.
Rather than just looking for edges, try separating the background using a segmentation algorithm. A Mixture of Gaussians model can help with this. Given a set of images over the same region (i.e. video), you can cancel out regions which are persistent. Then, new items such as instruments will pop out. Connected components can then be used on the blobs.
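As a rough sketch of that idea, using OpenCV 2.4's BackgroundSubtractorMOG2 (the video filename is hypothetical; the exact API varies across OpenCV versions):
cv::VideoCapture cap("surgery.avi"); // hypothetical video of the scene
cv::BackgroundSubtractorMOG2 bg;     // learns a per-pixel Gaussian mixture
cv::Mat frame, fgMask;
while (cap.read(frame))
{
    bg(frame, fgMask); // non-zero pixels = material not explained by the background model
}
// run findContours / connected components on fgMask afterwards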
I would look at segmentation algorithms to see if you can optimize the conditions to make this work for you. One major item is to make sure your camera is stable, or that you stabilize the images yourself in pre-processing.
I would consider using interest points to identify regions in the image with a lot of new material. Given that the background is relatively plain, small objects such as needles will create a bunch of interest points. The towel should be much more sparse. Perhaps overlaying the detected interest points over the connected component footprint will give you a "density" metric which you can then threshold. If the connected component has a large ratio of interest points for the area of the item, then it is an interesting object.
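For example, a small sketch of that density metric (the keypoints come from any detector and rect is a component's bounding box; both are assumed to come from earlier steps):
// density of interest points inside one component's bounding box
double interestDensity(const std::vector<cv::KeyPoint>& keypoints, const cv::Rect& rect)
{
    int hits = 0;
    for (size_t k = 0; k < keypoints.size(); k++)
    {
        cv::Point p(cvRound(keypoints[k].pt.x), cvRound(keypoints[k].pt.y));
        if (rect.contains(p))
            hits++;
    }
    return (double)hits / rect.area(); // threshold this to keep "busy" components
}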
On this note, you can even clean up the connected component footprint by using a Convex Hull to prune the objects you have detected. This may help situations such as a medical instrument casting a shadow on the towel which stretches the component region. This is a guess, but interest points can definitely give you more information than just edges.
Finally, given that you have a stable background with clear objects in view, I would take a look at Bag-of-Features to see if you can just detect each individual object in the image. This may be useful since there seems to be a consistent pattern to the objects in these images. You can build a big database of images such as needles, gauze, scissors, etc. Then BoF, which is in OpenCV, can find those candidates for you. You can also mix it with the other operations you are doing to compare results. A sketch of the OpenCV BoF classes follows the links below.
Bag of Features using OpenCV
http://www.codeproject.com/Articles/619039/Bag-of-Features-Descriptor-on-SIFT-Features-with-O
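As a hedged sketch of OpenCV's BoF machinery (trainingDescriptors and the 100-word vocabulary size are assumptions; the classifier over the resulting histograms, e.g. an SVM, is left out):
cv::Mat trainingDescriptors;          // stacked SIFT descriptors from your database
                                      // images (needles, gauze, ...) - assumed
cv::BOWKMeansTrainer bowTrainer(100); // 100 visual words (arbitrary choice)
bowTrainer.add(trainingDescriptors);
cv::Mat vocabulary = bowTrainer.cluster();
cv::Ptr<cv::DescriptorExtractor> extractor = cv::DescriptorExtractor::create("SIFT");
cv::Ptr<cv::DescriptorMatcher> matcher = cv::DescriptorMatcher::create("FlannBased");
cv::BOWImgDescriptorExtractor bowDE(extractor, matcher);
bowDE.setVocabulary(vocabulary);
std::vector<cv::KeyPoint> keypoints;  // from any detector run on the query image
cv::Mat bowHistogram;
bowDE.compute(image, keypoints, bowHistogram); // feed bowHistogram to your classifier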
I would also suggest an improvement to your initial version: you can skip the contours whose regions have width and height greater than half the image width and height.
// take the bounding rect of the contour
Rect rect = Imgproc.boundingRect(contours.get(i));
if (rect.width < inputImageWidth / 2 && rect.height < inputImageHeight / 2) {
    // then continue to draw or use it for the next steps
}

The best and quickest method for detecting a quadrilateral shape in an image using OpenCV?

For the last few days I've been looking for a good and quick method for finding a quadrilateral shape in an image.
For example, take a look at attached image.
I want to find the edges of white screen part (the TV screen in this case).
I can replace the white canvas with whatever I want, e.g. a QR code, some texture, etc.; I'm just looking for the coordinates of that shape.
Other requirements:
Only one shape should be detected.
Perspective transform should be used.
The language is not that important, but I want to use OpenCV for this.
These are good algorithms that have been implemented in OpenCV:
Harris corner detector as GoodFeaturesToTrackDetector
GoodFeaturesToTrackDetector harris_detector (1000, 0.01, 10, 3, true);
vector<KeyPoint> keypoints;
cvtColor (image, gray_image, CV_BGR2GRAY);
harris_detector.detect (gray_image, keypoints);
Fast corner detector as FeatureDetector::create("FAST") and FASTX
Ptr<FeatureDetector> feature_detector = FeatureDetector::create("FAST");
vector<KeyPoint> keypoints;
cvtColor (image, gray_image, CV_BGR2GRAY);
feature_detector->detect (gray_image, keypoints);
Or
FASTX (gray_image, keypoints, 50, true, FastFeatureDetector::TYPE_9_16);
SIFT (Scale-Invariant Feature Transform) as FeatureDetector::create("SIFT")
Ptr<FeatureDetector> feature_detector = FeatureDetector::create("SIFT");
vector<KeyPoint> keypoints;
cvtColor (image, gray_image, CV_BGR2GRAY);
feature_detector->detect (gray_image, keypoints);
Update for the perspective transform (you must know the 4 points beforehand):
Point2f source[4], destination[4];
// Assign values to the source and destination points.
Mat perspective_matrix = getPerspectiveTransform( source, destination );
Mat result;
warpPerspective( image, result, perspective_matrix, image.size() );
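For instance, a minimal sketch with hypothetical corner coordinates (in practice they would come from your detection step), mapping the detected screen to an upright 640x480 rectangle; keep the corners in the same order in both arrays:
Point2f source[4] = { Point2f(120, 80),  Point2f(520, 95),
                      Point2f(540, 400), Point2f(100, 380) };   // detected screen corners
Point2f destination[4] = { Point2f(0, 0),     Point2f(640, 0),
                           Point2f(640, 480), Point2f(0, 480) }; // upright rectangle
Mat perspective_matrix = getPerspectiveTransform(source, destination);
Mat result;
warpPerspective(image, result, perspective_matrix, Size(640, 480));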

Comparing blob detection and Structural Analysis and Shape Descriptors in OpenCV

I need to use blob detection and Structural Analysis and Shape Descriptors (more specifically findContours, drawContours and moments) to detect colored circles in an image. I need to know the pros and cons of each method and which method is better. Can anyone show me the differences between these two methods, please?
As @scap3y suggested in the comments, I'd go for a much simpler approach. What I'm always doing in these cases is something similar to this:
// Convert your image to HSV color space
Mat hsv;
hsv.create(originalImage.size(), CV_8UC3);
cvtColor(originalImage,hsv,CV_RGB2HSV);
// Chose the range in each of hue, saturation and value and threshold the other pixels
Mat thresholded;
uchar loH = 130, hiH = 170;
uchar loS = 40, hiS = 255;
uchar loV = 40, hiV = 255;
inRange(hsv, Scalar(loH, loS, loV), Scalar(hiH, hiS, hiV), thresholded);
// Find contours in the image (additional step could be to
// apply morphologyEx() first)
vector<vector<Point>> contours;
findContours(thresholded, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
// Draw your contours as ellipses into the original image
// (valuable_rectangle_indices holds the indices of the contours
// you decided to keep after filtering)
RotatedRect rect;
for (int i = 0; i < (int)valuable_rectangle_indices.size(); i++) {
    rect = minAreaRect(contours[valuable_rectangle_indices[i]]);
    ellipse(originalImage, rect, Scalar(0, 0, 255)); // draw ellipse
}
The only thing left for you to do now is to figure out what range your markers fall into in HSV color space.
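One quick way to find that range is to convert a single pixel of the known marker color and read off its HSV values; a small sketch (the RGB values here are hypothetical):
Mat probe(1, 1, CV_8UC3, Scalar(200, 30, 60)); // one pixel of your marker color
Mat probeHsv;
cvtColor(probe, probeHsv, CV_RGB2HSV);         // same conversion as above
Vec3b v = probeHsv.at<Vec3b>(0, 0);
// center loH..hiH (and the S/V bounds) around v[0], v[1], v[2]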

Error using cvCountNonZero in OpenCV

I am trying to count the number of non-zero pixels in a contour retrieved from a Canny edge image using OpenCV (C API). I am using cvFindNextContour to find the subsequent contour retrieved using a contour scanner.
But when I use cvCountNonZero on the contour, an error shows up:
Bad flag (parameter or structure field) (Unrecognized or unsupported array type)
in function cvGetMat, C:\User\..\cvarray.cpp(2881)
My code is:
cvCvtColor(image, gray, CV_BGR2GRAY);
cvCanny(gray, edge, (float)edge_thresh, (float)edge_thresh*4, 3);
sc = cvStartFindContours( edge, mem,
sizeof(CvContour),
CV_RETR_LIST,
CV_CHAIN_APPROX_SIMPLE,
cvPoint(0,0) );
while((contour = cvFindNextContour(sc))!=NULL)
{
CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
printf("%d\n",cvCountNonZero(contour));
cvDrawContours(final, contour, color, CV_RGB(0,0,0), -1, 1, 8, cvPoint(0,0));
}
Any kind of help is highly appreciated. Thanks in advance.
cvCountNonZero(CvArr*) is for finding the number of non-zero elements in an array or IplImage, not for the CvSeq* contour type. That is why the error occurs. Here is the solution to the problem:
CvRect rect = cvBoundingRect( contour, 0);
cvSetImageROI(img1,rect);
cout<<cvCountNonZero(img1)<<endl;
cvResetImageROI(img1);
//where img1 is the binary image in which you find the contours.
The code can be explained in the following way:
1. First make a rectangular region around each contour.
2. Set the image ROI to that particular region.
3. Now use the cvCountNonZero() function to find the number of non-zero pixels in the region.
4. Reset the image ROI.
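Note that the bounding rectangle may also contain edge pixels belonging to other contours. If that matters, a hedged alternative is to rasterize just this contour into a scratch mask and count there (maskImg is an assumed single-channel 8-bit image of the same size as img1):
cvZero(maskImg);
// draw only this contour (max_level 0), filled
cvDrawContours(maskImg, contour, cvScalar(255), cvScalar(255), 0, CV_FILLED, 8, cvPoint(0,0));
// keep only the edge pixels of img1 that fall inside this contour
cvAnd(img1, maskImg, maskImg);
printf("%d\n", cvCountNonZero(maskImg));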
Happy coding.
