Segmentation Edges - OpenCV

I am working on a project where I need to detect the contours of different types of documents.
Currently I am able to segment and detect contours using findContours, and everything works fine in most cases.
But if the document is white and the background is a similar near-white color, I can't detect the contour.
For example, in this image
or this one: http://i.stack.imgur.com/9exrg.jpg
I can't detect the white paper.
Here is the code I am using to segment the image in order to detect perfect edges (edges with no holes, perfectly straight).
public static Mat process(Mat original){
    Mat src = original.clone();
    Mat hsvMat = new Mat();
    Mat gray = new Mat();
    Mat sobx = new Mat();
    Mat soby = new Mat();
    Mat grad_abs_val_approx = new Mat();

    // split into HSV channels and keep working on the value channel
    Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
    List<Mat> hsv_channels = new ArrayList<Mat>(3);
    Core.split(hsvMat, hsv_channels);
    Mat hue = hsv_channels.get(0);
    Mat sat = hsv_channels.get(1);
    Mat val = hsv_channels.get(2);

    Imgproc.GaussianBlur(val, gray, new Size(9, 9), 2, 2);

    Mat imf = new Mat();
    gray.convertTo(imf, CvType.CV_32FC1, 0.5f, 0.5f);

    // approximate the absolute value of the gradient: sqrt(sobx^2 + soby^2)
    Imgproc.Sobel(imf, sobx, -1, 1, 0);
    Imgproc.Sobel(imf, soby, -1, 0, 1);
    sobx = sobx.mul(sobx);
    soby = soby.mul(soby);
    Mat sumxy = new Mat();
    Core.add(sobx, soby, sumxy);
    Core.pow(sumxy, 0.5, grad_abs_val_approx);
    sobx.release();
    soby.release();
    sumxy.release();

    Mat filtered = new Mat();
    Imgproc.GaussianBlur(grad_abs_val_approx, filtered, new Size(9, 9), 2, 2);

    // threshold at mean + stdev of the gradient magnitude
    final MatOfDouble mean = new MatOfDouble();
    final MatOfDouble stdev = new MatOfDouble();
    Core.meanStdDev(filtered, mean, stdev);
    Mat thresholded = new Mat();
    Imgproc.threshold(filtered, thresholded, mean.toArray()[0] + stdev.toArray()[0], 1.0, Imgproc.THRESH_TOZERO);

    Mat converted = new Mat();
    thresholded.convertTo(converted, CvType.CV_8UC1);
    return converted;
}
Using the code above leads to the following result:
As you can see, the edges are not perfectly straight (and there are holes). The edges are barely visible, and findContours fails to detect the contours.
I have tried all the solutions/suggestions described here.
Therefore, here are my questions:
1) What's wrong with my code?
2) How can I preprocess the image in order to detect perfect edges (edges with no holes, perfectly straight) for contour detection?
Many thanks in advance for your assistance.

When you see that the edges are not easily detected, you can try other complementary approaches.
For example, on this image, this simple MATLAB code (which you can implement using OpenCV):
I = imread('page.png');
r = I(:,:,1);
g = I(:,:,2);
b = I(:,:,3);
imshow(b > g);
Produces the following result, which you can then feed into your edge detection code:
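For reference, here is the same blue-greater-than-green test in OpenCV Python (a minimal sketch; the file name comes from the MATLAB snippet above):
import cv2

img = cv2.imread('page.png')          # loaded as BGR by default
b, g, r = cv2.split(img)
mask = cv2.compare(b, g, cv2.CMP_GT)  # 255 where b > g, 0 elsewhere
cv2.imshow('b > g', mask)
cv2.waitKey(0)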

Related

OpenCV: How can I remove unwanted blobs or how can I copy wanted parts into an empty image?

From the following image, how could I find the result image?
The images shown here are threshold images. I have tried using morphological operators but they even remove the blob I want. How could I solve this problem?
Any hints?
Following is the result image I am interested to get/find:
import cv2
diff = cv2.imread('Image.png',0)
ret, thresh = cv2.threshold(diff, 12.5, 255, cv2.THRESH_BINARY)
thresh = cv2.dilate(thresh, None, iterations = 1)
cv2.imshow('img', thresh) # This is the first picture I have shown
cv2.waitKey(0)
You are most of the way there; all you need to do now is find the blobs, add some contours, and find the biggest one. Easy! Below is the code in C++; I'll leave it up to you to work out how to convert it to Python (though see the sketch at the end of this answer):
cv::Mat mat = imread("g0cVU.png");
Mat origImage = mat;
Mat canny_output = mat;
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
cv::Mat greyMat, colorMat;
cv::cvtColor(mat, greyMat, CV_BGR2GRAY);
int thresh = 100;
RNG rng(12345);

/// Detect edges using Canny
Canny(greyMat, canny_output, thresh, thresh * 2, 3);

/// Find contours
findContours(canny_output, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

double largest_area = 0; // contourArea returns a double
int largest_contour_index = 0;
Rect bounding_rect;

/// Draw contours
Mat drawing = Mat::zeros(canny_output.size(), CV_8UC3);
for (int i = 0; i < contours.size(); i++)
{
    Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
    drawContours(drawing, contours, i, color, 2, 8, hierarchy, 0, Point());
    double a = contourArea(contours[i], false); // find the area of the contour
    if (a > largest_area) {
        largest_area = a;
        largest_contour_index = i;                 // store the index of the largest contour
        bounding_rect = boundingRect(contours[i]); // bounding rectangle of the biggest contour
    }
}
rectangle(origImage, bounding_rect, Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255)), 2);

/// Show in a window
namedWindow("Contours", CV_WINDOW_AUTOSIZE);
imshow("Contours", drawing);
cv::imshow("mat", origImage);
cv::imshow("mat123", drawing);
cv::waitKey(0);
Which gives this result:
You can see in the bottom image that the largest contour has a brown rectangle drawn around it.
Oh, and obviously, once you have the largest blob (or whatever blob you deem "the correct one"), you can just set everything else to black, which is fairly straightforward.
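A rough Python conversion of the whole approach, as promised above (a sketch, assuming the OpenCV 4.x findContours signature and the threshold value from the question; the final masking step implements the "set everything else to black" note):
import cv2
import numpy as np

img = cv2.imread('Image.png', 0)
ret, thresh = cv2.threshold(img, 12.5, 255, cv2.THRESH_BINARY)

# OpenCV 4.x returns (contours, hierarchy); 3.x returns three values
contours, hierarchy = cv2.findContours(thresh, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
largest = max(contours, key=cv2.contourArea)
x, y, w, h = cv2.boundingRect(largest)

# keep only the largest blob: draw it filled on an empty mask
mask = np.zeros_like(thresh)
cv2.drawContours(mask, [largest], -1, 255, -1)
result = cv2.bitwise_and(thresh, mask)

cv2.rectangle(result, (x, y), (x + w, y + h), 255, 2)
cv2.imshow('largest blob', result)
cv2.waitKey(0)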

Remove de-focused region from an image with OpenCV

I have an image with 2 regions (a focused region and a de-focused region). Using OpenCV, I want to detect the near (focused) region.
I applied watershed in OpenCV and the Canny detector to detect the object, but the detected object includes both the near and far regions.
So I need an idea, or help from anyone, on how to apply OpenCV to detect only the near region of the image.
The code that produces the attached image is below.
private Mat CalculateMapStrength(Mat inputMat){
    Imgproc.cvtColor(inputMat, inputMat, Imgproc.COLOR_RGBA2GRAY);
    // compute dx and dy derivatives
    Mat dx = new Mat();
    Mat dy = new Mat();
    Imgproc.Sobel(inputMat, dx, CvType.CV_32F, 1, 0);
    Imgproc.Sobel(inputMat, dy, CvType.CV_32F, 0, 1);
    Core.convertScaleAbs(dx, dx);
    Core.convertScaleAbs(dy, dy);
    // strength map: average of the absolute derivatives
    Mat outputMat = new Mat();
    Core.addWeighted(dx, 0.5, dy, 0.5, 0, outputMat);
    return outputMat;
}
Besides, I get image segmentation with the watershed algorithm in OpenCV. Can I combine the two results to detect the object? How would I combine them?
public Mat steptowatershed(Mat img)
{
    Mat threeChannel = new Mat();
    Imgproc.cvtColor(img, threeChannel, Imgproc.COLOR_BGR2GRAY);
    Imgproc.threshold(threeChannel, threeChannel, 100, 255, Imgproc.THRESH_BINARY);

    // sure foreground: eroded binary image (255)
    Mat fg = new Mat(img.size(), CvType.CV_8U);
    Imgproc.erode(threeChannel, fg, new Mat());
    // sure background: dilated and inverted binary image (128)
    Mat bg = new Mat(img.size(), CvType.CV_8U);
    Imgproc.dilate(threeChannel, bg, new Mat());
    Imgproc.threshold(bg, bg, 1, 128, Imgproc.THRESH_BINARY_INV);

    // markers: 255 = foreground, 128 = background, 0 = unknown
    Mat markers = new Mat(img.size(), CvType.CV_8U, new Scalar(0));
    Core.add(fg, bg, markers);

    Mat result1 = new Mat();
    WatershedSegmenter segmenter = new WatershedSegmenter();
    segmenter.setMarkers(markers);
    Imgproc.cvtColor(img, img, Imgproc.COLOR_RGBA2RGB);
    result1 = segmenter.process(img);
    return result1;
}
public class WatershedSegmenter
{
    public Mat markers = new Mat();

    public void setMarkers(Mat markerImage)
    {
        markerImage.convertTo(markers, CvType.CV_32SC1);
    }

    public Mat process(Mat image)
    {
        Imgproc.watershed(image, markers);
        markers.convertTo(markers, CvType.CV_8U);
        return markers;
    }
}

OpenCV sharpen the edges (edges with no holes)

I am trying to detect the biggest/largest rectangular shape and draw a bounding box around the detected area.
In my use case, very often (though not always) the object that represents the rectangular shape is white and the background is also a color very similar to white.
Before detecting contours, I preprocess the image in order to detect perfect edges.
My problem is that I can't detect edges perfectly, and I have a lot of noise even after blurring and using adaptiveThreshold or threshold.
The original image I used for contour detection:
I have tried different ways to detect perfect edges in different lighting conditions, without success.
How can I process the image in order to detect perfect edges (edges with no holes) for contour detection?
Below is the code I am using:
public static Mat findRectangleX(Mat original) {
    Mat src = original.clone();
    Mat gray = new Mat();
    Mat binary = new Mat();
    MatOfPoint2f approxCurve;
    List<MatOfPoint> contours = new ArrayList<MatOfPoint>();

    if (original.type() != CvType.CV_8U) {
        Imgproc.cvtColor(original, gray, Imgproc.COLOR_BGR2GRAY);
    } else {
        original.copyTo(gray);
    }
    Imgproc.GaussianBlur(gray, gray, new Size(5, 5), 0);
    Imgproc.adaptiveThreshold(gray, binary, 255, Imgproc.ADAPTIVE_THRESH_GAUSSIAN_C, Imgproc.THRESH_BINARY_INV, 11, 1);
    //Imgproc.threshold(gray, binary, 0, 255, Imgproc.THRESH_BINARY_INV | Imgproc.THRESH_OTSU);

    double maxArea = 0;
    Imgproc.findContours(binary, contours, new Mat(), Imgproc.RETR_LIST, Imgproc.CHAIN_APPROX_SIMPLE);
    for (int i = 0; i < contours.size(); i++) {
        MatOfPoint contour = contours.get(i);
        MatOfPoint2f temp = new MatOfPoint2f(contour.toArray());
        double area = Imgproc.contourArea(contour);
        approxCurve = new MatOfPoint2f();
        Imgproc.approxPolyDP(temp, approxCurve, Imgproc.arcLength(temp, true) * 0.03, true);
        if (approxCurve.total() == 4) {
            Rect rect = Imgproc.boundingRect(contours.get(i));
            Imgproc.rectangle(src, rect.tl(), rect.br(), new Scalar(255, 0, 0, .8), 4);
            if (maxArea < area)
                maxArea = area;
        }
    }
    Log.v(TAG, "Total contours found : " + contours.size());
    Log.v(TAG, "Max area :" + maxArea);
    return src;
}
I've searched for similar problems on Stack Overflow and tried the code samples, but none of them worked for me. The difficulty, I think, is the white object on a white background.
How can I process the image in order to sharpen the edges for contour detection?
How can I detect the biggest/largest rectangular shape and draw a rectangle around the detected shape?
// Updated: 20/02/2017
I have tried the solution suggested by @Nejc in the post below. The segmentation is better, but I still have holes in the contour, and findContours fails to detect the larger contour.
Below is the code provided by @Nejc, translated to Java.
public static Mat process(Mat original){
    Mat src = original.clone();
    Mat hsvMat = new Mat();
    Mat saturation = new Mat();
    Mat sobx = new Mat();
    Mat soby = new Mat();
    Mat grad_abs_val_approx = new Mat();

    Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
    List<Mat> hsv_channels = new ArrayList<Mat>(3);
    Core.split(hsvMat, hsv_channels);
    Mat hue = hsv_channels.get(0);
    Mat sat = hsv_channels.get(1);
    Mat val = hsv_channels.get(2);

    Imgproc.GaussianBlur(sat, saturation, new Size(9, 9), 2, 2);

    Mat imf = new Mat();
    saturation.convertTo(imf, CvType.CV_32FC1, 0.5f, 0.5f);

    Imgproc.Sobel(imf, sobx, -1, 1, 0);
    Imgproc.Sobel(imf, soby, -1, 0, 1);
    sobx = sobx.mul(sobx);
    soby = soby.mul(soby);

    // NOTE: this diverges from the C++ original, which computes
    // sqrt(sobx + soby) in floating point. convertScaleAbs saturates the
    // squared gradients to 8 bits, and addWeighted skips the square root,
    // which is the likely reason the output differs (see question 1 below).
    Mat abs_x = new Mat();
    Core.convertScaleAbs(sobx, abs_x);
    Mat abs_y = new Mat();
    Core.convertScaleAbs(soby, abs_y);
    Core.addWeighted(abs_x, 1, abs_y, 1, 0, grad_abs_val_approx);
    sobx.release();
    soby.release();

    Mat filtered = new Mat();
    Imgproc.GaussianBlur(grad_abs_val_approx, filtered, new Size(9, 9), 2, 2);

    final MatOfDouble mean = new MatOfDouble();
    final MatOfDouble stdev = new MatOfDouble();
    Core.meanStdDev(filtered, mean, stdev);
    Mat thresholded = new Mat();
    Imgproc.threshold(filtered, thresholded, mean.toArray()[0] + stdev.toArray()[0], 1.0, Imgproc.THRESH_TOZERO);
    /*
    Mat thresholded_bin = new Mat();
    Imgproc.threshold(filtered, thresholded_bin, mean.toArray()[0] + stdev.toArray()[0], 1.0, Imgproc.THRESH_BINARY);
    Mat converted = new Mat();
    thresholded_bin.convertTo(converted, CvType.CV_8UC1);
    */
    return thresholded;
}
Here is the image I got after running the code above:
Image after using @Nejc's solution
1) Why does my translated code not output the same image as @Nejc's? The same code applied to the same image should produce the same output, right?
2) Did I miss something when translating?
3) For my understanding, why did we multiply the image by itself in the instruction sobx = sobx.mul(sobx);?
I managed to obtain a pretty nice image of the edges by computing an approximation of the absolute value of the gradient of the input image.
EDIT: Before I started working, I resized the input image to a 5x smaller size. Click here to see it! If you use my code on that image, the results will be good. If you want to make my code work well with an image of the original size, then either:
multiply the Gaussian kernel sizes and sigmas by 5, or
downsample the image by a factor of 5, execute the algorithm, and then upsample the result by a factor of 5 (this should work much faster than the first option; see the sketch below)
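A minimal sketch of the second option in Python (here process() is a hypothetical stand-in for the gradient pipeline below; the interpolation choices are mine):
import cv2

img = cv2.imread('letter.jpg')
h, w = img.shape[:2]
small = cv2.resize(img, (w // 5, h // 5), interpolation=cv2.INTER_AREA)  # downsample by 5
edges = process(small)                                                   # hypothetical: the gradient-magnitude pipeline
full = cv2.resize(edges, (w, h), interpolation=cv2.INTER_LINEAR)         # upsample back to the original size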
This is the result I got:
My procedure relies on two key features. The first is a conversion to an appropriate color space. As Jeru Luke stated in his answer, the saturation channel of the HSV color space is a good choice here. The second important thing is the estimation of the absolute value of the gradient. I used Sobel operators and some arithmetic for that purpose. I can provide additional explanations if someone requests them.
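For reference, the arithmetic approximates the gradient magnitude at each pixel,
|∇I| = sqrt( (∂I/∂x)² + (∂I/∂y)² ),
which also answers question 3 above: each Sobel response is squared (sobx = sobx.mul(sobx)) so that the sum of the two squares can be square-rooted into a single magnitude image.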
This is the code I used to obtain the first image.
using namespace std;
using namespace cv;
Mat img_rgb = imread("letter.jpg");
Mat img_hsv;
cvtColor(img_rgb, img_hsv, CV_BGR2HSV);
vector<Mat> channels_hsv;
split(img_hsv, channels_hsv);
Mat channel_s = channels_hsv[1];
GaussianBlur(channel_s, channel_s, Size(9, 9), 2, 2);
Mat imf;
channel_s.convertTo(imf, CV_32FC1, 0.5f, 0.5f);
Mat sobx, soby;
Sobel(imf, sobx, -1, 1, 0);
Sobel(imf, soby, -1, 0, 1);
sobx = sobx.mul(sobx);
soby = soby.mul(soby);
Mat grad_abs_val_approx;
cv::pow(sobx + soby, 0.5, grad_abs_val_approx);
Mat filtered;
GaussianBlur(grad_abs_val_approx, filtered, Size(9, 9), 2, 2);
Scalar mean, stdev;
meanStdDev(filtered, mean, stdev);
Mat thresholded;
cv::threshold(filtered, thresholded, mean.val[0] + stdev.val[0], 1.0, CV_THRESH_TOZERO);
// I scale the image at this point so that it is displayed properly
imshow("image", thresholded/50);
And this is how I computed the second image:
Mat thresholded_bin;
cv::threshold(filtered, thresholded_bin, mean.val[0] + stdev.val[0], 1.0, CV_THRESH_BINARY);
Mat converted;
thresholded_bin.convertTo(converted, CV_8UC1);
vector<vector<Point>> contours;
findContours(converted, contours, CV_RETR_LIST, CV_CHAIN_APPROX_NONE);
Mat contour_img = Mat::zeros(converted.size(), CV_8UC1);
drawContours(contour_img, contours, -1, 255);
imshow("contours", contour_img);
Thanks for your comments and suggestions.
The code provided by @Nejc works perfectly and covers 80% of my use case.
Nevertheless, it does not work with similar cases like this one:
case not solved by the current code
and I don't know why.
Perhaps someone has an idea/clue/solution?
I will continue to improve the code and try to find a more generic solution that covers more cases. I will post it if I ever find one.
In any case, below is the working code based on @Nejc's solution and notes.
public static Mat process(Mat original){
    Mat src = original.clone();
    Mat hsvMat = new Mat();
    Mat saturation = new Mat();
    Mat sobx = new Mat();
    Mat soby = new Mat();
    Mat grad_abs_val_approx = new Mat();

    Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
    List<Mat> hsv_channels = new ArrayList<Mat>(3);
    Core.split(hsvMat, hsv_channels);
    Mat hue = hsv_channels.get(0);
    Mat sat = hsv_channels.get(1);
    Mat val = hsv_channels.get(2);

    Imgproc.GaussianBlur(sat, saturation, new Size(9, 9), 2, 2);

    Mat imf = new Mat();
    saturation.convertTo(imf, CvType.CV_32FC1, 0.5f, 0.5f);

    // grad_abs_val_approx = sqrt(sobx^2 + soby^2)
    Imgproc.Sobel(imf, sobx, -1, 1, 0);
    Imgproc.Sobel(imf, soby, -1, 0, 1);
    sobx = sobx.mul(sobx);
    soby = soby.mul(soby);
    Mat sumxy = new Mat();
    Core.add(sobx, soby, sumxy);
    Core.pow(sumxy, 0.5, grad_abs_val_approx);
    sobx.release();
    soby.release();
    sumxy.release();

    Mat filtered = new Mat();
    Imgproc.GaussianBlur(grad_abs_val_approx, filtered, new Size(9, 9), 2, 2);

    final MatOfDouble mean = new MatOfDouble();
    final MatOfDouble stdev = new MatOfDouble();
    Core.meanStdDev(filtered, mean, stdev);
    Mat thresholded = new Mat();
    Imgproc.threshold(filtered, thresholded, mean.toArray()[0] + stdev.toArray()[0], 1.0, Imgproc.THRESH_TOZERO);
    /*
    Mat thresholded_bin = new Mat();
    Imgproc.threshold(filtered, thresholded_bin, mean.toArray()[0] + stdev.toArray()[0], 1.0, Imgproc.THRESH_BINARY_INV);
    Mat converted = new Mat();
    thresholded_bin.convertTo(converted, CvType.CV_8UC1);
    */
    Mat converted = new Mat();
    thresholded.convertTo(converted, CvType.CV_8UC1);
    return converted;
}

Detecting object regions in an image with OpenCV

We're currently trying to detect the object regions in images of medical instruments using the methods available in OpenCV (C++ version). An example image is shown below:
Here are the steps we're following:
Converting the image to grayscale
Applying a median filter
Finding edges using a Sobel filter
Converting the result to a binary image using a threshold of 25
Skeletonizing the image to make sure we have neat edges
Finding the X largest connected components
This approach works perfectly for image 1, and here is the result:
The yellow borders are the connected components detected.
The rectangles are just to highlight the presence of a connected component.
To get understandable results, we just removed the connected components that are completely inside any other one, so the end result is something like this:
So far everything was fine, but another image sample, shown below, complicated our work.
Having a small light-green towel under the objects results in this image:
After filtering the regions as we did earlier, we got this:
Obviously, it is not what we need; we're expecting something like this:
I'm thinking about clustering the closest connected components found (somehow!!) so we can minimize the impact of the presence of the towel, but I don't know yet if that is doable or whether someone has tried something like this before. Also, does anyone have a better idea to overcome this kind of problem?
Thanks in advance.
Here's what I tried.
In the images, the background is mostly greenish and the area of the background is considerably larger than that of the foreground. So, if you take a color histogram of the image, the greenish bins will have higher values. Threshold this histogram so that bins having smaller values are set to zero. This way we'll most probably retain the greenish (higher value) bins and discard other colors. Then backproject this histogram. The backprojection will highlight these greenish regions in the image.
Backprojection:
Then threshold this backprojection. This gives us the background.
Background (after some morphological filtering):
Invert the background to get foreground.
Foreground (after some morphological filtering):
Then find the contours of the foreground.
I think this gives a reasonable segmentation, and using this as a mask, you may be able to refine the boundaries with a segmentation method like GrabCut (I haven't tried this yet).
EDIT:
I tried the GrabCut approach and it indeed refines the boundaries. I've added the code for GrabCut segmentation.
Contours:
GrabCut segmentation using the foreground as mask:
I'm using the OpenCV C API for the histogram processing part.
// load the color image
IplImage* im = cvLoadImage("bFly6.jpg");
// get the color histogram
IplImage* im32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 3);
cvConvertScale(im, im32f);
int channels[] = {0, 1, 2};
int histSize[] = {32, 32, 32};
float rgbRange[] = {0, 256};
float* ranges[] = {rgbRange, rgbRange, rgbRange};
CvHistogram* hist = cvCreateHist(3, histSize, CV_HIST_ARRAY, ranges);
IplImage* b = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* g = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* r = cvCreateImage(cvGetSize(im32f), IPL_DEPTH_32F, 1);
IplImage* backproject32f = cvCreateImage(cvGetSize(im), IPL_DEPTH_32F, 1);
IplImage* backproject8u = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplImage* bw = cvCreateImage(cvGetSize(im), IPL_DEPTH_8U, 1);
IplConvKernel* kernel = cvCreateStructuringElementEx(3, 3, 1, 1, MORPH_ELLIPSE);
cvSplit(im32f, b, g, r, NULL);
IplImage* planes[] = {b, g, r};
cvCalcHist(planes, hist);
// find min and max values of histogram bins
float minval, maxval;
cvGetMinMaxHistValue(hist, &minval, &maxval);
// threshold the histogram. this sets the bin values that are below the threshold to zero
cvThreshHist(hist, maxval/32);
// backproject the thresholded histogram. backprojection should contain higher values for the
// background and lower values for the foreground
cvCalcBackProject(planes, backproject32f, hist);
// convert to 8u type
double min, max;
cvMinMaxLoc(backproject32f, &min, &max);
cvConvertScale(backproject32f, backproject8u, 255.0 / max);
// threshold backprojected image. this gives us the background
cvThreshold(backproject8u, bw, 10, 255, CV_THRESH_BINARY);
// some morphology on background
cvDilate(bw, bw, kernel, 1);
cvMorphologyEx(bw, bw, NULL, kernel, MORPH_CLOSE, 2);
// get the foreground
cvSubRS(bw, cvScalar(255, 255, 255), bw);
cvMorphologyEx(bw, bw, NULL, kernel, MORPH_OPEN, 2);
cvErode(bw, bw, kernel, 1);
// find contours of the foreground
//CvMemStorage* storage = cvCreateMemStorage(0);
//CvSeq* contours = 0;
//cvFindContours(bw, storage, &contours);
//cvDrawContours(im, contours, CV_RGB(255, 0, 0), CV_RGB(0, 0, 255), 1, 2);
// grabcut
Mat color(im);
Mat fg(bw);
Mat mask(bw->height, bw->width, CV_8U);
mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, fg);
Mat bgdModel, fgdModel;
grabCut(color, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);
Mat gcfg = mask == GC_PR_FGD;
vector<vector<cv::Point>> contours;
vector<Vec4i> hierarchy;
findContours(gcfg, contours, hierarchy, CV_RETR_LIST, CV_CHAIN_APPROX_SIMPLE, cv::Point(0, 0));
for (int idx = 0; idx < contours.size(); idx++)
{
    drawContours(color, contours, idx, Scalar(0, 0, 255), 2);
}
// cleanup ...
UPDATE: We can do the above using the C++ interface as shown below.
const int channels[] = {0, 1, 2};
const int histSize[] = {32, 32, 32};
const float rgbRange[] = {0, 256};
const float* ranges[] = {rgbRange, rgbRange, rgbRange};
Mat hist;
Mat im32fc3, backpr32f, backpr8u, backprBw, kernel;
Mat im = imread("bFly6.jpg");
im.convertTo(im32fc3, CV_32FC3);
calcHist(&im32fc3, 1, channels, Mat(), hist, 3, histSize, ranges, true, false);
calcBackProject(&im32fc3, 1, channels, hist, backpr32f, ranges);
double minval, maxval;
minMaxIdx(backpr32f, &minval, &maxval);
threshold(backpr32f, backpr32f, maxval/32, 255, THRESH_TOZERO);
backpr32f.convertTo(backpr8u, CV_8U, 255.0/maxval);
threshold(backpr8u, backprBw, 10, 255, THRESH_BINARY);
kernel = getStructuringElement(MORPH_ELLIPSE, Size(3, 3));
dilate(backprBw, backprBw, kernel);
morphologyEx(backprBw, backprBw, MORPH_CLOSE, kernel, Point(-1, -1), 2);
backprBw = 255 - backprBw;
morphologyEx(backprBw, backprBw, MORPH_OPEN, kernel, Point(-1, -1), 2);
erode(backprBw, backprBw, kernel);
Mat mask(backpr8u.rows, backpr8u.cols, CV_8U);
mask.setTo(GC_PR_BGD);
mask.setTo(GC_PR_FGD, backprBw);
Mat bgdModel, fgdModel;
grabCut(im, mask, Rect(), bgdModel, fgdModel, GC_INIT_WITH_MASK);
Mat fg = mask == GC_PR_FGD;
I would consider a few options. My assumption is that the camera does not move. I haven't used the images or written any code, so this is mostly from experience.
Rather than just looking for edges, try separating the background using a segmentation algorithm. Mixture of Gaussians can help with this: given a set of images over the same region (i.e. video), you can cancel out regions which are persistent, and new items such as instruments will pop out. Connected components can then be used on the blobs. A sketch follows.
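If video of the scene is available, OpenCV's built-in MOG2 background subtractor is one way to try this; a minimal Python sketch (the video file name and parameter values are assumptions to tune):
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16, detectShadows=False)
cap = cv2.VideoCapture('surgery.mp4')  # hypothetical video of the scene
while True:
    ok, frame = cap.read()
    if not ok:
        break
    fgmask = subtractor.apply(frame)   # new items (instruments) appear as foreground
    cv2.imshow('foreground', fgmask)
    if cv2.waitKey(30) & 0xFF == 27:   # Esc to quit
        break
cap.release()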
I would look at segmentation algorithms to see if you can optimize the conditions to make this work for you. One major item is to make sure your camera is stable, or that you stabilize the images yourself in pre-processing.
I would consider using interest points to identify regions in the image with a lot of new material. Given that the background is relatively plain, small objects such as needles will create a bunch of interest points. The towel should be much sparser. Perhaps overlaying the detected interest points on the connected-component footprint will give you a "density" metric which you can then threshold (see the sketch below). If the connected component has a high ratio of interest points to the area of the item, then it is an interesting object.
On this note, you can even clean up the connected-component footprint by using a convex hull to prune the objects you have detected. This may help in situations such as a medical instrument casting a shadow on the towel, which stretches the component region. This is a guess, but interest points can definitely give you more information than just edges.
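A sketch of the density metric in Python; the Otsu mask (a stand-in for the real foreground mask), the ORB detector, and the 0.01 threshold are all assumptions:
import cv2
import numpy as np

img = cv2.imread('bFly6.jpg')  # example image name from the answer above
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# stand-in foreground mask; in practice use the mask from the backprojection answer
mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY | cv2.THRESH_OTSU)[1]

kps = cv2.ORB_create(nfeatures=2000).detect(gray, None)
n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
for i in range(1, n):  # label 0 is the background
    x, y, w, h, area = stats[i]
    count = sum(1 for kp in kps
                if labels[int(kp.pt[1]), int(kp.pt[0])] == i)
    if count / float(area) > 0.01:  # assumed density threshold
        print('component', i, 'has a high interest-point density')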
Finally, given that you have a stable background with clear objects in view, I would take a look at Bag-of-Features to see if you can just detect each individual object in the image. This may be useful since there seems to be a consistent pattern to the objects in these images. You can build a big database of images such as needles, gauze, scissors, etc. Then BoF, which is in OpenCV, will find those candidates for you. You can also mix it with the other operations you are doing to compare results. (A sketch follows the links below.)
Bag of Features using OpenCV
http://www.codeproject.com/Articles/619039/Bag-of-Features-Descriptor-on-SIFT-Features-with-O
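A minimal Bag-of-Features sketch using OpenCV's built-in BOW classes in Python (the training file names, vocabulary size, and SIFT choice are assumptions; a real setup would also train a classifier, e.g. an SVM, on the resulting histograms):
import cv2

sift = cv2.SIFT_create()
bow_trainer = cv2.BOWKMeansTrainer(50)  # 50 visual words (assumed)
for path in ['needle1.jpg', 'gauze1.jpg', 'scissors1.jpg']:  # hypothetical training set
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    kp, desc = sift.detectAndCompute(img, None)
    bow_trainer.add(desc)
vocab = bow_trainer.cluster()

matcher = cv2.BFMatcher(cv2.NORM_L2)
bow_extract = cv2.BOWImgDescriptorExtractor(sift, matcher)
bow_extract.setVocabulary(vocab)

# per-image BoF histogram, to be fed to any classifier
test = cv2.imread('bFly6.jpg', cv2.IMREAD_GRAYSCALE)
hist = bow_extract.compute(test, sift.detect(test, None))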
I would also suggest an addition to your initial version: you can skip the contours whose regions have a width and height greater than half the image width and height.
// take the bounding rect of the contour
Rect rect = Imgproc.boundingRect(contours.get(i));
if (rect.width < inputImageWidth / 2 && rect.height < inputImageHeight / 2) {
    // then continue to draw it or use it for the next steps
}

Detect caps on bottles using OpenCV and Python

I know that there are a hundred topics about my question all over the web, but I would like to ask specifically about my problem because I have tried almost all the solutions without any success.
I am trying to count circles in an image (yes, I have already tried Hough circles, but due to light reflections on my object, I think, it is not very robust).
Then I tried to create a classifier (no success; I think there are not enough features, so the detection is not good).
I have also tried HSV conversion and tried to find my object by color (again I had some problems because of the light and the variations of color).
As you can see in the image, there are 8 caps and I would like to be able to count them.
Using all of these methods I was able to detect the objects in one image (because I was optimizing all the function parameters for that specific image), but as soon as I loaded a new, similar image, the results were disappointing.
Please follow this link to see the Image
Below you can find parts of everything I have tried:
1. Hough circles
import cv2
import cv2.cv as cv   # OpenCV 2.4-style constants
import numpy as np

img = cv2.imread('frame71.jpg', 0)  # read as grayscale: HoughCircles needs a single-channel image
if img is None:                     # check before using the image
    print "There is no image file. Quitting..."
    quit()
img = cv2.medianBlur(img, 5)
cimg = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)  # color copy for drawing
circles = cv2.HoughCircles(img, cv.CV_HOUGH_GRADIENT, 3, 50,
                           param1=55, param2=125, minRadius=25, maxRadius=45)
circles = np.uint16(np.around(circles))
for i in circles[0, :]:
    # draw the outer circle
    cv2.circle(cimg, (i[0], i[1]), i[2], (0, 255, 0), 2)
    # draw the center of the circle
    cv2.circle(cimg, (i[0], i[1]), 2, (0, 0, 255), 3)
print len(circles[0, :])
cv2.imshow('detected circles', cimg)
cv2.waitKey(0)
cv2.destroyAllWindows()
2. HSV Transform, color detection
def image_process(frame, h_low, s_low, v_low, h_up, s_up, v_up, ksize):
    if ksize % 2 == 0:  # the median blur kernel size must be odd
        ksize += 1
    #TODO: optimize this part of the code as much as possible
    try:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        lower = np.array([h_low, s_low, v_low], np.uint8)
        upper = np.array([h_up, s_up, v_up], np.uint8)  # was [h_up, s_up, h_up]; v_up was mistyped as h_up
        mask = cv2.inRange(hsv, lower, upper)
        res = cv2.bitwise_and(hsv, hsv, mask=mask)
        thresh = cv2.cvtColor(res, cv2.COLOR_BGR2GRAY)
        thresh = cv2.threshold(thresh, 50, 255, cv2.THRESH_BINARY)[1]
        thresh = cv2.medianBlur(thresh, ksize)
    except Exception as inst:
        print type(inst)
    return thresh
3. Cascade classifier
img = cv2.imread('frame405.jpg', 1)
cap_cascade = cv2.CascadeClassifier('haar_30_17_16_stage.xml')
caps = cap_cascade.detectMultiScale(img, 1.3, 5)
for (x, y, w, h) in caps:
    cv2.rectangle(img, (x, y), (x + w, y + h), (255, 0, 0), 2)
cv2.imshow('image', img)
cv2.waitKey(0)
cv2.destroyAllWindows()
quit()
As for training the classifier, I really used a lot of variations of images, samples, negatives and positives, numbers of stages, w and h, but the results were not very accurate.
Finally, I would like to know from your experience which is the best method I should follow, and I will stick to that in order to optimize my detection. Keep in mind that all images are similar but NOT identical. There are some differences due to light, movement, etc.
Thank you in advance.
I did some experiments with the sample image. I'm posting my results, and if you find them useful, you can improve and optimize them further. Here are the steps:
downsample the image
perform morphological opening
find Hough circles
cluster the circles by radius (the bottle-cap circles should get the same label)
filter the circles by a radius threshold
you can also cluster circles by their center x and y coordinates (I haven't done this)
prepare a mask from the filtered circles and extract the possible bottle region
cluster this region by color
The code is in C++. I'm attaching my results.
Mat im = imread(INPUT_FOLDER_PATH + string("frame71.jpg"));
Mat small;
int kernelSize = 9; // try with different kernel sizes. 5 onwards gives good results
pyrDown(im, small); // downsample the image
Mat morph;
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(kernelSize, kernelSize));
morphologyEx(small, morph, MORPH_OPEN, kernel); // open
Mat gray;
cvtColor(morph, gray, CV_BGR2GRAY);
vector<Vec3f> circles;
HoughCircles(gray, circles, CV_HOUGH_GRADIENT, 2, gray.rows/8.0); // find circles
// -------------------------------------------------------
// cluster the circles by radii. similarly you can cluster them by center x and y for further filtering
Mat circ = Mat(circles);
Mat data[3];
split(circ, data);
Mat labels, centers;
kmeans(data[2], 2, labels, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 2, KMEANS_PP_CENTERS, centers);
// -------------------------------------------------------
Mat rgb;
small.copyTo(rgb);
//cvtColor(gray, rgb, CV_GRAY2BGR);
Mat mask = Mat::zeros(Size(gray.cols, gray.rows), CV_8U);
for (size_t i = 0; i < circles.size(); i++)
{
    Point center(cvRound(circles[i][0]), cvRound(circles[i][1]));
    int radius = cvRound(circles[i][2]);
    float r = centers.at<float>(labels.at<int>(i));
    if (r > 30.0f && r < 45.0f) // filter circles by radius (values are based on the sample image)
    {
        // just for display
        circle(rgb, center, 3, Scalar(0, 255, 0), -1, 8, 0);
        circle(rgb, center, radius, Scalar(0, 0, 255), 3, 8, 0);
        // prepare a mask
        circle(mask, center, radius, Scalar(255, 255, 255), -1, 8, 0);
    }
}
// use each filtered circle as a mask and extract the region from original downsampled image
Mat rgb2;
small.copyTo(rgb2, mask);
// cluster the masked region by color
Mat rgb32fc3, lbl;
rgb2.convertTo(rgb32fc3, CV_32FC3);
int imsize[] = {rgb32fc3.rows, rgb32fc3.cols};
Mat color = rgb32fc3.reshape(1, rgb32fc3.rows*rgb32fc3.cols);
kmeans(color, 4, lbl, TermCriteria(CV_TERMCRIT_EPS+CV_TERMCRIT_ITER, 10, 1.0), 2, KMEANS_PP_CENTERS);
Mat lbl2d = lbl.reshape(1, 2, imsize);
Mat lbldisp;
lbl2d.convertTo(lbldisp, CV_8U, 50);
Mat lblColor;
applyColorMap(lbldisp, lblColor, COLORMAP_JET);
Results:
Filtered circles:
Masked:
Segmented:
Finally, I think I found a way to count the caps on the bottles (a sketch follows the list):
Read the image
Tune (find the correct values for the HSV upper/lower limits)
Select the desired color (using HSV and a mask)
Find contours on the masked image
Find the minimum enclosing circles for the contours
Reject all circles beyond the thresholds
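A minimal sketch of those steps in Python (the HSV bounds and the radius thresholds are assumptions to tune per setup; the frame name comes from the snippets above):
import cv2
import numpy as np

img = cv2.imread('frame71.jpg')
hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)

# assumed HSV limits found during the tuning step
lower = np.array([0, 80, 80], np.uint8)
upper = np.array([30, 255, 255], np.uint8)
mask = cv2.inRange(hsv, lower, upper)

contours = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)[0]  # OpenCV 4.x
count = 0
for cnt in contours:
    (x, y), radius = cv2.minEnclosingCircle(cnt)
    if 25 < radius < 45:  # reject circles outside the expected cap radius
        count += 1
        cv2.circle(img, (int(x), int(y)), int(radius), (0, 255, 0), 2)
print('caps found:', count)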
I have also ordered a polarizing filter, which I think will reduce glare a lot. I am open to suggestions for further improvement; both robustness and speed are crucial for my application.
Thank you.
