Go/OpenCV: Filter Contours

I'm using this library to write an OpenCV app in Golang. I'm trying to do something very basic but can't seem to make it work. I simply want to take a set of contours, remove those contours that don't have a minimum area, then return the filtered result.
This is the current state of my code:
// given *opencv.Seq and image, draw all the contours
func opencvDrawRectangles(img *opencv.IplImage, contours *opencv.Seq) {
    for c := contours; c != nil; c = c.HNext() {
        rect := opencv.BoundingRect(unsafe.Pointer(c))
        fmt.Println("Rectangle: ", rect.X(), rect.Y())
        opencv.Rectangle(img,
            opencv.Point{rect.X(), rect.Y()},
            opencv.Point{rect.X() + rect.Width(), rect.Y() + rect.Height()},
            opencv.ScalarAll(255.0),
            1, 1, 0)
    }
}
// return contours that meet the threshold
func opencvFindContours(img *opencv.IplImage, threshold float64) *opencv.Seq {
    defaultThresh := 10.0
    if threshold == 0.0 {
        threshold = defaultThresh
    }
    contours := img.FindContours(opencv.CV_RETR_LIST, opencv.CV_CHAIN_APPROX_SIMPLE, opencv.Point{0, 0})
    if contours == nil {
        return nil
    }
    defer contours.Release()
    threshContours := opencv.CreateSeq(opencv.CV_SEQ_ELTYPE_POINT,
        int(unsafe.Sizeof(opencv.CvPoint{})))
    for ; contours != nil; contours = contours.HNext() {
        v := *contours
        if opencv.ContourArea(contours, opencv.WholeSeq(), 0) > threshold {
            threshContours.Push(unsafe.Pointer(&v))
        }
    }
    return threshContours
}
In opencvFindContours, I'm trying to add to a new variable only those contours that meet the area threshold. When I take those results and pass them into opencvDrawRectangles, contours is filled with nonsense data. If, on the other hand, I just return contours directly from opencvFindContours and pass that to opencvDrawRectangles, I get the rectangles I would expect based on the motion detected in the image.
Does anyone know how to properly filter the contours using this library? I'm clearly missing something about how these data structures work; I'm just not sure what.
However it's best implemented, the main thing I'm trying to figure out here is simply how to take a sequence of contours and filter out the ones that fall below a certain area. All the C++ examples I've seen make this look pretty easy, but I'm finding it quite challenging with a Go wrapper of the C API.

You're taking the Sizeof of the pointer that would be returned by CreateSeq. You probably want the Sizeof of the struct opencv.CvPoint{} instead.
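Since the question mentions that the C++ examples make this look easy, here is a minimal sketch of the same filtering step in the standard OpenCV C++ API (not this Go wrapper's API), just to show the shape of the operation: copy the contours that pass the area test into a new container instead of pushing raw pointers.
// Minimal C++ sketch of area-based contour filtering, using the
// standard OpenCV C++ API rather than the Go wrapper in the question.
std::vector<std::vector<cv::Point>> filterByArea(
    const std::vector<std::vector<cv::Point>>& contours, double threshold)
{
    std::vector<std::vector<cv::Point>> kept;
    for (size_t i = 0; i < contours.size(); i++)
        if (cv::contourArea(contours[i]) > threshold) // same test as ContourArea above
            kept.push_back(contours[i]);
    return kept;
}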

Related

Matching problems when using OpenCV's matchShapes function

I'm trying to find an object in a larger picture with the findContours/matchShapes functions (the object can vary, so it's not possible to go by color or anything similar; feature detectors like SIFT also don't work because the object could be symmetric).
I have written the following code:
Mat scene = imread...
Mat Template = imread...
Mat imagegray1, imagegray2, imageresult1, imageresult2;
int thresh=80;
double ans=0, result=0;
// Preprocess pictures
cvtColor(scene, imagegray1, CV_BGR2GRAY);
cvtColor(Template, imagegray2, CV_BGR2GRAY);
GaussianBlur(imagegray1, imagegray1, Size(5,5), 2);
GaussianBlur(imagegray2, imagegray2, Size(5,5), 2);
Canny(imagegray1, imageresult1, thresh, thresh*2);
Canny(imagegray2, imageresult2, thresh, thresh*2);
vector<vector<Point> > contours1;
vector<vector<Point> > contours2;
vector<Vec4i> hierarchy1, hierarchy2;
// Template
findContours(imageresult2, contours2, hierarchy2, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
// Scene
findContours(imageresult1, contours1, hierarchy1, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
imshow("template", Template);
double helper = INT_MAX;
int idx_i = 0, idx_j = 0;
// Match all contours with each other
for(int i = 0; i < contours1.size(); i++)
{
    for(int j = 0; j < contours2.size(); j++)
    {
        ans = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        // find the best matching contour
        if(ans < helper)
        {
            idx_i = i;
            helper = ans;
        }
    }
}
// draw the best contour
drawContours(scene, contours1, idx_i,
    Scalar(255,255,0), 3, 8, hierarchy1, 0, Point());
When I use a scene that contains only the template, I get a good matching result:
But when there are more objects in the picture, I have trouble detecting the object:
I hope someone can tell me what the problem with my code is. Thanks.
You have a huge number of contours in the second image (almost every letter).
Since matchShapes compares scale-invariant Hu moments (http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gab001db45c1f1af6cbdbe64df04c4e944), even a very small contour may fit the shape you are looking for.
Furthermore, the original shape is not distinguished properly, as can be seen when excluding all contours with an area smaller than 50:
if (contourArea(contours1[i]) > 50)
    drawContours(scene, contours1, i, Scalar(255, 255, 0), 1);
In other words, there is no problem with your code; the contour simply cannot be detected very well. I would suggest having a look at approxPolyDP and convexHull and trying to close the contour that way, or improving the use of Canny in some way.
Then you could use a priori knowledge to restrict the size (and maybe rotation?) of the contour you are looking for, as sketched below.
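Putting both suggestions together, a hedged sketch might look like this (the area bound of 50 is the same arbitrary value as above, and the variable names are taken from the question's code):
// Sketch: skip tiny contours (arbitrary area bound of 50, as above)
// before matching, then draw only the best remaining match.
double best = INT_MAX;
int best_i = -1;
for (int i = 0; i < contours1.size(); i++)
{
    if (contourArea(contours1[i]) < 50)
        continue; // too small to be a reliable match
    for (int j = 0; j < contours2.size(); j++)
    {
        double d = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        if (d < best)
        {
            best = d;
            best_i = i;
        }
    }
}
if (best_i >= 0)
    drawContours(scene, contours1, best_i, Scalar(255, 255, 0), 3);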

OpenCV matchShapes() output value

How do I use the value returned by OpenCV's matchShapes? We implemented the matchShapes function to compare two images, specifically their shapes, but now that we have the output values, we are confused about how to interpret them.
The code is
- (bool) someMethod:(UIImage *)image :(UIImage *)temp {
    RNG rng(12345);
    cv::Mat src_base, hsv_base;
    cv::Mat src_test1, hsv_test1;
    src_base = [self cvMatWithImage:image];
    src_test1 = [self cvMatWithImage:temp];
    int thresh = 150;
    double ans = 0, result = 0;
    Mat imageresult1, imageresult2;
    cv::cvtColor(src_base, hsv_base, cv::COLOR_BGR2HSV);
    cv::cvtColor(src_test1, hsv_test1, cv::COLOR_BGR2HSV);
    std::vector<std::vector<cv::Point>> contours1, contours2;
    std::vector<Vec4i> hierarchy1, hierarchy2;
    Canny(hsv_base, imageresult1, thresh, thresh*2);
    Canny(hsv_test1, imageresult2, thresh, thresh*2);
    findContours(imageresult1, contours1, hierarchy1, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
    for(int i = 0; i < contours1.size(); i++)
    {
        //cout<<contours1[i]<<endl;
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(imageresult1, contours1, i, color, 1, 8, hierarchy1, 0, cv::Point());
    }
    findContours(imageresult2, contours2, hierarchy2, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE, cvPoint(0,0));
    for(int i = 0; i < contours2.size(); i++)
    {
        Scalar color = Scalar(rng.uniform(0, 255), rng.uniform(0, 255), rng.uniform(0, 255));
        drawContours(imageresult2, contours2, i, color, 1, 8, hierarchy2, 0, cv::Point());
    }
    for(int i = 0; i < contours1.size(); i++)
    {
        ans = matchShapes(contours1[i], contours2[i], CV_CONTOURS_MATCH_I1, 0);
        cout << " " << ans << endl;
    }
    std::cout << "The answer is " << ans << endl;
    if (ans <= 20) {
        return true;
    }
    return false;
}
The output values are
0.225069
0.234417
0
7.63599
0
7.06392
0.335966
0.211358
0.327552
0.842969
0.761659
0.614039
The image is
See my comment on imoutidi's answer. Here is a visual explanation:
The first column shows the two original images, the second the Canny edges. The third column is an arbitrary selection of detected shapes with the same index in both images. As you see, it is not even guaranteed that they correspond to the same image parts as a human would see them; what you end up comparing in this case are different triangles, which says little about the overall shape similarity. The two shape arrays are not even the same size, since there are more structures in the bottom drawing (like small shapes between thick lines). The fourth column shows the last shape in each array; this is the best bet you can make to compare the images. In this example, I get a value of 0.0920794532771 for their similarity.
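A sketch of that last-shape comparison, reusing the contour vectors from the question's code (it assumes both vectors are non-empty):
// Sketch: compare only the last shape of each array, per the
// fourth column above. Assumes both vectors are non-empty.
double sim = matchShapes(contours1.back(), contours2.back(),
                         CV_CONTOURS_MATCH_I1, 0);
cout << "similarity: " << sim << endl; // smaller means more similar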
If I understand your question correctly, you want to know what the return value of matchShapes() stands for.
Given two contours (shapes), the function returns a similarity metric: a small value indicates that the two shapes are similar, and a big value that they are not.
A good explanation is here: http://docs.opencv.org/3.1.0/d5/d45/tutorial_py_contours_more_functions.html (check the third paragraph).
Also check out the documentation: http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gaadc90cb16e2362c9bd6e7363e6e4c317
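For example, a minimal sketch (contour1 and contour2 are placeholders for two contours, and the 0.1 cutoff is an arbitrary assumption you would need to tune for your data):
// Sketch: 0 means identical Hu moments; larger means less similar.
// The 0.1 cutoff is an arbitrary assumption to tune for your data.
double d = matchShapes(contour1, contour2, CV_CONTOURS_MATCH_I1, 0);
bool similar = (d < 0.1);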

Too many points found in contours detected by OpenCV's findContours function

Thanks for paying attention to this question.
I want to detect some moving objects using a Kinect sensor. The idea is quite simple: first I get the difference image between every two frames, then I extract the contours of the objects, and finally I do further processing.
I tried to extract the contours using the OpenCV (version 2.4.9) function findContours, but here is the problem: the function extracts about 30 or 40 contours in each loop, but some of the contours claim to contain billions of points. Also, if I call functions like drawContours or minAreaRect, the program crashes with a memory error.
Here is the relevant code:
findContours(Black, contours, hierarchy, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE,
    Point(0, 0));
// 1. If nothing has entered the camera frame, skip
if(contours.size() == 0)
{
    //cout<<"NoContours ";
    continue;
}
// 2. Only save the maximum contour (index) as the result
max_index = 0;
for (size_t i = 1; i < contours.size(); i++)
{
    //cout << " Num of Points: " << contours[i].size() << endl;
    if(contours[max_index].size() < contours[i].size())
    {
        max_index = int(i);
    }
}
// 3. If the maximum contour's size is smaller than 5, regard it as noise
//cout << contours[max_index].size() << endl;
if(contours[max_index].size() < 5)
{
    continue;
}
// find the smallest RotatedRect enclosing the contour (this is where the error happens)
minRect = minAreaRect(Mat(contours[max_index]));
RotatedRect minEllipse = fitEllipse(contours[max_index]);
The error happens when execution reaches the last two lines of code. The main reason, I think, is that findContours finds too many points in every contour, which leads to running out of memory.
I cannot post an image for now, but findContours reports about 4294966890 points in at least 50% of the contours (while the others are normal).
Could anyone give me some ideas about this?
Try using approxPolyDP to simplify your contours, for example:
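A minimal sketch (an epsilon of 1% of the arc length is just a starting point to tune, not a recommended value):
// Sketch: simplify each contour with approxPolyDP. The epsilon of
// 1% of the arc length is just a starting point to tune.
vector<vector<Point> > simplified(contours.size());
for (size_t i = 0; i < contours.size(); i++)
    approxPolyDP(contours[i], simplified[i],
                 0.01 * arcLength(contours[i], true), true);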

Segmentation of perspectively distorted barcodes

There are images with perspectively distorted barcodes in them.
They are located and decoded using ZBar.
Now I need not only the rough location, but the four real corner points of the barcode that define the enclosing 4-point polygon.
I tried different approaches, but have not yet gotten the desired result.
One of them was:
convert image to grayscale
threshold image
erode image
floodFill beginning with a pixel known to be part of the barcode
obtain the contour around the floodFill result
But around this contour I would now need to find the best-fitting enclosing 4-point polygon, which seems to be not that easy.
Do you have ideas for better approaches?
You could use the following code and try to reduce your contour to a 4-point polygon via approxPolyDP:
// note: angle() is a helper function and 'squares' a vector of
// 4-point contours, both defined in the tutorial linked below
vector<Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
    approxPolyDP(Mat(contours[i]), approx,
        arcLength(Mat(contours[i]), true)*0.02, true);
    if (approx.size() == 4 &&
        fabs(contourArea(Mat(approx))) > 1000 &&
        isContourConvex(Mat(approx)))
    {
        double maxCosine = 0;
        for( int j = 2; j < 5; j++ )
        {
            double cosine = fabs(angle(approx[j%4], approx[j-2], approx[j-1]));
            maxCosine = MAX(maxCosine, cosine);
        }
        if( maxCosine < 0.3 )
            squares.push_back(approx);
    }
}
http://opencv-code.com/tutorials/detecting-simple-shapes-in-an-image/
You can also try the following methods, maybe they will produce good enough results for you:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=minarearect#minarearect
or
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=convexhull#convexhull
OK, I found a solution that works well enough for my use case.
First a scanline is generated from the ZBar result.
Then the first and the last black pixels are found in a version of the image produced by cv::adaptiveThreshold with a large enough blockSize.
From there the first and the last bar are segmented using cv::findContours.
Then, for both end bars, the two contour points with the greatest distance to each other are found.
They finally define the enclosing 4-point polygon.
This is not exactly what I asked for in my question, but the extra size due to the elongated guard patterns does not matter in my case.
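A sketch of the farthest-pair step (a hypothetical helper, assuming each end bar's contour is available as a vector of points):
// Hypothetical sketch: find the two points of one end bar's contour
// with the greatest distance to each other (O(n^2), acceptable for
// small contours).
std::pair<cv::Point, cv::Point> farthestPair(const std::vector<cv::Point>& bar)
{
    std::pair<cv::Point, cv::Point> best;
    double maxDist2 = -1.0;
    for (size_t i = 0; i < bar.size(); i++)
        for (size_t j = i + 1; j < bar.size(); j++)
        {
            double dx = bar[i].x - bar[j].x;
            double dy = bar[i].y - bar[j].y;
            double d2 = dx * dx + dy * dy;
            if (d2 > maxDist2)
            {
                maxDist2 = d2;
                best = std::make_pair(bar[i], bar[j]);
            }
        }
    return best;
}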

How to remove unclosed curves in a binary image?

After some processing, I have this binary image:
I want to remove the unclosed curves, i.e. the top-left and bottom-right curves. Can you suggest an algorithm for doing this? Thanks.
As @John Zwinck mentions, this can be done using flood fill, but I figure your problem is that you want to return to the original black background and retain the contours of the closed shapes. While you could use contours to figure this out, here is a fairly simple approach that removes all non-closed and unenclosed line segments from an image, even if they are attached to a closed shape, but retains the edges of the closed curves:
floodFill the image with white - this removes your problem non-closed lines, but also the borders of your wanted objects
erode the image, then invert it
AND the image with the original image - thus restoring the borders
Output:
The code is in Python, but should easily translate to the usual C++ OpenCV usage.
import cv2
import numpy as np

im = cv2.imread('I7qZP.png', cv2.CV_LOAD_IMAGE_GRAYSCALE)
im2 = im.copy()
# the flood-fill mask must be 2 pixels larger than the image
mask = np.zeros((np.array(im.shape)+2), np.uint8)
# step 1: flood-fill with white from the background corner
cv2.floodFill(im, mask, (0,0), (255))
# step 2: erode, then invert
im = cv2.erode(im, np.ones((3,3)))
im = cv2.bitwise_not(im)
# step 3: AND with the original to restore the borders
im = cv2.bitwise_and(im, im2)
cv2.imshow('show', im)
cv2.imwrite('fin.png', im)
cv2.waitKey()
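For reference, a rough C++ sketch of the same steps (untested; same filename and 3x3 kernel assumptions as the Python above):
// Rough, untested C++ sketch of the flood-fill / erode / AND steps.
cv::Mat im = cv::imread("I7qZP.png", CV_LOAD_IMAGE_GRAYSCALE);
cv::Mat im2 = im.clone();
cv::Mat mask = cv::Mat::zeros(im.rows + 2, im.cols + 2, CV_8UC1);
cv::floodFill(im, mask, cv::Point(0, 0), cv::Scalar(255));
cv::erode(im, im, cv::Mat::ones(3, 3, CV_8UC1));
cv::bitwise_not(im, im);
cv::bitwise_and(im, im2, im);
cv::imwrite("fin.png", im);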
You're looking for Flood Fill: http://en.wikipedia.org/wiki/Flood_fill
I had a bit of a play with an idea, though it wouldn't be the most efficient method by a long shot. I converted the image to a 300x300 array of chars, and it seemed to work well on that. I'm not familiar with OpenCV.
The idea was to go through each pixel and see if it marks the end of a line; if so, make that pixel black. Repeat until there are no changes to the picture.
The criterion I used to identify a pixel as the end of a line was to count the number of black-white changes around the loop of that pixel's neighbours. If there are fewer than 4 changes, it is the end of a line. This won't work if the lines are thicker than 1 px. I could probably come up with something better. It seemed to work with the provided picture.
do {
    res = 0;
    for (i = 1; i < 299; i++) {
        for (j = 1; j < 299; j++) {
            if (image[i][j] != 0) {
                count = 0;
                if (image[i-1][j-1] != image[i-1][j+0]) count++;
                if (image[i-1][j+0] != image[i-1][j+1]) count++;
                if (image[i-1][j+1] != image[i+0][j+1]) count++;
                if (image[i+0][j+1] != image[i+1][j+1]) count++;
                if (image[i+1][j+1] != image[i+1][j+0]) count++;
                if (image[i+1][j+0] != image[i+1][j-1]) count++;
                if (image[i+1][j-1] != image[i+0][j-1]) count++;
                if (image[i+0][j-1] != image[i-1][j-1]) count++;
                if (count < 4) {
                    image[i][j] = 0;
                    res = 1;
                }
            }
        }
    }
} while (res);
