How to remove unclosed curves in a binary image? - opencv

After some processing, I have this binary image:
I want to remove the unclosed curves, i.e. the top-left and the bottom-right curves. Can you suggest an algorithm for doing this? Thanks.

As @John Zwinck mentions, this can be done using flood fill, but I figure your problem is that you want to return to the original black background and retain the contours of the closed shapes. While you could use contours to figure this out, here is a fairly simple approach that will remove all non-closed and unenclosed line segments from an image, even if they are attached to a closed shape, but retain the edges of the closed curves.
Flood fill the image with white - this removes the problem non-closed lines, but also the borders of your wanted objects.
Erode the image, then invert it.
AND the result with the original image, thus restoring the borders.
Output:
The code is in Python, but should easily translate to the usual C++ OpenCV usage.
import cv2
import numpy as np

# Load the image as grayscale
im = cv2.imread('I7qZP.png', cv2.IMREAD_GRAYSCALE)
im2 = im.copy()

# Flood fill from the corner with white; the mask must be 2 pixels
# larger than the image in each dimension
mask = np.zeros(np.array(im.shape) + 2, np.uint8)
cv2.floodFill(im, mask, (0, 0), 255)

# Erode to eat the filled borders, invert, then AND with the original
# image to restore the borders of the closed shapes
im = cv2.erode(im, np.ones((3, 3), np.uint8))
im = cv2.bitwise_not(im)
im = cv2.bitwise_and(im, im2)
cv2.imshow('show', im)
cv2.imwrite('fin.png', im)
cv2.waitKey()

You're looking for Flood Fill: http://en.wikipedia.org/wiki/Flood_fill

I had a bit of a play with an idea, though it wouldn't be the most efficient method by a long shot. I converted the image to an array of 300x300 chars, and it seemed to work well on that. I'm not familiar with OpenCV.
The idea was go through each pixel and see if it marks the end of a line - if so make that pixel black. Repeat until there are no changes to the picture.
The criterion I used to identify a pixel as the end of a line was to count the number of black-white changes around the 8-neighbour loop of that pixel: if there are fewer than 4 changes, it is the end of a line. This won't work if the lines are thicker than 1 px; I could probably come up with something better, but it seemed to work with the provided picture.
int res, i, j, count; /* image is a 300x300 array of chars */
do {
    res = 0;
    for (i = 1; i < 299; i++) {
        for (j = 1; j < 299; j++) {
            if (image[i][j] != 0) {
                /* count black-white transitions around the 8-neighbour loop */
                count = 0;
                if (image[i-1][j-1] != image[i-1][j+0]) count++;
                if (image[i-1][j+0] != image[i-1][j+1]) count++;
                if (image[i-1][j+1] != image[i+0][j+1]) count++;
                if (image[i+0][j+1] != image[i+1][j+1]) count++;
                if (image[i+1][j+1] != image[i+1][j+0]) count++;
                if (image[i+1][j+0] != image[i+1][j-1]) count++;
                if (image[i+1][j-1] != image[i+0][j-1]) count++;
                if (image[i+0][j-1] != image[i-1][j-1]) count++;
                /* fewer than 4 transitions: this is a line end, erase it */
                if (count < 4) {
                    image[i][j] = 0;
                    res = 1;
                }
            }
        }
    }
} while (res);

Related

Matching problems when using OpenCV's matchShapes function

I'm trying to find an object in a larger picture with the findContours/matchShapes functions (the object can vary, so it's not possible to go by color or something similar; feature detectors like SIFT also don't work because the object could be symmetric).
I have written the following code:
Mat scene = imread...
Mat Template = imread...
Mat imagegray1, imagegray2, imageresult1, imageresult2;
int thresh = 80;
double ans = 0, result = 0;

// Preprocess pictures
cvtColor(scene, imagegray1, CV_BGR2GRAY);
cvtColor(Template, imagegray2, CV_BGR2GRAY);
GaussianBlur(imagegray1, imagegray1, Size(5, 5), 2);
GaussianBlur(imagegray2, imagegray2, Size(5, 5), 2);
Canny(imagegray1, imageresult1, thresh, thresh * 2);
Canny(imagegray2, imageresult2, thresh, thresh * 2);

vector<vector<Point> > contours1;
vector<vector<Point> > contours2;
vector<Vec4i> hierarchy1, hierarchy2;

// Template
findContours(imageresult2, contours2, hierarchy2, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
// Scene
findContours(imageresult1, contours1, hierarchy1, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, cvPoint(0, 0));
imshow("template", Template);

double helper = INT_MAX;
int idx_i = 0, idx_j = 0;

// Match all contours with each other
for (int i = 0; i < contours1.size(); i++)
{
    for (int j = 0; j < contours2.size(); j++)
    {
        ans = matchShapes(contours1[i], contours2[j], CV_CONTOURS_MATCH_I1, 0);
        // find the best matching contour
        if (ans < helper)
        {
            idx_i = i;
            helper = ans;
        }
    }
}

// draw the best contour
drawContours(scene, contours1, idx_i,
             Scalar(255, 255, 0), 3, 8, hierarchy1, 0, Point());
When I use a scene that contains only the template, I get a good matching result:
But when there are more objects in the picture, I have trouble detecting the object:
I hope someone can tell me what the problem with my code is. Thanks
You have a huge number of contours in the second image (almost one per letter).
Since matchShapes compares scale-invariant Hu moments (http://docs.opencv.org/3.1.0/d3/dc0/group__imgproc__shape.html#gab001db45c1f1af6cbdbe64df04c4e944), even very small contours may fit the shape you are looking for.
Furthermore, the original shape is not distinguished properly, as can be seen when excluding all contours with an area smaller than 50:
if (contourArea(contours1[i]) > 50)
    drawContours(scene, contours1, i, Scalar(255, 255, 0), 1);
In other words, there is no problem with your code; the contour simply cannot be detected very well. I would suggest having a look at approxCurve and convexHull and trying to close the contour this way, or improving the use of Canny in some way.
Then you could use a priori knowledge to restrict the size (and maybe rotation?) of the contour you are looking for.
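For illustration, here is a minimal sketch of those suggestions in OpenCV's Python bindings (Python only for brevity; the filenames, the 50-pixel area threshold, and the OpenCV 4 findContours signature are assumptions):
import cv2

# Hypothetical filenames; substitute your scene and template images.
scene = cv2.imread('scene.png', cv2.IMREAD_GRAYSCALE)
template = cv2.imread('template.png', cv2.IMREAD_GRAYSCALE)

def outer_contours(img, thresh=80):
    edges = cv2.Canny(cv2.GaussianBlur(img, (5, 5), 2), thresh, thresh * 2)
    # OpenCV 4 return signature; OpenCV 3 returns three values.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Drop tiny contours (letters etc.) and close the rest with the convex hull.
    return [cv2.convexHull(c) for c in contours if cv2.contourArea(c) > 50]

# Pick the scene contour with the smallest Hu-moment distance to any template contour.
best_score, best_idx = min(
    (cv2.matchShapes(c, t, cv2.CONTOURS_MATCH_I1, 0), i)
    for i, c in enumerate(outer_contours(scene))
    for t in outer_contours(template))
print('best match: contour', best_idx, 'score', best_score)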

Go/OpenCV: Filter Contours

I'm using this library to write an OpenCV app in Golang. I'm trying to do something very basic but can't seem to make it work. I simply want to take a set of contours, remove those contours that don't have a minimum area, then return the filtered result.
This is the current state of my code:
// given *opencv.Seq and image, draw all the contours
func opencvDrawRectangles(img *opencv.IplImage, contours *opencv.Seq) {
    for c := contours; c != nil; c = c.HNext() {
        rect := opencv.BoundingRect(unsafe.Pointer(c))
        fmt.Println("Rectangle: ", rect.X(), rect.Y())
        opencv.Rectangle(img,
            opencv.Point{rect.X(), rect.Y()},
            opencv.Point{rect.X() + rect.Width(), rect.Y() + rect.Height()},
            opencv.ScalarAll(255.0),
            1, 1, 0)
    }
}

// return contours that meet the threshold
func opencvFindContours(img *opencv.IplImage, threshold float64) *opencv.Seq {
    defaultThresh := 10.0
    if threshold == 0.0 {
        threshold = defaultThresh
    }
    contours := img.FindContours(opencv.CV_RETR_LIST, opencv.CV_CHAIN_APPROX_SIMPLE, opencv.Point{0, 0})
    if contours == nil {
        return nil
    }
    defer contours.Release()
    threshContours := opencv.CreateSeq(opencv.CV_SEQ_ELTYPE_POINT,
        int(unsafe.Sizeof(opencv.CvPoint{})))
    for ; contours != nil; contours = contours.HNext() {
        v := *contours
        if opencv.ContourArea(contours, opencv.WholeSeq(), 0) > threshold {
            threshContours.Push(unsafe.Pointer(&v))
        }
    }
    return threshContours
}
In opencvFindContours, I'm trying to add to a new variable only those contours that meet the area threshold. When I take those results and pass them into opencvDrawRectangles, contours is filled with nonsense data. If, on the other hand, I just return contours directly in opencvFindContours then pass that to opencvDrawRectangles, I get the rectangles I would expect based on the motion detected in the image.
Does anyone know how to properly filter the contours using this library? I'm clearly missing something about how these data structures work, just not sure what.
However it's best implemented, the main thing I'm trying to figure out is simply how to take a sequence of contours and filter out the ones that fall below a certain area. All the C++ examples I've seen make this look pretty easy, but I'm finding it quite challenging using a Go wrapper of the C API.
You're taking the Sizeof of the pointer that would be returned by CreateSeq. You probably want the Sizeof of the struct opencv.CvPoint{} instead.
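For comparison, the same filter-and-draw step in OpenCV's Python bindings, where the sequence handling disappears entirely (a sketch; the filename and threshold are assumptions):
import cv2

# Hypothetical input; any single-channel binary image such as a motion mask works.
img = cv2.imread('motion_mask.png', cv2.IMREAD_GRAYSCALE)

# OpenCV 4 return signature; OpenCV 3 returns three values.
contours, _ = cv2.findContours(img, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)

# Keep only contours above the area threshold and draw their bounding rectangles.
threshold = 10.0
for c in (c for c in contours if cv2.contourArea(c) > threshold):
    x, y, w, h = cv2.boundingRect(c)
    cv2.rectangle(img, (x, y), (x + w, y + h), 255, 1)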

Compare multiple Image Histograms with Processing

[screenshot: picture histogram]
I'm quite new to the Processing language. I am trying to create an image comparison tool.
The idea is to get a histogram of a picture (see screenshot below, size is 600x400), which is then compared to 10 other histograms of similar pictures (all size 600x400). The histogram shows the frequency distribution of the gray levels with the number of pure black values displayed on the left and number of pure white values on the right.
In the end I should get a "winning" picture (the one that has the most similar histogram).
Below you can see the code for the image histogram, similar to the Processing tutorial example.
My idea was to create a PImage [] for the 10 other pictures to create histograms and then an if statement, but I'm not sure how to code it.
Does anyone have a tip on how to proceed or where to look? I couldn't find a similar post.
Thanks in advance and sorry if the question is very basic!
size(600, 400);

// Load an image from the data directory
// Load a different image by modifying the comments
PImage img = loadImage("image4.jpg");
image(img, 0, 0);
int[] hist = new int[256];

// Calculate the histogram
for (int i = 0; i < img.width; i++) {
  for (int j = 0; j < img.height; j++) {
    int bright = int(brightness(get(i, j)));
    hist[bright]++;
  }
}

// Find the largest value in the histogram
int histMax = max(hist);
stroke(255);

// Draw half of the histogram (skip every second value)
for (int i = 0; i < img.width; i += 2) {
  // Map i (from 0..img.width) to a location in the histogram (0..255)
  int which = int(map(i, 0, img.width, 0, 255));
  // Convert the histogram value to a location between
  // the bottom and the top of the picture
  int y = int(map(hist[which], 0, histMax, img.height, 0));
  line(i, img.height, i, y);
}
Not sure if your problem is the implementation in Processing or whether you don't know how to compare histograms. I assume it is the comparison, as the rest is pretty straightforward: calculate the similarity for every candidate and pick the winner.
Search the web for histogram comparison and among others you will find:
http://docs.opencv.org/2.4/doc/tutorials/imgproc/histograms/histogram_comparison/histogram_comparison.html
OpenCV implements four measures for histogram similarity.
Correlation:
$$ d(H_1,H_2) = \frac{\sum_I (H_1(I) - \bar{H}_1)(H_2(I) - \bar{H}_2)}{\sqrt{\sum_I (H_1(I) - \bar{H}_1)^2 \sum_I (H_2(I) - \bar{H}_2)^2}} $$
where $\bar{H}_k = \frac{1}{N} \sum_J H_k(J)$ and $N$ is the number of histogram bins,

or Chi-Square:
$$ d(H_1,H_2) = \sum_I \frac{(H_1(I) - H_2(I))^2}{H_1(I)} $$

or Intersection:
$$ d(H_1,H_2) = \sum_I \min(H_1(I), H_2(I)) $$

or Bhattacharyya distance:
$$ d(H_1,H_2) = \sqrt{1 - \frac{1}{\sqrt{\bar{H}_1 \bar{H}_2 N^2}} \sum_I \sqrt{H_1(I) \cdot H_2(I)}} $$
You can use these measures, but I'm sure you'll find something else as well.
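As a concrete illustration, here is a minimal sketch of the whole pick-the-winner loop using OpenCV's Python bindings (the candidate filenames and the choice of the correlation measure are assumptions; the same structure carries over to Processing):
import cv2

# Hypothetical filenames for the reference image and the ten candidates.
reference = cv2.imread('image4.jpg', cv2.IMREAD_GRAYSCALE)
candidates = ['candidate%d.jpg' % i for i in range(10)]

def gray_hist(img):
    # 256-bin grayscale histogram, normalized so image size does not matter.
    h = cv2.calcHist([img], [0], None, [256], [0, 256])
    return cv2.normalize(h, h).flatten()

ref_hist = gray_hist(reference)

# Correlation: higher means more similar, so the winner has the maximum score.
scores = [(cv2.compareHist(ref_hist,
                           gray_hist(cv2.imread(f, cv2.IMREAD_GRAYSCALE)),
                           cv2.HISTCMP_CORREL), f)
          for f in candidates]
print('winner:', max(scores)[1])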

Segmentation of perspectively distorted barcodes

There are images with perspectively distorted barcodes in them.
They are located and decoded using ZBar.
Now I do not only need the rough location, but also the four real corner points of the barcode, which define the enclosing 4-point polygon.
I tried different approaches, but did not yet get the desired result.
One of them was:
convert image to grayscale
threshold image
erode image
floodFill beginning with a pixel known to be part of barcode
obtain the contour around the floodFill result
But around this contour I would now need to find the minimal best-fitting 4-point polygon, which does not seem that easy.
Do you have ideas for better approaches?
You could use the following code and try to reduce your contour to a 4-point polygon via approxPolyDP:
vector<Point> approx;
for (size_t i = 0; i < contours.size(); i++)
{
    approxPolyDP(Mat(contours[i]), approx,
                 arcLength(Mat(contours[i]), true) * 0.02, true);
    if (approx.size() == 4 &&
        fabs(contourArea(Mat(approx))) > 1000 &&
        isContourConvex(Mat(approx)))
    {
        // angle() is the cosine helper from the tutorial linked below
        double maxCosine = 0;
        for (int j = 2; j < 5; j++)
        {
            double cosine = fabs(angle(approx[j % 4], approx[j - 2], approx[j - 1]));
            maxCosine = MAX(maxCosine, cosine);
        }
        if (maxCosine < 0.3)
            squares.push_back(approx);
    }
}
http://opencv-code.com/tutorials/detecting-simple-shapes-in-an-image/
You can also try the following methods, maybe they will produce good enough results for you:
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=minarearect#minarearect
or
http://docs.opencv.org/modules/imgproc/doc/structural_analysis_and_shape_descriptors.html?highlight=convexhull#convexhull
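For the minAreaRect route, a short Python sketch of reducing a contour to a 4-point polygon (the contour here is a made-up stand-in for the one from your floodFill/findContours step):
import cv2
import numpy as np

# Made-up stand-in for the flood-filled barcode contour.
contour = np.array([[10, 30], [90, 10], [110, 60], [25, 85]], dtype=np.float32)

rect = cv2.minAreaRect(contour)  # rotated rectangle: (center, size, angle)
box = cv2.boxPoints(rect)        # its four corner points, in order
print(box)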
OK, I found a solution that works well enough for my use case.
First a scanline is generated from the ZBar result.
Then the first and the last black pixels are found in a version of the image produced by cv::adaptiveThreshold with a large enough blockSize.
From there the first and the last bar are segmented using cv::findContours.
Now, for both end bars, the two contour points with the greatest distance to each other are searched.
They finally define the enclosing 4-point polygon.
This is not exactly what I described in my question, but the additional size due to the elongated guard patterns does not matter in my case.
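A small sketch of the "two contour points with the greatest distance" step, done as a brute-force pairwise search in NumPy (fine for the short end-bar contours; the sample points are made up):
import numpy as np

# Made-up stand-in for one end bar's contour, i.e. findContours output reshaped to (-1, 2).
pts = np.array([[5, 40], [6, 12], [30, 10], [31, 44]], dtype=np.float32)

# Pairwise squared distances; the argmax indexes the two farthest-apart points.
d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
i, j = np.unravel_index(np.argmax(d2), d2.shape)
print('farthest pair:', pts[i], pts[j])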

What is the correct way to apply a filter to an image

I was wondering what the correct way would be to apply a filter to an image. The image processing textbook that I am reading only talks about the mathematical and theoretical aspects of filters, but doesn't say much about the programming part!
I came up with this pseudo code. Could someone tell me if it is correct? I applied the Sobel edge filter to an image and I am not satisfied with the output: I think it detected many unnecessary points as edges and missed several points along the edge.
int filter[][] = {{0d,-1d,0d},{-1d,8d,-1d},{0d,-1d,0d}}; // I don't exactly remember the sobel filter
int total = 0;
for(int i = 2; i < image.getWidth()-2; i++)
    for(int j = 2; j < image.getHeight()-2; j++)
    {
        total = 0;
        for(int k = 0; k < 3; k++)
            for(int l = 0; l < 3; l++)
            {
                total += intensity(image.getRGB(i,j)) * filter[i+k][j+l];
            }
        if(total >= threshold){
            image.setRGB(i,j,WHITE);
        }
    }

int intensity(int color)
{
    return (((color >> 16) & 0xFF) + ((color >> 8) & 0xFF) + (color & 0xFF))/3;
}
Two issues:
(1) The Sobel operator has an x-direction and a y-direction kernel; they are
int filter[][] = {{1d,0d,-1d},{2d,0d,-2d},{1d,0d,-1d}}; and
int filter[][] = {{1d,2d,1d},{0d,0d,0d},{-1d,-2d,-1d}};
(2) The convolution part should index the image at the shifted position and the filter by the kernel offsets:
total += intensity(image.getRGB(i+k,j+l)) * filter[k][l];
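For reference, here is what the two passes look like in Python with OpenCV, combined into a gradient magnitude (the input filename and the threshold value are assumptions):
import cv2
import numpy as np

img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

# The two Sobel kernels from above (x-direction and y-direction).
kx = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]], dtype=np.float32)
ky = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]], dtype=np.float32)

# filter2D computes correlation rather than true convolution, but since only
# the gradient magnitude is used here the flipped sign does not matter.
gx = cv2.filter2D(img, -1, kx)
gy = cv2.filter2D(img, -1, ky)
magnitude = np.sqrt(gx ** 2 + gy ** 2)

# Optional threshold, mirroring the original pseudo code (value is an assumption).
edges = np.where(magnitude >= 128, 255, 0).astype(np.uint8)
cv2.imwrite('edges.png', edges)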
Your code doesn't look quite right to me. In order to apply the filter to the image you must apply the discrete convolution algorithm: http://en.wikipedia.org/wiki/Convolution.
When you do convolution you want to slide the 3x3 filter over the image, moving it one pixel at a time. At each step you multiply each filter 'pixel' by the corresponding image pixel underneath it (all 9 pixels under the filter contribute), and the resulting values are summed into a new result image as you go.
Thresholding is optional.
The following is your code modified with some notes:
int filter[][] = {{0d,-1d,0d},{-1d,8d,-1d},{0d,-1d,0d}};

// create a new array for the result image on the heap
int newImage[][][3] = ...

// initialize every element in newImage to 0
for (int i = 0; i < image.getWidth(); i++)
    for (int j = 0; j < image.getHeight(); j++)
        for (int k = 0; k < 3; k++)
        {
            newImage[i][j][k] = 0;
        }

// Convolve the filter and the image: each output pixel is the sum of its
// 3x3 neighbourhood weighted by the filter, computed per colour channel
for (int i = 1; i < image.getWidth()-1; i++)
    for (int j = 1; j < image.getHeight()-1; j++)
    {
        for (int k = -1; k < 2; k++)
            for (int l = -1; l < 2; l++)
            {
                newImage[i][j][0] += getRed(image.getRGB(i+k, j+l)) * filter[k+1][l+1];
                newImage[i][j][1] += getGreen(image.getRGB(i+k, j+l)) * filter[k+1][l+1];
                newImage[i][j][2] += getBlue(image.getRGB(i+k, j+l)) * filter[k+1][l+1];
            }
    }

int getRed(int color)
{
    ...
}

int getBlue(int color)
{
    ...
}

int getGreen(int color)
{
    ...
}
Please note that the code above does not handle the edges of the image exactly right. If you wanted to make it absolutely perfect you'd start by sliding the filter mostly off-screen, so that the first position applies the lower-right corner of the filter to the 0,0 pixel of the image. Doing this is really a pain, though, so usually it's easier just to ignore the thin border around the edges.
Once you've got that working you can experiment by sliding the Sobel filter in the horizontal and then the vertical direction. You will notice that the filter acts most strongly on lines which are perpendicular to its direction of travel. So for the best results apply the filter in the horizontal and then the vertical direction (using the same newImage). That way you will detect vertical as well as horizontal lines equally well. :)
You have some serious undefined behavior going on here: the array filter is 3x3, but the subscripts you're using, i+k and j+l, go up to the size of the image. It looks like you've misplaced this addition:
total += intensity(image.getRGB(i+k,j+l)) * filter[k][l];
Use GPUImage; it works quite well for this.
