OpenCV - How to remove small line segments from a contour?

Is there a way to remove small line segments from a contour?
For example, in this image, the largest contour is shown in green and its approximation in blue:
Since a contour is a set of Points, I guess we can do something to remove the segments of the contour that fall inside the red circles, for example by detecting and removing short line segments or small sub-contours. But I do not know how to do it.
Please note that I want to remove them after finding the contour, not before. Do you know how I can remove them, or have any other idea?

I've found that contourArea is good for removing small, isolated contours. This snippet illustrates how you might proceed:
findContours(edges, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE);
...
// Prune contours
vector<vector<Point> > prunedContours;
for (size_t i = 0; i < contours.size(); i++)
{
    if (contourArea(contours[i]) > minArea)
    {
        prunedContours.push_back(contours[i]);
    }
}
If the "loops" or extraneous contour regions are part of the larger contour of interest, take a look at approxPolyDP. It's possible that a coarse approximation of your original contour can omit the extraneous features.

Related

Remove Boxes/rectangles from image

I have the following image.
I would like to remove the orange boxes/rectangles around the numbers and keep the original image clean, without any orange grid/rectangles.
Below is my current code, but it does not remove them.
Mat src = Imgcodecs.imread("enveloppe.jpg", Imgcodecs.CV_LOAD_IMAGE_COLOR);
Mat hsvMat = new Mat();
Imgproc.cvtColor(src, hsvMat, Imgproc.COLOR_BGR2HSV);
Scalar lowerThreshold = new Scalar(0, 50, 50);
Scalar upperThreshold = new Scalar(25, 255, 255);
Mat mask = new Mat();
Core.inRange(hsvMat, lowerThreshold, upperThreshold, mask);
//src.setTo(new Scalar(255, 255, 255), mask);
What should I do next? How can I remove the orange boxes/rectangles from the original image?
Update:
For information, the mask contains exactly the boxes/rectangles that I want to remove. I don't know how to use this mask to remove the boxes/rectangles from the source (src) image as if they were never there.
This is what I did to solve the problem, using OpenCV in C++.
Part 1: Find box candidates
First I wanted to isolate the signal that is specific to the red channel. I split the image into its three channels, subtracted the blue channel from the red channel and the green channel from the red channel, and then subtracted one of these results from the other. The final subtraction result is shown in the image below.
using namespace cv;
using namespace std;
Mat src_rgb = imread("image.jpg");
std::vector<Mat> channels;
split(src_rgb, channels); // BGR order: channels[0]=blue, channels[1]=green, channels[2]=red
Mat diff_rb, diff_rg;
subtract(channels[2], channels[0], diff_rb); // red - blue
subtract(channels[2], channels[1], diff_rg); // red - green
Mat diff;
subtract(diff_rb, diff_rg, diff);
My next goal was to separate the parts of the obtained image into distinct "groups". To do that, I smoothed the image a little with a Gaussian filter, then applied a threshold to obtain a binary image, and finally looked for external contours within that image.
GaussianBlur(diff, diff, cv::Size(11, 11), 2.0, 2.0);
threshold(diff, diff, 5, 255, THRESH_BINARY);
vector<vector<Point>> contours;
findContours(diff, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
(Images: subtraction result, Gaussian-blurred image, thresholded image, and detected contours.)
Part 2: Inspect box candidates
After that, I had to estimate whether the interior of each contour contained a number or something else. I assumed that the numbers would always be printed with black ink and would have sharp edges. Therefore I took the red channel image (channels[2]), applied just a little Gaussian smoothing, and convolved it with a Laplacian operator.
Mat blurred_ch2;
GaussianBlur(channels[2], blurred_ch2, cv::Size(7, 7), 1, 1);
Mat laplace_result;
Laplacian(blurred_ch2, laplace_result, -1, 1);
I then took the resulting image and applied the following procedure to every contour separately. I computed the standard deviation of the pixel values within the contour interior. The standard deviation was high inside the contours that surrounded numbers, and low inside the two contours that surrounded the dog's head and the letters on top of the stamp.
That is why I could apply a threshold on the standard deviation. It was roughly twice as large for contours containing numbers, so this was an easy way to select only the contours that contained numbers. Then I drew the contour interior mask and used erosion and subtraction to obtain the "box edge mask".
The final step was fairly easy. On every channel of the image I computed an estimate of the average pixel value near the box, then changed all pixel values under the "box edge mask" to that value. After repeating this procedure for every box contour, I merged all three channels back into one image.
Mat mask(src_rgb.size(), CV_8UC1);
for (int i = 0; i < contours.size(); ++i)
{
    // Fill the interior of the current contour candidate
    mask.setTo(0);
    drawContours(mask, contours, i, cv::Scalar(200), -1);
    // Keep only contours whose interior has a high-variance Laplacian response (numbers)
    Scalar mean, stdev;
    meanStdDev(laplace_result, mean, stdev, mask);
    if (stdev.val[0] < 10.0) continue;
    // Turn the interior mask into a "box edge mask" (a ring along the contour)
    Mat eroded;
    erode(mask, eroded, cv::Mat(), cv::Point(-1, -1), 6);
    subtract(mask, eroded, mask);
    for (int c = 0; c < src_rgb.channels(); ++c)
    {
        // Sample a thin border of the ring to estimate the surrounding pixel value ...
        erode(mask, eroded, cv::Mat());
        subtract(mask, eroded, eroded);
        Scalar mean, stdev;
        meanStdDev(channels[c], mean, stdev, eroded);
        // ... and paint the whole ring with that value
        channels[c].setTo(mean, mask);
    }
}
Mat final_result;
merge(channels, final_result);
imshow("Final Result", final_result);
(Images: red channel of the image, result of convolution with the Laplacian operator, drawn mask of the box edges, and the final result.)
Please note
This code is far from optimal; in particular, the last loop does quite a lot of unnecessary work. But I think that in this case readability is more important (and the author of the question did not request an optimized solution anyway).
Looking towards more general solution
After I posted the initial reply, the author of the question noted that the digits can be of any color and that their edges are not necessarily sharp. That means the above procedure can fail for various reasons. I altered the input image so that it contains different kinds of numbers, and you can run my algorithm on this input and analyze what goes wrong.
The way I see it, one of these approaches is needed (or perhaps a mixture of both) to obtain a more "general" solution:
concentrate only on rectangle shape and color (confirm that the box candidate is really an orange box and remove it regardless of what is inside)
concentrate on numbers only (run a proper number detection algorithm inside the interior of every box candidate; if it contains a single number, remove the box)
I will give a trivial example of the first approach. If you can assume that orange box size will always be the same, just check the box size instead of standard deviation of the signal in the last loop of the algorithm:
Rect rect = boundingRect(contours[i]);
float area = rect.area();
if (area < 1000 || area > 1200) continue;
Warning: the actual area of the rectangles is around 600 px^2, but I took the Gaussian blurring into account, which causes the contour to expand. Please also note that if you use this approach, you no longer need to perform the Gaussian blur or Laplacian operations on the channel image.
You can also add other simple constraints to that condition; the ratio between width and height is the first one that comes to mind. Geometric properties can also be a good option (right angles, straight edges, convexity ...).
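For example, a minimal sketch of the aspect-ratio constraint, added next to the area check in the last loop (the 1.5 - 2.5 range is only an assumption and would need tuning on real images):
Rect rect = boundingRect(contours[i]);
float aspect = (float)rect.width / (float)rect.height;
// Skip candidates whose width/height ratio does not look like an orange box
// (the 1.5 - 2.5 range is a guess, not a measured value).
if (aspect < 1.5f || aspect > 2.5f) continue;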

OpenCV - Separating letter contours in text segmentation

I've been working on text recognition on a dataset of images. I want to segment the characters of each image using connected components and by finding contours of a thresholded image. However, many of the characters are merged with each other and with other components in the image.
Can you give me some idea for separating them? Thanks for the help!
Below are some examples, and part of my code:
Mat placa_contornos = processContourns(img_placa_adaptativeTreshold_mean);
vector<vector<Point>> contours_placa;
findContours(placa_contornos,
             contours_placa,
             CV_RETR_EXTERNAL, // external contours only (externos)
             CV_CHAIN_APPROX_NONE);
vector<vector<Point> >::iterator itc = contours_placa.begin();
while (itc != contours_placa.end()) {
    // Create bounding rect of object
    Rect mr = boundingRect(Mat(*itc));
    rectangle(imagem_placa_cor, mr, Scalar(0, 255, 0));
    ++itc;
}
imshow("placa con rectangles", imagem_placa_cor);
(Example results: original image, binarized image, result.)
I would try eroding the binary image more to see if that helps. You may also want to try fixing the skew and then removing the bottom line that connects the letters.
Also, this might be relevant: Recognize the characters of license plate
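A minimal sketch of the extra erosion suggested above, reusing the placa_contornos image from the question; the kernel size and iteration count are assumptions to tune on your own images:
// Erode the binary plate image a bit more so that touching characters
// separate; increase the iteration count gradually until they split.
Mat element = getStructuringElement(MORPH_RECT, Size(3, 3));
Mat eroded;
erode(placa_contornos, eroded, element, Point(-1, -1), 2);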
You can try an opening operation on your thresholded image to get rid of the noise. You can adjust the kernel size based on your need.
// Get a 7x7 rectangular kernel
Mat element = getStructuringElement(MORPH_RECT, Size(7, 7), Point(1, 1));
// Apply the morphological opening
Mat result;
morphologyEx(placa_contornos, result, MORPH_OPEN, element);
It gives the following intermediate output on your thresholded image; I guess it would improve your detection.

Detect two intersecting rectangles separately in opencv

I can detect rectangles that are separate from each other. However, I am having problems with rectangles in contact such as below:
Two rectangles in contact
I should detect 2 rectangles in the image. I am using findContours as expected, and I have tried various modes: CV_RETR_TREE, CV_RETR_LIST. I always get the outermost single contour, as shown below:
Outermost contour detected
I have tried with and without Canny edge detection. What I do is below:
cv::Mat element = cv::getStructuringElement(cv::MORPH_RECT, cv::Size(3,3));
cv::erode(__mat,__mat, element);
cv::dilate(__mat,__mat, element);
// Find contours
std::vector<std::vector<cv::Point> > contours;
cv::Mat coloredMat;
cv::cvtColor(__mat, coloredMat, cv::COLOR_GRAY2BGR);
int thresh = 100;
cv::Mat canny_output;
cv::Canny( __mat, canny_output, thresh, thresh*2, 3 );
cv::findContours(canny_output, contours, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
How can I detect the two rectangles separately?
If you already know the dimensions of the rectangle, you can use the generalized Hough transform.
If the dimensions of the rectangles are not known, you can use distanceTransform. The local maxima will give you the center location as well as the distance from the center to the nearest edge (which will be equal to half the short side of your rect). Further processing with corner detection / watershed and you should be able to find the orientation and dimensions (though this method may fail if the two rectangles overlap each other by a lot)
Simple corner detection plus a brute-force search (just try all possible rectangle combinations given the corner points and see which one best matches the image; note that a rectangle can be defined from only 3 points) might also work.
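A minimal sketch of the distanceTransform suggestion above, assuming __mat is a clean binary mask with the rectangles drawn in white on black:
// Distance from each foreground pixel to the nearest zero pixel; inside a
// rectangle the maximum is reached at its centre and roughly equals half
// of the short side.
cv::Mat dist;
cv::distanceTransform(__mat, dist, cv::DIST_L2, 3);

double maxVal;
cv::Point maxLoc;
cv::minMaxLoc(dist, nullptr, &maxVal, nullptr, &maxLoc);
// maxLoc ~ centre of one rectangle, maxVal ~ half of its short side.
// To find the second rectangle, suppress a neighbourhood around maxLoc,
// e.g. cv::circle(dist, maxLoc, (int)maxVal, cv::Scalar(0), -1), and
// run cv::minMaxLoc again.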

findContours for blob finding

I'm using OpenCV's findContours() for blob finding, by flood-filling at an arbitrary seed point inside the contour and taking the bounding rectangle of the flood fill. However, when two blobs touch at a corner (as in my example image),
they share a contour, so only one of the two blobs will be flood-filled, depending on which seed point was chosen.
I could change the floodfill connectivity setting from 4 to 8, so that the blobs are fused in the floodfill. What I'd really like to do instead is ignore the small defect and count only the big blob. Can this be done without substantially changing the algorithm?
Unlike for floodfill, there's no way to use findContours with 4-connectivity natively in OpenCV.
You should take a look at findContours() documentation.
findContours can return multiple contours if they appear in the image; in your case, with 4-connectivity you would get 2 contours, and you could then compare their bounding-box sizes to decide which one to keep.
cv::Mat img = cv::imread("test.png", 0);
std::vector<std::vector<cv::Point> > contours;
std::vector<cv::Vec4i> hierarchy;
cv::findContours(img, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
for (size_t i = 0; i < contours.size(); ++i) {
    cv::Rect bbox = cv::boundingRect(contours[i]);
    std::cout << "Contour " << i << " Area " << bbox.area() << std::endl;
}
Hope this helps.

OpenCV converting Canny edges to contours

I have an OpenCV application fed from a webcam stream of an office interior (lots of detail) where I have to find an artificial marker. The marker is a black square on a white background. I use Canny to find edges and cvFindContours for contouring, then approxPolyDP and co. for filtering and finding candidates, then a local histogram to filter further, bla bla bla...
This works more or less, but not exactly how I want. findContours always returns a closed loop, even if Canny creates a non-closed line: I get a contour walking along both sides of the line, forming a loop. For closed edges in the Canny image (my marker), I get 2 contours, one on the inside and another on the outside.
I have two problems with this:
I get 2 contours for each marker (not that serious)
the most trivial filtering (rejecting non-closed contours) is not usable
So my question: is it possible to get non-closed contours for non-closed Canny edges?
Or what is the standard way to solve the above 2 issues?
Canny is a very good tool, but I need a way to convert the 2D b/w image into something easily processable. Something like connected components listing all pixels in walking order of the component, so I can filter for loops and feed it into approxPolyDP.
Update: I missed an important detail: the marker can be in any orientation (it's not facing the camera head-on, no right angles); in fact what I'm doing is 3D orientation estimation based on the 2D projection of the marker.
I found a clean and easy solution for the 2 issues in the question. The trick is to enable two-level hierarchy generation (in findContours) and look for contours which have a parent. This returns the inner contour of closed Canny edges and nothing more. Non-closed edges are discarded automatically, and each marker ends up with a single contour.
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(CannyImage, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_NONE, Point(0, 0));
for (unsigned int i = 0; i < contours.size(); i++)
    if (hierarchy[i][3] >= 0) // has parent: inner (hole) contour of a closed edge (looks good)
        drawContours(contourImage, contours, i, Scalar(255, 0, 0), 1, 8);
It also works the other way around, that is: look for contours which have a child (hierarchy[i][2] >= 0), but in my case the parent check yields better results.
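For reference, a minimal sketch of that child-check variant, using the same contours and hierarchy as above:
for (unsigned int i = 0; i < contours.size(); i++)
    if (hierarchy[i][2] >= 0) // has a child: outer contour of a closed Canny edge
        drawContours(contourImage, contours, i, Scalar(0, 0, 255), 1, 8);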
I had the same problem with duplicate contours and even dilate and erode could not solve it:
Mat src = imread("E:\\test.bmp"), gry, bin, nor, dil, erd;
GaussianBlur(src, nor, Size(5, 5), 0);
cvtColor(nor, gry, CV_BGR2GRAY);
Canny(gry, bin, 100, 150, 5, true);
dilate(bin, dil, Mat());
erode(dil, erd, Mat());
Mat tmp = bin.clone();
vector<vector<Point>> conts;
vector<Vec4i> hier;
findContours(tmp, conts, hier, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
This image (test.bmp) contains 3 contours but findContours returned 6!
I used threshold instead and the problem was solved:
Mat src = imread("E:\\test.bmp"), gry, bin, nor, dil, erd;
GaussianBlur(src, nor, Size(5, 5), 0);
cvtColor(nor, gry, CV_BGR2GRAY);
threshold(gry, bin, 0, 255, THRESH_BINARY + THRESH_OTSU);
vector<vector<Point>> conts;
vector<Vec4i> hier;
findContours(bin, conts, hier, CV_RETR_TREE, CV_CHAIN_APPROX_SIMPLE);
Now it returns 4 contours, the first of which (index 0) is the image boundary and can easily be skipped.
This is how I would do it (a rough sketch follows the list):
1. Use Canny for edge detection.
2. Use the Hough transform to detect lines among the edges.
3. Detect the two lines that form a 90-degree angle.
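A minimal sketch of those steps, assuming a grayscale input image named gray; the Canny and Hough thresholds below are illustrative only and would need tuning:
// 1. Edge detection
cv::Mat edges;
cv::Canny(gray, edges, 100, 200);

// 2. Line segments via the probabilistic Hough transform
std::vector<cv::Vec4i> lines;
cv::HoughLinesP(edges, lines, 1, CV_PI / 180, 50, 30, 10);

// 3. Look for pairs of segments that meet at roughly 90 degrees
for (size_t i = 0; i < lines.size(); ++i)
    for (size_t j = i + 1; j < lines.size(); ++j)
    {
        double a1 = std::atan2((double)(lines[i][3] - lines[i][1]), (double)(lines[i][2] - lines[i][0]));
        double a2 = std::atan2((double)(lines[j][3] - lines[j][1]), (double)(lines[j][2] - lines[j][0]));
        double diff = std::fmod(std::abs(a1 - a2) * 180.0 / CV_PI, 180.0);
        if (std::abs(diff - 90.0) < 5.0)
        {
            // segments i and j are roughly perpendicular - candidate sides
            // of the marker square
        }
    }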
