Finding length of contour in opencv - image-processing

This is regarding a project that concerns detection of text in an image using OpenCV in C. The process is to compare the colors inside and outside the corresponding contours, and the way to do that is to draw normals on the contours at equally spaced positions and extract the pixel colors at the corresponding positions of the normals' end-points.
I am trying to implement this using the following code, but it's not working. I mean, it's drawing the normals, but not in an equi-spaced fashion.
for( ; contours != 0; contours = contours->h_next )
{
    CvScalar color = CV_RGB( rand()&255, rand()&255, rand()&255 );
    cvDrawContours( cc_color, contours, color, CV_RGB(0,0,0), -1, 1, 8, cvPoint(0,0) );
    ptr = contours;
    /* iterate over consecutive point pairs; stop at total-1 so that
       CV_GET_SEQ_ELEM(..., i+1) never reads past the end of the sequence */
    for( i = 0; i < ptr->total - 1; i++ )
    {
        p1 = CV_GET_SEQ_ELEM( CvPoint, ptr, i );
        p2 = CV_GET_SEQ_ELEM( CvPoint, ptr, i+1 );
        x1 = p1->x;
        y1 = p1->y;
        x2 = p2->x;
        y2 = p2->y;
        printf("%d %d %d %d\n", x1, y1, x2, y2);
        draw_normals(x1, y1, x2, y2);
    }
}
So is there a way to find the length of a contour, so that I can divide it by the number of normals I want to draw? Thanks in advance.
EDIT: The draw_normals function draws a normal between the two points passed to it as parameters.
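(For reference, a hedged guess at what such a function could look like in the C++ API; the original draw_normals is not shown, so the image argument and the half-length parameter are assumptions.)

#include <opencv2/opencv.hpp>
#include <cmath>

// Draw a short segment perpendicular to the contour edge (x1,y1)-(x2,y2) at its midpoint.
void draw_normals(cv::Mat& img, int x1, int y1, int x2, int y2, float len = 10.f)
{
    float dx = (float)(x2 - x1), dy = (float)(y2 - y1);
    float n = std::hypot(dx, dy);
    if (n == 0.f) return;                    // degenerate edge, nothing to draw
    float nx = -dy / n, ny = dx / n;         // unit normal to the edge
    float mx = 0.5f * (x1 + x2), my = 0.5f * (y1 + y2);
    cv::line(img,
             cv::Point(cvRound(mx + len * nx), cvRound(my + len * ny)),
             cv::Point(cvRound(mx - len * nx), cvRound(my - len * ny)),
             cv::Scalar(0, 255, 0));
}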

So is there a way to find the length of a contour?
Yes, you can find the length of a contour using the standard OpenCV function cvArcLength() (arcLength() in the C++ API).
Check the documentation here.
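Building on that, a minimal C++ sketch (not from the original answer) of how the length can be used to place the normals at equal arc-length intervals: compute the perimeter with arcLength() (the C++ counterpart of cvArcLength()), divide by the desired number of normals N, and walk the contour emitting a point every perimeter/N of travelled arc length. The function name is hypothetical.

#include <opencv2/opencv.hpp>
#include <cmath>
#include <vector>

// Walk a closed contour and return N points spaced perimeter/N apart along it.
std::vector<cv::Point2f> equallySpacedPoints(const std::vector<cv::Point>& contour, int N)
{
    double perimeter = cv::arcLength(contour, true);   // true = closed contour
    double step = perimeter / N, travelled = 0.0, nextMark = 0.0;
    std::vector<cv::Point2f> samples;
    for (size_t i = 0; i < contour.size() && (int)samples.size() < N; i++)
    {
        cv::Point2f a = contour[i], b = contour[(i + 1) % contour.size()];
        double seg = std::hypot(b.x - a.x, b.y - a.y); // length of this edge
        while (nextMark <= travelled + seg && (int)samples.size() < N)
        {
            float t = seg > 0 ? (float)((nextMark - travelled) / seg) : 0.f;
            samples.push_back(cv::Point2f(a.x + t * (b.x - a.x),
                                          a.y + t * (b.y - a.y)));
            nextMark += step;
        }
        travelled += seg;
    }
    return samples;
}

Consecutive pairs of the returned points can then be passed to draw_normals in place of raw neighboring contour points.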

Related

Non-connecting morphological filter

After some simple preprocessing, I am receiving a boolean mask of segmented images.
I want to "enhance" the borders of the mask and make them smoother. For that I am using an OPEN morphology filter with a rather big circular kernel; it works very well as long as the distance between the segmented objects is large enough. But in a lot of samples the objects stick together. Does there exist some more or less simple method to smooth such images without changing their morphology?
Without applying a morphological filter first, you can try to detect the external contours of the image. Now you can draw these external contours as filled contours and then apply your morphological filter. This works because now you don't have any holes to fill. This is fairly simple; a minimal sketch follows below.
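(A minimal sketch of that first approach, assuming an 8-bit single-channel input mask; the function name and kernel radius are illustrative, not from the answer.)

#include <opencv2/opencv.hpp>
using namespace cv;

// Fill the external contours so interior holes vanish, then open with a
// circular kernel to smooth the border.
Mat smoothMask(const Mat& mask, int kernelRadius)
{
    std::vector<std::vector<Point> > contours;
    findContours(mask.clone(), contours, RETR_EXTERNAL, CHAIN_APPROX_SIMPLE);
    Mat filled = Mat::zeros(mask.size(), CV_8UC1);
    drawContours(filled, contours, -1, Scalar(255), FILLED); // -1 = draw all, filled
    Mat kernel = getStructuringElement(MORPH_ELLIPSE,
                                       Size(2 * kernelRadius + 1, 2 * kernelRadius + 1));
    morphologyEx(filled, filled, MORPH_OPEN, kernel);
    return filled;
}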
Another approach:
find external contours
take the x and y coordinates of the contour points; you can consider these as 1-D signals and apply a smoothing filter to them
In the code below, I've applied the second approach to a sample image.
Input image
External contours without any smoothing
After applying a Gaussian filter to x and y 1-D signals
C++ code
Mat im = imread("4.png", 0);
Mat cont = im.clone();
Mat original = Mat::zeros(im.rows, im.cols, CV_8UC3);
Mat smoothed = Mat::zeros(im.rows, im.cols, CV_8UC3);

// contour smoothing parameters for gaussian filter
int filterRadius = 5;
int filterSize = 2 * filterRadius + 1;
double sigma = 10;

vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
// find external contours and store all contour points
findContours(cont, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, Point(0, 0));
for(size_t j = 0; j < contours.size(); j++)
{
    // draw the initial contour shape
    drawContours(original, contours, j, Scalar(0, 255, 0), 1);
    // extract x and y coordinates of points. we'll consider these as 1-D signals
    // add circular padding to 1-D signals
    size_t len = contours[j].size() + 2 * filterRadius;
    size_t idx = (contours[j].size() - filterRadius);
    vector<float> x, y;
    for (size_t i = 0; i < len; i++)
    {
        x.push_back(contours[j][(idx + i) % contours[j].size()].x);
        y.push_back(contours[j][(idx + i) % contours[j].size()].y);
    }
    // filter 1-D signals
    vector<float> xFilt, yFilt;
    GaussianBlur(x, xFilt, Size(filterSize, filterSize), sigma, sigma);
    GaussianBlur(y, yFilt, Size(filterSize, filterSize), sigma, sigma);
    // build smoothed contour
    vector<vector<Point> > smoothContours;
    vector<Point> smooth;
    for (size_t i = filterRadius; i < contours[j].size() + filterRadius; i++)
    {
        smooth.push_back(Point(xFilt[i], yFilt[i]));
    }
    smoothContours.push_back(smooth);
    drawContours(smoothed, smoothContours, 0, Scalar(255, 0, 0), 1);

    cout << "debug contour " << j << " : " << contours[j].size() << ", " << smooth.size() << endl;
}
Not 100% sure what you are trying to achieve, but this may be an avenue to explore... the tool potrace takes images and converts them to vectorised images, which involves smoothing. It prefers PGM format input files, so I use ImageMagick to prepare them. Anyway, here is an example of the command and the result, to see what you think:
convert disks.png pgm:- | potrace - -s -o out.svg
I have converted the resulting SVG file to a PNG so I can upload it to SO.

Distinguish rock scenes using OpenCV

I am struggling with finding the appropriate contour algorithm for a low quality image. The example image shows a rock scene:
What I am trying to achieve is to find contours around features such as:
light areas
dark areas
grey1 areas
grey2 areas
etc. until grey-n areas
(The number of areas shall be a parameter of choice)
I do not want to take a simple binary threshold, but rather use some sort of contour-finding (for example watershed or other). The major feature lines shall be kept; noise within a feature area can be flattened.
The result of my code can be seen on the images to the right.
Unfortunately, as you can easily tell, the colors do not really represent the original large-scale image features! For example: check out the two areas that I circled with red - these features are almost completely flooded with another color. What I imagine is that at least the very light and the very dark areas should be covered by their own colors.
cv::Mat cv_src = cv::imread(argv[1]);
cv::Mat output;
cv::Mat cv_src_gray;
cv::cvtColor(cv_src, cv_src_gray, cv::COLOR_RGB2GRAY);

double clipLimit = 0.1;
cv::Size tileGridSize = cv::Size(8, 8);
cv::Ptr<cv::CLAHE> clahe = cv::createCLAHE(clipLimit, tileGridSize);
clahe->apply(cv_src_gray, output);
cv::equalizeHist(output, output);
cv::cvtColor(output, cv_src, cv::COLOR_GRAY2RGB);

// Create binary image from source image
cv::Mat bw;
cv::cvtColor(cv_src, bw, cv::COLOR_BGR2GRAY);
cv::threshold(bw, bw, 180, 255, cv::THRESH_BINARY);

// Perform the distance transform algorithm
cv::Mat dist;
cv::distanceTransform(bw, dist, cv::DIST_L2, 5); // mask size 5 (the original passed CV_32F here, which happens to equal 5)

// Normalize the distance image for range = {0.0, 1.0}
cv::normalize(dist, dist, 0, 1., cv::NORM_MINMAX);

// Threshold to obtain the peaks
cv::threshold(dist, dist, .2, 1., cv::THRESH_BINARY);

// Create the CV_8U version of the distance image
cv::Mat dist_8u;
dist.convertTo(dist_8u, CV_8U);

// Find total markers
std::vector<std::vector<cv::Point> > contours;
cv::findContours(dist_8u, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);
int ncomp = (int)contours.size();

// Create the marker image for the watershed algorithm
cv::Mat markers = cv::Mat::zeros(dist.size(), CV_32S);

// Draw the foreground markers
for (int i = 0; i < ncomp; i++)
    cv::drawContours(markers, contours, i, cv::Scalar::all(i + 1), -1);

// Draw the background marker
cv::circle(markers, cv::Point(5, 5), 3, CV_RGB(255, 255, 255), -1);

// Perform the watershed algorithm
cv::watershed(cv_src, markers);

// Generate random colors
std::vector<cv::Vec3b> colors;
for (int i = 0; i < ncomp; i++)
{
    int b = cv::theRNG().uniform(0, 255);
    int g = cv::theRNG().uniform(0, 255);
    int r = cv::theRNG().uniform(0, 255);
    colors.push_back(cv::Vec3b((uchar)b, (uchar)g, (uchar)r));
}

// Create the result image
cv::Mat dst = cv::Mat::zeros(markers.size(), CV_8UC3);

// Fill labeled objects with random colors
for (int i = 0; i < markers.rows; i++)
{
    for (int j = 0; j < markers.cols; j++)
    {
        int index = markers.at<int>(i, j);
        if (index > 0 && index <= ncomp)
            dst.at<cv::Vec3b>(i, j) = colors[index - 1];
        else
            dst.at<cv::Vec3b>(i, j) = cv::Vec3b(0, 0, 0);
    }
}

// Show me what you got
imshow("final_result", dst);
I think you can use a simple clustering such as k-means for this, then examine the cluster centers (or the mean and standard deviation of each cluster). I quickly tried it in MATLAB.
im = imread('tvBqt.jpg');
gr = rgb2gray(im);
x = double(gr(:));
idx = kmeans(x, 4);
cl = reshape(idx, 600, 472);
figure,
subplot(1, 2, 1), imshow(gr, []), title('original')
subplot(1, 2, 2), imshow(label2rgb(cl), []), title('clustered')
The result:
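For readers working in OpenCV directly, here is a rough C++ equivalent of the MATLAB clustering above, as a sketch only (the file name and K = 4 are taken from the snippet; everything else is an assumption):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat gr = imread("tvBqt.jpg", IMREAD_GRAYSCALE);
    Mat samples;
    gr.convertTo(samples, CV_32F);
    samples = samples.reshape(1, (int)gr.total()); // one row per pixel
    Mat labels, centers;
    kmeans(samples, 4, labels,
           TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 10, 1.0),
           3, KMEANS_PP_CENTERS, centers);
    Mat clustered = labels.reshape(1, gr.rows);        // back to image shape
    clustered.convertTo(clustered, CV_8U, 255.0 / 3);  // spread labels 0..3 for display
    imshow("clustered", clustered);
    waitKey();
    return 0;
}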
You could try using SLIC Superpixels. I tried it and it showed some good results. You could vary the parameters to get better clustering. A rough C++ sketch follows after these links:
SLIC Superpixels
SLIC Superpixels with OpenCV C++
SLIC Superpixels with OpenCV Python
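If the opencv_contrib modules are available, the C++ version looks roughly like this (a hedged sketch; the input file name, region size, and ruler are placeholders to tune):

#include <opencv2/opencv.hpp>
#include <opencv2/ximgproc.hpp> // from opencv_contrib
using namespace cv;

int main()
{
    Mat img = imread("rocks.jpg"); // hypothetical input
    Ptr<ximgproc::SuperpixelSLIC> slic =
        ximgproc::createSuperpixelSLIC(img, ximgproc::SLIC, 40, 10.0f);
    slic->iterate(10);                     // refine superpixel boundaries
    Mat boundaries;
    slic->getLabelContourMask(boundaries); // mask of superpixel borders
    img.setTo(Scalar(0, 0, 255), boundaries);
    imshow("slic", img);
    waitKey();
    return 0;
}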

OpenCV - find bounding box of largest blob in binary image

What is the most efficient way to find the bounding box of the largest blob in a binary image using OpenCV? Unfortunately, OpenCV does not have specific functionality for blob detection. Should I just use findContours() and search for the largest one in the list?
Here. It. Is.
(FYI: try not to be lazy and figure out what happens in my function below.)
cv::Mat findBiggestBlob(cv::Mat & matImage){
    int largest_area = 0;
    int largest_contour_index = 0;

    vector< vector<Point> > contours; // Vector for storing contour
    vector<Vec4i> hierarchy;
    findContours( matImage, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE ); // Find the contours in the image

    for( int i = 0; i < contours.size(); i++ ) { // iterate through each contour
        double a = contourArea( contours[i], false ); // Find the area of contour
        if( a > largest_area ){
            largest_area = a;
            largest_contour_index = i; // Store the index of largest contour
            //bounding_rect = boundingRect(contours[i]); // Find the bounding rectangle for biggest contour
        }
    }

    drawContours( matImage, contours, largest_contour_index, Scalar(255), CV_FILLED, 8, hierarchy ); // Draw the largest contour using previously stored index.
    return matImage;
}
If you want to use OpenCV libs, check out OpenCV's SimpleBlobDetector. Here's another Stack Overflow answer with a small tutorial on it: How to use OpenCV SimpleBlobDetector
This only gives you key points, though. You could use this as an initial search to find the blob you want, and then possibly use the findContours algorithm around the most likely blobs.
Also, the more information you know about your blob, the more parameters you can provide to filter out the blobs you don't want. You might want to test out the area parameters of the SimpleBlobDetector. Possibly you could compute the area based on the size of the image, and then iteratively allow for a smaller blob if the algorithm does not detect any blobs.
Here is the link to the main OpenCV documentation: http://docs.opencv.org/modules/features2d/doc/common_interfaces_of_feature_detectors.html#simpleblobdetector
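For concreteness, a hedged C++ sketch of the area-parameter idea described above (the file name and thresholds are placeholders to tune, not recommendations):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat image = imread("blobs.png", IMREAD_GRAYSCALE); // hypothetical input
    SimpleBlobDetector::Params params;
    params.filterByArea = true;
    params.minArea = 100;    // ignore specks smaller than this
    params.maxArea = 100000; // ignore regions larger than this
    Ptr<SimpleBlobDetector> detector = SimpleBlobDetector::create(params);
    std::vector<KeyPoint> keypoints;
    detector->detect(image, keypoints); // key points only, as noted above
    Mat out;
    drawKeypoints(image, keypoints, out, Scalar(0, 0, 255),
                  DrawMatchesFlags::DRAW_RICH_KEYPOINTS);
    imshow("blobs", out);
    waitKey();
    return 0;
}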
To find the bounding box of the largest blob, I used findContours, followed by the following code:
double maxArea = 0;
MatOfPoint largestContour = null; // declared here; missing in the original snippet
for (MatOfPoint contour : contours) {
    double area = Imgproc.contourArea(contour);
    if (area > maxArea) {
        maxArea = area;
        largestContour = contour;
    }
}
Rect boundingRect = Imgproc.boundingRect(largestContour);
Since no one has posted a complete OpenCV solution, here's a simple approach using thresholding + contour area filtering:
Input image
Largest blob/contour highlighted in green
import cv2

# Load image, grayscale, Gaussian blur, and Otsu's threshold
image = cv2.imread('1.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
thresh = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)[1]

# Find contours and sort using contour area
cnts = cv2.findContours(thresh, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
cnts = cnts[0] if len(cnts) == 2 else cnts[1]
cnts = sorted(cnts, key=cv2.contourArea, reverse=True)
for c in cnts:
    # Highlight largest contour
    cv2.drawContours(image, [c], -1, (36,255,12), 3)
    break

cv2.imshow('thresh', thresh)
cv2.imshow('image', image)
cv2.waitKey()
TimZaman, your code has a bug, but I can't comment, so I'll start a new and correct answer.
Here is my solution based on 1"'s and TimZaman's ideas:
Mat measure::findBiggestBlob(cv::Mat &src){
    double largest_area = 0;
    int largest_contour_index = 0;

    Mat temp(src.rows, src.cols, CV_8UC1);
    Mat dst(src.rows, src.cols, CV_8UC1, Scalar::all(0));
    src.copyTo(temp);

    vector<vector<Point>> contours; // storing contours
    vector<Vec4i> hierarchy;

    findContours( temp, contours, hierarchy, CV_RETR_CCOMP, CV_CHAIN_APPROX_SIMPLE );

    for( int i = 0; i < contours.size(); i++ ) // iterate
    {
        double a = contourArea( contours[i], false ); // find the area of each contour
        if( a > largest_area )
        {
            largest_area = a;
            largest_contour_index = i;
        }
    }

    // Draw the largest contour, filled, onto the blank destination image
    drawContours( dst, contours, largest_contour_index, Scalar(255), CV_FILLED, 8, hierarchy );
    return dst;
}
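A hedged usage sketch: since the question asks for the bounding box, it can be read off the single-blob result. This assumes the function above is reachable as written (e.g. made static or free); the input file name is illustrative.

Mat mask = imread("binary.png", IMREAD_GRAYSCALE); // hypothetical input
Mat biggest = measure::findBiggestBlob(mask);
std::vector<std::vector<Point> > cs;
findContours(biggest.clone(), cs, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
if (!cs.empty()) {
    Rect box = boundingRect(cs[0]); // bounding box of the largest blob
    rectangle(biggest, box, Scalar(128), 2);
}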

iOS + Tesseract OCR + OpenCV

I wrote a digit OCR for iOS.
I have a test PNG image with the two digits 5 and 4.
I find the contours. How do I pass the contours one at a time to Tesseract?
Init Tesseract:
tess = new tesseract::TessBaseAPI();
tess->Init([dataPath cStringUsingEncoding:NSUTF8StringEncoding], "eng");
tess->SetPageSegMode(tesseract::PSM_SINGLE_CHAR); //<-- !!!!
tess->tesseract::TessBaseAPI::SetVariable("tessedit_char_whitelist", "0123456789");
Function to detect contours:
- (std::vector<std::vector<cv::Point> >)findSquaresInImage:(cv::Mat)_image {
    std::vector<std::vector<cv::Point> > squares;
    cv::Mat pyr, timg, gray0(_image.size(), CV_8U), gray;
    int thresh = 50, N = 11;

    // down-scale and upscale the image to filter out noise
    cv::pyrDown(_image, pyr, cv::Size(_image.cols/2, _image.rows/2));
    cv::pyrUp(pyr, timg, _image.size());

    std::vector<std::vector<cv::Point> > contours;
    int ch[] = {0, 0};
    mixChannels(&timg, 1, &gray0, 1, ch, 1);

    for( int l = 0; l < N; l++ ) {
        if( l == 0 ) {
            cv::Canny(gray0, gray, 0, thresh, 5);
            cv::dilate(gray, gray, cv::Mat(), cv::Point(-1,-1));
        }
        else {
            gray = gray0 >= (l+1)*255/N;
        }

        cv::findContours(gray, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

        std::vector<cv::Point> approx;
        CvRect rec1;
        std::string str;
        std::map<int,IplImage*> pic_list;

        for( size_t i = 0; i < contours.size(); i++ )
        {
            rec1 = cv::boundingRect(contours[i]);
            if (rec1.height > 0.5*gray.rows && rec1.width < 0.756*gray.cols) {
                NSLog(@"%d %d %d %d", rec1.width, rec1.height, rec1.x, rec1.y);
                cv::approxPolyDP(cv::Mat(contours[i]), approx, arcLength(cv::Mat(contours[i]), true)*0.02, true);
                squares.push_back(approx);
            }
        }
    }
    return squares;
}
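To get at the original question (passing the contours one at a time to Tesseract), here is a hedged sketch: crop each contour's bounding rectangle out of a single-channel image and hand the raw pixel buffer to SetImage. It assumes tess is the API initialized above and grayImage is a CV_8UC1 cv::Mat of the input; both names are illustrative.

// "squares" is what findSquaresInImage returns
for (size_t i = 0; i < squares.size(); i++) {
    cv::Rect r = cv::boundingRect(squares[i]);
    cv::Mat digit = grayImage(r).clone(); // clone() makes the crop continuous
    tess->SetImage(digit.data, digit.cols, digit.rows,
                   1 /* bytes per pixel */, (int)digit.step);
    char *text = tess->GetUTF8Text();     // recognized character(s)
    printf("contour %zu -> %s\n", i, text);
    delete [] text;
}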
Function to draw contours:
cv::Mat debugSquares( std::vector<std::vector<cv::Point> > squares, cv::Mat image ) {
    for ( int i = 0; i < squares.size(); i++ ) {
        // draw contour
        cv::drawContours(image, squares, i, cv::Scalar(255,0,0), 1, 8, std::vector<cv::Vec4i>(), 0, cv::Point());

        // draw bounding rect
        cv::Rect rect = boundingRect(cv::Mat(squares[i]));
        cv::rectangle(image, rect.tl(), rect.br(), cv::Scalar(0,255,0), 2, 8, 0);

        // draw rotated rect
        cv::RotatedRect minRect = minAreaRect(cv::Mat(squares[i]));
        cv::Point2f rect_points[4];
        minRect.points( rect_points );
        for ( int j = 0; j < 4; j++ ) {
            cv::line( image, rect_points[j], rect_points[(j+1)%4], cv::Scalar(0,0,255), 1, 8 );
        }
    }
    return image;
}
Method for the button click:
- (IBAction)onMath:(id)sender {
    UIImage *image = [UIImage imageNamed:@"test1.png"];

    cv::Mat iMat = [self cvMatFromUIImage:image];

    std::vector<std::vector<cv::Point> > sq = [self findSquaresInImage:iMat];

    cv::Mat hui = debugSquares(sq, iMat);

    image = [self UIImageFromCVMat:hui];
    self.imView.image = image;
}
image after:
link to project on github: https://github.com/MaxPatsy/iORC
Can you check this answer here?
I described some tips for preparing images for Tesseract here: Using tesseract to recognize license plates
In your example, there are several things going on...
You need to get the text to be black and the rest of the image white (not the reverse). That's what character recognition is tuned on. Grayscale is ok, as long as the background is mostly full white and the text mostly full black; the edges of the text may be gray (antialiased) and that may help recognition (but not necessarily - you'll have to experiment)
One of the issues you're seeing is that in some parts of the image, the text is really "thin" (and gaps in the letters show up after thresholding), while in other parts it is really "thick" (and letters start merging). Tesseract won't like that :) It happens because the input image is not evenly lit, so a single threshold doesn't work everywhere. The solution is to do "locally adaptive thresholding", where a different threshold is calculated for each neighborhood of the image. There are many ways of doing that, but check out for example:
Adaptive Gaussian thresholding in OpenCV with cv2.adaptiveThreshold(..., cv2.ADAPTIVE_THRESH_GAUSSIAN_C, ...) (see the sketch after this list)
Local Otsu's method
Local adaptive histogram equalization
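For the first bullet, a minimal C++ sketch of locally adaptive thresholding (the input file name, block size, and offset constant are placeholders to tune per image):

#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat gray = imread("page.png", IMREAD_GRAYSCALE); // hypothetical input
    Mat bw;
    // a separate threshold is computed for each 31x31 neighborhood:
    // the Gaussian-weighted local mean minus the constant 15
    adaptiveThreshold(gray, bw, 255, ADAPTIVE_THRESH_GAUSSIAN_C,
                      THRESH_BINARY, 31, 15);
    imwrite("binarized.png", bw); // black text on white background
    return 0;
}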
Another problem you have is that the lines aren't straight. In my experience Tesseract can handle a very limited degree of non-straight lines (a few percent of perspective distortion, tilt or skew), but it doesn't really work with wavy lines. If you can, make sure that the source images have straight lines :) Unfortunately, there is no simple off-the-shelf answer for this; you'd have to look into the research literature and implement one of the state of the art algorithms yourself (and open-source it if possible - there is a real need for an open source solution to this). A Google Scholar search for "curved line OCR extraction" will get you started, for example:
Text line Segmentation of Curved Document Images
Lastly: I think you would do much better to work with the Python ecosystem (ndimage, skimage) than with OpenCV in C++. OpenCV's Python wrappers are ok for simple stuff, but for what you're trying to do they won't do the job; you will need to grab many pieces that aren't in OpenCV (of course you can mix and match). Implementing something like curved line detection in C++ will take an order of magnitude longer than in Python (this is true even if you don't know Python).
Good luck!

OpenCV: Incorrect contour around blobs

I'm trying to draw contours around blobs in a binary image; however, sometimes OpenCV draws a single contour around two distinct blobs. Below is an example. How can I solve this issue?
Here it should draw two bounding boxes: one for the blob on the right and a separate one for the blob on the left. I agree they are close, but there is enough distance between them. I'm only drawing external contours instead of the tree or list. I'm also using cvFindNextContour(contourscanner), as this is an easier implementation for my case.
Thanks
EDIT:
The image displayed in the "output" window is from a different function, which just does image subtraction. The image displayed in the "contours" window is from the function pplfind(). The "output" image is passed to img_con().
IplImage* img_con(IplImage* image){
    int ppl;
    CvMemStorage* memstr = cvCreateMemStorage();
    IplImage* edges = cvCreateImage(cvGetSize(image), 8, 1);
    cvCanny(image, edges, 130, 255);

    CvContourScanner cscan = cvStartFindContours(image, memstr, sizeof(CvContour), CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE, cvPoint(0,0));
    ppl = pplfind(cscan, cvGetSize(image));
    if (ppl != 0)
        printf("Estimated number of people: %d\n", ppl);

    cvEndFindContours(&cscan);
    cvClearMemStorage(memstr);
    return edges;
}
int pplfind(CvContourScanner cscan, CvSize frSize){
    ofstream file;
    char buff[50];
    file.open("box.txt", ofstream::app);
    int ppl = 0;
    CvSeq* c;
    IplImage *out = cvCreateImage(frSize, 8, 3);

    while ((c = cvFindNextContour(cscan)) != NULL){
        CvRect box = cvBoundingRect(c, 1);
        if ((box.height > int(box.width*1.2)) && (box.height > 20)){ //&&(box.width<20)){
            ppl++;
            cvRectangle(out, cvPoint(box.x, box.y), cvPoint(box.x+box.width, box.y+box.height), CV_RGB(255,0,50), 1);
            cvShowImage("contours", out);
            //cvWaitKey();
        }
        //printf("Box Height: %d , Box Width: %d ,People: %d\n", box.height, box.width, ppl);
        //cvWaitKey(0);
        int coord = sprintf_s(buff, "%d,%d,%d\n", box.width, box.height, ppl);
        file.write(buff, coord);
    }

    file.close();
    cvReleaseImage(&out);
    return ppl;
}
I've never used cvFindNextContour, but running cvFindContours with CV_RETR_EXTERNAL on your image seems to work fine:
I use OpenCV + Python, so this code might not be useful for you, but for the sake of completeness here it goes:
contours = cv.FindContours(img, cv.CreateMemStorage(0), mode=cv.CV_RETR_EXTERNAL)
while contours:
    (x, y, w, h) = cv.BoundingRect(contours)
    cv.Rectangle(colorImg, (x, y), (x+w, y+h), cv.Scalar(0, 255, 255, 255))
    contours = contours.h_next()
Edit: you asked how to draw only those contours with certain properties; it would be something like this:
contours = cv.FindContours(img, cv.CreateMemStorage(0), mode=cv.CV_RETR_EXTERNAL)
while contours:
    (x, y, w, h) = cv.BoundingRect(contours)
    if h > w*1.2 and h > 20:
        cv.Rectangle(colorImg, (x, y), (x+w, y+h), cv.Scalar(0, 255, 255, 255))
    contours = contours.h_next()
