Building a rectangle with a group of points in OpenCV

I am trying to build a rectangle in OpenCV from a group of points, but I am not sure how to go about it. I would like to build the rectangle so that I can get its four corner points.

Method 1
This method is useful when the contours in your image do not directly represent the rectangle's sides.
The first thing you need to do is find the centre of each contour; you can use OpenCV moments or minEnclosingCircle after findContours. You then have a set of points representing your rectangle.
The next step is to classify the points by the side of the rectangle they belong to (top, bottom, left or right), that is, to find the points that lie on the same line.
After classifying the points, fit a line through each group, extend the four lines, and compute their intersections: the intersection with the minimum y-value is the top, minimum x the left, maximum x the right, and maximum y the bottom.
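A minimal C++ sketch of that last step, assuming the centre points have already been grouped by side (the helper name is mine, not from the original answer); fitLine and a 2D cross product do the work:
#include <opencv2/opencv.hpp>
using namespace cv;

// Hypothetical helper: fit a line to each side's centre points with
// fitLine, then intersect the two fitted lines.
Point2f intersectFittedLines(const std::vector<Point2f>& sideA,
                             const std::vector<Point2f>& sideB)
{
    Vec4f la, lb; // (vx, vy, x0, y0)
    fitLine(Mat(sideA), la, CV_DIST_L2, 0, 0.01, 0.01);
    fitLine(Mat(sideB), lb, CV_DIST_L2, 0, 0.01, 0.01);
    Point2f a0(la[2], la[3]), da(la[0], la[1]);
    Point2f b0(lb[2], lb[3]), db(lb[0], lb[1]);
    // solve a0 + t*da = b0 + s*db for t via the 2D cross product
    float denom = da.x * db.y - da.y * db.x; // 0 if the lines are parallel
    float t = ((b0.x - a0.x) * db.y - (b0.y - a0.y) * db.x) / denom;
    return Point2f(a0.x + t * da.x, a0.y + t * da.y);
}
Calling this for each pair of adjacent sides gives the four corners.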
Edit:
Method 2
Instead of doing all of the above steps, you can find the four corners directly, as described below.
Find the centre point of every contour.
The points with minimum x and maximum x give two of the corners.
The points with minimum y and maximum y give the other two corners.
Now decide which point is top left, top right, bottom left and bottom right by comparing these values:
-> From the set of four points, take the two with the smaller y-values. Of these, the point with the smaller x-value is the top-left corner and the point with the larger x-value is the top-right corner.
-> Similarly, from the remaining two points (those with the larger y-values), the point with the smaller x-value is the bottom-left corner and the point with the larger x-value is the bottom-right corner.
Code for method 2
Mat src = imread("src.png", 0);
vector< vector<Point> > contours; // Vector for storing contours
vector<Vec4i> hierarchy;
findContours(src, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // Find the contours in the image

// Centre of each contour via its minimum enclosing circle
vector<Point2f> center(contours.size());
vector<float> radius(contours.size());
for (int i = 0; i < contours.size(); i++) {
    minEnclosingCircle(contours[i], center[i], radius[i]);
    circle(src, center[i], radius[i], Scalar(255), 1, 8, 0);
}

// Indices of the centres with extreme x and y values
int top_left = 0, top_right = 0, bot_left = 0, bot_right = 0;
int idx_min_x = 0, idx_min_y = 0, idx_max_x = 0, idx_max_y = 0;
for (int i = 0; i < contours.size(); i++) {
    if (center[idx_max_x].x < center[i].x) idx_max_x = i;
    if (center[idx_min_x].x > center[i].x) idx_min_x = i;
    if (center[idx_max_y].y < center[i].y) idx_max_y = i;
    if (center[idx_min_y].y > center[i].y) idx_min_y = i;
}
vector<Point2f> corners;
corners.push_back(center[idx_max_x]);
corners.push_back(center[idx_min_x]);
corners.push_back(center[idx_max_y]);
corners.push_back(center[idx_min_y]);

// Sort the four corners by y so that corners[0..1] are the top pair
Point2f tmp;
for (int i = 0; i < corners.size(); i++) {
    for (int j = 0; j < corners.size() - 1; j++) {
        if (corners[j].y > corners[j + 1].y) {
            tmp = corners[j + 1];
            corners[j + 1] = corners[j];
            corners[j] = tmp;
        }
    }
}

// Within each pair, the smaller x is the left corner
if (corners[0].x > corners[1].x) { top_left = 1; top_right = 0; }
else { top_left = 0; top_right = 1; }
if (corners[2].x > corners[3].x) { bot_left = 3; bot_right = 2; }
else { bot_left = 2; bot_right = 3; }

line(src, corners[top_left], corners[top_right], Scalar(255), 1, 8, 0);
line(src, corners[bot_left], corners[bot_right], Scalar(255), 1, 8, 0);
line(src, corners[top_left], corners[bot_left], Scalar(255), 1, 8, 0);
line(src, corners[top_right], corners[bot_right], Scalar(255), 1, 8, 0);
imshow("src", src);
waitKey();
Result:

This post seems to be about trapezoids, not about rectangles.
For everyone looking for a solution regarding rectangles:
I merged all my points into one contour (see: How to merge contours in opencv?).
Then create a rectangle around that contour and draw it:
rect = cv2.minAreaRect(merged_contour)
box = cv2.boxPoints(rect)
box = np.intp(box) #np.intp: Integer used for indexing (same as C ssize_t; normally either int32 or int64)
cv2.drawContours(image, [box], 0, (0,0,255), 1)

Related

OpenCV: how can I detect a closed contour consisting of dots & segments?

I want to use OpenCV to detect an imperfect elliptic contour in binary images. Unfortunately, the elliptic contour consists of separate large dots with even larger gaps (of up to 25 pixels) between them.
I have tried OpenCV contour detection, but it doesn't work. It only marks the locations of individual dots instead of generating one enclosure for the contour.
How can I detect the contour using OpenCV? Please help. Here's a sample image
My final goal is to fit an ellipse to the dotted loop; the other nearby dots are noise. I have tried to get the contour centre for each dot or cluster of dots and put the xy coordinates of those centres into an array, hoping that FitEllipse would capture only the contour centres forming the loop. But when I call FitEllipse, I get an exception: "Emgu.CV.Util.CvException: 'OpenCV: n >= 0 && (depth == CV_32F || depth == CV_32S)' "
private void btnGO_Click(object sender, EventArgs e)
{
    Mat pic = CvInvoke.Imread("test image.png",
        Emgu.CV.CvEnum.ImreadModes.Grayscale);
    VectorOfVectorOfPoint contours = new VectorOfVectorOfPoint();
    Mat hierarchy = new Mat();
    CvInvoke.FindContours(pic, contours, hierarchy,
        Emgu.CV.CvEnum.RetrType.Tree,
        Emgu.CV.CvEnum.ChainApproxMethod.ChainApproxSimple);
    List<Point> contourCentersList = new List<Point>();
    for (int i = 0; i < contours.Size; i++)
    {
        Moments moments = CvInvoke.Moments(contours[i]);
        if (moments.M00 == 0)
            continue;
        int x = Convert.ToInt32(moments.M10 / moments.M00);
        int y = Convert.ToInt32(moments.M01 / moments.M00);
        contourCentersList.Add(new Point(x, y));
    }
    Mat contourCenters = new Mat();
    contourCenters.SetTo(contourCentersList.ToArray());
    RotatedRect ellipse = CvInvoke.FitEllipse(contourCenters);
}
In order to obtain a single contour, you can pre-process the image so that the disconnected areas become connected.
For example, you can apply a blur and then threshold the image.
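A minimal C++ sketch of this pre-processing, carried through to the ellipse fit that was the original goal (the file name, blur kernel size and Otsu thresholding are assumptions to be tuned for your image):
#include <opencv2/opencv.hpp>
using namespace cv;
using namespace std;

int main()
{
    Mat img = imread("dots.png", 0); // hypothetical file name
    Mat blurred, bw;
    // a large blur bridges the gaps (up to ~25 px) between the dots
    GaussianBlur(img, blurred, Size(21, 21), 0);
    threshold(blurred, bw, 0, 255, THRESH_BINARY | THRESH_OTSU);
    vector<vector<Point> > contours;
    findContours(bw, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);
    // fit an ellipse to the largest blob (fitEllipse needs >= 5 points)
    int best = -1;
    double bestArea = 0;
    for (size_t i = 0; i < contours.size(); i++)
    {
        double a = contourArea(contours[i]);
        if (a > bestArea && contours[i].size() >= 5)
        {
            bestArea = a;
            best = (int)i;
        }
    }
    if (best >= 0)
    {
        RotatedRect e = fitEllipse(Mat(contours[best]));
        ellipse(img, e, Scalar(128), 2);
    }
    imshow("result", img);
    waitKey();
    return 0;
}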
Below are your image and the result of applying blur and threshold:

Counting erythrocytes

I'm trying to count the number of erythrocytes in a microscope image. These are the smaller cells. (I first tried a CNN with a sliding window, but it was too slow, so I'm looking for a simpler segmentation.)
My approach is:
threshold
find and draw all contours filled, so that the cells won't have holes
compute the distance transform
iterate over all maxima
mask out the current maximum with a circle whose radius is the maximum's value, and store the maximum's position
My problem is that some cells have a "hole" in the middle - a bright area similar in value to the background. If I threshold the image, some of the cell masks become not a circle but a half circle, with distance-transform values far below the expected value.
I've marked the cells having the "holes" on the mask image.
How could I close the hole in the circle? Is there a thresholding method or trick?
Below is the part of the code responsible for cell extraction:
cv::adaptiveThreshold(_imgIn, th, 255, ADAPTIVE_THRESH_GAUSSIAN_C,
                      (bgblack ? CV_THRESH_BINARY : CV_THRESH_BINARY_INV), 35, 5); //| CV_THRESH_OTSU
Mat kernel1 = Mat::ones(3, 3, CV_8UC1);
for (int i = 0; i < 5; i++)
{
    dilate(th, th, kernel1);
    erode(th, th, kernel1);
}
vector<vector<Point> > contours;
findContours(th, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
mask = 0;
for (unsigned int i = 0; i < contours.size(); i++)
{
    drawContours(mask, contours, i, Scalar(255), CV_FILLED);
}
cv::distanceTransform(mask, dist, CV_DIST_L2, 3);
double min, max;
cv::Point pmax;
Mat tmp1 = dist.clone();
while (true)
{
    cv::minMaxLoc(tmp1, 0, &max, 0, &pmax);
    if (max < 5)
        break;
    cv::circle(_imgIn, pmax, 3, cv::Scalar(0), CV_FILLED);
    cv::circle(tmp1, pmax, max, cv::Scalar(0), CV_FILLED);
}
Closing holes
Closing is an important operator from the field of mathematical morphology. Like its dual operator opening, it can be derived from the fundamental operations of erosion and dilation. Like those operators it is normally applied to binary images, although there are graylevel versions. Closing is similar in some ways to dilation in that it tends to enlarge the boundaries of foreground (bright) regions in an image (and shrink background color holes in such regions), but it is less destructive of the original boundary shape. As with other morphological operators, the exact operation is determined by a structuring element. The effect of the operator is to preserve background regions that have a similar shape to this structuring element, or that can completely contain the structuring element, while eliminating all other regions of background pixels.
In OpenCV this looks as follows:
import cv2 as cv
import numpy as np
img = cv.imread('j.png', 0)
kernel = np.ones((5, 5), np.uint8)
closing = cv.morphologyEx(img, cv.MORPH_CLOSE, kernel)
Full documentation here.

How to determine the width of the lines?

I need to detect the width of these lines:
These lines are parallel and have some noise on them.
Currently, what I do is:
1. Find the centerline using thinning (Zhang-Suen):
ZhanSuenThinning(binImage, thin);
2. Compute the distance transform:
cv::distanceTransform(binImage, distImg, CV_DIST_L2, CV_DIST_MASK_5);
3. Accumulate the half-distances along the centerline:
double halfWidth = 0.0;
int count = 0;
for (int a = 0; a < thinImg.cols; a++)
    for (int b = 0; b < thinImg.rows; b++)
        if (thinImg.ptr<uchar>(b, a)[0] > 0)
        {
            halfWidth += distImg.ptr<float>(b, a)[0];
            count++;
        }
4. Finally, get the actual width:
width = halfWidth / count * 2;
The result isn't very good; it's off by around 1-2 pixels, and on bigger images it's even worse. Any suggestions?
You can adapt barcode reader algorithms, which is a faster way to do it.
Scan horizontal and vertical lines.
Let X be the length of the horizontal intersection with a black line and Y the length of the vertical intersection (you can take the median of several X and Y measurements if there is noise).
The two intersections form a right triangle with legs X and Y; the line width is the altitude from the right angle onto the hypotenuse:
X * Y / 2 = area
X² + Y² = hypotenuse²
hypotenuse * width / 2 = area
So: width = 2 * area / hypotenuse = X * Y / √(X² + Y²)
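As a small numeric check of the formula (plain C++; X and Y would come from the scans described above):
#include <cmath>
#include <cstdio>

// width = 2 * area / hypotenuse = X * Y / sqrt(X^2 + Y^2)
double lineWidth(double X, double Y)
{
    return X * Y / std::sqrt(X * X + Y * Y);
}

int main()
{
    // e.g. X = 10 px and Y = 5 px give a width of about 4.47 px
    std::printf("width = %.2f\n", lineWidth(10.0, 5.0));
    return 0;
}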
EDIT: You can also easily find the angle by using PCA, as sketched below.
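A rough sketch of the PCA idea, not from the original answer: run PCA on the coordinates of the line pixels; the first eigenvector gives the line direction (the file name and the black-on-white convention are assumptions):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
using namespace cv;

int main()
{
    Mat bw = imread("line.png", 0); // hypothetical file name
    std::vector<Point2f> pts;
    for (int y = 0; y < bw.rows; y++)
        for (int x = 0; x < bw.cols; x++)
            if (bw.at<uchar>(y, x) == 0) // line pixels are black
                pts.push_back(Point2f((float)x, (float)y));
    if (pts.empty()) return 0;
    // one row per point, two columns (x, y)
    Mat data((int)pts.size(), 2, CV_32F, pts.data());
    PCA pca(data, Mat(), CV_PCA_DATA_AS_ROW);
    float vx = pca.eigenvectors.at<float>(0, 0);
    float vy = pca.eigenvectors.at<float>(0, 1);
    std::cout << "angle (deg): " << std::atan2(vy, vx) * 180.0 / CV_PI << std::endl;
    return 0;
}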
All you need is to find the RotatedRect for each contour in your image; the OpenCV tutorial shows how to do it. Then just take the 'size' value of the rotated rectangle, which gives you the width and height of the contour. Note that width and height may be interchanged depending on the alignment of the contour; in the image above, the height becomes the width and the width becomes the height.
Contour --> RotatedRect
              |
              '--> Size2f size
                     |
                     |--> width
                     '--> height
After finding the contours, just do:
RotatedRect minRect = minAreaRect(Mat(contours[i]));
Size2f contourSize = minRect.size; // width and height of the rectangle
Rotated rectangle for each contour
Here is C++ code
Mat src = imread("line.png", 1);
Mat thr, gray;
blur(src, src, Size(3, 3));
cvtColor(src, gray, CV_BGR2GRAY);
Canny(gray, thr, 50, 190, 3, false);
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));
vector<RotatedRect> minRect(contours.size());
for (int i = 0; i < contours.size(); i++)
    minRect[i] = minAreaRect(Mat(contours[i]));
for (int i = 0; i < contours.size(); i++)
{
    cout << " Size = " << minRect[i].size << endl; // width and height may interchange according to contour alignment
    Size2f s = minRect[i].size;
    // draw the rotated rectangle
    Point2f rect_points[4];
    minRect[i].points(rect_points);
    for (int j = 0; j < 4; j++)
        line(src, rect_points[j], rect_points[(j + 1) % 4], Scalar(0, 0, 255), 1, 8);
}
imshow("src", src);
imshow("Canny", thr);
One quick and simple suggestion:
Count the total number of black pixels.
Detect the length of each line (perhaps with HoughLinesP, or simply as the diagonal of the bounding box around each thinned line).
Divide the number of black pixels by the sum of all line lengths; that should give you the average line width. A sketch of this follows.
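A minimal sketch of this count-and-divide idea, assuming black lines on a white background (the file name and HoughLinesP parameters are guesses, and HoughLinesP may split one line into several segments, which would skew the sum):
#include <opencv2/opencv.hpp>
#include <cmath>
#include <iostream>
using namespace cv;

int main()
{
    Mat bw = imread("line.png", 0); // hypothetical file name
    int black = (int)bw.total() - countNonZero(bw); // lines are black
    std::vector<Vec4i> segs;
    HoughLinesP(255 - bw, segs, 1, CV_PI / 180, 50, 30, 10);
    double totalLen = 0;
    for (size_t i = 0; i < segs.size(); i++)
    {
        double dx = segs[i][2] - segs[i][0];
        double dy = segs[i][3] - segs[i][1];
        totalLen += std::sqrt(dx * dx + dy * dy);
    }
    if (totalLen > 0)
        std::cout << "average width: " << black / totalLen << std::endl;
    return 0;
}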
I am not sure whether that is more accurate than your existing approach, though. The irregular end parts of each line might throw it off.
One thing you could try that could increase the accuracy for that case:
Measure the average angle of the lines
Rotate the image so the lines are aligned horizontally
crop a rectangular subsection of your shape, so all lines have the same length
(you can get the contour of your shape by morphological closing, then find a rectangle that is entirely contained within the shape. Make sure that the horizontal edges of the rectangle lie in between lines)
then count the number of black pixels again (count gray pixels caused by rotating the image as x% of a whole pixel)
Divide by (rectangle_width * number_of_lines_in_rectangle)
Use Hough line fits to find each line.
From each pixel on each line fit, scan in the perpendicular direction to get the distance to the edge. Find the edge using a spline fit or similar sub-pixel method.
Depending on your needs/desires, take the median or average distance. To eliminate problems with outliers, throw out the distances below the 10th percentile and above the 90th percentile before calculating the mean or median. You might also report the size using statistics: line width W, standard deviation S.
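For the percentile trimming, a small helper along these lines would do (plain C++; the 10% cut-offs match the suggestion above):
#include <algorithm>
#include <vector>

// mean of the distances after dropping the bottom and top ~10%
double trimmedMean(std::vector<double> d)
{
    if (d.empty()) return 0.0;
    std::sort(d.begin(), d.end());
    size_t lo = d.size() / 10;            // ~10th percentile index
    size_t hi = d.size() - d.size() / 10; // ~90th percentile index
    double sum = 0.0;
    for (size_t i = lo; i < hi; i++)
        sum += d[i];
    return sum / (hi - lo);
}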
Although a connected components algorithm can be used to find the lines, it won't find the "true" edges as nicely as a spline fit.
An image like the one you've shown is noisy/blurry, so the number of black pixels might not reflect the line properties; for example, some black pixels may come from salt-and-pepper noise. You can get rid of it with morphological erosion, but this will affect your lines as well.
A better way is to extract connected components, delete the small ones that likely come from noise or small blobs, then calculate the number of pixels and divide it by the number of lines, as sketched below. This approach will also help you to analyse the shape of the objects in your image and get rid of any artefacts other than noise or lines.
A different real-world situation is when you have some grey pixels close to a line border. You can either use a threshold to discard them or count them with some weight < 1. This will compensate for blur in your image. By the way, rotating the image may increase the blur, since rotation is typically done with interpolation and smoothing.
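A sketch of that filtering step, treating each contour as a connected component to stay close to the code elsewhere on this page (the file name and area threshold are guesses):
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;

int main()
{
    Mat bw = imread("line.png", 0); // hypothetical file name
    Mat mask = bw < 128; // line pixels become white in the mask
    std::vector<std::vector<Point> > contours;
    findContours(mask.clone(), contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
    double pixels = 0;
    int lines = 0;
    for (size_t i = 0; i < contours.size(); i++)
    {
        if (contourArea(contours[i]) < 50) // drop small blobs (likely noise)
            continue;
        Mat blob(bw.size(), CV_8U, Scalar(0));
        drawContours(blob, contours, (int)i, Scalar(255), CV_FILLED);
        pixels += countNonZero(blob & mask);
        lines++;
    }
    if (lines > 0)
        std::cout << "pixels per line: " << pixels / lines << std::endl;
    return 0;
}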

how to find blur corner position with opencv?

I want to find the corner position in a blurred image that has a corner inside it, like the following example:
I can make sure that only one corner is inside the image, and I assume that
the corner is part of a black and white chessboard.
How can I detect the cross position with openCV?
Thanks!
Usually you can determine the corner using the gradient:
Gx = im[i][j+1] - im[i][j-1]; Gy = im[i+1][j] - im[i-1][j];
G² = Gx² + Gy²;
theta = atan2(Gy, Gx);
As your image is blurred, you should compute the gradient at a larger scale:
Gx = im[i][j+delta] - im[i][j-delta]; Gy = im[i+delta][j] - im[i-delta][j];
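A direct implementation of this large-scale gradient might look like the following sketch (border pixels are simply skipped; the file name is a placeholder):
#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

int main()
{
    Mat gray = imread("corner.jpg", 0); // hypothetical file name
    int delta = 50;
    Mat gnorm(gray.size(), CV_32F, Scalar(0)); // gradient norm
    Mat gdir(gray.size(), CV_32F, Scalar(0));  // gradient direction
    for (int i = delta; i < gray.rows - delta; i++)
        for (int j = delta; j < gray.cols - delta; j++)
        {
            float gx = (float)gray.at<uchar>(i, j + delta) - gray.at<uchar>(i, j - delta);
            float gy = (float)gray.at<uchar>(i + delta, j) - gray.at<uchar>(i - delta, j);
            gnorm.at<float>(i, j) = std::sqrt(gx * gx + gy * gy);
            gdir.at<float>(i, j) = std::atan2(gy, gx);
        }
    normalize(gnorm, gnorm, 0, 1, NORM_MINMAX); // scale for display
    imshow("gradient norm", gnorm);
    waitKey();
    return 0;
}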
Here is the result that I obtained for delta = 50:
The gradient norm (multiplied by 20):
The gradient direction:
Another solution:
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat img = imread("c:/data/corner.jpg");
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    threshold(gray, gray, 100, 255, CV_THRESH_BINARY);
    int step = 15;
    std::vector<Point> points;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));
    // fit a rotated rectangle to the white pixels
    RotatedRect box = minAreaRect(Mat(points));
    //circle(img, box.center, 2, Scalar(255, 0, 0), -1);
    // invert the image, fit again and take the average of the two centers
    // (may not be needed if a 'good' threshold is found)
    Point p1 = Point(box.center.x, box.center.y);
    points.clear();
    gray = 255 - gray;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));
    box = minAreaRect(Mat(points));
    Point p2 = Point(box.center.x, box.center.y);
    //circle(img, p2, 2, Scalar(0, 255, 0), -1);
    circle(img, Point((p1.x + p2.x) / 2, (p1.y + p2.y) / 2), 3, Scalar(0, 0, 255), -1);
    imshow("img", img);
    waitKey();
    return 0;
}
Rather than working right away at a ridiculously large scale, as suggested by others, I recommend downsizing first (which has the effect of deblurring), doing one pass of Harris to find the corner, then upscaling its position and doing a pass of cornerSubPix at full resolution with a large window (large enough to encompass the obvious saddle point of the intensity).
In this way you get the best of both worlds: fast detection to initialize the refinement, and accurate refinement given the original imagery.
See also this other relevant answer
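A sketch of this coarse-to-fine pipeline (the scale factor, window size and detector parameters are guesses, and goodFeaturesToTrack stands in for the Harris pass):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat gray = imread("corner.jpg", 0); // hypothetical file name
    Mat coarse;
    resize(gray, coarse, Size(), 0.25, 0.25); // downsizing also deblurs
    // one pass of Harris-based detection for the single strongest corner
    std::vector<Point2f> corners;
    goodFeaturesToTrack(coarse, corners, 1, 0.01, 10, Mat(), 3, true, 0.04);
    if (corners.empty()) return 0;
    corners[0].x *= 4; // back to full-resolution coordinates
    corners[0].y *= 4;
    // refine at full resolution with a window large enough for the saddle point
    cornerSubPix(gray, corners, Size(50, 50), Size(-1, -1),
                 TermCriteria(CV_TERMCRIT_ITER | CV_TERMCRIT_EPS, 40, 0.01));
    circle(gray, corners[0], 3, Scalar(0), -1);
    imshow("corner", gray);
    waitKey();
    return 0;
}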

How to get size of an area in JavaCV

In my project I want to get the size of the largest homogeneous area of a specific color (in my example below, the blue sky).
My first idea is to convert the original image:
to a binary image, detect the sky color, and create a mask of this area:
But how can I get the size and the position of these white pixels? I want an efficient method that returns true if the picture has a blue sky in the upper 1/3 of the picture.
Any ideas? Should I create a "global mask" (see image 3 in comment) and compare it with the binary picture? Or is there an easier way?
Thank you.
The algorithm is the following:
1. Convert the input image to the YCbCr color space, which is well suited to detecting blue (and also red) colors: to convert an image to another color space, use cvtColor.
2. Extract the blue-difference (Cb) channel from it: use the extractChannel function to extract the needed channel.
3. Detect the regions with the biggest blue values [0-255]. I used the minMaxIdx function and then just multiplied the maximum by 0.8 (this is the threshold). You can use more complex methods such as histogram analysis.
4. Make a mask of the blue color: for this I used the threshold function with the threshold calculated in step 3 as a parameter.
5. Find all blue contours in the mask. In OpenCV it's easy - just use findContours.
6. Finally, detect the contour with the biggest area and find its coordinates (center). To calculate a contour's area you can use the contourArea function.
Alternatively, instead of steps 1-4 you can convert the image to HSV and detect the blue color with inRange, as in the sketch below.
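A sketch of that alternative (the hue bounds are rough guesses for sky blue and will need tuning):
#include <opencv2/opencv.hpp>
using namespace cv;

int main()
{
    Mat inMat = imread("input.jpg"), hsv, mask;
    cvtColor(inMat, hsv, CV_BGR2HSV);
    // H in OpenCV is 0-179; roughly 90-130 covers cyan-to-blue
    inRange(hsv, Scalar(90, 50, 50), Scalar(130, 255, 255), mask);
    // 'mask' plays the role of threshMat in the implementation below
    imshow("mask", mask);
    waitKey();
    return 0;
}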
Here's my C++ implementation:
Mat inMat = imread("input.jpg"), blueMat, threshMat;
cvtColor(inMat, blueMat, CV_BGR2YCrCb); // convert to YCrCb color space
extractChannel(blueMat, blueMat, 2); // get the blue-difference (Cb) channel

// find the max value of the blue color
// (you could also use histograms or a more complex method)
double blueMax;
minMaxIdx(blueMat, 0, &blueMax);
blueMax *= 0.8;

// make the binary mask
threshold(blueMat, threshMat, blueMax, 255, THRESH_BINARY);

// find all blue contours in the mask
vector<vector<Point> > contours;
vector<Vec4i> hierarchy;
findContours(threshMat, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE);

// find the contour with the biggest area
double maxSquare = 0;
vector<Point> maxContour;
for (size_t i = 0; i < contours.size(); i++)
{
    double square = contourArea(contours[i]);
    if (square > maxSquare)
    {
        maxContour = contours[i];
        maxSquare = square;
    }
}

// output results
Point center = centerPolygon(maxContour);
cout << "square = " << maxSquare << endl;
cout << "position: x: " << center.x << ", y: " << center.y << endl;
Here's the centerPolygon function:
Point centerPolygon(const vector<Point>& points)
{
    int x = 0, y = 0;
    for (size_t i = 0; i < points.size(); i++)
    {
        x += points[i].x;
        y += points[i].y;
    }
    return Point(x / points.size(), y / points.size());
}
The output of the program is:
square = 263525
position: x: 318, y: 208
You can convert this code to JavaCV - see this tutorial.
