I need to detect the width of these lines:
These lines are parallel and have some noise on them.
Currently, what I do is:
1. Find the centerline using thinning (Zhang-Suen):
ZhangSuenThinning(binImage, thinImg);
2. Compute the distance transform:
cv::distanceTransform(binImage, distImg, CV_DIST_L2, CV_DIST_MASK_5);
3. Accumulate the half-widths along the centerline:
double halfWidth = 0.0;
int count = 0;
for (int a = 0; a < thinImg.cols; a++)
    for (int b = 0; b < thinImg.rows; b++)
        if (thinImg.at<uchar>(b, a) > 0)
        {
            halfWidth += distImg.at<float>(b, a);
            count++;
        }
4. Finally, get the actual width:
width = halfWidth / count * 2;
The result isn't quite good; it's off by around 1-2 pixels, and on bigger images the result is even worse. Any suggestions?
You can adapt barcode reader algorithms, which is a fast way to do it.
Scan horizontal and vertical lines.
Let X be the length of the horizontal intersection with the black line and Y the length of the vertical intersection (you can compute each as the median of several scans if there is noise).
X * Y / 2 = area
X² + Y² = hypotenuse²
hypotenuse * width / 2 = area
So: width = 2 * area / hypotenuse = X * Y / √(X² + Y²)
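As a quick sketch of the arithmetic (the X and Y values here are made-up stand-ins for your scan measurements):
double X = 12.0, Y = 9.0;               // made-up example scan lengths
double hypotenuse = std::hypot(X, Y);   // sqrt(X*X + Y*Y), from <cmath>
double area = X * Y / 2.0;              // area of the right triangle the two scans cut out
double width = 2.0 * area / hypotenuse; // equivalently X * Y / hypot(X, Y)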
EDIT: You can also easily find the angle by using PCA.
All you need is to find the RotatedRect for each contour in your image; here is the OpenCV tutorial showing how to do it. Then just take the values of size from the rotated rectangle, which gives you the height and width of the contour. Note that height and width may be interchanged depending on the contour's alignment: in the image above, the height becomes the width and the width becomes the height.
Contour --> RotatedRect
              |
              '--> Size2f size
                     |
                     |--> width
                     '--> height
After finding the contours, just do:
RotatedRect minRect = minAreaRect( Mat(contours[i]) );
Size2f contourSize = minRect.size; // width and height of the rectangle
Rotated rectangle for each contour
Here is the C++ code:
#include <opencv2/opencv.hpp>
#include <iostream>
using namespace cv;
using namespace std;

int main()
{
    Mat src = imread("line.png", 1);
    Mat thr, gray;
    blur(src, src, Size(3, 3));
    cvtColor(src, gray, CV_BGR2GRAY);
    Canny(gray, thr, 50, 190, 3, false);

    vector<vector<Point> > contours;
    vector<Vec4i> hierarchy;
    findContours(thr.clone(), contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE, Point(0, 0));

    vector<RotatedRect> minRect(contours.size());
    for (size_t i = 0; i < contours.size(); i++)
        minRect[i] = minAreaRect(Mat(contours[i]));

    for (size_t i = 0; i < contours.size(); i++)
    {
        cout << " Size = " << minRect[i].size << endl; // width and height may interchange according to contour alignment
        // draw the rotated rectangle
        Point2f rect_points[4];
        minRect[i].points(rect_points);
        for (int j = 0; j < 4; j++)
            line(src, rect_points[j], rect_points[(j + 1) % 4], Scalar(0, 0, 255), 1, 8);
    }

    imshow("src", src);
    imshow("Canny", thr);
    waitKey();
    return 0;
}
One quick and simple suggestion:
Count the total number of black pixels.
Detect the length of each line (perhaps with cv::HoughLinesP, or simply the diagonal of the bounding box around each thinned line).
Divide the number of black pixels by the sum of all line lengths; that should give you the average line width.
I am not sure whether that is more accurate than your existing approach, though; the irregular end parts of each line might throw it off.
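Here is a minimal sketch of that estimate, assuming the lines are white (non-zero) on a black background in a binary image bin; the Hough parameters and the helper name estimateWidth are placeholders to adapt:
#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

double estimateWidth(const Mat& bin)
{
    std::vector<Vec4i> segments;
    HoughLinesP(bin, segments, 1, CV_PI / 180, 50, 30, 10); // threshold, min length, max gap: tune these

    double totalLength = 0.0;
    for (const Vec4i& s : segments)
        totalLength += std::hypot(s[2] - s[0], s[3] - s[1]); // length of each detected segment

    return totalLength > 0.0 ? countNonZero(bin) / totalLength : 0.0; // average width = area / total length
}
Note that HoughLinesP may split one line into several overlapping segments, so you may need to merge collinear segments before summing the lengths.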
One thing you could try that could increase the accuracy for that case:
Measure the average angle of the lines
Rotate the image so the lines are aligned horizontally
Crop a rectangular subsection of your shape, so all lines have the same length
(you can get the contour of your shape by morphological closing, then find a rectangle that is entirely contained within the shape; make sure that the horizontal edges of the rectangle are between lines)
Then count the number of black pixels again (count gray pixels caused by rotating the image as x% of a whole pixel)
Divide by (rectangle_width * number_of_lines_in_rectangle)
Use Hough line fits to find each line.
From each pixel on each line fit, scan in the perpendicular direction to get the distance to the edge. Find the edge using a spline fit or similar sub-pixel method.
Depending on your needs/desires, take the median or average distance. To eliminate problems with outliers, throw out the distances below the 10th percentile and above the 90th percentile before calculating the mean or median. You might also report the size using statistics: line width W, standard deviation S.
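As a sketch of that trimming step, a hypothetical helper computing the 10th-90th percentile trimmed mean of the collected distances:
#include <algorithm>
#include <numeric>
#include <vector>

double trimmedMean(std::vector<double> d)
{
    if (d.empty()) return 0.0;
    std::sort(d.begin(), d.end());
    size_t lo = d.size() / 10;            // drop the bottom 10 percent
    size_t hi = d.size() - d.size() / 10; // drop the top 10 percent
    return std::accumulate(d.begin() + lo, d.begin() + hi, 0.0) / (hi - lo);
}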
Although a connected components algorithm can be used to find the lines, it won't find the "true" edges as nicely as a spline fit.
An image like the one you showed is noisy/blurry, and thus the number of black pixels might not reflect line properties; for example, black pixels can be partially attributed to salt-and-pepper noise. You can get rid of it with morphological erosion, but this will affect your lines as well.
A better way is to extract connected components, delete the small ones that likely come from noise or small blobs, then calculate the number of pixels and divide it by the number of lines. This approach will also help you to analyse the shape of the objects in your image and get rid of artefacts other than noise or lines.
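A possible sketch of that filtering step (assuming OpenCV 3+ for connectedComponentsWithStats; the helper name and the minArea threshold are placeholders to tune):
#include <opencv2/opencv.hpp>
using namespace cv;

Mat removeSmallComponents(const Mat& bin, int minArea)
{
    Mat labels, stats, centroids;
    int n = connectedComponentsWithStats(bin, labels, stats, centroids);
    Mat cleaned = Mat::zeros(bin.size(), CV_8UC1);
    for (int i = 1; i < n; i++)                     // label 0 is the background
        if (stats.at<int>(i, CC_STAT_AREA) >= minArea)
            cleaned.setTo(255, labels == i);        // keep only the large components
    return cleaned;
}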
A different real-world situation is when you have some grey pixels close to a line border. You can either use a threshold to discard them or count them with some weight < 1. This will compensate for blur in your image. By the way, rotating the image may increase the blur, since rotation is typically done with interpolation and smoothing.
I'm trying to count the number of erythrocytes in a microscope image; these are the smaller cells. (I first tried a CNN with a sliding window, but it was too slow, so I'm looking for a simpler segmentation.)
My approach is:
threshold
find and draw all contours filled, so that the cells won't have holes
compute the distance transform
iterate over all maxima:
mask out the current maximum with a circle whose radius equals the maximum value, and store the maximum's position
My problem is that some cells have a "hole" in the middle: a bright area similar in value to the background. If I threshold the image, some of the cell masks become not circles but half circles, with distance-transform values far below the expected value.
I've marked the cells having the "holes" on the mask image.
How could I close the hole in the circle? Is there a thresholding method or trick?
Below is the part of the code responsible for cell extraction:
cv::adaptiveThreshold(_imgIn, th, 255, ADAPTIVE_THRESH_GAUSSIAN_C, (bgblack ? CV_THRESH_BINARY : CV_THRESH_BINARY_INV), 35, 5); //| CV_THRESH_OTSU
Mat kernel1 = Mat::ones(3, 3, CV_8UC1);
for (int i = 0; i < 5; i++)
{
    dilate(th, th, kernel1); // dilate followed by erode == morphological closing
    erode(th, th, kernel1);
}
vector<vector<Point> > contours;
findContours(th, contours, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_NONE);
mask = 0;
for (unsigned int i = 0; i < contours.size(); i++)
{
    drawContours(mask, contours, i, Scalar(255), CV_FILLED); // fill the contours so the cells have no holes
}
cv::distanceTransform(mask, dist, CV_DIST_L2, 3);
double max;
cv::Point pmax;
Mat tmp1 = dist.clone();
while (true)
{
    cv::minMaxLoc(tmp1, 0, &max, 0, &pmax);
    if (max < 5)
        break; // remaining maxima are too small to be cells
    cv::circle(_imgIn, pmax, 3, cv::Scalar(0), CV_FILLED);      // mark the detected cell centre
    cv::circle(tmp1, pmax, (int)max, cv::Scalar(0), CV_FILLED); // suppress this maximum and its disc
}
Closing holes
Closing is an important operator from the field of mathematical morphology. Like its dual operator opening, it can be derived from the fundamental operations of erosion and dilation. Like those operators it is normally applied to binary images, although there are graylevel versions. Closing is similar in some ways to dilation in that it tends to enlarge the boundaries of foreground (bright) regions in an image (and shrink background color holes in such regions), but it is less destructive of the original boundary shape. As with other morphological operators, the exact operation is determined by a structuring element. The effect of the operator is to preserve background regions that have a similar shape to this structuring element, or that can completely contain the structuring element, while eliminating all other regions of background pixels.
In OpenCV this looks as follows:
import cv2 as cv
import numpy as np

img = cv.imread('j.png', 0)
kernel = np.ones((5, 5), np.uint8)
closing = cv.morphologyEx(img, cv.MORPH_CLOSE, kernel)
Full documentation here.
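For the asker's C++ pipeline, the same closing could be applied to the thresholded image th before findContours; a sketch, where the kernel shape and size are guesses to tune against the size of the cell holes:
Mat kernel = getStructuringElement(MORPH_ELLIPSE, Size(7, 7));
morphologyEx(th, th, MORPH_CLOSE, kernel); // fills dark holes smaller than the kernel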
How do I find the maximum rounding I can apply to either corner for any amount of rounding on the other corner?
Answers to questions from the comments:
1) The inner and outer large arcs (those that are 90 degrees wide here) always have the same center
2) When asking for the maximum rounding that you can do, what are the constraints on the other, smaller circle? Does it need to have at least some radius? Otherwise you are going to end up with just one rounding.
The radius of one of the two rounding circles is given. There are no other constraints apart from the maximum of the other circle, which I just can't find.
If the "fixed" corner that I refer to has zero rounding, then I'm searching for the maximum rounding that can be applied to the other corner alone.
3) What constitutes as the maximum rounding? Are you trying to choose between the two examples above? Or is finding either of those cases considered a solution?
Either of the shown cases is a perfect solution. E.g. in the first image the radius of the smaller circle might be given; then I'm looking for the maximum radius of the larger one.
These images are just examples for perfect solutions.
4) Are there any constraints on the two arcs? What happens if the arcs can't fit a full circle? Would the answer be the largest that fits?
What exactly do you mean by the arcs not fitting a full circle?
All circles are perfect circles, but I can't figure out the maximum size of the rounding possible, or how to calculate its position. Here are some images that describe the problem.
Given that the origin of the coordinate system is at the center point of the inner and outer large arcs...
For the first case where the large circle is tangent to the outer edge, the center point of the large circle is
x = R cos(t) / (1 + cos(t))
y = R sin(t) / (1 + cos(t))
where R is the radius of the outer arc segment, and t is the angle between the x-axis and the ray from the origin through the center of the large circle.
For the second case where the large circle is tangent to the inner edge, the center point of the large circle is
x = R cos(t) / (1 - cos(t))
y = R sin(t) / (1 - cos(t))
where R is the radius of the inner arc segment, and t is the angle...
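A small C++ sketch of the two centre formulas, in case it helps (the helper names are made up; R and t are as defined above):
#include <cmath>

struct Pt { double x, y; };

Pt outerTangentCentre(double R, double t)   // circle tangent to the outer edge
{
    return { R * std::cos(t) / (1 + std::cos(t)), R * std::sin(t) / (1 + std::cos(t)) };
}

Pt innerTangentCentre(double R, double t)   // circle tangent to the inner edge
{
    return { R * std::cos(t) / (1 - std::cos(t)), R * std::sin(t) / (1 - std::cos(t)) };
}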
In both cases, the radius of the circle is equal to its x coordinate.

The range of t is between some minimum angle and PI/2. At PI/2, the circle is vanishingly small. At the minimum angle, the y value is equal to the opposite radius. In other words, for the first case where the large circle is tangent to the outer edge, the minimum angle is such that y is equal to the inner radius. Whereas if the circle is tangent to the inner edge, the minimum angle is such that y is equal to the outer radius.

It can be proven mathematically that the minimum angle is the same for both cases (tangent to inner and tangent to outer both have the same minimum angle for a given inner and outer radius). However, computing the minimum angle is a bit of a challenge. The only way I know how to do it is by playing the high/low game, e.g.
- (CGFloat)computeAngleForOuterTangentGivenY:(CGFloat)Y
{
    CGFloat y;
    double high = M_PI_2;
    double low = 0;
    double mid = M_PI_4;
    while ( high - low > 1e-9 )
    {
        y = (self.outerRadius * sin( mid )) / (1.0 + cos( mid ));
        if ( y > Y )
            high = mid;
        else
            low = mid;
        mid = (low + high) / 2.0;
    }
    return( mid );
}

- (CGFloat)computeAngleForInnerTangentGivenY:(CGFloat)Y
{
    CGFloat y;
    double high = M_PI_2;
    double low = 0;
    double mid = M_PI_4;
    while ( high - low > 1e-9 )
    {
        y = (self.innerRadius * sin( mid )) / (1.0 - cos( mid ));
        if ( y > Y )
            low = mid;
        else
            high = mid;
        mid = (low + high) / 2.0;
    }
    return( mid );
}
It takes about 30 passes for the loop to converge to an answer: each pass halves the search interval, so reaching a tolerance of 1e-9 from an initial interval of π/2 needs about log₂((π/2)/1e-9) ≈ 31 iterations.
To find the coordinates of the small circle, note that the small circle has the same y value as the large circle, and is tangent to the opposite edge of the arc segment. Therefore, compute the angle t for the small circle based on its y value using the appropriate high/low algorithm, and then compute the x value using the formulas above.
QED
The question isn't posed correctly without showing both ends of the line segment. Suppose for a moment that each line segment is a data structure that maintains not only the endpoints, but also a cap radius at each endpoint, and also knows the angle to the next segment it will attach to. Each cap radius subtracts from the length of the line segment that has to be stroked as a rectangle. Assume you have a line of interest between points B and C, where B joins another (longer) segment A, and C joins another (longer) segment D. If line BC has length 10, with the cap radii at B and C both set to 4, then you will only render a rectangle of length 2 for the straight part of the line segment, while length 4 is used to draw the arc to A, and another length 4 is used to draw the arc to D.
Furthermore, the maximum cap radius for C is constrained not only by BC and B's cap radius, but also by CD and D's cap radius.
I am trying to build a rectangle using OpenCV from these points, but I am not sure how to go about it. I would like to build the rectangle so that I can get the four corner points.
Method 1
This method is useful when your image contains contours that do not represent your rectangle's sides.
First you need to find the centre of each contour; you can use OpenCV moments or minEnclosingCircle after findContours. Now you have a set of points representing your rectangle.
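For example, the centre-finding step with moments could look like this hypothetical helper (an alternative to the minEnclosingCircle used in the Method 2 code below; assumes the contour is non-degenerate so m00 is non-zero):
#include <opencv2/opencv.hpp>
using namespace cv;

Point2f contourCentre(const std::vector<Point>& contour)
{
    Moments m = moments(contour);                                   // spatial moments of the contour
    return Point2f((float)(m.m10 / m.m00), (float)(m.m01 / m.m00)); // centroid from the raw moments
}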
The next step is to classify the points by the sides of the rectangle: top, bottom, left and right; that is, find the points which lie on the same line.
After classifying the points which lie on the same line, you can easily find top, bottom, right and left by extending the lines and finding their four intersections: the minimum y-value stands for top, minimum x for left, maximum x for right and maximum y for bottom.
Edit:
Method 2
Instead of doing all the above steps, you can simply find the four corners as described below.
Find the centre points of all contours.
Find the points with minimum x and maximum x, which will represent two corners.
Find the points with minimum y and maximum y, which will represent the other two corners.
Now you can decide which point is top left, top right, bottom left and bottom right by looking at these values.
-> From the set of four points, consider the two points with minimum y-value. Of these, the top left corner is the point with minimum x value and the top right corner is the point with maximum x.
-> Similarly, from the remaining two points (the set with maximum y values), the point with minimum x value is the bottom left corner and the point with maximum x is the bottom right corner.
Code for method 2
Mat src = imread("src.png", 0);
vector< vector<Point> > contours; // vector for storing contours
vector<Vec4i> hierarchy;
findContours(src, contours, hierarchy, CV_RETR_EXTERNAL, CV_CHAIN_APPROX_SIMPLE); // find the contours in the image
vector<Point2f> center(contours.size());
vector<float> radius(contours.size());
for (int i = 0; i < contours.size(); i++){
    minEnclosingCircle(contours[i], center[i], radius[i]);
    circle(src, center[i], radius[i], Scalar(255), 1, 8, 0);
}

int top_left = 0, top_right = 0, bot_left = 0, bot_right = 0;
int idx_min_x = 0, idx_min_y = 0, idx_max_x = 0, idx_max_y = 0;
for (int i = 0; i < contours.size(); i++){
    if (center[idx_max_x].x < center[i].x) idx_max_x = i;
    if (center[idx_min_x].x > center[i].x) idx_min_x = i;
    if (center[idx_max_y].y < center[i].y) idx_max_y = i;
    if (center[idx_min_y].y > center[i].y) idx_min_y = i; // bug fix: compare against idx_min_y, not idx_max_y
}

vector<Point2f> corners;
corners.push_back(center[idx_max_x]);
corners.push_back(center[idx_min_x]);
corners.push_back(center[idx_max_y]);
corners.push_back(center[idx_min_y]);

// sort the four corners by y value (simple bubble sort)
Point2f tmp; // Point2f, not Point, to avoid truncating the coordinates
for (int i = 0; i < corners.size(); i++) {
    for (int j = 0; j < corners.size() - 1; j++) {
        if (corners[j].y > corners[j + 1].y){
            tmp = corners[j + 1];
            corners[j + 1] = corners[j];
            corners[j] = tmp;
        }
    }
}

// the two smallest-y points are top left/right, the two largest-y are bottom left/right
if (corners[0].x > corners[1].x){ top_left = 1; top_right = 0; }
else { top_left = 0; top_right = 1; }
if (corners[2].x > corners[3].x){ bot_left = 3; bot_right = 2; }
else { bot_left = 2; bot_right = 3; }

line(src, corners[top_left], corners[top_right], Scalar(255), 1, 8, 0);
line(src, corners[bot_left], corners[bot_right], Scalar(255), 1, 8, 0);
line(src, corners[top_left], corners[bot_left], Scalar(255), 1, 8, 0);
line(src, corners[top_right], corners[bot_right], Scalar(255), 1, 8, 0);
imshow("src", src);
waitKey();
Result:
This post seems to be about trapezoids, not about rectangles.
For everyone looking for a solution regarding rectangles:
I merged all my points into one contour: How to merge contours in opencv?.
Then create a rectangle around that contour and draw it:
rect = cv2.minAreaRect(merged_contour)
box = cv2.boxPoints(rect)
box = np.intp(box) #np.intp: Integer used for indexing (same as C ssize_t; normally either int32 or int64)
cv2.drawContours(image, [box], 0, (0,0,255), 1)
I want to find the corner position in a blurred image with a corner inside it, like the following example:
I can make sure that only one corner is inside the image, and I assume that
the corner is part of a black and white chessboard.
How can I detect the cross position with OpenCV?
Thanks!
Usually you can determine the corner using the gradient:
Gx = im[i][j+1] - im[i][j-1]; Gy = im[i+1][j] - im[i-1][j];
G^2 = Gx^2 + Gy^2;
theta = atan2(Gy, Gx);
As your image is blurred, you should compute the gradient at a larger scale:
Gx = im[i][j+delta] - im[i][j-delta]; Gy = im[i+delta][j] - im[i-delta][j];
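A minimal sketch of that large-scale gradient, assuming a grayscale CV_8U Mat named gray and skipping pixels within delta of the border (the helper name is made up):
#include <opencv2/opencv.hpp>
#include <cmath>
using namespace cv;

void largeScaleGradient(const Mat& gray, int delta, Mat& mag, Mat& dir)
{
    mag = Mat::zeros(gray.size(), CV_32F);
    dir = Mat::zeros(gray.size(), CV_32F);
    for (int i = delta; i < gray.rows - delta; i++)
        for (int j = delta; j < gray.cols - delta; j++)
        {
            float gx = (float)gray.at<uchar>(i, j + delta) - gray.at<uchar>(i, j - delta);
            float gy = (float)gray.at<uchar>(i + delta, j) - gray.at<uchar>(i - delta, j);
            mag.at<float>(i, j) = std::sqrt(gx * gx + gy * gy); // gradient norm
            dir.at<float>(i, j) = std::atan2(gy, gx);           // gradient direction
        }
}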
Here is the result that I obtained for delta = 50:
The gradient norm (multiplied by 20)
gradient norm http://imageshack.us/scaled/thumb/822/xdpp.jpg
The gradient direction:
gradient direction http://imageshack.us/scaled/thumb/844/h6zp.jpg
Another solution:
#include <opencv2/opencv.hpp>
using namespace cv;
int main()
{
    Mat img = imread("c:/data/corner.jpg");
    Mat gray;
    cvtColor(img, gray, CV_BGR2GRAY);
    threshold(gray, gray, 100, 255, CV_THRESH_BINARY);
    int step = 15;
    std::vector<Point> points;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));
    // fit a rotated rectangle
    RotatedRect box = minAreaRect(Mat(points));
    //circle(img, box.center, 2, Scalar(255, 0, 0), -1);
    // invert it, fit again and get the average of the centers
    // (may not be needed if a 'good' threshold is found)
    Point p1 = Point(box.center.x, box.center.y);
    points.clear();
    gray = 255 - gray;
    for (int i = 0; i < gray.rows; i += step)
        for (int j = 0; j < gray.cols; j += step)
            if (gray.at<uchar>(i, j) == 255)
                points.push_back(Point(j, i));
    box = minAreaRect(Mat(points));
    Point p2 = Point(box.center.x, box.center.y);
    //circle(img, p2, 2, Scalar(0, 255, 0), -1);
    circle(img, Point((p1.x + p2.x) / 2, (p1.y + p2.y) / 2), 3, Scalar(0, 0, 255), -1);
    imshow("img", img);
    waitKey();
    return 0;
}
Rather than work right away at a ridiculously large scale, as suggested by others, I recommend downsizing first (which has the effect of deblurring), doing one pass of Harris to find the corner, then upscaling its position and doing a pass of cornerSubPix at full resolution with a large window (large enough to encompass the obvious saddle point of the intensity).
In this way you get the best of both worlds: fast detection to initialize the refinement, and accurate refinement given the original imagery.
See also this other relevant answer
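A rough sketch of that coarse-to-fine pipeline, assuming a grayscale input; the scale factor, window size and helper name are placeholders to tune:
#include <opencv2/opencv.hpp>
using namespace cv;

Point2f findBlurredCorner(const Mat& gray)
{
    const double scale = 0.25;
    Mat small;
    resize(gray, small, Size(), scale, scale, INTER_AREA); // downsizing also deblurs

    // one pass of Harris on the small image, keeping only the strongest corner
    std::vector<Point2f> corners;
    goodFeaturesToTrack(small, corners, 1, 0.01, 10, Mat(), 3, true);
    if (corners.empty())
        return Point2f(-1, -1);

    // scale the position back up and refine with a large window
    std::vector<Point2f> refined;
    refined.push_back(Point2f(corners[0].x / (float)scale, corners[0].y / (float)scale));
    cornerSubPix(gray, refined, Size(50, 50), Size(-1, -1),
                 TermCriteria(TermCriteria::EPS + TermCriteria::COUNT, 40, 0.001));
    return refined[0];
}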
I am multiplying each pixel by the average blurring mask (1/9), but the result is totally different.
PImage toAverageBlur(PImage a)
{
    PImage aBlur = new PImage(a.width, a.height);
    aBlur.loadPixels();
    for (int i = 0; i < a.width; i++)
    {
        for (int j = 0; j < a.height; j++)
        {
            int pixelPosition = i*a.width + j;
            int aPixel = ((a.pixels[pixelPosition] / 9));
            aBlur.pixels[pixelPosition] = color(aPixel);
        }
    }
    aBlur.updatePixels();
    return aBlur;
}
Currently you are not applying an average filter; you are only scaling the image by a factor of 1/9, which makes it darker. Your terminology is good: you are trying to apply a 3x3 moving average (or neighbourhood average), also known as a boxcar filter.
For each pixel (i,j), you need to take the sum of (i-1,j-1), (i-1,j), (i-1,j+1), (i,j-1), (i,j), (i,j+1), (i+1,j-1), (i+1,j), (i+1,j+1), then divide by 9 (for a 3x3 average). For this to work, you must skip the pixels on the image edge, which do not have 9 neighbours (so you start at pixel (1,1), for example); the output image will then be one pixel smaller on each side. Alternatively, you can mirror values outward to add an extra line around your input image, which makes the output image the same size as the original.
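As a minimal sketch of that neighbourhood average (in C++ rather than Processing, purely for illustration), skipping the edge pixels as described:
#include <vector>

std::vector<std::vector<int>> boxBlur3x3(const std::vector<std::vector<int>>& img)
{
    int h = (int)img.size(), w = (int)img[0].size();
    std::vector<std::vector<int>> out(h, std::vector<int>(w, 0));
    for (int i = 1; i < h - 1; i++)
        for (int j = 1; j < w - 1; j++)
        {
            int sum = 0;
            for (int di = -1; di <= 1; di++)
                for (int dj = -1; dj <= 1; dj++)
                    sum += img[i + di][j + dj]; // sum the 3x3 neighbourhood
            out[i][j] = sum / 9;                // divide by 9 to average
        }
    return out;
}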
There are more efficient ways of doing this, for example using FFT-based convolution; such methods are faster because they avoid explicit looping over the neighbourhood.