How to group letters in OpenCV knowing their RotatedRects? - opencv

I have an image with letters, for example like this:
It's a binary image obtained from previous image-processing stages, and I know the boundingRect and RotatedRect of every letter, but the letters are not grouped into words yet. It is worth mentioning that a RotatedRect can be returned from minAreaRect() or fitEllipse(), as shown here and here. In my case the RotatedRects look like this:
Blue rectangles are obtained from minAreaRect and red ones from fitEllipse. They give slightly different boxes (center, width, height, angle), but the biggest difference is in the angle values: in the first case the angle ranges from -90 to 0 degrees, in the second case from 0 to 180 degrees. My problem is: how do I group these letters into words based on the parameters of the RotatedRects? I can check the angle of every RotatedRect and also measure the distance between the centers of any two RotatedRects. With simple assumptions about the direction of the text and the distance between letters my grouping algorithm works, but in more complicated cases I run into problems. For example, in the image below there are several groups of text with different directions, angles and distances between letters.
Problems arise when a letter from one word is close to a letter from another word, and when the angle of a RotatedRect inside a given word differs noticeably from the angles of its neighbours. What would be the best way to connect the letters into the right words?

First, you need to define a metric. It may be, for example, a Euclidean distance in 3D, defined as ||(delta_x, delta_y, delta_angle)||, where delta_x and delta_y are the distances between the rectangle centers along the x and y coordinates, and delta_angle is the distance between the angular orientations.
In short, your rectangles transform into 3D data points with coordinates (x, y, angle).
Once you have defined this, you can run a clustering algorithm on your data. DBSCAN seems like it should work well here. Check this article for example: link; it may help you choose a clustering algorithm.
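OpenCV does not ship DBSCAN itself, so as a rough sketch of the idea (not an exact DBSCAN implementation) you could approximate the grouping with cv::partition, which transitively merges points whenever a user-defined predicate says they are close; the threshold and the angle weight below are made-up values you would have to tune:

#include <opencv2/core.hpp>
#include <vector>
#include <cmath>

struct LetterPoint { float x, y, angle; };      // one 3D data point per RotatedRect

int main()
{
    std::vector<LetterPoint> letters;           // fill from your rects: {r.center.x, r.center.y, r.angle}

    const float maxDist = 50.0f;                // hypothetical distance threshold
    const float angleWeight = 0.5f;             // hypothetical weight turning degrees into "pixels"

    // Two letters belong to the same word if their 3D distance is below the threshold.
    auto sameWord = [&](const LetterPoint& a, const LetterPoint& b) {
        float dx = a.x - b.x, dy = a.y - b.y;
        float da = angleWeight * (a.angle - b.angle);
        return std::sqrt(dx*dx + dy*dy + da*da) < maxDist;
    };

    std::vector<int> labels;                    // labels[i] = word index of letter i
    int nWords = cv::partition(letters, labels, sameWord);
    return 0;
}

This single-link merging behaves roughly like DBSCAN with minPts = 1; for real DBSCAN you would pull in an external implementation.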

I extended the aforementioned metric with a few other elements related to geometric properties of letters and words (distances, angles, areas, the ratio of the areas of neighbouring letters, etc.) and now it works fine. Thanks for the suggestion.

Related

Finding All Pixels Within Certain Range in Polar Coordinates

I want to find all pixels in an image (in Cartesian coordinates) which lie within a certain polar range: r_min, r_max, theta_min and theta_max. In other words, I have an annular section defined by the parameters mentioned above and I want to find the integer x,y coordinates of the pixels which lie within it. The brute-force solution comes to mind of course (going through all the pixels of the image and checking whether each lies within the section), but I am wondering if there is a more efficient solution.
Thanks
In the brute-force solution, you can first determine the tight bounding box of the area by computing the four vertices and including the four cardinal extreme points as needed. Then for every pixel you will have to evaluate two circles (quadratic expressions) and two straight lines (linear expressions). By doing the computation incrementally (X => X+1) the number of operations drops to almost nothing.
Inside a circle
f(X,Y) = X²+Y²-2XXc-2YYc+Xc²+Yc²-R² <= 0
Incrementally,
f(X+1,Y) = f(X,Y)+2X+1-2Xc <= 0
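A minimal sketch of that incremental test for one image row (Xc, Yc and R are placeholders; the line and inner-circle tests still have to be added where indicated):

// Test one row of pixels against the circle f(X,Y) <= 0, updating f incrementally.
void scanRow(int y, int width, float Xc, float Yc, float R)
{
    // f(0, y), evaluated once per row
    float f = (float)y * y - 2.0f * y * Yc + Xc * Xc + Yc * Yc - R * R;
    for (int x = 0; x < width; ++x)
    {
        if (f <= 0.0f)
        {
            // (x, y) is inside the circle; apply the remaining line / inner-circle tests here
        }
        f += 2.0f * x + 1.0f - 2.0f * Xc;   // f(x+1, y) = f(x, y) + 2x + 1 - 2*Xc
    }
}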
If you really want to avoid that overhead, you can resort to scanline conversion techniques. First think of filling a slanted rectangle: by drawing two horizontal lines through the intermediate vertices, you decompose the rectangle into two triangles and a parallelogram. Then for any scanline that crosses one of these shapes, you know beforehand which pair of sides it will intersect; from there, you know what portion of the scanline you need to fill.
You can generalize to any shape, in particular your circle segment. Be prepared for a relatively subtle case analysis, but finding the intersections themselves isn't so hard. It may help to split the domain with a vertical through the center so that any horizontal always meets the outline twice, never four times.
We'll assume the center of the section is at 0,0 for simplicity. If not, it's easy to change by offsetting all the coordinates.
For each possible y coordinate from r_max to -r_max, find the x coordinates of the circle of both radii: -sqrt(r*r-y*y) and sqrt(r*r-y*y). For every point that is inside the r_max circle and outside the r_min circle, it might be part of the section and will need further testing.
Now do the same x coordinate calculations, but this time with the line segments described by the angles. You'll need some conditional logic to determine which side of the line is inside and which is outside, and whether it affects the upper or lower part of the section.
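Putting the two circle bounds together, a rough sketch (the angular test is left as a stub, and the section is assumed to be centred at 0,0 as above):

#include <cmath>
#include <vector>
#include <utility>
#include <algorithm>

// Placeholder for the theta_min/theta_max test described above.
bool insideAngularRange(int x, int y) { return true; }

// Collect integer (x, y) pixels lying between the r_min and r_max circles centred at (0, 0).
std::vector<std::pair<int,int>> annulusPixels(double r_min, double r_max)
{
    std::vector<std::pair<int,int>> pixels;
    for (int y = (int)std::floor(-r_max); y <= (int)std::ceil(r_max); ++y)
    {
        double xOuter = std::sqrt(std::max(0.0, r_max * r_max - (double)y * y));
        double xInner = (std::abs((double)y) < r_min) ? std::sqrt(r_min * r_min - (double)y * y) : 0.0;
        // right-hand span [xInner, xOuter] plus its mirror on the left
        for (int x = (int)std::ceil(xInner); x <= (int)std::floor(xOuter); ++x)
        {
            if (insideAngularRange(x, y))           pixels.emplace_back(x, y);
            if (x > 0 && insideAngularRange(-x, y)) pixels.emplace_back(-x, y);
        }
    }
    return pixels;
}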

Understanding Distance Transform in OpenCV

What is the distance transform? What is the theory behind it? If I have two similar images but in different positions, how does the distance transform help in overlapping them? The results that the distance transform function produces look as if they are divided in the middle - is it to find the center of one image so that the other is overlapped just halfway? I have looked into the documentation of OpenCV but it's still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour, depicted with pixel intensities, so in the middle of the image, where the distance is maximum, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: the green shape is a so-called active contour, or snake, which moves according to the gradient of distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition - one of the powerful cues for text is a stable stroke width. The distance transform run on segmented text can confirm this. The corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the DT. You can find the center of a shape to rotate it about, but you could rotate it about any other point as well; the difference will be just a translation, which is irrelevant if you run matchTemplate to match them in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or fitting an ellipse (as a rotated rectangle):
cv::RotatedRect rect = cv::fitEllipse(points2D);
float angle_to_rotate = rect.angle;
The distance transform is an operation that works on a single binary image and fundamentally measures, for every empty point (zero pixel), the distance to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point) - as well as a labelled connected components image (a Voronoi diagram). There is an example of it in operation here.
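For reference, a minimal sketch of the OpenCV call (the file name is a placeholder; note that OpenCV's convention is to compute, for each non-zero pixel, the distance to the nearest zero pixel, so you may need to invert your binary image depending on which convention you want):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>

int main()
{
    cv::Mat binary = cv::imread("shape.png", cv::IMREAD_GRAYSCALE);   // placeholder file name
    if (binary.empty()) return 1;
    cv::threshold(binary, binary, 128, 255, cv::THRESH_BINARY);

    // Plain distance image (floating point), Euclidean metric, 3x3 mask.
    cv::Mat dist;
    cv::distanceTransform(binary, dist, cv::DIST_L2, 3);

    // Variant that also returns nearest-component labels (a discrete Voronoi diagram).
    cv::Mat dist2, labels;
    cv::distanceTransform(binary, dist2, labels, cv::DIST_L2, 3, cv::DIST_LABEL_CCOMP);
    return 0;
}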
I see from another question you asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points, I would instead suggest you look at techniques like Procrustes analysis, Iterative Closest Point, or RANSAC.

Detection of pattern of circles using opencv

I have to detect a pattern of 6 circles using OpenCV. I have detected the circles and their centroids by using thresholding and the contour function in OpenCV.
Now I have to define the relation between these circles in a way that is invariant to scale and rotation. With this I would be able to detect the pattern in various views. I have to use this pattern for determining the object pose.
How can I achieve scale/rotation invariance? Do you have any reference I could read about it?
To make your pattern invariant to rotation and scale, you have to normalize the direction and the scale when detecting it. Here is a simple algorithm to achieve this (a rough code sketch follows below):
detect centers and circle size (you say you have already achieved this - good!)
compute the average center using a simple mean. Express all the centers relative to this mean
find the farthest center using a simple norm (Euclidean is good enough)
scale the center positions and the circle sizes so that this maximum distance is 1.0
rotate the centers so that the coordinates of the farthest one become (1.0, 0)
you're done. You are now the proud owner of a scale/rotation invariant pattern detector!! Congratulations!
Now you can find patterns, transform them as suggested, and compare center position & circle sizes.
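A minimal sketch of that normalization, assuming the centers and radii have already been extracted (function and variable names here are my own):

#include <opencv2/core.hpp>
#include <vector>
#include <cmath>

// Normalize circle centers/radii so the pattern becomes scale- and rotation-invariant.
void normalizePattern(std::vector<cv::Point2f>& centers, std::vector<float>& radii)
{
    auto length = [](const cv::Point2f& p) { return std::sqrt(p.x * p.x + p.y * p.y); };

    // 1. express all centers relative to their mean
    cv::Point2f mean(0.f, 0.f);
    for (const auto& c : centers) mean += c;
    mean *= 1.0f / (float)centers.size();
    for (auto& c : centers) c -= mean;

    // 2. find the farthest center from the mean
    size_t farthest = 0;
    for (size_t i = 1; i < centers.size(); ++i)
        if (length(centers[i]) > length(centers[farthest])) farthest = i;

    // 3. scale so that this maximum distance becomes 1.0
    float scale = length(centers[farthest]);
    for (auto& c : centers) c *= 1.0f / scale;
    for (auto& r : radii)   r /= scale;

    // 4. rotate so that the farthest center lands on (1.0, 0)
    float a = std::atan2(centers[farthest].y, centers[farthest].x);
    float ca = std::cos(-a), sa = std::sin(-a);
    for (auto& c : centers)
        c = cv::Point2f(ca * c.x - sa * c.y, sa * c.x + ca * c.y);
}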
It is not entirely clear to me if you need to find the rotation, or merely get rid of it, or detect if the circles actually form the pattern you linked. Either way, the answer is much the same.
I would start by finding the two circles that have only one close neighbour. For each circle centroid, calculate the distances to its two closest neighbours. If the distances differ by more than, say, 10%, the centroid belongs to an "end" circle (one of the top ones in your link); a rough sketch of this test is shown below.
Now that you have found the two end circles, rotate them so that they are horizontal to each other. If the other centroids are now above them, rotate another 180 degrees so that the pattern ends up in the orientation you want.
Now you can calculate the scaling from the average inter-centroid distance.
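A rough sketch of that end-circle test (the 10% threshold is the one suggested above):

#include <opencv2/core.hpp>
#include <vector>
#include <algorithm>
#include <cmath>

// Return indices of centroids whose two nearest neighbours lie at noticeably different distances.
std::vector<size_t> findEndCircles(const std::vector<cv::Point2f>& centers)
{
    std::vector<size_t> ends;
    for (size_t i = 0; i < centers.size(); ++i)
    {
        std::vector<float> d;
        for (size_t j = 0; j < centers.size(); ++j)
        {
            if (j == i) continue;
            float dx = centers[i].x - centers[j].x;
            float dy = centers[i].y - centers[j].y;
            d.push_back(std::sqrt(dx * dx + dy * dy));
        }
        std::sort(d.begin(), d.end());
        if (d.size() >= 2 && d[1] > 1.1f * d[0])   // closest two neighbours differ by more than ~10%
            ends.push_back(i);
    }
    return ends;
}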
Hope that helps.
Your question sounds exactly like what the SURF algorithm does. It finds points of interest and describes them in a way invariant to rotation and scale, and can find the same object in other pictures.
Just search for OpenCV and SURF.

Opencv match contour image

I'd like to know what would be the best strategy to compare a group of contours (in fact, edges resulting from a Canny edge detection) from two pictures, in order to know which pair is most alike.
I have this image:
http://i55.tinypic.com/10fe1y8.jpg
And I would like to know how can I calculate which one of these fits best to it:
http://i56.tinypic.com/zmxd13.jpg
(it should be the one on the right)
Is there any way to compare the contours as a whole?
I can easily rotate the images but I don't know what functions to use in order to determine that the reference image on the right is the best fit.
Here is what I've already tried using OpenCV:
matchShapes function - I tried this function using two grayscale images and I always get the same result in every comparison, and the value seems wrong, as it is 0.0002.
So what I realized about matchShapes (though I'm not sure it's the correct assumption) is that the function works with pairs of contours and not full images. Now this is a problem because, although I have the contours of the images I want to compare, there are hundreds of them and I don't know which ones should be paired up.
So I also tried to compare all the contours of the first image against the other two with a for loop, but I might be comparing, for example, the contour of the 5 against the circle contour of the two reference images and not against the 2 contour.
I also tried the simple cv::compare function and matchTemplate, neither with success.
Well, for this you have a couple of options depending on how robust you need your approach to be.
Simple Solutions (with assumptions):
For these methods, I'm assuming the images you supplied are what you are working with (i.e., the objects are already segmented and approximately the same scale). Also, you will need to correct the rotation (at least in a coarse manner). You might do something like iteratively rotating the comparison image every 10, 30, 60, or 90 degrees, or whatever coarseness you feel you can get away with.
For example,
bestMetric = VERY_LARGE_VALUE
for(degrees = 10; degrees < 360; degrees += 10)
{
    coinRot = rotate(compareCoin, degrees)
    // you could also try Cosine Similarity, or even matchTemplate here.
    metric = SAD(coinRot, targetCoin)
    if(metric < bestMetric)    // SAD: smaller is better
    {
        bestMetric = metric
        coinRotation = degrees
    }
}
Sum of Absolute Differences (SAD): This will allow you to quickly compare the images once you have determined an approximate rotation angle.
Cosine Similarity: This operates a bit differently by treating the image as a 1D vector and then computing the high-dimensional angle between the two vectors. The better the match, the smaller the angle will be.
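A minimal sketch of both metrics with OpenCV, assuming the two images already have the same size and type (the function names are my own; the rotation loop above would call something like these):

#include <opencv2/core.hpp>

// Sum of Absolute Differences: smaller means more similar.
double sad(const cv::Mat& a, const cv::Mat& b)
{
    cv::Mat diff;
    cv::absdiff(a, b, diff);
    return cv::sum(diff)[0];
}

// Cosine similarity: treat each image as one long vector; values closer to 1 mean more similar.
double cosineSimilarity(const cv::Mat& a, const cv::Mat& b)
{
    cv::Mat va = a.reshape(1, 1), vb = b.reshape(1, 1);   // flatten to single-row vectors
    va.convertTo(va, CV_64F);
    vb.convertTo(vb, CV_64F);
    return va.dot(vb) / (cv::norm(va) * cv::norm(vb));
}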
Complex Solutions (possibly more robust):
These solutions will be more complex to implement, but will probably yield more robust classifications.
Hausdorff Distance: This answer will give you an introduction to using this method. This solution will probably also need the rotation correction to work properly.
Fourier-Mellin Transform: This method is an extension of Phase Correlation, which can extract the rotation, scale, and translation (RST) transform between two images.
Feature Detection and Extraction: This method involves detecting "robust" (i.e., scale and/or rotation invariant) features in the image and comparing them against a set of target features with RANSAC, LMedS, or simple least squares. OpenCV has a couple of samples using this technique in matcher_simple.cpp and matching_to_many_images.cpp. NOTE: With this method you will probably not want to binarize the image, so there are more detectable features available.
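As a rough illustration of that last option (ORB stands in here for a "robust" detector, and the file names are placeholders):

#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <vector>

int main()
{
    // Placeholder file names; use the grayscale (non-binarized) images.
    cv::Mat img1 = cv::imread("query.png", cv::IMREAD_GRAYSCALE);
    cv::Mat img2 = cv::imread("reference.png", cv::IMREAD_GRAYSCALE);
    if (img1.empty() || img2.empty()) return 1;

    cv::Ptr<cv::ORB> orb = cv::ORB::create();
    std::vector<cv::KeyPoint> kp1, kp2;
    cv::Mat desc1, desc2;
    orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
    orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

    // Brute-force Hamming matcher suits ORB's binary descriptors; cross-check filters weak matches.
    cv::BFMatcher matcher(cv::NORM_HAMMING, true);
    std::vector<cv::DMatch> matches;
    matcher.match(desc1, desc2, matches);

    // The number (or quality) of matches gives a crude similarity score; for alignment you
    // would feed the matched points into cv::findHomography with RANSAC, as mentioned above.
    return 0;
}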

Deforming an image so that curved lines become straight lines

I have an image with free-form curved lines (actually lists of small line segments) overlaid onto it, and I want to generate some kind of image warp that will deform the image in such a way that these curves become horizontal straight lines.
I already have the coordinates of all the line-segment points stored separately so they don't have to be extracted from the image. What I'm looking for is an appropriate method of warping the image such that these lines are warped into straight ones.
thanks
You can use methods similar to those developed here:
http://www-ui.is.s.u-tokyo.ac.jp/~takeo/research/rigid/
What you do is you define an MxN grid of control points which covers your source image.
You then need to determine how to modify each of your control points so that the final image will minimize some energy function (minimum curvature or something of this sort).
The final image is a linear warp determined by your control points (think of it as a 2D mesh whose texture is your source image and whose vertices' positions you're about to modify).
As long as your energy function can be expressed using linear equations, you can globally solve your problem (figuring out where to send each control point) using a linear equation solver.
You express each of your source points (those which lie on your curved lines) using bi-linear interpolation weights of their surrounding grid points, then you express your restriction on the target by writing equations for these points.
After solving these linear equations you end up with destination grid points, then you just render your 2D mesh with the new vertices' positions.
You need to start out with a mapping formula that, given an output coordinate, will provide the corresponding coordinate from the input image. Depending on the distortion you're trying to correct for, this can get exceedingly complex; your question doesn't specify the problem in enough detail. For example, are the curves at the top of the image the same as the curves at the bottom and the same as those in the middle? Do horizontal distances compress based on the angle of the line? Let's assume the simplest case, where the horizontal coordinate doesn't need any correction at all and the vertical correction depends only on the horizontal position. Here x,y are the coordinates on the input image, x',y' are the coordinates on the output image, and f() is the difference between the drawn line segment and your ideal straight line.
x = x'
y = y' + f(x')
Now you simply go through all the pixels of your output image, calculate the corresponding point in the input image, and copy the pixel. The wrinkle here is that your formula is likely to give you points that lie between input pixels, such as y=4.37. In that case you'll need to interpolate to get an intermediate value from the input; there are many interpolation methods for images and I won't try to get into that here. The simplest would be "nearest neighbor", where you simply round the coordinate to the nearest integer.
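A minimal sketch of this backward mapping using OpenCV's remap (f below is a dummy stand-in; in practice you would build it from your stored line-segment coordinates, and INTER_NEAREST gives the nearest-neighbour behaviour described above):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/imgcodecs.hpp>
#include <cmath>

int main()
{
    cv::Mat src = cv::imread("input.png");                       // placeholder file name
    if (src.empty()) return 1;

    // f(x') = vertical deviation of the drawn curve from the ideal straight line at column x'.
    auto f = [](int x) { return 10.0f * std::sin(0.01f * x); };  // dummy deviation function

    // For every output pixel (x', y'), store the input coordinate it should be copied from.
    cv::Mat mapX(src.size(), CV_32FC1), mapY(src.size(), CV_32FC1);
    for (int yOut = 0; yOut < src.rows; ++yOut)
        for (int xOut = 0; xOut < src.cols; ++xOut)
        {
            mapX.at<float>(yOut, xOut) = (float)xOut;            // x = x'
            mapY.at<float>(yOut, xOut) = (float)yOut + f(xOut);  // y = y' + f(x')
        }

    cv::Mat dst;
    cv::remap(src, dst, mapX, mapY, cv::INTER_LINEAR);           // or cv::INTER_NEAREST
    cv::imwrite("output.png", dst);
    return 0;
}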
