I have an image with free-form curved lines (actually lists of small line segments) overlaid onto it, and I want to generate some kind of image warp that will deform the image in such a way that these curves become horizontal straight lines.
I already have the coordinates of all the line-segment points stored separately, so they don't have to be extracted from the image. What I'm looking for is an appropriate method of warping the image so that these lines end up straight.
Thanks.
You can use methods similar to those developed here:
http://www-ui.is.s.u-tokyo.ac.jp/~takeo/research/rigid/
What you do is define an MxN grid of control points that covers your source image.
You then need to determine how to modify each of your control points so that the final image will minimize some energy function (minimum curvature or something of this sort).
The final image is a linear warp determined by your control points (think of it as a 2D mesh whose texture is your source image and whose vertices' positions you're about to modify).
As long as your energy function can be expressed with linear equations, you can solve the problem globally (figuring out where to send each control point) with a linear solver.
You express each of your source points (those which lie on your curved lines) as a bilinear combination of its surrounding grid points, and then you express the constraint on the target (that these points end up on a horizontal line) by writing equations for them.
After solving these linear equations you end up with the destination grid points; then you just render your 2D mesh with the new vertex positions.
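A minimal sketch of the bilinear-weight bookkeeping described above, assuming a uniform control grid with spacing gridStep; the function and variable names are mine, not from the original post:

// Express one curve point as a bilinear combination of its four surrounding
// control points and append the corresponding row of a least-squares system
// A*y = b that asks the warped y-coordinate of that point to equal the target
// line's y.
#include <opencv2/core.hpp>

// A must be created as cv::Mat(0, M*N, CV_32F) and b as cv::Mat(0, 1, CV_32F).
// gridStep is the control-grid spacing in pixels, gridCols is N (points per row).
// The point is assumed to lie strictly inside the grid.
void addRowForCurvePoint(cv::Point2f p, float targetY,
                         float gridStep, int gridCols,
                         cv::Mat& A, cv::Mat& b)
{
    int gx = static_cast<int>(p.x / gridStep);      // cell column
    int gy = static_cast<int>(p.y / gridStep);      // cell row
    float u = p.x / gridStep - gx;                  // fractional position inside the cell
    float v = p.y / gridStep - gy;

    cv::Mat row = cv::Mat::zeros(1, A.cols, CV_32F);
    row.at<float>(0, gy * gridCols + gx)           = (1 - u) * (1 - v);  // top-left weight
    row.at<float>(0, gy * gridCols + gx + 1)       = u * (1 - v);        // top-right
    row.at<float>(0, (gy + 1) * gridCols + gx)     = (1 - u) * v;        // bottom-left
    row.at<float>(0, (gy + 1) * gridCols + gx + 1) = u * v;              // bottom-right
    A.push_back(row);
    b.push_back(targetY);
}

// After adding rows for every curve point (plus whatever smoothness terms you
// choose), solve for the warped y-coordinates of all control points, e.g.:
//   cv::Mat warpedY;  cv::solve(A, b, warpedY, cv::DECOMP_QR);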
You need to start out with a mapping formula that given an output coordinate will provide the corresponding coordinate from the input image. Depending on the distortion you're trying to correct for, this can get exceedingly complex; your question doesn't specify the problem in enough detail. For example, are the curves at the top of the image the same as the curves on the bottom and the same as those in the middle? Do horizontal distances compress based on the angle of the line? Let's assume the simplest case where the horizontal coordinate doesn't need any correction at all, and the vertical simply needs a constant correction based on the horizontal. Here x,y are the coordinates on the input image, x',y' are the coordinates on the output image, and f() is the difference between the drawn line segment and your ideal straight line.
x = x'
y = y' + f(x')
Now you simply go through all the pixels of your output image, calculate the corresponding point in the input image, and copy the pixel. The wrinkle here is that your formula is likely to give you points that lie between input pixels, such as y=4.37. In that case you'll need to interpolate to get an intermediate value from the input; there are many interpolation methods for images and I won't try to get into that here. The simplest would be "nearest neighbor", where you simply round the coordinate to the nearest integer.
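For concreteness, here is a small sketch of that backward-mapping loop with nearest-neighbor sampling (assuming an 8-bit, 3-channel image; f() is a placeholder for whatever correction you derive from your measured line):

#include <opencv2/core.hpp>
#include <cmath>

// Placeholder vertical correction; replace with the offset of your measured
// line from the ideal straight line at column x.
static float f(float x) { return 10.0f * std::sin(0.01f * x); }

cv::Mat straighten(const cv::Mat& src)   // src assumed CV_8UC3
{
    cv::Mat dst(src.size(), src.type());
    for (int yOut = 0; yOut < dst.rows; ++yOut) {
        for (int xOut = 0; xOut < dst.cols; ++xOut) {
            float xIn = static_cast<float>(xOut);            // x = x'
            float yIn = yOut + f(static_cast<float>(xOut));  // y = y' + f(x')
            int xi = static_cast<int>(std::lround(xIn));     // nearest-neighbor rounding
            int yi = static_cast<int>(std::lround(yIn));
            if (xi >= 0 && xi < src.cols && yi >= 0 && yi < src.rows)
                dst.at<cv::Vec3b>(yOut, xOut) = src.at<cv::Vec3b>(yi, xi);
            else
                dst.at<cv::Vec3b>(yOut, xOut) = cv::Vec3b(0, 0, 0);  // outside the source image
        }
    }
    return dst;
}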
I've been working off a variant of the OpenCV squares sample to detect rectangles. It's working fine for closed rectangles, but I was wondering what approaches I could take to detect rectangles that have openings, i.e. missing corners or lines that are too short.
I perform some dilation, which closes small gaps but not these larger ones.
I considered using a convex hull or bounding rect to generate a contour for comparison but since the edges of the rectangle are disconnected, each would read as a separate contour.
I think the first step is to detect which lines are candidates for forming a complete rectangle, and then perform some sort of line extrapolation. This seems promising, but my rectangle edges won't lie perfectly horizontally or vertically.
I'm trying to detect the three leftmost rectangles in this image:
Perhaps this paper is of interest? Rectangle Detection based on a Windowed Hough Transform
Basically, take the Hough line transform of the image. You will get maximums at the locations in (theta, rho) space which relate to the places where there are lines. The larger the value, the longer/straighter the line. Maybe apply a threshold to only keep the best lines. Then, we are trying to look for pairs of lines which are
1) parallel: the maximums occur at similar theta values
2) similar length: the values of the maximums are similar
3) orthogonal to another pair of lines: theta values are 90 degrees away from other pairs' theta values
There are some more details in the paper, such as doing the transform in a sliding window, and then using an error metric to consolidate multiple matches.
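A rough sketch of the first grouping step (not the paper's windowed variant), assuming an edge image as input; criterion 2 is omitted here because the plain cv::HoughLines API does not return the accumulator votes:

#include <opencv2/imgproc.hpp>
#include <vector>
#include <utility>
#include <cmath>

// Returns index pairs of Hough peaks with nearly equal theta (parallel lines).
// A rectangle candidate is then two such pairs whose thetas differ by ~90 degrees.
std::vector<std::pair<int, int>> findParallelPairs(const cv::Mat& edges,
                                                   float thetaTolRad = 0.05f)
{
    std::vector<cv::Vec2f> lines;                      // each entry is (rho, theta)
    cv::HoughLines(edges, lines, 1, CV_PI / 180, 80);  // vote threshold 80, tune as needed

    std::vector<std::pair<int, int>> pairs;
    for (size_t i = 0; i < lines.size(); ++i)
        for (size_t j = i + 1; j < lines.size(); ++j)
            if (std::abs(lines[i][1] - lines[j][1]) < thetaTolRad)   // similar theta
                pairs.push_back({static_cast<int>(i), static_cast<int>(j)});
    return pairs;
}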
I am doing a project in OpenCV to detect handwritten characters from a user-filled form. I have made an algorithm to detect the skew angle of the scanned image using the Hough Line Transform. But it does not work when the image is rotated by 180 degrees, since 0 and 180 degrees are treated as the same by the Hough Line function. My image contains some rectangles to fill data in and some text. So how do I detect whether a scanned image is rotated by 180 degrees or not?
Since I first have to correct the skew angle before I can detect exactly where on the image the user-filled data (which I need to extract) lies, using the rectangle coordinates from the empty template form provided earlier, answers that don't rely on character recognition are appreciated.
To lift the 180° ambiguity, only OCR can tell you: perform two reads on the deskewed text, one using the given angle, the other using the angle + 180°, and keep the most successful read.
Unless you have some a priori information it's the only way, as other image processing operations don't know about characters.
UPDATE:
Some strings are forever ambiguous, like 0689HINOSXZ <=> ZXSONIH6890.
If the layout of the text is known (boxes) and asymmetric, it is a relatively easy matter to check matching of the text strings to the layout: choose a box (such as the topmost) and a string (the topmost), and align them by translation; then see how the other boxes and strings match (using a nearest neighbor rule) and establish the correspondences. Compare results with the straight and flipped layout, and keep the best overall area of overlap.
For reliability, it can be better to try more than a starting box/string pair, as there can be some ambiguity to which is the topmost (it could even be missing).
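Here is a hedged sketch of that matching idea (the names are illustrative, not from the answer): translate the template boxes so that the topmost box lands on the topmost detected string, then score the rest under a nearest-neighbour rule; a better score for the flipped layout than for the straight one suggests a 180° rotation.

#include <opencv2/core.hpp>
#include <algorithm>
#include <limits>
#include <vector>

// boxes: centers of the text boxes from the empty template
// strings: centers of the text strings detected on the deskewed scan
// Returns the total nearest-neighbour distance after aligning the topmost pair.
float layoutScore(std::vector<cv::Point2f> boxes,
                  const std::vector<cv::Point2f>& strings)
{
    auto topmost = [](const std::vector<cv::Point2f>& v) {
        return *std::min_element(v.begin(), v.end(),
            [](const cv::Point2f& a, const cv::Point2f& b) { return a.y < b.y; });
    };
    cv::Point2f shift = topmost(strings) - topmost(boxes);   // align by translation
    for (auto& bx : boxes) bx += shift;

    float total = 0.f;
    for (const auto& bx : boxes) {
        float best = std::numeric_limits<float>::max();
        for (const auto& s : strings) {                       // nearest-neighbour match
            float dx = bx.x - s.x, dy = bx.y - s.y;
            best = std::min(best, dx * dx + dy * dy);
        }
        total += best;
    }
    return total;   // compare the straight layout against the 180°-flipped one
}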
Isn't your problem more general? Let's say, you detect a skew angle of +45 degrees and rotate the image by -45 degrees. Then it could still be that the image is rotated by 180 degrees because it was not rotated +45 degrees but -135 instead.
Anyway, to the actual question: I am not an expert in character recognition, but I think if you use it anyway in your application, couldn't you just try character recognition for both rotations and then choose the one that gets the stronger response?
If you match the rectangles in your template with those of the skew corrected image, you'll be able to get the correct orientation (but only if there's no symmetry in the placement of those rectangles). For matching you may be able to use the rectangles in your template as a mask to extract regions from skew corrected image.
EDIT
Suppose your template and the skew-corrected image look like this (in the best case, where there are no displacements in the skew-corrected image):
Then you can use the template as a mask to copy data from skew corrected image. Then check what fraction of the white pixels in the template is contained in the copied image. This value will be very low for a 180 degree rotated image.
But as you say, this won't work in practice because of the displacements. Then maybe you can try template matching (cross-correlation) in which you use the template image as the template. The location of the strongest peak and its strength would give you some indication of the orientation. You can perform template matching at a reduced resolution so it runs faster.
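A minimal sketch of trying both orientations with cv::matchTemplate at a reduced resolution (assuming 8-bit grayscale images of the same original size, so the template is never larger than the scan; the names are mine):

#include <opencv2/core.hpp>
#include <opencv2/imgproc.hpp>

// Returns true if the 180°-rotated version of the deskewed scan matches the
// template better than the scan itself.
bool looksUpsideDown(const cv::Mat& templ, const cv::Mat& deskewed)
{
    cv::Mat t, img, imgFlipped, result;
    cv::resize(templ, t, cv::Size(), 0.25, 0.25);       // work at quarter resolution
    cv::resize(deskewed, img, cv::Size(), 0.25, 0.25);
    cv::flip(img, imgFlipped, -1);                       // flipCode -1 = 180° rotation

    double peak = 0, peakFlipped = 0;
    cv::matchTemplate(img, t, result, cv::TM_CCOEFF_NORMED);
    cv::minMaxLoc(result, nullptr, &peak);
    cv::matchTemplate(imgFlipped, t, result, cv::TM_CCOEFF_NORMED);
    cv::minMaxLoc(result, nullptr, &peakFlipped);

    return peakFlipped > peak;   // the stronger correlation peak wins
}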
You could try to match keypoints (Harris, SIFT, ...) from the scanned image and the empty template. With the matched points you can easily find a transformation to align the scanned image with the template. This may work for your case, but you are more likely to succeed if there are some textured logos in the images, as is usually the case for forms.
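A hedged sketch of that idea using ORB (the answer mentions Harris/SIFT; ORB is used here only because it ships with the core OpenCV modules): match features between the scan and the empty template, then estimate the aligning homography with RANSAC. At least four good matches are needed.

#include <opencv2/features2d.hpp>
#include <opencv2/calib3d.hpp>
#include <vector>

// Returns the 3x3 homography that maps the scanned form onto the template.
cv::Mat alignToTemplate(const cv::Mat& scan, const cv::Mat& templ)
{
    cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
    std::vector<cv::KeyPoint> kScan, kTempl;
    cv::Mat dScan, dTempl;
    orb->detectAndCompute(scan, cv::noArray(), kScan, dScan);
    orb->detectAndCompute(templ, cv::noArray(), kTempl, dTempl);

    cv::BFMatcher matcher(cv::NORM_HAMMING, true);   // cross-check filtering
    std::vector<cv::DMatch> matches;
    matcher.match(dScan, dTempl, matches);

    std::vector<cv::Point2f> pScan, pTempl;
    for (const cv::DMatch& m : matches) {
        pScan.push_back(kScan[m.queryIdx].pt);
        pTempl.push_back(kTempl[m.trainIdx].pt);
    }
    // RANSAC rejects outlier matches while estimating the transform.
    return cv::findHomography(pScan, pTempl, cv::RANSAC, 3.0);
}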
Can't you simply compute two cross-correlations? One with a 180° rotation and one without? The one with the matching rectangle should give you a higher correlation maximum (provided the image contrast of the remaining page is not too misleading, but some pre-filtering could help here).
What is the distance transform? What is the theory behind it? If I have two similar images but in different positions, how does the distance transform help in overlapping them? The results that the distance transform function produces look as if they are divided in the middle. Is it to find the center of one image so that the other is overlapped just halfway? I have looked into the OpenCV documentation but it's still not clear.
Look at the picture below (you may want to increase your monitor brightness to see it better). The picture shows the distance from the red contour depicted with pixel intensities, so in the middle of the image, where the distance is largest, the intensities are highest. This is a manifestation of the distance transform. Here is an immediate application: a green shape, a so-called active contour or snake, moves according to the gradient of distances from the contour (while also following some other constraints) and curls around the red outline. Thus one application of the distance transform is shape processing.
Another application is text recognition: one of the powerful cues for text is a stable stroke width. The distance transform run on segmented text can confirm this. A corresponding method is called the stroke width transform (SWT).
As for aligning two rotated shapes, I am not sure how you can use the DT. You can find the center of a shape in order to rotate the shape, but you could just as well rotate it about any other point. The difference will be just a translation, which is irrelevant if you run matchTemplate to match them in the correct orientation.
Perhaps if you upload your images it will be clearer what to do. In general you can match them as a whole, or by features (which is more robust to various deformations or perspective distortions), or even using outlines/silhouettes if there are only a few features. Finally, you can figure out the orientation of your object (if it has a dominant orientation) by running PCA or fitting an ellipse (as a rotated rectangle):
// Fit an ellipse to the 2D point set; the RotatedRect's angle gives the dominant orientation.
cv::RotatedRect rect = cv::fitEllipse(points2D);
float angle_to_rotate = rect.angle;
The distance transform is an operation that works on a single binary image that fundamentally seeks to measure a value from every empty point (zero pixel) to the nearest boundary point (non-zero pixel).
An example is provided here and here.
The measurement can be based on various definitions, calculated discretely or precisely: e.g. Euclidean, Manhattan, or Chessboard. Indeed, the parameters in the OpenCV implementation allow some of these, and control their accuracy via the mask size.
The function can return the output measurement image (floating point) - as well as a labelled connected components image (a Voronoi diagram). There is an example of it in operation here.
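A minimal usage sketch of cv::distanceTransform with the label output mentioned above (note OpenCV's convention: for each non-zero pixel it computes the distance to the nearest zero pixel):

#include <opencv2/imgproc.hpp>

// binary: 8-bit single-channel image, non-zero = foreground
void distanceTransformExample(const cv::Mat& binary)
{
    cv::Mat dist, labels;
    cv::distanceTransform(binary, dist, labels,
                          cv::DIST_L2, cv::DIST_MASK_5,    // Euclidean, 5x5 mask
                          cv::DIST_LABEL_CCOMP);
    // dist   : CV_32F, each pixel holds the distance to the nearest zero pixel
    // labels : CV_32S, each pixel holds the id of the nearest zero-pixel component
    //          (the discrete Voronoi diagram mentioned above)
}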
I see from another question you have asked recently that you are looking to register two images together. I don't think the distance transform is really what you are looking for here. If you are looking to align a set of points I would instead suggest you look at techniques like Procrustes analysis, Iterative Closest Point, or RANSAC.
The application PhotoFiltre has an option to stretch part of an image. You select a rectangular shape, and you can then grab and move the vertices somewhere else to make any quadrangle. The image part you selected stretches along with it. Hopefully these images make my point a little clearer:
Is there a general algorithm which can handle this? I would like to obtain the same effect on HTML5 canvas - given an image and the resulting corner points, I would like to be able to draw the stretched image in such a way that it fills the new quadrangle neatly.
A while ago I asked something similar, where the solution was to divide the image up into triangles and stretch each triangle so that its three points correspond to the three points on the original image. This technique turned out to be rather expensive, and I would like to know if there is a more general method of accomplishing this.
I would like to use this in a 3D renderer, but I would like to work with a (2D) quadrangle.
I don't know whether PhotoFiltre internally also uses triangles, or whether it uses another (cheaper) algorithm to stretch an image like this.
Does someone perhaps know if there is a cheaper or more general method/algorithm to stretch a rectangular image, so that it fills a quadrangle given four points?
The normal method is to start with the destination, pick an appropriate grid size and then for each point in the new shape calculate the corresponding point in the source image (possibly with interpolation depending on the quality you need)
Affine or perspective transform.
Given four points for the "stretched" figure and four points for the figure it should match (e.g. a rectangle), a point-to-point transform provides the spatial mapping you need: an affine transform if the target is a parallelogram (it is fully determined by three point correspondences), or its generalization, the perspective (projective) transform, for an arbitrary quadrangle. For each point (x1,y1) in the original image there is a corresponding point (x2,y2) in the second, "stretched" image.
For each integer-valued pixel (x2,y2) in the stretched image, use the inverse transform to find the corresponding real-valued point (x1,y1) in the original image and apply its color to (x2,y2).
http://demonstrations.wolfram.com/AffineTransform/
You'll find sample code for Java and other languages online. .NET has the Matrix class.
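The question targets the HTML5 canvas, but as a sketch of the underlying math, here is the four-point mapping expressed with OpenCV (an assumed helper, not part of the answer):

#include <opencv2/imgproc.hpp>

// Warp the whole source image so that its corners land on dstQuad
// (given in top-left, top-right, bottom-right, bottom-left order).
// The output is clipped to the source size here; enlarge it if the
// destination quadrangle extends further.
cv::Mat stretchToQuad(const cv::Mat& src, const cv::Point2f dstQuad[4])
{
    cv::Point2f srcRect[4] = {
        {0.f, 0.f},
        {static_cast<float>(src.cols - 1), 0.f},
        {static_cast<float>(src.cols - 1), static_cast<float>(src.rows - 1)},
        {0.f, static_cast<float>(src.rows - 1)}
    };
    cv::Mat H = cv::getPerspectiveTransform(srcRect, dstQuad);  // 3x3 homography
    cv::Mat dst;
    cv::warpPerspective(src, dst, H, src.size());
    return dst;
}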
I have a set of points to define a shape. These points are in order and essentially are my "selection".
I want to be able to contract this selection by an arbitrary amount to get a smaller version of my original shape.
In a basic example with a triangle, the points are simply moved along their normals, where each point's normal is defined by the points to its left and right.
Eventually all 3 points will meet and form one point but until that point they will make a smaller and smaller triangle.
For more complex shapes, when moving the individual points inward, they may pass through the outer edge of the shape resulting in weird artifacts. Obviously I'll need to cull these points and remove them from the array.
Any help in exactly how I can do that would be greatly appreciated.
Thanks!
This is just an idea but couldn't you find the center of mass of the object, create a vector from the center to each point, and move each point along this vector?
To find the center of mass would of course involve averaging the x and y coordinates. Getting a vector is as simple as subtracting the center point from the point in question. Normalizing and scaling are common vector operations that can be found with Google.
EDIT
Another way to interpret what you're asking is that you want to erode your collection of points, as in morphological erosion. This is typically applied to binary images, but you can slightly modify the concept to work with a collection of points. Essentially, you need to write a function that, given a point, will return true (black) or false (white) depending on whether that point is inside or outside the shape defined by your points. You'd have to look up how to do that for shapes that aren't convex (it's harder but not impossible).
Now, obviously, every single one of your actual points will return false because they're all on the border (by definition). However, you now have a matrix of points around your point of interest that defines where is "inside" and where is "outside". Average all of the "inside" points and move your actual point along the vector from itself towards this average. You could play with different erosion kernels to see what works best.
You could even work with a kernel with floating point weights instead of either/or values which will affect your average calculation proportional to their weights. With this, you could approximate a circular kernel with a low number of points. Try the simpler method first.
1) Find the selection center (as suggested by colithium)
2) Map the selection points to a coordinate system with the selection center at (0,0). For example, if the selection center is at (150,150) and a given selection point is at (125,75), the mapped position of the point becomes (-25,-75).
3) Scale the mapped points (multiply X and Y by something in the range 0.0..1.0)
4) Remap the points back to the original coordinate system
Only simple maths required, no need to muck about with normalizing vectors; a short sketch of these steps follows.
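A small sketch of the recipe above (the names are illustrative); it performs a uniform shrink toward the selection's average point, which also matches the centroid idea from the earlier answer:

#include <opencv2/core.hpp>
#include <vector>

// Contract a closed selection toward its centroid; scale is in (0,1), where
// smaller values shrink the shape more.
std::vector<cv::Point2f> contractSelection(const std::vector<cv::Point2f>& pts, float scale)
{
    cv::Point2f center(0.f, 0.f);
    for (const cv::Point2f& p : pts) center += p;            // 1) selection center
    center *= 1.0f / static_cast<float>(pts.size());

    std::vector<cv::Point2f> out;
    out.reserve(pts.size());
    for (const cv::Point2f& p : pts)
        out.push_back(center + (p - center) * scale);        // 2)-4) map, scale, remap
    return out;
}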