How to transform the image? - image-processing

You can see the lanes are askew. I want to make the lanes perpendicular (i.e. vertical in the image).
I used Photoshop's perspective transformation function and got this result:
Although the lanes are vertical now, the cars at the far end become large and the cars at the near end become small. That is not what I want.
I tried Photoshop's warp function. Photoshop gave me 8 control points and I finally got my ideal result.
What is the name of that kind of transformation?
How can I do the transformation programmatically? I'm using C# + EmguCV (OpenCV).
Thanks a lot.

It is called radial distortion. It is commonly corrected with Brown's (Brown–Conrady) distortion model. Here is a tutorial on how to fix it using Photoshop.
Be aware that in your case you should first fix the radial distortion, and only then apply a projective transformation (homography), since radial distortion is a property of the lens, whereas the projective transformation is a property of the scene you are looking at.
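
As a rough sketch, OpenCV's undistortion uses the Brown–Conrady coefficients (k1, k2, p1, p2, k3). The asker uses C# + EmguCV, which wraps the same OpenCV functions; the Python bindings are shown here for brevity. The file name, intrinsics and distortion coefficients below are placeholders, since the real values have to come from calibration or manual tuning for this particular lens:

    import cv2
    import numpy as np

    # Illustrative values only: real K and distortion coefficients must come
    # from calibration (cv2.calibrateCamera) or manual tuning for this lens.
    img = cv2.imread("road.jpg")
    h, w = img.shape[:2]
    K = np.array([[w, 0, w / 2],
                  [0, w, h / 2],
                  [0, 0, 1]], dtype=np.float64)       # rough pinhole intrinsics
    dist = np.array([-0.25, 0.05, 0.0, 0.0, 0.0])     # Brown model: k1, k2, p1, p2, k3

    undistorted = cv2.undistort(img, K, dist)         # fix the lens first
    cv2.imwrite("road_undistorted.jpg", undistorted)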

Apart from correcting for radial distortion, the perspective can be corrected by applying a homography transform (assuming the road is flat).
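
A minimal sketch of that homography step, again with OpenCV's Python bindings. The four point pairs below are placeholders that you would pick yourself on the lane markings (and the output size is likewise arbitrary):

    import cv2
    import numpy as np

    img = cv2.imread("road_undistorted.jpg")

    # Four points on the road plane in the source image (placeholder
    # coordinates) and where they should map to in the rectified view.
    src_pts = np.float32([[420, 300], [860, 300], [1180, 700], [100, 700]])
    dst_pts = np.float32([[300, 0], [900, 0], [900, 700], [300, 700]])

    H = cv2.getPerspectiveTransform(src_pts, dst_pts)   # 3x3 homography
    top_down = cv2.warpPerspective(img, H, (1200, 700))
    cv2.imwrite("road_top_down.jpg", top_down)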

Related

Basic camera calibration without checkerboard

The problem
I don't have access to the camera that took the pictures below. You can find the source video here: https://youtu.be/C7hS3enWh94?t=343
I would like to perform coarse camera calibration using only the information I have in the video frames (namely a road line that is supposed to be straight but looks curved in the images and, luckily, covers most of the sensor area over time).
What I need
I'm looking for a quick-and-dirty way to find coarse camera distortion parameters, because I think there is no way to accurately estimate the calibration parameters from the available information alone.
I'm out of ideas on how to make progress with this problem. My own idea is too complicated and would take a lot of effort to implement, with little guarantee that it would actually work. So this question is really more of a brainstorm about hypothetical approaches to the problem.
P.S.
I thought it should be possible to use a Hough transform to look for the curvature of the line (the circle radius that accommodates most of its pixels), but the curvature we're looking at would result in a very large radius. Moreover, the curve might not be a perfect circle but rather an ellipse, because the camera does not look down at the road at a perfect 90-degree angle. This complicates a Hough-transform implementation significantly.
Another idea was to use a random-search algorithm, such as a genetic algorithm, to tweak distortion + scale + rotation + translation parameters until one image fits perfectly on top of another. But again, it would take me a lot of time to get anything like this working.
Any better ideas from OpenCV gurus out there?
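
One way to make the "tweak the parameters until the line is straight" idea concrete is a sketch like the following: assume a one-parameter division model for the distortion and minimize how far the sampled line points deviate from a straight line. Everything here is an assumption made for illustration: the sample points, the distortion centre (taken to be the image centre), and the search bounds are placeholders, and this is only a sketch of the idea, not a validated calibration method.

    import numpy as np
    from scipy.optimize import minimize_scalar

    # (u, v) pixel coordinates sampled along the road line that should be
    # straight. Placeholders; in practice they would come from edge detection
    # on several frames of the video.
    line_pts = np.array([[102., 540.], [310., 512.], [640., 500.],
                         [970., 513.], [1180., 543.]])
    center = np.array([640., 360.])      # assume distortion centre = image centre

    def undistort_division(pts, lam):
        # One-parameter division model: p_u = c + (p_d - c) / (1 + lam * r_d^2)
        d = pts - center
        r2 = np.sum(d ** 2, axis=1, keepdims=True)
        return center + d / (1.0 + lam * r2)

    def straightness_cost(lam):
        # Perpendicular scatter of the undistorted points about their best-fit line
        p = undistort_division(line_pts, lam)
        p = p - p.mean(axis=0)
        return np.linalg.svd(p, compute_uv=False)[-1]

    res = minimize_scalar(straightness_cost, bounds=(-1e-6, 1e-6), method="bounded")
    print("estimated division-model lambda:", res.x)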

Which straight line detection algorithms will still detect lines that have suffered a degree of lens distortion in photos?

I've found other questions on the topic of detecting straight lines in images which I will read up on.
But I'm aware that in many photos, real-life straight lines end up curved.
I don't have to de-curve fish-eye distortion or anything extreme.
But I would like to handle a "typical" amount of lens-curvature distortion while still treating the lines as straight.
Are there some algorithms or techniques that can handle this in a "good enough" manner?
Here's an old photo of mine of a book showing the kind of curved "straight" lines I had in mind. It's a good example of the curvature from lens distortion. (It may not be a good example in general, due to the other lines in the background, but that's beside the point of the current question.)
The curvature of the edges doesn't seem that severe, and at worst the Hough transform will just break the edges into a few segments.
I would be more worried about the lack of contrast of the edges (white on white), which can make the detection fail.
As it turns out, one of the most popular techniques used for straight line detection also exists in versions that work with curves.
It's called the "Hough transform" (see the Wikipedia article and the Stack Overflow tag).
It was originally designed for detecting straight lines, but it has since been generalized to work with curves and other arbitrary shapes as well. From the Wikipedia article:
"The classical Hough transform was concerned with the identification of lines in the image, but later the Hough transform has been extended to identifying positions of arbitrary shapes, most commonly circles or ellipses."
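
For example, a mildly curved edge typically still comes back from the probabilistic variant (cv2.HoughLinesP) as a handful of nearly collinear segments rather than being missed. A short sketch; the file name and parameter values are placeholders to tune for your image:

    import cv2
    import numpy as np

    gray = cv2.imread("book.jpg", cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(gray, 50, 150)

    # Probabilistic Hough: slightly curved edges return as a few segments.
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180, threshold=60,
                               minLineLength=80, maxLineGap=10)
    if segments is not None:
        for x1, y1, x2, y2 in segments[:, 0]:
            cv2.line(gray, (x1, y1), (x2, y2), 255, 2)
    cv2.imwrite("book_segments.jpg", gray)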
There are even papers on the specific topic of using Hough transforms to deal with lens distortion:
Automatic Lens Distortion Correction Using One-Parameter Division Models
A Hough transform-based method for radial lens distortion correction
Wide-Angle Lens Distortion Correction Using Division Models
Automatic curve selection for lens distortion correction using Hough transform energy
Blind radial distortion compensation from video using fast Hough transform

Expand homography matrix for distortion

I have two sets of corresponding matches between which I want to compute a homography matrix. However, I found that the transformation between these points cannot be modeled using just a homography matrix. I noticed this because some lines in the original set of points are not represented as lines in the second set.
For example:
The example above is extreme; in reality the distortion is much smaller. It typically arises because the first set of points was extracted from an image produced by a scanner, while the other set was extracted from a photo taken with a mobile phone.
The Question:
How can I expand or generalize the homography matrix to cover this case? In other words, I want a non-line-preserving transformation model to use instead of the homography matrix. Any suggestions?
P.S. The OpenCV library is preferred if there is something ready to use.
EDIT:
Eliminating the distortion may not be an option for me because the photos are somewhat complex, I do not always have the same camera, and I am supposed to deal with images from unknown sources (the back end is separated from the front end). However, I have a reference image which is planar and a query image which has perspective + distortion effects that I want to correct after finding the corresponding pair matches.
It would be better if you had provided some examples of your images, so that we can understand your case better. From the description it seems that you are dealing with camera distortion.
The typical approach is to perform camera calibration once, then undistort each frame, and finally work with images in which straight lines look straight. All of these tasks are possible with OpenCV; consider the link above.
If you cannot perform camera calibration to estimate the distortion, there isn't much you can do. Try to compute and apply the homography on the unrectified images; if the cameras don't have wide-angle lenses, this should look OK (consider this case, for example).
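
A minimal sketch of that suggestion with OpenCV's Python bindings: estimate the homography with RANSAC, so matches that the projective model cannot explain (for instance those displaced by lens distortion) are treated as outliers rather than skewing the fit. The correspondences below are placeholders standing in for your feature matcher's output:

    import cv2
    import numpy as np

    # Placeholder correspondences; in practice these come from your matcher
    # (e.g. ORB/SIFT keypoints plus ratio-test filtering).
    src_pts = np.float32([[10, 10], [480, 12], [470, 630], [15, 640],
                          [240, 320], [120, 500], [360, 180], [60, 230]])
    dst_pts = np.float32([[32, 41], [505, 48], [490, 668], [28, 660],
                          [262, 350], [140, 527], [380, 210], [80, 260]])

    # RANSAC down-weights points the homography cannot model instead of
    # letting them distort the estimate.
    H, inlier_mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,
                                        ransacReprojThreshold=3.0)
    print("homography:\n", H)
    print("inliers:", int(inlier_mask.sum()), "of", len(src_pts))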

Three Dimensional Hough Space

I'm searching for the radius and the center coordinates of a circle in an image. I have already tried the 2D Hough transform, but my circle's radius is also unknown. I'm still a beginner in computer vision, so I need guidelines and help for implementing a three-dimensional Hough space.
You implement it just like a 2D Hough space, but with an additional parameter. Pseudocode would look like this:

    for each (x, y) in image
        for each test_radius in [min_radius .. max_radius]
            for each point (tx, ty) on the circle with radius test_radius around (x, y)
                HoughSpace(tx, ty, test_radius) += image(x, y)
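
A runnable NumPy version of that pseudocode, as a sketch only: it builds the full (y, x, r) accumulator, so it is memory- and time-hungry for large images and radius ranges.

    import numpy as np

    def hough_circles_3d(edge_img, min_radius, max_radius, n_angles=100):
        # Brute-force 3D Hough accumulator over (centre_y, centre_x, radius).
        # edge_img: 2D array of edge strengths (e.g. a Canny result).
        h, w = edge_img.shape
        radii = np.arange(min_radius, max_radius + 1)
        acc = np.zeros((h, w, len(radii)))
        thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)

        ys, xs = np.nonzero(edge_img)                 # every edge pixel votes
        for x, y in zip(xs, ys):
            for ri, r in enumerate(radii):
                # possible centres lie on a circle of radius r around (x, y)
                cx = np.round(x - r * np.cos(thetas)).astype(int)
                cy = np.round(y - r * np.sin(thetas)).astype(int)
                ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
                np.add.at(acc, (cy[ok], cx[ok], ri), edge_img[y, x])
        return acc, radii                             # peaks give centre and radius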
Thiton gives you the correct approach to formalize the problem. But then you will run into other problems inherent to the Hough transform:
How do you visualize the parameter space? You may implement something with a library like VTK, but 3D visualization of data is always a difficult topic. The visualization is important for debugging your detection algorithm, and it is one of the nice things about the 2D Hough transform.
Local-maximum detection is non-trivial. The extra dimension means that your parameter space will be sparser, so you will have more tuning to do in this area.
If you are just looking for a circle-detection algorithm, you may have better options than the Hough transform (the paper "Fast Circle Detection Using Gradient Pair Vectors", which you can Google, looks good to me).
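
Note that OpenCV already ships a circle detector that searches over both centre and radius: cv2.HoughCircles with the Hough-gradient method, which uses edge gradients instead of building the full 3D accumulator. A short sketch; the file name and parameter values are placeholders to tune:

    import cv2
    import numpy as np

    gray = cv2.imread("coins.jpg", cv2.IMREAD_GRAYSCALE)
    gray = cv2.medianBlur(gray, 5)                    # HoughCircles is noise-sensitive

    # Searches the (centre, radius) space discussed above via edge gradients.
    circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                               param1=100, param2=40, minRadius=10, maxRadius=120)
    if circles is not None:
        for x, y, r in np.round(circles[0]).astype(int):
            print("centre:", (x, y), "radius:", r)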

OpenCV detect corners

I'm using OpenCV on the iPhone. I want to find a Sudoku in a photo.
I started with some Gaussian blur, adaptive thresholding, inverting the image, and dilation.
Then I did some findContour and drawContour to isolate the Sudoku grid.
Then I used the Hough transform to find lines, and what I need to do now is find the corners of the grid. The Sudoku photo may be taken at an angle, so I need the corners in order to crop and warp the image correctly.
This is how two different photos may look. One is pretty straight-on and one is at an angle:
Probabilistic Hough
http://img96.imageshack.us/i/skrmavbild20110424kl101.png/
http://img846.imageshack.us/i/skrmavbild20110424kl101.png/
(Standard Hough comes in a comment. I can't post more than two links)
So, what would be the best approach to find those corners? And which of the two transforms is easier to use?
Best Regards
Linus
Why not use OpenCV's corner detection? Take a look at cvCornerHarris().
Alternatively, take a look at cvGoodFeaturesToTrack(). It's the Swiss Army Knife of feature detection and can be configured to use the Harris corner detector (among others).
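
In the newer OpenCV Python bindings those are cv2.cornerHarris and cv2.goodFeaturesToTrack (the same functions exist in the C++ API used on iOS). A short sketch with a placeholder file name and parameter values:

    import cv2

    gray = cv2.imread("sudoku.jpg", cv2.IMREAD_GRAYSCALE)

    # Shi-Tomasi scoring by default; useHarrisDetector=True switches to Harris.
    corners = cv2.goodFeaturesToTrack(gray, maxCorners=100, qualityLevel=0.01,
                                      minDistance=20, useHarrisDetector=True, k=0.04)
    if corners is not None:
        for x, y in corners.reshape(-1, 2):
            cv2.circle(gray, (int(x), int(y)), 5, 255, -1)
    cv2.imwrite("sudoku_corners.jpg", gray)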
I suggest the following approach. First, find all intersections of the detected lines. It is helpful to separate the lines into "horizontal" and "vertical" by angle (i.e. find the two major directions of the lines). Then find the convex hull of the acquired intersection points. Now you have the corners plus some extra points on the boundaries. You can remove the latter by analysing the angle between neighbouring points in the convex hull: corners will have an angle of about 90 degrees, and points on the boundaries about 180 degrees.
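
A sketch of the first two of those steps (intersections of (rho, theta) lines from cv2.HoughLines split by angle, then the convex hull). The sample lines are placeholders, and the final angle filter on neighbouring hull points is left out:

    import cv2
    import numpy as np

    def intersection(l1, l2):
        # Intersection of two lines in Hough (rho, theta) form; None if parallel.
        (r1, t1), (r2, t2) = l1, l2
        A = np.array([[np.cos(t1), np.sin(t1)],
                      [np.cos(t2), np.sin(t2)]])
        if abs(np.linalg.det(A)) < 1e-6:
            return None
        x, y = np.linalg.solve(A, np.array([r1, r2]))
        return (x, y)

    # Placeholder (rho, theta) lines; in practice this is cv2.HoughLines output
    # on the isolated grid, reshaped to (N, 2).
    lines = np.array([[10.0, 0.02], [400.0, 0.01],    # theta near 0    -> vertical lines
                      [15.0, 1.55], [390.0, 1.58]])   # theta near pi/2 -> horizontal lines
    vertical = [l for l in lines if min(l[1], np.pi - l[1]) < np.pi / 4]
    horizontal = [l for l in lines if min(l[1], np.pi - l[1]) >= np.pi / 4]

    pts = []
    for v in vertical:
        for h in horizontal:
            p = intersection(v, h)
            if p is not None:
                pts.append(p)

    hull = cv2.convexHull(np.float32(pts))            # outermost intersections
    print(hull.reshape(-1, 2))                        # corner candidates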
