Fourier Transformation - image local patch

I am still not clear about the Fourier transform. I know it represents the frequency content of an image, and I know I can reconstruct an image from its Fourier transform.
Say I have an image I(x,y) whose Fourier transform is F(I). I want to reconstruct a small rectangular area of that image, starting at (x1,y1) and ending at (x2,y2), without reconstructing the whole image.
Is it possible to reconstruct only a small patch from F(I)?

To answer your question: yes, it is possible; do an inverse FFT and then crop the image normally. If that seems like a cop-out, it is because you're attempting to do a time-domain task in the frequency domain, which isn't going to be very natural.
If you insist that the calculation be done in the frequency domain, you should be able to phase-shift the image so that (x1, y1) moves to the origin, then inverse FFT and discard the samples outside the (x2 - x1) by (y2 - y1) region.
The fundamental problem is that in frequency domain each bin (or pixel for a 2D FFT) represents a frequency and phase across the entire image in time domain. Discarding a single pixel in frequency domain results in a loss of that frequency information for the whole image and cannot be localized.
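If it helps to see both routes side by side, here is a minimal NumPy sketch (the image, patch corners and array names are all made up). Note that both routes still invert the full spectrum, which is exactly the point above:

import numpy as np

img = np.random.rand(256, 256)        # stand-in for I(x, y)
x1, y1, x2, y2 = 40, 60, 50, 70       # hypothetical patch corners

F = np.fft.fft2(img)                  # F(I), computed once for the whole image

# Route 1: inverse FFT of the full spectrum, then crop spatially.
patch = np.real(np.fft.ifft2(F))[y1:y2, x1:x2]

# Route 2: shift theorem. Multiply F by a linear phase so that (x1, y1)
# lands at the origin, invert, and keep only the top-left block.
u = np.fft.fftfreq(img.shape[0])[:, None]   # normalized row frequencies
v = np.fft.fftfreq(img.shape[1])[None, :]   # normalized column frequencies
F_shifted = F * np.exp(2j * np.pi * (u * y1 + v * x1))
patch2 = np.real(np.fft.ifft2(F_shifted))[:y2 - y1, :x2 - x1]

print(np.allclose(patch, patch2))     # True: both routes give the same patch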

Related

Homography and Affine Transformation

Hi, I am a beginner in computer vision and I wish to know what exactly the difference is between a homography and an affine transformation. If you want to find the translation between two images, which one would you use and why? From the papers and definitions I found online, I have yet to find the difference between them and where one is used instead of the other.
Thanks for your help.
A picture is worth a thousand words:
I have set it down in layman's terms.
Homography
A homography is a matrix that maps a given set of points in one image to the corresponding set of points in another image.
The homography is a 3x3 matrix that maps each point of the first image to the corresponding point of the second image. See below, where H is the homography matrix computed for the point correspondence (x1, y1) and (x2, y2).
Consider the points of the images shown below:
In the case above, 4 homography matrices are generated.
Where is it used?
You may want to align the images depicted above. You can do so using the homography.
Here the second image is mapped with respect to the first.
Another application is Panoramic Stitching
Visit THIS BLOG for more
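As a concrete sketch of that alignment use case (OpenCV-Python, with made-up point correspondences and image sizes):

import cv2
import numpy as np

# Hypothetical corresponding points: pts1 in the first image, pts2 in the second.
pts1 = np.float32([[12, 34], [320, 28], [305, 240], [25, 232]])
pts2 = np.float32([[0, 0], [300, 0], [300, 220], [0, 220]])

# 3x3 homography mapping points of the second image onto the first.
H, mask = cv2.findHomography(pts2, pts1)

img2 = np.zeros((480, 640, 3), np.uint8)            # stand-in for the second image
aligned = cv2.warpPerspective(img2, H, (640, 480))  # second image mapped with respect to the first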
Affine transformation
An affine transformation generates a matrix that transforms the image as a whole; it does not pick out particular point correspondences the way a homography does.
Hence, under an affine transformation the parallelism of lines is always preserved (as mentioned by EdChum).
Where is it used?
It is used in areas where you want to alter the entire image:
Rotation (self-explanatory)
Translation (shifting the entire image by a certain length either to top/bottom or left/right)
Scaling (it is basically shrinking or blowing up an image)
See THIS PAGE for more
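For completeness, a small OpenCV-Python sketch of those whole-image affine operations (sizes and parameters are arbitrary):

import cv2
import numpy as np

img = np.zeros((480, 640, 3), np.uint8)   # stand-in image
h, w = img.shape[:2]

# Rotation by 30 degrees about the image centre, combined with 1.5x scaling.
M_rot = cv2.getRotationMatrix2D((w / 2, h / 2), 30, 1.5)   # 2x3 affine matrix
rotated = cv2.warpAffine(img, M_rot, (w, h))

# Translation: shift the entire image 50 px right and 20 px down.
M_shift = np.float32([[1, 0, 50],
                      [0, 1, 20]])
shifted = cv2.warpAffine(img, M_shift, (w, h))

# Note the affine matrix is 2x3 (6 parameters), while a homography is 3x3 with
# 8 degrees of freedom; parallel lines stay parallel under the affine map.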

Measure distance to object with a single camera in a static scene

Let's say I am placing a small object on a flat floor inside a room.
First step: Take a picture of the room floor from a known, static position in the world coordinate system.
Second step: Detect the bottom edge of the object in the image and map the pixel coordinate to the object position in the world coordinate system.
Third step: Using a measuring tape, measure the real distance to the object.
I could move the small object, repeat these three steps for every pixel coordinate, and create a lookup table (key: pixel coordinate; value: distance). This procedure is accurate enough for my use case. I know that it is problematic if there are multiple objects (one object could occlude another).
My question: Is there an easier way to create this lookup table? Accidentally changing the camera angle by a few degrees destroys the hard work. ;)
Maybe it is possible to execute the three steps for a few specific pixel coordinates or positions in the world coordinate system and perform some "calibration" to calculate the distances with the computed parameters?
If the floor is flat, its equation is that of a plane, let
a.x + b.y + c.z = 1
in the camera coordinates (the origin is the optical center of the camera, XY forms the focal plane and Z the viewing direction).
Then a ray from the camera center to a point on the image at pixel coordinates (u, v) is given by
(u, v, f).t
where f is the focal length.
The ray hits the plane when
(a.u + b.v + c.f) t = 1,
i.e. at the point
(u, v, f) / (a.u + b.v + c.f)
Finally, the distance from the camera to the point is
p = √(u² + v² + f²) / (a.u + b.v + c.f)
This is the function that you need to tabulate. Assuming that f is known, you can determine the unknown coefficients a, b, c by taking three non-aligned points, measuring the image coordinates (u, v) and the distances, and solving a 3x3 system of linear equations.
From the last equation, you can then estimate the distance for any point of the image.
The focal length can be measured (in pixels) by looking at a target of known size, at a known distance. By proportionality, the ratio of the distance to the size equals f over the target's length in the image (in pixels).
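Putting the above into code, a minimal NumPy sketch of the calibration and the tabulated function (the pixel coordinates, distances and focal length below are invented, and (u, v) are measured from the image centre so that the ray is (u, v, f)):

import numpy as np

f = 800.0                                   # focal length in pixels (measured beforehand)

# Three non-aligned calibration points: pixel coordinates (u, v) and the
# tape-measured distances p to the corresponding floor points (made-up values).
uv = np.array([[-200.0, 150.0],
               [ 250.0, 180.0],
               [  10.0, 300.0]])
p  = np.array([4.10, 3.75, 2.60])           # metres

# p = sqrt(u^2 + v^2 + f^2) / (a.u + b.v + c.f)  =>  a.u + b.v + c.f = sqrt(...) / p
A = np.column_stack([uv[:, 0], uv[:, 1], np.full(3, f)])
rhs = np.sqrt(uv[:, 0]**2 + uv[:, 1]**2 + f**2) / p
a, b, c = np.linalg.solve(A, rhs)

def distance(u, v):
    # Distance from the camera to the floor point seen at pixel (u, v).
    return np.sqrt(u**2 + v**2 + f**2) / (a*u + b*v + c*f)

print(distance(0.0, 200.0))                 # distance for any other pixel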
Most vision libraries (including OpenCV) have built-in functions that will take a couple of points from a camera reference frame and the related points from a Cartesian plane and generate your warp matrix (affine transformation) for you. (Some are fancy enough to include non-linear mappings with enough input points, but that brings you back to your time-to-calibrate issue.)
A final note: most vision libraries calibrate off some type of grid, i.e. a checkerboard pattern. If you wrote your calibration to work off such a sheet, then you would only need to measure the distance to one target object, as the transformations would be calculated from the sheet and the target would just provide the world offsets.
I believe what you are after is called a Projective Transformation. The link below should guide you through exactly what you need.
Demonstration of calculating a projective transformation with proper math typesetting on the Math SE.
Although you can solve this by hand and write that into your code, I strongly recommend using a matrix math library (or even writing your own matrix math functions) rather than hand-calculating the equations: you would have to solve them symbolically to turn them into code, which gets very long and is prone to miscalculation.
Here are just a few tips that may help you with clarification (applying it to your problem):
- Your A matrix (source) is built from the 4 x,y points in your camera image (pixel locations).
- Your B matrix (destination) is built from your measurements in the real world.
- For fast recalibration, I suggest marking points on the ground so you can quickly place the cube at the 4 locations (and subsequently get the altered pixel locations in the camera) without having to remeasure.
- You will only have to do steps 1-5 once, during calibration; after that, whenever you want to know the position of something, just get its coordinates in your image and run them through step 6 and step 7.
- You will want your calibration points to be as far away from each other as possible (within reason, as at extreme distances in a vanishing-point situation you start rapidly losing pixel density and therefore source-image accuracy). Make sure that no 3 points are collinear (simply put, make your 4 points approximately a square spanning almost the full camera FOV in the real world).
PS: I apologize for not writing this out here, but they have fancy math editing over there and it looks way cleaner!
Final steps to applying this method to this situation:
In order to perform this calibration, you will have to set a global home position (likely easiest to place arbitrarily on the floor, then measure your camera position relative to that point). From this position, you will need to measure your object's distance in both x and y coordinates on the floor. Although a more tightly packed calibration set will give you more error, the easiest solution may simply be to use a sheet of known dimensions (I am thinking a piece of printer paper, a large board, or something similar). The reason this is easier is that it has built-in axes (i.e. the two sides are orthogonal), so you can just use the four corners of the object and canned distances in your calibration. For example, for a piece of paper your points would be (0,0), (0,8.5), (11,8.5), (11,0).
Using those points and the pixels you get will create your transform matrix, but that still just gives you a global x,y position on axes that may be hard to measure along (they may be skewed depending on how you measured/calibrated). So you will need to calculate your camera offset:
object in real world coords (from steps above): x1, y1
camera coords (Xc, Yc)
dist = sqrt( pow(x1-Xc,2) + pow(y1-Yc,2) )
If it is too cumbersome to try to measure the position of the camera from global origin by hand, you can instead measure the distance to 2 different points and feed those values into the above equation to calculate your camera offset, which you will then store and use anytime you want to get final distance.
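A rough OpenCV-Python sketch of this workflow, assuming a US-letter sheet on the floor and invented pixel locations for its corners (getPerspectiveTransform stands in for solving the equations by hand):

import cv2
import numpy as np

# Pixel locations of the sheet's four corners in the camera image (hypothetical).
src = np.float32([[412, 300], [655, 310], [680, 520], [390, 505]])
# The same corners in floor coordinates, in inches: (0,0), (0,8.5), (11,8.5), (11,0).
dst = np.float32([[0, 0], [0, 8.5], [11, 8.5], [11, 0]])

M = cv2.getPerspectiveTransform(src, dst)       # done once at calibration

# At run time: map any detected pixel coordinate to floor coordinates.
px = np.float32([[[530, 420]]])                 # pixel of the object's bottom edge
x1, y1 = cv2.perspectiveTransform(px, M)[0, 0]

# Camera offset measured (or solved) in the same floor coordinate system.
Xc, Yc = -14.0, 30.0                            # hypothetical camera position (inches)
dist = np.sqrt((x1 - Xc)**2 + (y1 - Yc)**2)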
As already mentioned in the previous answers, you'll need a projective transformation, or simply a homography. However, I'll consider it from a more practical point of view and will try to summarize it briefly and simply.
So, given the proper homography you can warp your picture of a plane such that it looks like you took it from above (like here). Even simpler, you can transform a pixel coordinate of your image to world coordinates of the plane (the same is done during the warping for each pixel).
A homography is basically a 3x3 matrix, and you transform a coordinate by multiplying it with the matrix. You may now think: wait, a 3x3 matrix and 2D coordinates? You'll need to use homogeneous coordinates.
However, most frameworks and libraries will do this handling for you. What you need to do is finding (at least) four points (x/y-coordinates) on your world plane/floor (preferably the corners of a rectangle, aligned with your desired world coordinate system), take a picture of them, measure the pixel coordinates and pass both to the "find-homography-function" of your desired computer vision or math library.
In OpenCV that would be findHomography; here is an example (the method perspectiveTransform then performs the actual transformation).
In Matlab you can use something from here. Make sure you are using a projective transformation as the transform type. The result is a projective tform, which can be used in combination with this method to transform your points from one coordinate system to another.
In order to transform into the other direction you just have to invert your homography and use the result instead.
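A short OpenCV-Python sketch of that pipeline (all point values are placeholders):

import cv2
import numpy as np

# Four (or more) floor points with known world coordinates and their pixel coordinates.
world = np.float32([[0, 0], [2, 0], [2, 1], [0, 1]])        # e.g. metres on the floor
pixels = np.float32([[310, 420], [640, 430], [700, 610], [260, 600]])

H, _ = cv2.findHomography(pixels, world)                    # pixel -> world

pts = np.float32([[[480, 520]]])                            # some image point
world_pt = cv2.perspectiveTransform(pts, H)                 # its floor coordinates

# The other direction: invert the homography and use the result instead.
H_inv = np.linalg.inv(H)                                    # world -> pixel
back = cv2.perspectiveTransform(world_pt, H_inv)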

SIFT matches and recognition?

I am developing an application where I am using SIFT + RANSAC and a homography to find an object (OpenCV C++, Java). The problem I am facing is that when there are many outliers, RANSAC performs poorly.
For this reason I would like to try what the author of SIFT said to be pretty good: voting.
I have read that we should vote in a 4-dimensional feature space, where the 4 dimensions are:
Location [x, y] (two dimensions; some call it translation)
Scale
Orientation
While with OpenCV it is easy to get the matched feature's scale and orientation with:
cv::KeyPoint::octave
cv::KeyPoint::angle
I am having a hard time understanding how I can calculate the location.
I have found an interesting slide where, with only one match, we are able to draw a bounding box:
But I don't get how I could draw that bounding box with just one match. Any help?
You are looking for the largest set of matched features that fit a geometric transformation from image 1 to image 2. In this case, it is the similarity transformation, which has 4 parameters: translation (dx, dy), scale change ds, and rotation d_theta.
Let's say you have matched two features: f1 from image 1 and f2 from image 2. Let (x1,y1) be the location of f1 in image 1, let s1 be its scale, and let theta1 be its orientation. Similarly you have (x2,y2), s2, and theta2 for f2.
The translation between two features is (dx,dy) = (x2-x1, y2-y1).
The scale change between two features is ds = s2 / s1.
The rotation between two features is d_theta = theta2 - theta1.
So, dx, dy, ds, and d_theta are the dimensions of your Hough space. Each bin corresponds to a similarity transformation.
Once you have performed Hough voting, and found the maximum bin, that bin gives you a transformation from image 1 to image 2. One thing you can do is take the bounding box of image 1 and transform it using that transformation: apply the corresponding translation, rotation and scaling to the corners of the image. Typically, you pack the parameters into a transformation matrix, and use homogeneous coordinates. This will give you the bounding box in image 2 corresponding to the object you've detected.
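As a sketch of that last step, assuming the winning bin gave parameters (dx, dy, ds, d_theta) and that the translation is applied after the rotation and scaling:

import cv2
import numpy as np

dx, dy, ds, d_theta = 40.0, -15.0, 1.3, np.deg2rad(12.0)   # winning-bin parameters (made up)

c, s = ds * np.cos(d_theta), ds * np.sin(d_theta)
M = np.array([[c, -s, dx],          # similarity transform packed into a 3x3 matrix
              [s,  c, dy],          # (rotation + scale block, then translation)
              [0,  0,  1]])

h1, w1 = 240, 320                   # size of image 1 (the object image)
corners = np.float32([[[0, 0]], [[w1, 0]], [[w1, h1]], [[0, h1]]])

# Homogeneous transform of the corners = bounding box of the object in image 2.
box = cv2.perspectiveTransform(corners, M)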
When using the Hough transform, you create a signature storing the displacement vectors of every feature from the template centroid (either (w/2,h/2) or with the help of central moments).
E.g. for 10 SIFT features found on the template, their positions relative to the template's centroid form a vector<{a,b}>. Now, let's search for this object in a query image: every SIFT feature found in the query image that is matched with one of the template's 10 casts a vote for its corresponding centroid.
votemap(feature.x - a*, feature.y - b*) += 1, where (a*, b*) is this particular feature's stored displacement vector, adjusted for the detected scale and rotation as described below.
If some of those features cast successfully at the same point (clustering is essential), you have found an object instance.
Signature construction and voting are inverse procedures. Let's assume a stored displacement V = (-20, -10). During the search in the novel image, when a match is found, we read off its change in orientation and scale and cast the vote accordingly. E.g. if the instance in the query image is at half size and rotated by -10 degrees, the centroid vote is cast at V' = 0.5 · R(-10°) · V away from the SIFT feature, where R(θ) is the 2-D rotation matrix.
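A toy sketch of that voting loop (plain NumPy, every name and value hypothetical): each match votes for a centroid location after its stored displacement has been scaled and rotated by the detected change:

import numpy as np

H, W = 480, 640
votemap = np.zeros((H, W), dtype=np.int32)

# Each match: feature position in the query image, stored displacement (a, b)
# of the template feature from the template centroid, and the detected change.
matches = [
    {"x": 320.0, "y": 200.0, "a": -20.0, "b": -10.0, "scale": 0.5, "theta": np.deg2rad(-10)},
    # ... one entry per matched SIFT feature
]

for m in matches:
    c, s = np.cos(m["theta"]), np.sin(m["theta"])
    # Rotate and scale the stored displacement before voting.
    a_p = m["scale"] * (c * m["a"] - s * m["b"])
    b_p = m["scale"] * (s * m["a"] + c * m["b"])
    cx, cy = int(round(m["x"] - a_p)), int(round(m["y"] - b_p))
    if 0 <= cx < W and 0 <= cy < H:
        votemap[cy, cx] += 1

# A peak (cluster) in votemap marks a detected object instance.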
To complete Dima's answer, one needs to add that the 4D Hough space is quantized into a (possibly small) number of 4D boxes, where each box corresponds to the similarity given by its center.
Then, for each possible similarity obtained via a tentative matching of features, add 1 to the corresponding box (or cell) in the 4D space. The output similarity is given by the cell with the most votes.
In order to compute the transform from one match, just use Dima's formulas from his answer. For several pairs of matches, you may need to use a least-squares fit.
Finally, the transform can be applied with the function cv::warpPerspective(), where the third row of the perspective matrix is set to [0,0,1].
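A minimal NumPy sketch of the quantization step (bin counts and the placeholder data are arbitrary choices):

import numpy as np

# One (dx, dy, ds, d_theta) vector per tentative feature match (placeholder data).
params = np.random.randn(500, 4)

# Quantize the 4D space into a small number of boxes and count votes per box.
counts, edges = np.histogramdd(params, bins=(16, 16, 8, 8))

# The box with the most votes gives the output similarity (its centre).
best = np.unravel_index(np.argmax(counts), counts.shape)
centre = [0.5 * (e[i] + e[i + 1]) for e, i in zip(edges, best)]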

Fast patch extraction using homography

Suppose you have a homography H relating a planar surface depicted in two different images, Ir and I. Ir is the reference image, where the planar surface is parallel to the image plane (and practically occupies the entire image). I is a run-time image (a photo of the planar surface taken from an arbitrary viewpoint). Let H be such that:
p = Hp', where p is a point in Ir and p' is the corresponding point in I.
Suppose you have two points p1=(x1,y) and p2=(x2,y), with x1 < x2, given in the image Ir. Note that they belong to the same row (common y). Let H'=H^(-1). Using H', you can compute the points in I corresponding to (x1,y),(x1+1,y),...,(x2,y).
The question is: is there a way to avoid the matrix-vector multiplication for every one of those points? The easiest way that comes to mind is to use the homography to compute only the points corresponding to p1 and p2 (call them p1' and p2'). To obtain the others (that is, the images of (x1+1,y), (x1+2,y),...,(x2-1,y)), linearly interpolate between p1' and p2' in the image I.
But since there is a projective transformation between Ir and I, I think this method is quite imprecise.
Any other idea? The background is that I need a computationally efficient way to extract a lot of small patches (around 10x10 pixels) about a point p in Ir in real-time software.
Thank you.
PS: Maybe the fact that I am using small patches would make linear interpolation a suitable approach?
You have a projective transform and, unfortunately, ratios of lengths are not invariant under this type of transformation.
My suggestion: explore the cross ratio, because it is invariant under projective transformations. I think that for every 3 points you can get a "cheaper" 4th, avoiding the matrix-vector computation and using the cross ratio instead. But I have not put all the details down on paper to verify whether the cross-ratio alternative is really that much computationally cheaper than the matrix-vector multiplication.
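One way to check empirically whether linear interpolation is good enough for a 10-pixel row is to compare it against the exact per-pixel mapping under an assumed homography (the matrix below is invented):

import numpy as np

H_inv = np.array([[1.05, 0.02, 3.0],     # hypothetical H' = H^(-1), mapping Ir -> I
                  [0.01, 0.98, -2.0],
                  [1e-4, 2e-4, 1.0]])

def project(pt):
    v = H_inv @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]

x1, x2, y = 100.0, 110.0, 80.0           # a 10-pixel row in Ir
exact = np.array([project((x, y)) for x in np.arange(x1, x2 + 1)])

p1, p2 = project((x1, y)), project((x2, y))
t = (np.arange(x1, x2 + 1) - x1) / (x2 - x1)
interp = p1[None, :] * (1 - t)[:, None] + p2[None, :] * t[:, None]

print(np.abs(exact - interp).max())      # worst-case error in pixels over the row

If the worst-case error is well below a pixel for your actual homographies and patch sizes, interpolating the two endpoints is a reasonable shortcut; otherwise the cross-ratio idea above gives an exact alternative.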

image transformations

So I've been using gnu-gsl and CImg to implement some of the fundamental projective space techniques for affine and metric rectification.
I've completed computing an affine rectification, but I'm having a hard time figuring out how to apply the affine rectification matrix to the original (input) image.
My current thought process is to iterate across the input image for each pixel coordinate. Then multiply the original pixel coordinate (converted to a homogeneous coordinate) by the affine rectification matrix to get the output pixel coordinate.
Then access the output image using the output pixel coordinate and conduct a blend (addition) operation on the output image's pixel location with the pixel color from the original image.
Does that sound right? I'm getting a lot of really weird values after multiplying the original pixel coordinate by the affine rectification matrix.
No, your values should not be weird. Why don't you try a simple example: a small scale with a small translation, e.g.
x' = 1.01*x + 0.0*y + 5;
y' = 0.0*x + 0.98*y + 10;
Now the pixel at (10,10) should map to (15.1, 19.8), right?
If you want to make a nice output image, you should find the forward projection and then back project to the input image and interpolate there rather than try to blend into the output image. Otherwise you will end up with gaps in the output.
You need to be careful with your terminology here; it sounds to me like you are doing projections, sometimes called warping in the computer graphics community. Rectification is something else, but it depends on what you are doing.
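To illustrate the backward-mapping advice above, here is a small NumPy sketch (using the example transform from earlier; the input image is a stand-in): every output pixel is traced back through the inverse of the rectification matrix and bilinearly interpolated in the input, so no gaps appear.

import numpy as np

# Affine rectification matrix from the example above, as x' = A @ [x, y, 1].
A = np.array([[1.01, 0.00,  5.0],
              [0.00, 0.98, 10.0],
              [0.00, 0.00,  1.0]])
A_inv = np.linalg.inv(A)

img = np.random.rand(240, 320)                    # stand-in grayscale input image
H, W = img.shape
out = np.zeros_like(img)

# Backward mapping: for every OUTPUT pixel, find where it came from in the
# input and interpolate there, instead of scattering input pixels forward.
for yo in range(H):
    for xo in range(W):
        xi, yi, _ = A_inv @ (xo, yo, 1.0)
        if 0 <= xi < W - 1 and 0 <= yi < H - 1:
            x0, y0 = int(xi), int(yi)
            fx, fy = xi - x0, yi - y0
            out[yo, xo] = ((1 - fx) * (1 - fy) * img[y0, x0]
                           + fx * (1 - fy) * img[y0, x0 + 1]
                           + (1 - fx) * fy * img[y0 + 1, x0]
                           + fx * fy * img[y0 + 1, x0 + 1])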
