cvRemap to replace cvWarpPerspective - image-processing

The transformation below is what I want to do.
For each tile in source image, I know the coordinate of each corner, and I know the coordinate of each corresponding corner in the output image, so I can call cvWarpPerspective to warp each tile and then connect the quadrangles together to get the final output image.
Can cvRemap do this in one transformation? If yes, how do I construct the map (mapx, and mapy) from the coordinate that I have so to pass to the cvRemap function? I've searched the EmguCV documentation but could not find a cvRemap example.

I have no experience with emgu (actually I have a low opinion of it) but I can explain remap and warpPerspective. It may help you find the corresponding functions in emgu.
remap takes an input picture and a coordinate relocation map, and uses them to produce the output. The map stores info like this: the output pixel at (3, 5) takes its value from input position (1, 4). Output pixels whose source position is not defined in the map are filled with 0, or another value, depending on the extra parameters. Note that it also uses some interpolation for smooth results.
warpPerspective takes a geometric perspective transform, calculates internally the map for the transform, and then calls remap() to apply it to the input. Actually, many functions in OpenCV use remap, or can use it. warpAffine, lens correction, and others build their custom maps, then call remap to apply them.
The perspective transform is defined by a 3x3 matrix H. Each coordinate in the input image is shifted according to the H matrix (in homogeneous coordinates, so the actual destination pixel is (x'/w', y'/w')):
[ x' ]   [ h11 h12 h13 ]   [ x ]
[ y' ] = [ h21 h22 h23 ] * [ y ]
[ w' ]   [ h31 h32 h33 ]   [ 1 ]
warpPerspective applies the inverse transform for each point in the destination image, to find out where in the source image the pixel that should be moved there comes from, and stores that info in the map.
You can take that code and make a custom function in your app, but I do not know how easy it is to do in C#. C++ would have been a piece of cake.
Final note: I have used the term map, although there are two map parameters in the remap() function. And to make it more confusing, they can have completely different meanings.
The first valid combination is where mapx contains the coordinate transform of the x coordinate, in a width-by-height, one-channel image. mapy is the corresponding map for the y dimension. The coordinates are floating-point values, reflecting the fact that coordinate transforms from one image to another do not map exactly to integer values. For example, pixel (2, 5) may map to (3.456, 7.293).
The second possibility is to store the integer coordinates in mapx, for both x and y, in two channels, and keep an interpolation weights table in the second parameter, mapy. It is usually far easier to generate the first format; however, the second is faster to process. To understand the second format you should read the OpenCV sources, because it's not documented anywhere.
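As a concrete illustration of the first format, here is a minimal Python/OpenCV sketch (not EmguCV; the file name, output size and corner coordinates are made-up assumptions) that builds mapx and mapy for one perspective warp and feeds them to remap. For your tiled case you would fill the relevant region of the maps once per tile and then call remap a single time.

import cv2
import numpy as np

src = cv2.imread("input.png")                      # assumed test image
h_out, w_out = 480, 640                            # assumed output size

# Corner correspondences for one tile (made-up numbers).
src_corners = np.float32([[0, 0], [100, 0], [100, 100], [0, 100]])
dst_corners = np.float32([[10, 5], [120, 8], [115, 110], [5, 100]])

# H maps source -> destination; the maps need the inverse (destination -> source).
H = cv2.getPerspectiveTransform(src_corners, dst_corners)
H_inv = np.linalg.inv(H)

# For every destination pixel, compute which source coordinate it should sample.
xs, ys = np.meshgrid(np.arange(w_out), np.arange(h_out))
pts = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(np.float64)
src_pts = pts @ H_inv.T
src_pts /= src_pts[:, 2:3]                         # homogeneous divide

mapx = src_pts[:, 0].reshape(h_out, w_out).astype(np.float32)
mapy = src_pts[:, 1].reshape(h_out, w_out).astype(np.float32)

out = cv2.remap(src, mapx, mapy, cv2.INTER_LINEAR) # pixels mapping outside stay 0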

Related

What does the output array contain in calcOpticalFlowSF

I need to read the displacement of each pixel in each stage using the Optical flow - Simple Flow tracking algorithm.
I tried the code mentioned here:
How to make Simpleflow work
The code works fine. However, I don't know what the flow array contains because its values have a strange format. Does it contain the displacement or the new position of the pixel, or neither of them? And is there any way to read these values in order to track the pixel?
Thanks!
After reading their paper, SimpleFlow: A Non-iterative, Sublinear Optical Flow Algorithm, the answer is clear: the authors state that the resulting flow matrix contains the displacement of each pixel. That is, if pixel p1 in the image F_t is at position (x, y), then its position in F_t+1 is (x+u, y+v), where (u, v) are the values saved in the resulting flow matrix for that pixel of F_t.
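In code terms, assuming the flow is the usual H x W x 2 floating-point array produced by the OpenCV SimpleFlow call (one (u, v) pair per pixel, stored as flow[y, x]), reading it looks roughly like this (variable names are mine):

import numpy as np

def track_pixel(flow, x, y):
    """Return the position in F_t+1 of the pixel at (x, y) in F_t."""
    u, v = flow[y, x]            # displacement stored for that pixel
    return x + u, y + v          # its (sub-pixel) position in the next frame

# Example with a dummy zero-flow array of the assumed shape.
flow = np.zeros((480, 640, 2), dtype=np.float32)
print(track_pixel(flow, 100, 50))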

IDL Transforming a PNG Given Input Image and Location of Output Pixels

I'm using IDL to take an image of the outside surface of a cylinder and flatten it to obtain a roughly accurate non-curved picture. I have already done the math necessary to know, for an input pixel at location (x,y) [with (0,0) being the center of the image], where the output pixel (x',y') should be, but I cannot figure out how to apply this to build my new image. I am also aware that, because the flattened image is larger than the original image, some pixels on the final image will not have a corresponding input pixel (do they appear black? transparent?), but I'll deal with that when I get there.
Any ideas would be greatly appreciated.
Try the interpol routine. If you have a) an array of input coordinates, b) an array of input values, and c) an array of output coordinates you wish to interpolate to, interpol (not interpolate, which is similar but not the same) will do the magic. Here it is, copy-pasted from the IDL docs:
Result = INTERPOL( V, X, XOUT [, /LSQUADRATIC] [, /NAN] [, /QUADRATIC] [, /SPLINE] )
V would be your input values, X your input coordinates, and XOUT your output coordinates. They do not have to be on a regular grid, so nice for the kind of mapping you want.
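For what it's worth, the same one-dimensional idea expressed in NumPy terms (purely for illustration with made-up numbers; in IDL you would use the INTERPOL call above) looks like this:

import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0])        # input coordinates (assumed sorted)
v = np.array([10.0, 12.0, 9.0, 14.0])     # input values at those coordinates
xout = np.array([0.5, 1.25, 2.8])         # output coordinates to interpolate to

vout = np.interp(xout, x, v)              # linearly interpolated values at xout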

Should stereo projection be internally consistent?

I'm working on a problem where I have a calibrated stereo pair and am identifying stereo matches. I then project those matches using perspectiveTransform to give me (x, y, z) coordinates.
Later, I'm taking those coordinates and reprojecting them into my original, unrectified image using projectPoints, which takes my left camera's M and D parameters. I was surprised to find that, despite all of this happening within the same calibration, the points do not project onto the correct part of the image (they have about a 5 pixel offset, depending on where they are in the image). This offset seems to change with different calibrations.
My question is: should I expect this, or am I likely doing something wrong? It seems like the calibration ought to be internally consistent.
Here's a screenshot of just a single point being remapped (drawn with the two lines):
(ignore the little boxes, those are something else)
I was doing something slightly wrong. When reprojecting from 3D to 2D, I missed that stereoRectify returns R1, the output rectification rotation matrix. When calling projectPoints, I needed to pass the inverse of that matrix as the second parameter (rvec).
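For reference, a rough Python/OpenCV sketch of that fix; M1, D1, R1 and points_3d below are placeholders standing in for your own calibration/triangulation results, and projectPoints wants the rotation as a Rodrigues vector:

import cv2
import numpy as np

# Placeholders; in practice these come from stereoRectify and triangulation.
R1 = np.eye(3)                                          # rectification rotation
M1 = np.array([[700.0, 0, 320], [0, 700.0, 240], [0, 0, 1]])
D1 = np.zeros(5)                                        # distortion coefficients
points_3d = np.array([[0.1, -0.05, 2.0]], dtype=np.float64)

R_back = R1.T                              # inverse of a rotation is its transpose
rvec, _ = cv2.Rodrigues(R_back)            # convert to a rotation vector
tvec = np.zeros(3)

img_pts, _ = cv2.projectPoints(points_3d, rvec, tvec, M1, D1)
# img_pts: N x 1 x 2 pixel coordinates in the original (unrectified) left image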

Matlab Camera Calibration - Correct lens distortion

In the Computer Vision System Toolbox for Matlab there are three types of interpolation methods used for Correct lens distortion.
Interpolation method for the function to use on the input image. The interp input interpolation method can be the string, 'nearest', 'linear', or 'cubic'.
My question is: what is the difference between 'nearest', 'linear', and 'cubic'? And which one is implemented in the "Zhang" and "Heikkila, J, and O. Silven" methods?
I can't access the page at the link you wrote in your question (it asks for a username and password), so I assume your linked page has the same contents as the page http://www.mathworks.it/it/help/vision/ref/undistortimage.html, which I quote here:
J = undistortImage(I,cameraParameters,interp) removes lens distortion from the input image, I, and specifies the interpolation method for the function to use on the input image.
Input Arguments
I — Input image
cameraParameters — Object for storing camera parameters
interp — Interpolation method
'linear' (default) | 'nearest' | 'cubic'
Interpolation method for the function to use on the input image. The interp input interpolation method can be the string, 'nearest', 'linear', or 'cubic'.
Furthermore, I assume you are referring to these papers:
ZHANG, Zhengyou. A flexible new technique for camera calibration. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 2000, 22.11: 1330-1334.
HEIKKILA, Janne; SILVEN, Olli. A four-step camera calibration procedure with implicit image correction. In: Computer Vision and Pattern Recognition, 1997. Proceedings., 1997 IEEE Computer Society Conference on. IEEE, 1997. p. 1106-1112.
I have searched for the word "interpolation" in the two PDF documents (Zhang, and Heikkila and Silven) and I did not find any direct statement about the interpolation method they used.
To my knowledge, in general, a camera calibration method is concerned with how to estimate the intrinsic, extrinsic and lens distortion parameters (all these parameters are inside the input argument cameraParameters of Matlab's undistortImage function); the interpolation method is part of a different problem, i.e. the problem of "Geometric Image Transformations".
I quote from the OpenCV's page Geometric Image Transformation (I have slightly modified the original omitting some details and adding some definitions, I assume you are working with grey level image):
The functions in this section perform various geometrical transformations of 2D images. They do not change the image content but deform the pixel grid and map this deformed grid to the destination image. In fact, to avoid sampling artifacts, the mapping is done in the reverse order, from destination to the source. That is, for each pixel (x, y) of the destination image, the functions compute coordinates of the corresponding "donor" pixel in the source image and copy the pixel value:
dst(x,y) = src(f_x(x,y), f_y(x,y))
where
dst(x,y) is the grey value of the pixel located at row x and column y in the destination image
src(x,y) is the grey value of the pixel located at row x and column y in the source image
f_x is a function that maps the row x and the column y to a new row; it uses only the coordinates, not the grey level.
f_y is a function that maps the row x and the column y to a new column; it uses only the coordinates, not the grey level.
The actual implementations of the geometrical transformations, from the most generic remap() to the simplest and fastest resize(), need to solve two main problems with the above formula:
• Extrapolation of non-existing pixels. Similarly to the filtering functions described in the previous section, for some (x,y), either one of f_x(x,y) or f_y(x,y), or both of them, may fall outside of the image. In this case, an extrapolation method needs to be used. OpenCV provides the same selection of extrapolation methods as in the filtering functions. In addition, it provides the method BORDER_TRANSPARENT. This means that the corresponding pixels in the destination image will not be modified at all.
• Interpolation of pixel values. Usually f_x(x,y) and f_y(x,y) are floating-point numbers (the mapping can be an affine or perspective transformation, or radial lens distortion correction, and so on), so a pixel value at fractional coordinates needs to be retrieved. In the simplest case, the coordinates can be just rounded to the nearest integer coordinates and the corresponding pixel used. This is called nearest-neighbor interpolation. However, a better result can be achieved by using more sophisticated interpolation methods, where a polynomial function is fit into some neighborhood of the computed pixel (f_x(x,y), f_y(x,y)), and then the value of the polynomial at (f_x(x,y), f_y(x,y)) is taken as the interpolated pixel value. In OpenCV, you can choose between several interpolation methods. See resize() for details.
For a "soft" introduction see also for example Cambridge in colour - DIGITAL IMAGE INTERPOLATION.
So let's say you need the grey level of the pixel at x=20.2, y=14.7. Since x and y are numbers with a fractional part different from zero, you will need to "invent" (compute) the grey level in some way. In the simplest case ('nearest' interpolation) you just say that the grey level at (20.2, 14.7) is the grey level you retrieve at (20, 15); it is called "nearest" because 20 is the nearest integer value to 20.2 and 15 is the nearest integer value to 14.7.
In the (bi)'linear' interpolation you will compute the value at (20.2,14.7) with a combination of the grey levels of the four pixels at (20,14), (20,15), (21,14), (21,15); for the details on how to compute the combination see the Wikipedia page which has a numeric example.
The (bi)'cubic' interpolation considers the combination of sixteen pixels in order to compute the value at (20.2,14.7), see the Wikipedia page.
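To make the 'nearest' and (bi)'linear' cases concrete, here is a tiny NumPy sketch using the same numbers as above (the image is a made-up array, indexed as img[x, y] with row x and column y to match the convention used in this answer):

import numpy as np

img = np.arange(40 * 40, dtype=np.float64).reshape(40, 40)   # dummy grey-level image
x, y = 20.2, 14.7

# 'nearest': round each coordinate to the closest integer pixel.
nearest = img[int(round(x)), int(round(y))]                   # value at (20, 15)

# (bi)'linear': weighted combination of the four surrounding pixels.
x0, y0 = int(np.floor(x)), int(np.floor(y))                   # (20, 14)
dx, dy = x - x0, y - y0                                       # (0.2, 0.7)
linear = (img[x0,     y0    ] * (1 - dx) * (1 - dy) +
          img[x0,     y0 + 1] * (1 - dx) * dy +
          img[x0 + 1, y0    ] * dx       * (1 - dy) +
          img[x0 + 1, y0 + 1] * dx       * dy)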
I suggest you try all three methods, with the same input image, and see the differences in the output image.
Interpolation method is actually independent of the camera calibration. Any time you apply a geometric transformation to an image, such as rotation, re-sizing, or distortion compensation, the pixels in the new image will correspond to points between the pixels of the old image. So you have to interpolate their values somehow.
'nearest' means you simply use the value of the nearest pixel.
'linear' means you use bi-linear interpolation. The new pixel's value is a weighted sum of the values of the neighboring pixels in the input image, where the weights depend on the distances to those pixels (closer pixels contribute more).
'cubic' means you use a bi-cubic interpolation, which is more complicated than bi-linear, but may give you a smoother image.
A good description of these interpolation methods is given in the documentation for the interp2 function.
And finally, just to clarify, the undistortImage function is in the Computer Vision System Toolbox.

Deforming an image so that curved lines become straight lines

I have an image with free-form curved lines (actually lists of small line-segments) overlaid onto it, and I want to generate some kind of image-warp that will deform the image in such a way that these curves are deformed into horizontal straight lines.
I already have the coordinates of all the line-segment points stored separately so they don't have to be extracted from the image. What I'm looking for is an appropriate method of warping the image such that these lines are warped into straight ones.
thanks
You can use methods similar to those developed here:
http://www-ui.is.s.u-tokyo.ac.jp/~takeo/research/rigid/
What you do is define an MxN grid of control points which covers your source image.
You then need to determine how to modify each of your control points so that the final image will minimize some energy function (minimum curvature or something of this sort).
The final image is a linear warp determined by your control points (think of it as a 2D mesh whose texture is your source image and whose vertices' positions you're about to modify).
As long as your energy function can be expressed using linear equations, you can globally solve your problem (figuring out where to send each control point) using a linear equation solver.
You express each of your source points (those which lie on your curved lines) using bi-linear interpolation weights of their surrounding grid points, then you express your constraints on the target positions by writing equations for these points.
After solving these linear equations you end up with destination grid points, then you just render your 2D mesh with the new vertices' positions.
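As a very rough Python/NumPy sketch of that idea (everything here is a made-up assumption: the grid size, the curve points, the target line; the smoothness/rigidity energy from the referenced paper is replaced by a tiny regularization term, and only vertical displacements are solved for), the bilinear-weight equations and the linear solve could look like this:

import numpy as np

M, N = 10, 12                    # control-grid rows and columns (assumed)
cell = 32.0                      # grid spacing in pixels (assumed)

def bilinear_weights(x, y):
    """Indices and bilinear weights of the 4 grid nodes surrounding (x, y)."""
    i, j = int(y // cell), int(x // cell)
    dy, dx = y / cell - i, x / cell - j
    nodes = [(i, j), (i, j + 1), (i + 1, j), (i + 1, j + 1)]
    weights = [(1 - dx) * (1 - dy), dx * (1 - dy), (1 - dx) * dy, dx * dy]
    return nodes, weights

# Points sampled along one drawn curve, and the y of the straight line it
# should become (made-up numbers).
curve_pts = [(40.0, 70.0), (75.0, 68.0), (110.0, 72.0)]
target_y = 70.0

# Solve for a vertical displacement d of every grid node so that each curve
# point, expressed through its bilinear weights, lands on the target line.
A = np.zeros((len(curve_pts), M * N))
b = np.zeros(len(curve_pts))
for r, (x, y) in enumerate(curve_pts):
    nodes, weights = bilinear_weights(x, y)
    for (i, j), w in zip(nodes, weights):
        A[r, i * N + j] = w
    b[r] = target_y - y

# Tiny Tikhonov term stands in for the missing smoothness energy, so that
# unconstrained nodes simply keep zero displacement.
lam = 1e-3
d = np.linalg.solve(A.T @ A + lam * np.eye(M * N), A.T @ b)

# New y position of every grid node; rendering the warped image from this
# deformed mesh (e.g. as a textured 2D mesh) is a separate step.
rest_y = np.repeat(np.arange(M) * cell, N)
new_grid_y = (rest_y + d).reshape(M, N)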
You need to start out with a mapping formula that given an output coordinate will provide the corresponding coordinate from the input image. Depending on the distortion you're trying to correct for, this can get exceedingly complex; your question doesn't specify the problem in enough detail. For example, are the curves at the top of the image the same as the curves on the bottom and the same as those in the middle? Do horizontal distances compress based on the angle of the line? Let's assume the simplest case where the horizontal coordinate doesn't need any correction at all, and the vertical simply needs a constant correction based on the horizontal. Here x,y are the coordinates on the input image, x',y' are the coordinates on the output image, and f() is the difference between the drawn line segment and your ideal straight line.
x = x'
y = y' + f(x')
Now you simply go through all the pixels of your output image, calculate the corresponding point in the input image, and copy the pixel. The wrinkle here is that your formula is likely to give you points that lie between input pixels, such as y=4.37. In that case you'll need to interpolate to get an intermediate value from the input; there are many interpolation methods for images and I won't try to get into that here. The simplest would be "nearest neighbor", where you simply round the coordinate to the nearest integer.
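A small Python sketch of that output-driven loop, under the same simple model x = x', y = y' + f(x') and with nearest-neighbor rounding (the image and the offset function f below are placeholders):

import numpy as np

def straighten(src, f):
    """Build the output image by pulling each pixel from the input."""
    h, w = src.shape[:2]
    dst = np.zeros_like(src)
    for yp in range(h):                      # y' in the output image
        for xp in range(w):                  # x' in the output image
            x = xp                           # x = x'
            y = int(round(yp + f(xp)))       # y = y' + f(x'), rounded to nearest
            if 0 <= y < h:                   # points mapping outside stay 0 (black)
                dst[yp, xp] = src[y, x]
    return dst

# Example with a dummy image and a made-up offset function.
img = np.zeros((100, 200), dtype=np.uint8)
out = straighten(img, lambda xp: 5.0 * np.sin(xp / 30.0))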
