Armadillo ifft dimension - signal-processing

Does Armadillo's ifft() support a "dim" argument like Octave/MATLAB's?

The Armadillo documentation reads "If given a matrix, the transform is done on each column vector of the matrix". So, no, it does not take a dim argument, but Armadillo's ifft() behaves like Octave/MATLAB's with dim = 1 (columns).
If you instead want to apply ifft() along the rows, you can simply transpose your input matrix:
cx_mat X; // This is the (complex) matrix you want to call ifft() on.
// Use .st() (plain transpose) rather than .t() (conjugate transpose) so the
// complex values are not conjugated. The second transpose is optional, but it
// gives Y the same dimensions as X.
cx_mat Y = ifft(X.st()).st();

Related

pytorch affine_grid: what is the theta input?

When trying to use torch.nn.functional.affine_grid, it requires a theta affine matrix of size (N x 3 x 4) according to the documentation. I thought a general affine matrix is (N x 4 x 4). What is the expected affine matrix format in PyTorch?
An example of 3D rotation affine input would be ideal. Appreciate your help.
The dimensions you mention apply to the case of 3D inputs, i.e. when you wish to apply 3D geometric transforms to an input tensor x of shape b x c x d x h x w.
A transformation to points in 3D (represented as 4-vector in homogeneous coordinates as (x, y, z, 1)) should be, in the general case, a 4x4 matrix as you noted.
However, since we restrict ourselves to homogeneous coordinates, i.e. the fourth coordinate must be 1, the fourth row of the matrix must be (0, 0, 0, 1).
Therefore, there's no need to explicitly code this last row.
To conclude, a 3D transformation composed of a 3x3 rotation R and 3d translation t is simply the 3x4 matrix:
theta = [R t]
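For illustration, here is a sketch of my own (not taken from the documentation) for a rotation by an angle a about the z-axis combined with a translation (tx, ty, tz); note that affine_grid works in normalized coordinates, so R and t must be expressed in that convention:
theta = [R t] = [cos(a), -sin(a), 0, tx;
sin(a), cos(a), 0, ty;
0, 0, 1, tz]
Stacking N such 3x4 matrices gives the (N x 3 x 4) tensor that affine_grid expects.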

How do I compute an averaged profile in OpenCV

By an averaged profile I mean the 1D signal obtained by averaging along the rows or columns of a rectangular image:
Px := Σ(y=1,H) I(x, y) / H
and
Py := Σ(x=1,W) I(x, y) / W
I couldn't find this in the API, perhaps because I'm not using the appropriate terminology.
I don't want a box filter, just one value per row/column. The sum instead of the average is equally good.
You can use the reduce function: http://docs.opencv.org/2.4/modules/core/doc/operations_on_arrays.html#reduce
using namespace cv;
// Average over the rows: the result is a single row (one value per column)
Mat mean_over_rows;
reduce(input_mat, mean_over_rows, 0, CV_REDUCE_AVG);
// Average over the columns: the result is a single column (one value per row)
Mat mean_over_cols;
reduce(input_mat, mean_over_cols, 1, CV_REDUCE_AVG);
You can use the CV_REDUCE_SUM flag if you just want the summed projection along the desired axes.

Camera motion from corresponding images

I'm trying to calculate a new camera position based on the motion between corresponding images.
The images conform to the pinhole camera model.
Unfortunately I don't get useful results, so I'll describe my procedure and hope that somebody can help me.
I detect features with SIFT, match them with OpenCV's FlannBasedMatcher, and calculate the fundamental matrix with OpenCV's findFundamentalMat (method RANSAC).
Then I calculate the essential matrix using the camera intrinsic matrix K:
Mat E = K.t() * F * K;
I decompose the essential matrix into rotation and translation via singular value decomposition:
SVD decomp = SVD(E);
Matx33d W(0, -1, 0,
          1,  0, 0,
          0,  0, 1);
Matx33d Wt(0,  1, 0,
          -1,  0, 0,
           0,  0, 1);
Mat R1 = decomp.u * Mat(W)  * decomp.vt;
Mat R2 = decomp.u * Mat(Wt) * decomp.vt;
Mat t1 =  decomp.u.col(2);  // u3
Mat t2 = -decomp.u.col(2);  // u3
Then I try to find the correct solution by triangulation. (This part is from http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/ so I think it should work correctly.)
The new position is then calculated with:
new_pos = old_pos + -R.t()*t;
where new_pos & old_pos are vectors (3x1), R the rotation matrix (3x3) and t the translation vector (3x1).
Unfortunately I get no useful results, so maybe someone has an idea of what could be wrong.
Here are some results (just in case someone can confirm that any of them is definitely wrong):
F = [8.093827077399547e-07, 1.102681999632987e-06, -0.0007939604310854831;
1.29246107737264e-06, 1.492629957878578e-06, -0.001211264339006535;
-0.001052930954975217, -0.001278667878010564, 1]
K = [150, 0, 300;
0, 150, 400;
0, 0, 1]
E = [0.01821111092414898, 0.02481034499174221, -0.01651092283654529;
0.02908037424088439, 0.03358417405226801, -0.03397110489649674;
-0.04396975675562629, -0.05262169424538553, 0.04904210357279387]
t = [0.2970648246214448; 0.7352053067682792; 0.6092828956013705]
R = [0.2048034356172475, 0.4709818957303019, -0.858039396912323;
-0.8690270040802598, -0.3158728880490416, -0.3808101689488421;
-0.4503860776474556, 0.8236506374002566, 0.3446041331317597]
First of all, you should check whether
x'^T * F * x = 0
holds for your point correspondences x' and x. Of course, this should only be the case for the inliers of the fundamental matrix estimation with RANSAC.
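A minimal sketch of that check (pts1, pts2 and the inlier mask returned by findFundamentalMat are placeholder names):
for (size_t i = 0; i < pts1.size(); ++i)
{
    if (!mask.at<uchar>((int)i)) continue;   // skip RANSAC outliers
    Mat x1 = (Mat_<double>(3, 1) << pts1[i].x, pts1[i].y, 1.0);
    Mat x2 = (Mat_<double>(3, 1) << pts2[i].x, pts2[i].y, 1.0);
    double residual = Mat(x2.t() * F * x1).at<double>(0, 0);
    // residual should be close to zero for a good correspondence
}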
Thereafter, you have to transform your point correspondences to normalized image coordinates (NCC) like this
xn = inv(K) * x
xn' = inv(K') * x'
where K' is the intrinsic camera matrix of the second image and x' are the points of the second image. I think in your case it is K = K'.
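For completeness, a sketch of this normalization (K is assumed to be a CV_64F Mat and pts1 holds the matched pixel coordinates of the first image; both names are placeholders):
Mat Kinv = K.inv();
std::vector<Point2f> pts1n;
for (const Point2f &p : pts1)
{
    Mat x  = (Mat_<double>(3, 1) << p.x, p.y, 1.0);
    Mat xn = Kinv * x;   // xn = inv(K) * x
    pts1n.push_back(Point2f(xn.at<double>(0) / xn.at<double>(2),
                            xn.at<double>(1) / xn.at<double>(2)));
}
// undistortPoints(pts1, pts1n, K, noArray()) gives the same result and can also
// remove lens distortion if distortion coefficients are supplied.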
With these NCCs you can decompose your essential matrix as you described. You triangulate the normalized camera coordinates and check the depth of the triangulated points. But be careful: in the literature it is said that one point is sufficient to get the correct rotation and translation. From my experience you should check a few points, since a single point can be an outlier even after RANSAC.
Before you decompose the essential matrix make sure that E=U*diag(1,1,0)*Vt. This condition is required to get correct results for the four possible choices of the projection matrix.
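A sketch of that projection onto a valid essential matrix, using OpenCV's SVD class (variable names as above):
SVD svd(E);
Mat D = (Mat_<double>(3, 3) << 1, 0, 0,
                               0, 1, 0,
                               0, 0, 0);
Mat E_fixed = svd.u * D * svd.vt;   // singular values forced to (1, 1, 0)
// decompose E_fixed instead of E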
When you've got the correct rotation and translation you can triangulate all your point correspondences (the inliers of the fundamental matrix estimation with RANSAC). Then, you should compute the reprojection error. Firstly, you compute the reprojected position like this
xp = K * P * X
xp' = K' * P' * X
where X is the computed (homogeneous) 3D position. P and P' are the 3x4 projection matrices. The projection matrix P of the first camera is normally taken as [I, 0] (identity rotation, zero translation). P' = [R, t] is given by the rotation matrix in the first three columns and rows and the translation in the fourth column, so that P' is a 3x4 matrix. This only works if you transform your 3D position to homogeneous coordinates, i.e. 4x1 vectors instead of 3x1. Then, xp and xp' are also homogeneous coordinates representing your (reprojected) 2D positions of your corresponding points.
I think the
new_pos = old_pos + -R.t()*t;
is incorrect since, firstly, you only translate old_pos without rotating it and, secondly, you translate it with the wrong vector. The correct way is given below.
So, after you computed the reprojected points you can calculate the reprojection error. Since you are working with homogeneous coordinates you have to normalize them (xp = xp / xp(2), divide by last coordinate). This is given by
error = (x(0)-xp(0))^2 + (x(1)-xp(1))^2
If the error is large, e.g. on the order of 10^2, your intrinsic camera calibration or your rotation/translation is incorrect (perhaps both). Depending on your coordinate system you can try to invert your projection matrices. To do that you first need to extend them to homogeneous form, since you cannot invert a 3x4 matrix (without the pseudo-inverse): add the fourth row [0 0 0 1], compute the inverse, and remove the fourth row again.
There is one more thing with reprojection error. In general, the reprojection error is the squared distance between your original point correspondence (in each image) and the reprojected position. You can take the square root to get the Euclidean distance between both points.
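A sketch of the error computation for a single correspondence (X is the homogeneous 4x1 triangulated point, P1/P2 the 3x4 projection matrices, x1/x2 the measured pixel positions as Point2f; all names are placeholders):
Mat xp1 = K * P1 * X;   // 3x1 homogeneous reprojection in image 1
Mat xp2 = K * P2 * X;   // 3x1 homogeneous reprojection in image 2
double u1 = xp1.at<double>(0) / xp1.at<double>(2);   // normalize by the last coordinate
double v1 = xp1.at<double>(1) / xp1.at<double>(2);
double u2 = xp2.at<double>(0) / xp2.at<double>(2);
double v2 = xp2.at<double>(1) / xp2.at<double>(2);
double err1 = (x1.x - u1) * (x1.x - u1) + (x1.y - v1) * (x1.y - v1);   // squared error, image 1
double err2 = (x2.x - u2) * (x2.x - u2) + (x2.y - v2) * (x2.y - v2);   // squared error, image 2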
To update your camera position, you have to update the translation first, then update the rotation matrix.
t_ref += lambda * (R_ref * t);
R_ref = R * R_ref;
where t_ref and R_ref are your camera state, R and t are the newly calculated camera rotation and translation, and lambda is the scale factor.

OpenCV: Efficient Difference-of-Gaussian

I am trying to implement difference of Gaussians (DoG) for a specific case of edge detection. As the name of the algorithm suggests, it is actually fairly straightforward:
Mat g1, g2, result;
Mat img = imread("test.png", CV_LOAD_IMAGE_COLOR);
GaussianBlur(img, g1, Size(1,1), 0);
GaussianBlur(img, g2, Size(3,3), 0);
result = g1 - g2;
However, I have the feeling that this can be done more efficiently. Can it perhaps be done in fewer passes over the data?
The question here has taught me about separable filters, but I'm too much of an image processing newbie to understand how to apply them in this case.
Can anyone give me some pointers on how one could optimise this?
Separable filters work in the same way as a normal Gaussian filter, but they are faster, especially for larger kernel sizes. The filter kernel can be formed analytically and separated into two one-dimensional vectors, one horizontal and one vertical.
For example, consider the filter
1 2 1
2 4 2
1 2 1
This filter can be separated into a horizontal vector H = [1 2 1] and a vertical vector V = [1 2 1]^T (their outer product reproduces the 2D kernel). The image is first convolved with H along the rows, and that intermediate result is then convolved with V along the columns; the final output is the same as the full 2D Gaussian blur. Below is a function that performs a separable Gaussian blur.
Mat sepConv(Mat input, int radius)
{
    Mat dst, dst2;
    int ksize = 2 * radius + 1;
    double sigma = radius / 2.575;

    // 1D Gaussian kernel (a ksize x 1 column vector)
    Mat gau = getGaussianKernel(ksize, sigma, CV_32FC1);

    // filter along one direction, then transpose and filter again for the other
    filter2D(input, dst2, -1, gau);
    filter2D(dst2.t(), dst, -1, gau);

    // transpose back so the output has the same orientation as the input
    return dst.t();
}
One more way to speed up the Gaussian blur is to use the FFT. FFT-based convolution is much faster than the separable-kernel method once the kernel (and image) get large.
A quick Google search provided me with the following function:
Mat Conv2ByFFT(Mat A, Mat B)
{
    Mat C;
    // allocate the output array: the size of a "valid" convolution
    C.create(abs(A.rows - B.rows) + 1, abs(A.cols - B.cols) + 1, A.type());

    // compute the size of the DFT transform
    Size dftSize;
    dftSize.width  = getOptimalDFTSize(A.cols + B.cols - 1);
    dftSize.height = getOptimalDFTSize(A.rows + B.rows - 1);

    // allocate temporary buffers and initialize them with 0's
    Mat tempA(dftSize, A.type(), Scalar::all(0));
    Mat tempB(dftSize, B.type(), Scalar::all(0));

    // copy A and B to the top-left corners of tempA and tempB, respectively
    Mat roiA(tempA, Rect(0, 0, A.cols, A.rows));
    A.copyTo(roiA);
    Mat roiB(tempB, Rect(0, 0, B.cols, B.rows));
    B.copyTo(roiB);

    // now transform the padded A & B in-place;
    // use the "nonzeroRows" hint for faster processing
    dft(tempA, tempA, 0, A.rows);
    dft(tempB, tempB, 0, B.rows);

    // multiply the spectrums;
    // the function handles packed spectrum representations well
    mulSpectrums(tempA, tempB, tempA, 0);

    // transform the product back from the frequency domain.
    // Even though all the result rows will be non-zero,
    // we only need the first C.rows of them, and thus we
    // pass nonzeroRows == C.rows
    dft(tempA, tempA, DFT_INVERSE + DFT_SCALE, C.rows);

    // now copy the result back to C;
    // all the temporary buffers are deallocated automatically
    tempA(Rect(0, 0, C.cols, C.rows)).copyTo(C);
    return C;
}
Hope this helps. :)
I know this post is old, but the question is interesting and may interest future readers. As far as I know, a DoG filter is not separable, so there are two solutions left:
1) compute both convolutions by calling the function GaussianBlur() twice then subtract the two images
2) Make a kernel by computing the difference of two gaussian kernels then convolve it with the image.
About which solution is faster:
Solution 2 seems faster at first sight because it convolves the image only once.
But it does not involve a separable filter. The first solution, on the other hand, involves two separable filters and may end up being faster. (I do not know how the OpenCV function GaussianBlur() is optimised and whether it uses separable filters or not, but it is likely.)
However, if one uses an FFT technique to convolve, the second solution is surely faster.
If anyone has any advice to add or wishes to correct me, please do.
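A minimal sketch of solution 2 (kernel sizes and sigmas are illustrative only; img is the input image). Because the difference kernel is not separable, the explicit 2D kernel is applied with filter2D in a single pass:
Mat g1 = getGaussianKernel(9, 1.0, CV_32F);   // 9x1 Gaussian kernel, sigma = 1.0
Mat g2 = getGaussianKernel(9, 2.0, CV_32F);   // 9x1 Gaussian kernel, sigma = 2.0
Mat dog = g1 * g1.t() - g2 * g2.t();          // explicit 9x9 difference-of-Gaussians kernel
Mat result;
filter2D(img, result, CV_32F, dog);           // one pass over the image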

Two Dimensional FFTW Help

I'm currently trying to compute the fft of an image via fftw_plan_dft_2d.
To use this function, I'm linearizing the image data into the in array and calling the function mentioned above (and detailed below):
fftw_plan fftw_plan_dft_2d(int n0, int n1,
                           fftw_complex *in, fftw_complex *out,
                           int sign, unsigned flags);
The function fills a complex output array, out, whose size equals the number of pixels in the original image.
Do you know if this is the proper way of computing the 2D FFT of an image? If so, what does the data within out represent? I.e., where are the high and low frequency values in the array?
Thanks,
djs22
A 2D FFT is equivalent to applying a 1D FFT to each row of the image in one pass, followed by 1D FFTs on all the columns of the output from the first pass.
The output of a 2D FFT is just like the output of a 1D FFT, except that you have complex values indexed by two spatial-frequency dimensions (x and y) rather than just one. Spatial frequency increases with the x and y index as expected.
There's a section in the FFTW manual (here) which covers the organisation of the real-to-complex 2D FFT output data, assuming that's what you're using.
It is.
Try to compute 2 plans:
plan1 = fftw_plan_dft_2d(image->rows, image->cols, in, fft, FFTW_FORWARD, FFTW_ESTIMATE);
plan2 = fftw_plan_dft_2d(image->rows, image->cols, fft, ifft, FFTW_BACKWARD, FFTW_ESTIMATE);
You'll obtain the original data in ifft, scaled by rows*cols, since FFTW's transforms are unnormalized.
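A minimal, self-contained sketch of that round trip (function and buffer names are placeholders):
#include <fftw3.h>

void fft_roundtrip(int rows, int cols, const double *image /* row-major, rows*cols values */)
{
    fftw_complex *in   = fftw_alloc_complex(rows * cols);
    fftw_complex *fft  = fftw_alloc_complex(rows * cols);
    fftw_complex *ifft = fftw_alloc_complex(rows * cols);

    fftw_plan fwd = fftw_plan_dft_2d(rows, cols, in,  fft,  FFTW_FORWARD,  FFTW_ESTIMATE);
    fftw_plan bwd = fftw_plan_dft_2d(rows, cols, fft, ifft, FFTW_BACKWARD, FFTW_ESTIMATE);

    // linearize the image into the complex input buffer (imaginary part = 0)
    for (int i = 0; i < rows * cols; ++i) { in[i][0] = image[i]; in[i][1] = 0.0; }

    fftw_execute(fwd);   // fft now holds the 2D spectrum
    fftw_execute(bwd);   // ifft holds the original data scaled by rows*cols

    // undo the implicit scaling of the forward + backward pair
    for (int i = 0; i < rows * cols; ++i) { ifft[i][0] /= rows * cols; ifft[i][1] /= rows * cols; }

    fftw_destroy_plan(fwd);
    fftw_destroy_plan(bwd);
    fftw_free(in); fftw_free(fft); fftw_free(ifft);
}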
Hope it helps :)
