Fourier transform of an image in EmguCV - emgucv

Can anyone tell me if there is a built-in function in emgucv 2.3 for computing the Fourier transform of an image?
Thanks in advance

From my answer Fourier Transform + emgucv
The function you are after is CvInvoke.cvDFT. It is technically calling the OpenCV method, but it should be what you're after.
Here is the code that splits the imaginary and real parts returned by cvDFT:
Image<Gray, float> image = new Image<Gray, float>(open.FileName);
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size, Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(complexImage); // Initialize all elements to Zero
CvInvoke.cvSetImageCOI(complexImage, 1);
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0);
Matrix<float> dft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);
//The Real part of the Fourier Transform
Matrix<float> outReal = new Matrix<float>(image.Size);
//The imaginary part of the Fourier Transform
Matrix<float> outIm = new Matrix<float>(image.Size);
CvInvoke.cvSplit(dft, outReal, outIm, IntPtr.Zero, IntPtr.Zero);
//Show The Data
CvInvoke.cvShowImage("Real", outReal);
CvInvoke.cvShowImage("Imaginary ", outIm);
Cheers,
Chris

Related

JavaCV or OpenCV Image Rotation

My objective is to rotate an image by a certain angle (e.g. 30 degrees). One possible way of rotating by 90 degrees in OpenCV is given by tenta4 but unfortunately, it only performs 90-degree flips.
Another possible way is a method "SkewGrayImage" given in the JavaCV samples, which performs "small angle rotations" that appear to work for rotations of up to approximately 45 - 50 degrees but not for higher values.
So my issue is: is there a proper way/method in OpenCV or JavaCV to perform an angular rotation of images or objects?
Meta has explained how to compute a rotation matrix with respect to the center of the image and then to perform a rotation as follows:
Point2f src_center(src.cols / 2.0F, src.rows / 2.0F);
Mat rot_mat = getRotationMatrix2D(src_center, angle, 1.0); // rotate by 'angle' degrees about the center
Mat rotated_image;
warpAffine(src, rotated_image, rot_mat, src.size());
There is an operation called warp that can not only rotate the image but also apply other transformations to it.
Some useful links are here
https://docs.opencv.org/2.4.13.2/modules/stitching/doc/warpers.html
https://docs.opencv.org/3.1.0/db/d29/group__cudawarping.html
https://docs.opencv.org/2.4/modules/imgproc/doc/geometric_transformations.html
Hope it helps ;)
A more detailed answer for IplImage rotation is given by Martin. It is based on Mat variables, which can then be converted and returned as an IplImage as follows:
Mat source = imread(argv[1], CV_LOAD_IMAGE_COLOR);
Point2f src_center(source.cols / 2.0F, source.rows / 2.0F);
Mat rotation_matrix = getRotationMatrix2D(src_center, angle, 1.0);
Mat destinationMat;
warpAffine(source, destinationMat, rotation_matrix, source.size());
IplImage iplframe = IplImage(destinationMat);
Hope this helps! Worked for me with JavaCV.
Mat raw = ... // your raw mat
// Create your "new" Mat and the center of your Raw Mat
Mat result = new Mat(raw.size(), [your Image Type]); // my Img type was CV_8U
Point2f rawCenter = new Point2f(raw.cols() / 2.0F, raw.rows() / 2.0F);
// Scale and Rotation of new Mat
double scale = 1.0;
int rotation = -5;
// Rotation Matrix
Mat rotationMatrix = getRotationMatrix2D(rawCenter, rotation, scale);
// Rotate
warpAffine(raw, result, rotationMatrix, raw.size());

Homography matrix decomposition into rotation matrix and translation vector

I'm working on an augmented reality application for Android using OpenCV 2.4.4 and have a problem with homography decomposition.
As we know, the homography matrix is defined as H = A.[R t], where A is the intrinsic camera matrix, R is the rotation matrix and t is the translation vector.
I want to estimate the viewing direction of the camera from pictures, as well as the orientation of the camera in 3D space.
I can estimate the homography matrix with the OpenCV function findHomography, and I think it works!
Here is how I do it:
static Mat mFindHomography(MatOfKeyPoint keypoints1, MatOfKeyPoint keypoints2, MatOfDMatch matches){
List<Point> lp1 = new ArrayList<Point>(500);
List<Point> lp2 = new ArrayList<Point>(500);
KeyPoint[] k1 = keypoints1.toArray();
KeyPoint[] k2 = keypoints2.toArray();
List<DMatch> matchesList = matches.toList();
if (matchesList.size() < 4){
MatOfDMatch mat = new MatOfDMatch();
return mat;
}
// Add matches keypoints to new list to apply homography
for(DMatch match : matchesList){
Point kk1 = k1[match.queryIdx].pt;
Point kk2 = k2[match.trainIdx].pt;
lp1.add(kk1);
lp2.add(kk2);
}
MatOfPoint2f srcPoints = new MatOfPoint2f(lp1.toArray(new Point[0]));
MatOfPoint2f dstPoints = new MatOfPoint2f(lp2.toArray(new Point[0]));
Mat mask = new Mat();
Mat homography = Calib3d.findHomography(srcPoints, dstPoints, Calib3d.RANSAC, 10, mask); // Finds a perspective transformation between two planes. ---Calib3d.LMEDS Least-Median robust method
List<DMatch> matches_homo = new ArrayList<DMatch>();
int size = (int) mask.size().height;
for(int i = 0; i < size; i++){
if ( mask.get(i, 0)[0] == 1){
DMatch d = matchesList.get(i);
matches_homo.add(d);
}
}
MatOfDMatch mat = new MatOfDMatch();
mat.fromList(matches_homo);
matchesFilterdByRansac = (int) mat.size().height;
return homography;
}
After that, I want to decompose this homography matrix and compute Euler angles. As we know H = A.[R t], I multiply the homography matrix with the inverse of the camera intrinsic matrix: H.A^{-1} = [R t]. So I want to decompose [R t] into rotation and translation and compute Euler angles from the rotation matrix. But it didn't work. What is wrong here?!
if(!homography.empty()){ // estimate pose from homography
Mat intrinsics = Mat.zeros(3, 3, CvType.CV_32FC1); // camera intrinsic matrix
intrinsics.put(0, 0, 890);
intrinsics.put(0, 2, 400);
intrinsics.put(1, 1, 890);
intrinsics.put(1, 2, 240);
intrinsics.put(2, 2, 1);
// Inverse Matrix from Wolframalpha
double[] inverseIntrinsics = { 0.001020408, 0 , -0.408163265,
0, 0.0011235955, -0.26966292,
0, 0 , 1 };
// cross multiplication
double[] rotationTranslation = matrixMultiply3X3(homography, inverseIntrinsics);
Mat pose = Mat.eye(3, 4, CvType.CV_32FC1); // 3x4 matrix, the camera pose
float norm1 = (float) Core.norm(rotationTranslation.col(0));
float norm2 = (float) Core.norm(rotationTranslation.col(1));
float tnorm = (norm1 + norm2) / 2.0f; // Normalization value ---test: float tnorm = (float) h.get(2, 2)[0];// not worked
Mat normalizedTemp = new Mat();
Core.normalize(rotationTranslation.col(0), normalizedTemp);
normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
normalizedTemp.copyTo(pose.col(0)); // Normalize the rotation, and copies the column to pose
Core.normalize(rotationTranslation.col(1), normalizedTemp);
normalizedTemp.convertTo(normalizedTemp, CvType.CV_32FC1);
normalizedTemp.copyTo(pose.col(1));// Normalize the rotation and copies the column to pose
Mat p3 = pose.col(0).cross(pose.col(1)); // Computes the cross-product of p1 and p2
p3.copyTo(pose.col(2));// Third column is the crossproduct of columns one and two
double[] buffer = new double[3];
rotationTranslation.col(2).get(0, 0, buffer);
pose.put(0, 3, buffer[0] / tnorm); //vector t [R|t] is the last column of pose
pose.put(1, 3, buffer[1] / tnorm);
pose.put(2, 3, buffer[2] / tnorm);
float[] rotationMatrix = new float[9];
rotationMatrix = getArrayFromMat(pose);
float[] eulerOrientation = new float[3];
SensorManager.getOrientation(rotationMatrix, eulerOrientation);
// Convert radian to degree
double yaw = (double) (eulerOrientation[0] * (180 / Math.PI)); // * -57;
double pitch = (double) (eulerOrientation[1] * (180 / Math.PI));
double roll = (double) (eulerOrientation[2] * (180 / Math.PI));
}
Note that OpenCV 3.0 has a homography decomposition function, decomposeHomographyMat (here), but I'm using OpenCV 2.4.4 for Android! Is there a wrapper for it in Java?
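For reference, this is roughly what the OpenCV 3.x call looks like in C++ (a minimal sketch only; decomposeH is a hypothetical wrapper, and H and K are assumed to be the homography and the camera intrinsic matrix):
// OpenCV 3.x only -- not available in 2.4.4
#include <opencv2/calib3d.hpp>
#include <vector>
int decomposeH(const cv::Mat& H, const cv::Mat& K,
               std::vector<cv::Mat>& rotations,
               std::vector<cv::Mat>& translations,
               std::vector<cv::Mat>& normals)
{
    // Returns the number of candidate solutions (up to 4); each rotations[i]/translations[i]
    // is one possible [R|t] decomposition of H given the camera matrix K.
    return cv::decomposeHomographyMat(H, K, rotations, translations, normals);
}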
The second problem is decomposing the rotation matrix into Euler angles. Is there any problem with:
float[] eulerOrientation = new float[3];
SensorManager.getOrientation(rotationMatrix, eulerOrientation);
I used this link too, but the result was no better!
double pitch = Math.atan2(pose.get(2, 1)[0], pose.get(2, 2)[0]);
double roll = Math.atan2(-1*pose.get(2, 0)[0], Math.sqrt( Math.pow(pose.get(2, 1)[0], 2) + Math.pow(pose.get(2, 2)[0], 2)) );
double yaw = Math.atan2(pose.get(1, 0)[0], pose.get(0, 0)[0]);
Thanks a lot for any response
I hope this answer will help those looking for a solution today.
My answer uses C++ and OpenCV 2.4.9. I copied the decomposeHomographyMat function from OpenCV 3.0. After computing the homography, I use the copied function to decompose it. To filter the homography matrix and select the correct answer from the 4 decompositions, check my answer here.
To obtain Euler angles from the rotation matrix, you can refer to this. I am able to get good results with this method.
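For completeness, here is a minimal C++ sketch of the standard ZYX Euler-angle extraction from a 3x3 rotation matrix (the naming of yaw/pitch/roll follows one common convention and may need to be adapted to your axis convention; R is assumed to be CV_64F):
#include <opencv2/core/core.hpp>
#include <cmath>
void eulerFromRotation(const cv::Mat& R, double& yaw, double& pitch, double& roll)
{
    // ZYX (yaw-pitch-roll) extraction from a 3x3 rotation matrix.
    roll  = std::atan2(R.at<double>(2, 1), R.at<double>(2, 2));
    pitch = std::atan2(-R.at<double>(2, 0),
                       std::sqrt(R.at<double>(2, 1) * R.at<double>(2, 1) +
                                 R.at<double>(2, 2) * R.at<double>(2, 2)));
    yaw   = std::atan2(R.at<double>(1, 0), R.at<double>(0, 0));
}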

Filtering a Fourier image and then taking its inverse Fourier transform to get the image

// Fourier transform of Image<Bgr,byte> orig object.
// output is matrix<float> with 2 channels.
private Matrix<float> fourier()
{
Image<Gray, float> image = orig.Convert<Gray, float>();
IntPtr complexImage = CvInvoke.cvCreateImage(image.Size,Emgu.CV.CvEnum.IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(complexImage); // Initialize all elements to Zero
CvInvoke.cvSetImageCOI(complexImage, 1);
CvInvoke.cvCopy(image, complexImage, IntPtr.Zero);
CvInvoke.cvSetImageCOI(complexImage, 0);
Matrix<float> dft = new Matrix<float>(image.Rows, image.Cols, 2);
CvInvoke.cvDFT(complexImage, dft, Emgu.CV.CvEnum.CV_DXT.CV_DXT_FORWARD, 0);
//The Real part of the Fourier Transform
Matrix<float> outReal = new Matrix<float>(image.Size);
//The imaginary part of the Fourier Transform
Matrix<float> outIm = new Matrix<float>(image.Size);
CvInvoke.cvSplit(dft, outReal, outIm, IntPtr.Zero, IntPtr.Zero);
return dft;
}
// butterworth filter with Do frequency and order n.
// Filter is returned as matrix<float> with 2 channels.
private Matrix<float> make_butterworth(int Do, int n)
{
Matrix<float> ff = fourier();
Matrix<float> tmp = new Matrix<float>(ff.Rows, ff.Cols, 2);
Point center=new Point(tmp.Rows/2,tmp.Cols/2);
for (int i=0;i<orig.Rows;i++)
for (int j = 0; j < orig.Cols; j++)
{
int Duv= (int) (Math.Sqrt( Math.Pow(i-center.X,2) + Math.Pow(j-center.Y,2)));
tmp[i, j] = (float) (1 / (1 + Math.Pow((Duv / Do), 2 * n)));
}
return tmp;
}
// The click event triggers fourier() and make_butterworth(),
// takes Do and order n as input from the user,
// and applies the filter to the orig image.
private void lowPassToolStripMenuItem2_Click(object sender, EventArgs e)
{
dialog_input d1 = new dialog_input("Enter values of Do and order n seperated by space:\n");
d1.ShowDialog();
string[] s = d1.t.Split(new char[] { ' ', ',' });
int fc = Convert.ToInt32(s[0]);
int order = Convert.ToInt32(s[1]);
Matrix<float> filter= make_butterworth(fc, order); // 2 channels
Matrix<float> m = fourier(); // 2 channels
m._Mul(filter);
// filter * with fourier image.
CvInvoke.cvDFT(m,m,CV_DXT.CV_DXT_INVERSE, 0);
IntPtr cmplx = CvInvoke.cvCreateImage(m.Size, IPL_DEPTH.IPL_DEPTH_32F, 2);
CvInvoke.cvSetZero(cmplx);
CvInvoke.cvSetImageCOI(cmplx, 0);
CvInvoke.cvCopy(m, cmplx, IntPtr.Zero);
Bitmap bm = new Bitmap(m.Width, m.Height);
BitmapData bd = bm.LockBits(new Rectangle
(0, 0, bm.Width, bm.Height),
ImageLockMode.ReadWrite,
PixelFormat.Canonical);
bd.Scan0 = cmplx;
bm.UnlockBits(bd);
pictureBox2.Image = bm;
}
One thing: I am taking fourier() as 2 channels instead of only taking the real channel. I am not sure if I am wrong in this regard. That is also why I had to take the filter as 2 channels as well, where the 2 channels are used to represent the data of Gray and Alpha in both cases.
The problem occurs at the BitmapData object initialization due to the PixelFormat.Canonical parameter. The result of multiplying the Fourier matrix and the filter matrix is a Matrix<float>. All I want to do is take its IDFT and display the filtered image. I am not sure about the PixelFormat. Any help would be great.
Read this chapter: opencv DFT tutorial, C code DFT and opencv DFT python. It explains all you need to know about the DFT in OpenCV.
About the types:
1) The image is real.
2) DFT(image) results in a complex image.
3) The Butterworth filter is a one-channel matrix with the same size as the image.
4) To filter, multiply each channel of the DFT result by the Butterworth filter. Each channel must be multiplied separately because the real and imaginary parts of each pixel are allocated in separate channels as a result of the DFT (see: how filtering works, and the sketch below).
5) After filtering you will have a complex image.
6) Now you can apply the IDFT, which results in a real image. In OpenCV you may get a complex image as the result, but the second channel is entirely zeros, so you can discard it.
Look here too: opencv C++ DFT
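For illustration, here is a minimal C++ sketch of steps 2) through 6), assuming img is a single-channel CV_32F image and filter is a single-channel CV_32F Butterworth filter of the same size (the function name is just a placeholder):
#include <opencv2/core/core.hpp>
#include <vector>
cv::Mat filterInFrequencyDomain(const cv::Mat& img, const cv::Mat& filter)
{
    cv::Mat complexImg;
    cv::dft(img, complexImg, cv::DFT_COMPLEX_OUTPUT);    // 2) complex DFT of the real image
    std::vector<cv::Mat> channels;
    cv::split(complexImg, channels);                     // real and imaginary planes
    channels[0] = channels[0].mul(filter);               // 4) multiply each channel by the filter
    channels[1] = channels[1].mul(filter);
    cv::merge(channels, complexImg);                     // 5) the filtered complex image
    cv::Mat filtered;
    cv::idft(complexImg, filtered, cv::DFT_SCALE | cv::DFT_REAL_OUTPUT); // 6) back to a real image
    return filtered;
}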

Retrieving the original coordinates of a pixel taken from a warped Image

I have four corners extracted from a sourceImage:
src_vertices[0] = corners[upperLeft];
src_vertices[1] = corners[upperRight];
src_vertices[2] = corners[downLeft];
src_vertices[3] = corners[downRight];
These four corners are warped to destinationImage like that:
dst_vertices[0] = Point(0,0);
dst_vertices[1] = Point(width, 0);
dst_vertices[2] = Point(0, height);
dst_vertices[3] = Point(width, height);
Mat warpPerspectiveMatrix = getPerspectiveTransform(src_vertices, dst_vertices);
cv::Size size_d = Size(width, height);
cv::Mat destinationImage(height, width, CV_8UC3);
warpPerspective(sourceImage, destinationImage, warpPerspectiveMatrix, size_d, INTER_LINEAR, BORDER_CONSTANT);
Now my question is:
I have a point p(x,y) taken from the destinationImage. How can I retrieve the coordinates of this point in the original sourceImage?
In other words, I want to use warpPerspectiveMatrix to do the opposite work of getPerspectiveTransform.
You want the inverse perspective transform. If your original transform is S->S', you want the transform matrix S'->S
Mat InversewarpPerspectiveMatrix = getPerspectiveTransform(dst_vertices, src_vertices);
Then you make a small Mat of 2D point coordinates, PerspectiveCoordinates, containing the vector x,y.
Finally you want to call
perspectiveTransform(PerspectiveCoordinates, OriginalCoordinates, InversewarpPerspectiveMatrix);
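A minimal C++ sketch of that mapping, assuming src_vertices and dst_vertices are the Point2f arrays from the question (the helper name is a placeholder):
#include <opencv2/core/core.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <vector>
// Maps a point p from the warped destination image back to the original source image.
cv::Point2f mapBackToSource(const cv::Point2f& p,
                            const cv::Point2f src_vertices[4],
                            const cv::Point2f dst_vertices[4])
{
    // Inverse transform: destination -> source.
    cv::Mat inverseMatrix = cv::getPerspectiveTransform(dst_vertices, src_vertices);
    std::vector<cv::Point2f> in(1, p), out;
    cv::perspectiveTransform(in, out, inverseMatrix);    // applies the 3x3 homography to the point
    return out[0];
}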

OpenCV - Image Stitching

I am using the following code to stitch two input images. For an unknown reason the output result is crap!
It seems that the homography matrix is wrong (or is affected wrongly) because the transformed image looks like an "exploded star"!
I have commented the part that I guess is the source of the problem, but I cannot figure it out.
Any help or pointer is appreciated!
Have a nice day,
Ali
void Stitch2Image(IplImage *mImage1, IplImage *mImage2)
{
// Convert input images to gray
IplImage* gray1 = cvCreateImage(cvSize(mImage1->width, mImage1->height), 8, 1);
cvCvtColor(mImage1, gray1, CV_BGR2GRAY);
IplImage* gray2 = cvCreateImage(cvSize(mImage2->width, mImage2->height), 8, 1);
cvCvtColor(mImage2, gray2, CV_BGR2GRAY);
// Convert gray images to Mat
Mat img1(gray1);
Mat img2(gray2);
// Detect FAST keypoints and BRIEF features in the first image
FastFeatureDetector detector(50);
BriefDescriptorExtractor descriptorExtractor;
BruteForceMatcher<L1<uchar> > descriptorMatcher;
vector<KeyPoint> keypoints1;
detector.detect( img1, keypoints1 );
Mat descriptors1;
descriptorExtractor.compute( img1, keypoints1, descriptors1 );
/* Detect FAST keypoints and BRIEF features in the second image*/
vector<KeyPoint> keypoints2;
detector.detect( img1, keypoints2 );
Mat descriptors2;
descriptorExtractor.compute( img2, keypoints2, descriptors2 );
vector<DMatch> matches;
descriptorMatcher.match(descriptors1, descriptors2, matches);
if (matches.size()==0)
return;
vector<Point2f> points1, points2;
for(size_t q = 0; q < matches.size(); q++)
{
points1.push_back(keypoints1[matches[q].queryIdx].pt);
points2.push_back(keypoints2[matches[q].trainIdx].pt);
}
// Create the result image
result = cvCreateImage(cvSize(mImage2->width * 2, mImage2->height), 8, 3);
cvZero(result);
// Copy the second image in the result image
cvSetImageROI(result, cvRect(mImage2->width, 0, mImage2->width, mImage2->height));
cvCopy(mImage2, result);
cvResetImageROI(result);
// Create warp image
IplImage* warpImage = cvCloneImage(result);
cvZero(warpImage);
/************************** Is there anything wrong here!? *******************/
// Find homography matrix
Mat H = findHomography(Mat(points1), Mat(points2), 8, 3.0);
CvMat HH = H; // Is this line converted correctly?
// Transform warp image
cvWarpPerspective(mImage1, warpImage, &HH);
// Blend
blend(result, warpImage);
/*******************************************************************************/
cvReleaseImage(&gray1);
cvReleaseImage(&gray2);
cvReleaseImage(&warpImage);
}
This is what I would suggest you try, in this order:
1) Use CV_RANSAC option for homography. Refer http://opencv.willowgarage.com/documentation/cpp/calib3d_camera_calibration_and_3d_reconstruction.html
2) Try other descriptors, particularly SIFT or SURF which ship with OpenCV. For some images FAST or BRIEF descriptors are not discriminating enough. EDIT (Aug '12): The ORB descriptors, which are based on BRIEF, are quite good and fast!
3) Try to look at the Homography matrix (step through in debug mode or print it) and see if it is consistent.
4) If above does not give you a clue, try to look at the matches that are formed. Is it matching one point in one image with a number of points in the other image? If so the problem again should be with the descriptors or the detector.
My hunch is that it is the descriptors (so 1) or 2) should fix it).
Also switch to Hamming distance instead of L1 distance in BruteForceMatcher. BRIEF descriptors are supposed to be compared using Hamming distance.
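As an illustration of points 1) and 2) together with the Hamming-distance suggestion, here is a minimal OpenCV 2.4-style C++ sketch (img1 and img2 are assumed to be the grayscale Mats from the question; everything else is a placeholder):
#include <opencv2/features2d/features2d.hpp>
#include <opencv2/calib3d/calib3d.hpp>
#include <vector>
// Matches ORB features between two grayscale images and estimates a RANSAC homography.
cv::Mat estimateHomographyORB(const cv::Mat& img1, const cv::Mat& img2)
{
    cv::ORB orb(500);                                  // binary, BRIEF-based detector + descriptor
    std::vector<cv::KeyPoint> keypoints1, keypoints2;
    cv::Mat descriptors1, descriptors2;
    orb(img1, cv::Mat(), keypoints1, descriptors1);
    orb(img2, cv::Mat(), keypoints2, descriptors2);
    cv::BFMatcher matcher(cv::NORM_HAMMING);           // Hamming distance for binary descriptors
    std::vector<cv::DMatch> matches;
    matcher.match(descriptors1, descriptors2, matches);
    if (matches.size() < 4)
        return cv::Mat();                              // not enough matches for a homography
    std::vector<cv::Point2f> points1, points2;
    for (size_t q = 0; q < matches.size(); q++)
    {
        points1.push_back(keypoints1[matches[q].queryIdx].pt);
        points2.push_back(keypoints2[matches[q].trainIdx].pt);
    }
    // CV_RANSAC rejects outlier matches; 3.0 is the reprojection threshold in pixels.
    return cv::findHomography(points1, points2, CV_RANSAC, 3.0);
}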
Your homography might be calculated based on wrong matches and thus represent a bad alignment.
I suggest passing the matrix through an additional check of the interdependency between its rows.
You can use the following code:
bool cvExtCheckTransformValid(const Mat& T){
// Check the shape of the matrix
if (T.empty())
return false;
if (T.rows != 3)
return false;
if (T.cols != 3)
return false;
// Check for linear dependency.
Mat tmp;
T.row(0).copyTo(tmp);
tmp /= T.row(1);
Scalar mean;
Scalar stddev;
meanStdDev(tmp,mean,stddev);
double X = abs(stddev[0]/mean[0]);
printf("std of H:%g\n",X);
if (X < 0.8)
return false;
return true;
}
