Mat fourierTransform(1,final.size()-1, CV_64FC1);
//Mat fourierTransform;
Mat ans(1, final.size()-1, CV_64FC1);
cv::dft(signal, fourierTransform,cv::DFT_COMPLEX_OUTPUT);
I am following this approach, but I am stuck on how to obtain the complex part of the DFT. Can anybody tell me how to obtain it? Thanks in advance.
Here is a tutorial on how you can use OpenCV's DFT function:
Discrete Fourier Transform
Essentially, it stores the output as a two-channel matrix: the first channel holds the real values and the second channel holds the imaginary values.
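For example, a minimal sketch of pulling the two channels apart (assuming signal is the 1-channel CV_64F input from the question):
cv::Mat fourierTransform;
cv::dft(signal, fourierTransform, cv::DFT_COMPLEX_OUTPUT);

// DFT_COMPLEX_OUTPUT yields a 2-channel matrix: channel 0 = real, channel 1 = imaginary
std::vector<cv::Mat> planes;
cv::split(fourierTransform, planes);
cv::Mat realPart = planes[0];
cv::Mat imagPart = planes[1];

// Magnitude and phase can then be computed per element, if needed
cv::Mat mag, phase;
cv::magnitude(planes[0], planes[1], mag);
cv::phase(planes[0], planes[1], phase);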
I am using image processing with OpenCV and C++ to check bottles for shape defects. I am very new to OpenCV, so it would be a great help if someone could point me in the right direction. How can I detect defects in the shape of a bottle using OpenCV and C++? I give bottle images as input to the system; when a misshaped bottle is input, the system should detect it.
Defective bottle image:
Good bottle image:
Basic approach:
You can extract the edges and then register the two images. OpenCV provides a couple of filters for this.
Perfect approach:
You could use a statistical shape modeling algorithm; I am not sure whether OpenCV includes one.
Take the region of interest (ROI) and find contours.
Find the convex hull.
Find the convexity defects.
Do this for both the reference ROI and the defective ROI, then compare.
The comparison would not be straightforward, since you may have to establish a correspondence between the regions of the two contours (perhaps use a grid and treat its cells as the ROIs, so there are many ROIs per image, to resolve the correspondence problem). A rough sketch of the contour/convexity-defect extraction is given below the images.
ROI in red:
Grid based approach (multiple ROIs):
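Here is that rough C++ sketch of the contour/hull/defect steps; the variable roi, and the thresholding that produces it, are assumptions for illustration, not part of the question:
// roi: a binarized, single-channel (CV_8UC1) region of interest, assumed prepared already
std::vector<std::vector<cv::Point> > contours;
cv::findContours(roi, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

for (size_t i = 0; i < contours.size(); ++i)
{
    // convexityDefects needs the hull as indices into the contour
    std::vector<int> hull;
    cv::convexHull(contours[i], hull, false, false);

    // Each defect entry: [start index, end index, farthest point index, fixed-point depth]
    std::vector<cv::Vec4i> defects;
    if (hull.size() > 3)
        cv::convexityDefects(contours[i], hull, defects);

    // Compare defect count/depth between the reference ROI and the inspected ROI here
}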
You could try the opencv template matching function. From the opencv documentation:
Template matching is a technique for finding areas of an image that match (are similar) to a template image (patch).
It implements a sliding window scheme, by sliding the template image that we want to find over the source image and calculating a similarity metric that is stored in a result matrix.
In the result matrix, the darkest or brightest location (depending on the matching method used) indicates the best match, and it marks the position of the template within the source image. This location can be found using the minMaxLoc function on the result matrix.
The signature of the matchTemplate method is as follows:
matchTemplate( image, template, result, match_method ); //Matches the template
normalize( result, result, 0, 1, NORM_MINMAX, -1, Mat() ); //Normalizes the result
double minVal; double maxVal; Point minLoc; Point maxLoc; Point matchLoc;
minMaxLoc( result, &minVal, &maxVal, &minLoc, &maxLoc, Mat() ); //Finds the minimum and maximum values in the result
OpenCV provides several different algorithms for the matching, such as finding the normalized squared difference of intensities (CV_TM_SQDIFF_NORMED). For the result matrix obtained using CV_TM_SQDIFF_NORMED, the lowest values correspond to the best matches. For other methods, such as normalized cross-correlation (CV_TM_CCORR_NORMED), the highest values correspond to the best matches.
In your case, you could threshold the result matrix with a tolerance value for deviation from the template image, and if no location survives the thresholding, you could classify the bottle as defective. You might have to experiment a little to find an appropriate threshold. If you want an exact match, you have to look for 0 or 1 (depending on the method) in the result matrix.
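For instance, a sketch of that thresholding idea with CV_TM_SQDIFF_NORMED; the tolerance value here is purely illustrative and would need tuning:
// result: CV_32FC1 output of matchTemplate with CV_TM_SQDIFF_NORMED (lower = better match)
double tolerance = 0.05; // illustrative; tune experimentally

cv::Mat passed;
cv::threshold(result, passed, tolerance, 1.0, cv::THRESH_BINARY_INV); // 1.0 where diff <= tolerance
bool defective = (cv::countNonZero(passed) == 0); // no location matches the template well enough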
You can find more on opencv template matching here.
Hope this helps.
I have a vector of Point2f which has color space CV_8UC4, and I need to convert it to CV_64F. Is the following code correct?
points1.convertTo(points1, CV_64F);
More details:
I am trying to use this function to calculate the essential matrix (rotation/translation) through the 5-point algorithm, instead of using the findFundamentalMat included in OpenCV, which is based on the 8-point algorithm:
https://github.com/prclibo/relative-pose-estimation/blob/master/five-point-nister/five-point.cpp#L69
As you can see, it first converts the image to CV_64F. My input image is a CV_8UC4 (BGRA) image. When I tested the function, both BGRA and greyscale images produced mathematically valid matrices, but if I pass a greyscale image instead of a color one, it takes much longer to compute, which makes me think I'm doing something incorrectly in one of the two cases.
I have read that when the change in color space is not linear (which I suppose is the case when you go from 4 channels to 1, as here), you should normalize the intensity values. Is that correct? Which input should I give to this function?
Another note, the function is called like this in my code:
vector<Point2f> imgpts1, imgpts2;
for (vector<DMatch>::const_iterator it = matches.begin(); it != matches.end(); ++it)
{
imgpts1.push_back(firstViewFeatures.second[it->queryIdx].pt);
imgpts2.push_back(secondViewFeatures.second[it->trainIdx].pt);
}
Mat mask;
Mat E = findEssentialMat(imgpts1, imgpts2, [camera focal], [camera principal_point], CV_RANSAC, 0.999, 1, mask);
The fact that I'm not passing a Mat but a vector of Point2f instead seems to cause no problems, as the code compiles and executes properly.
Should I store the matches in a Mat instead?
I am not sure what you mean by a vector of Point2f having a color space, but if you want to convert a vector of points into a vector of points of another type, you can use any standard C++/STL function like copy(), assign() or insert(). For example (note that copy() needs a destination that is already the right size):
doublePoints.resize(floatPoints.size()); // copy() writes through existing elements
copy(floatPoints.begin(), floatPoints.end(), doublePoints.begin());
or
doublePoints.insert(doublePoints.end(), floatPoints.begin(), floatPoints.end());
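For reference, a self-contained version of both variants; the container names are just illustrative:
#include <algorithm>
#include <iterator>
#include <vector>
#include <opencv2/core/core.hpp>

int main()
{
    std::vector<cv::Point2f> floatPoints;
    floatPoints.push_back(cv::Point2f(1.5f, 2.0f));
    floatPoints.push_back(cv::Point2f(3.25f, 4.0f));

    std::vector<cv::Point2d> doublePoints;

    // Variant 1: copy() with a back inserter (no need to pre-size the destination)
    std::copy(floatPoints.begin(), floatPoints.end(), std::back_inserter(doublePoints));

    // Variant 2: range insert; each cv::Point2f is converted to cv::Point2d element-wise
    doublePoints.clear();
    doublePoints.insert(doublePoints.end(), floatPoints.begin(), floatPoints.end());

    return 0;
}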
No, it is not. A std::vector<cv::Point2f> cannot make use of the OpenCV convertTo function.
I think you really mean that you have a cv::Mat points1 of type CV_8UC4. Note that those are RxCx4 values (where R and C are the number of rows and columns), while a CV_64F matrix will have only RxC values. So, you need to be clearer about how you want to transform those values.
You can do points1.convertTo(points1, CV_64FC4) to get a RxCx4 matrix.
Update:
Some remarks after you updated the question:
Note that a vector<cv::Point2f> is a vector of 2D points that is not associated with any particular color space; they are just coordinates in the image axes. So, they represent the same 2D points in a grey, RGB or HSV image, and the execution time of findEssentialMat does not depend on the image color space. Getting the points may, though.
That said, I think your input to findEssentialMat is fine (the function takes care of the vectors and converts them into its internal representation). In these cases, it is very useful to draw the points on your image to debug the code.
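For example, a quick debugging sketch along those lines; img1 is assumed to be the first view as a cv::Mat, and the color and radius are arbitrary:
cv::Mat debugImage = img1.clone();
for (size_t i = 0; i < imgpts1.size(); ++i)
    cv::circle(debugImage, imgpts1[i], 3, cv::Scalar(0, 0, 255), -1); // filled red dot per matched point

cv::imshow("matched points, view 1", debugImage);
cv::waitKey(0);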
I'm using OpenCV's solvePnP to get the pose/position of the camera.
I'm doing this using points selected by the user on an image that is already calibrated and has had the correction for radial and tangential distortion applied.
However, it seems solvePnP() takes distortion coefficients as input in addition to the points selected in the image, which I suppose means that solvePnP applies the distortion correction to the points given as input to the function.
This would introduce a small error in my program, since the source image has already been corrected for barrel distortion, right?
If so, how can I make solvePnP() ignore the barrel distortion? Can I pass a vector with the distortion coefficients all set to 1? Or should I set all the values to 0?
Some other way?
In the past I have just passed an empty cv::Mat:
cv::solvePnP(world_points, image_points, camera_mat, cv::Mat(), rotation_vector, translation_vector);
The documentation says that if an empty/NULL matrix is passed, zero distortion coefficients are assumed.
I'm now trying to analyze the perspective transform/homography matrix between two images capturing the same object (e.g., a rectangle) but at different perspectives/shooting angles. The perspective transform can be derived by using the function getPerspectiveTransform in OpenCV 2.3.1. I want to find the corresponding rotation and translation matrices.
The output of getPerspectiveTransform is a 3x3 matrix which I can use directly to warp the source image into the target image. But my question is: how can I find the rotation and translation matrices based on the obtained 3x3 matrix?
I was looking into the function decomposeProjectionMatrix for the corresponding rotation and translation matrices, but its input is required to be a 3x4 projection matrix. How can I relate the perspective transformation (i.e., a 3x3 matrix) to the 3x4 projection matrix? Am I on the right track?
Thank you very much.
The information contained in the homography matrix (returned from getPerspectiveTransform) is not enough to extract rotation/translation. The missing column is key to correctly find the angles.
The good news is that in some scenarios you can use the solvePnP() function to extract the desired rotation and translation from a set of 3D points and their corresponding 2D image points.
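For instance, if the physical size of the rectangle and the camera intrinsics are known, something along these lines could recover the pose; the corner ordering, the sizes and cameraMatrix are assumptions for illustration:
// Rectangle corners in its own plane (Z = 0), in some metric unit; sizes are illustrative
const float width = 1.0f, height = 0.5f;
std::vector<cv::Point3f> objectPoints;
objectPoints.push_back(cv::Point3f(0.f, 0.f, 0.f));
objectPoints.push_back(cv::Point3f(width, 0.f, 0.f));
objectPoints.push_back(cv::Point3f(width, height, 0.f));
objectPoints.push_back(cv::Point3f(0.f, height, 0.f));

// The corresponding corners detected in the image, in the same order
std::vector<cv::Point2f> imagePoints; // fill with the four detected corners

cv::Mat rvec, tvec;
cv::solvePnP(objectPoints, imagePoints, cameraMatrix, cv::Mat(), rvec, tvec);

// Convert the rotation vector to a 3x3 rotation matrix if needed
cv::Mat R;
cv::Rodrigues(rvec, R);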
Also, this question asks about the same thing; it should help:
Analyze camera movement with OpenCV
I'm trying to calibrate a Kinect camera and an external camera together, with Emgu/OpenCV.
I'm stuck and I would really appreciate any help.
I've chosen to do this via the fundamental matrix, i.e. epipolar geometry.
But the result is not what I expected. The resulting images are black, or make no sense at all.
The mapx and mapy values are usually all equal to infinity or -infinity, or all equal to 0.00, and only rarely have reasonable values.
This is how I tried to do rectification:
1.) Find image points: get two arrays of image points (one for each camera) from the set of images. I've done this with a chessboard and the FindChessboardCorners function.
2.) Find fundamental matrix
CvInvoke.cvFindFundamentalMat(points1Matrix, points2Matrix,
_fundamentalMatrix.Ptr, CV_FM.CV_FM_RANSAC,1.0, 0.99, IntPtr.Zero);
Do I pass all the points collected from the whole set of images, or just the points from the two images I am trying to rectify?
3.) Find homography matrices
CvInvoke.cvStereoRectifyUncalibrated(points11Matrix, points21Matrix,
_fundamentalMatrix.Ptr, Size, h1.Ptr, h2.Ptr, threshold);
4.) Get mapx and mapy
double scale = 0.02;
CvInvoke.cvInvert(_M1.Ptr, _iM.Ptr, SOLVE_METHOD.CV_LU);
CvInvoke.cvMul(_H1.Ptr, _M1.Ptr, _R1.Ptr,scale);
CvInvoke.cvMul(_iM.Ptr, _R1.Ptr, _R1.Ptr, scale);
CvInvoke.cvInvert(_M2.Ptr, _iM.Ptr, SOLVE_METHOD.CV_LU);
CvInvoke.cvMul(_H2.Ptr, _M2.Ptr, _R2.Ptr, scale);
CvInvoke.cvMul(_iM.Ptr, _R2.Ptr, _R2.Ptr, scale);
CvInvoke.cvInitUndistortRectifyMap(_M1.Ptr,_D1.Ptr, _R1.Ptr, _M1.Ptr,
mapxLeft.Ptr, mapyLeft.Ptr) ;
I have a problem here: since I'm not using calibrated images, what are my camera matrix and distortion coefficients? How can I get them from the fundamental matrix or the homography matrices?
5.) Remap
CvInvoke.cvRemap(src.Ptr, destRight.Ptr, mapxRight, mapyRight,
(int)INTER.CV_INTER_LINEAR, new MCvScalar(255));
And this does not return a good result. I would appreciate it if someone could tell me what I am doing wrong.
I have a set of 25 pairs of images, and the chessboard pattern size is 9x6.
The book "Learning OpenCV," from O'Reilly publishing, has two full chapters devoted to this specific topic. Both make heavy use of OpenCV's included routines cvCalibrateCamera2() and cvStereoCalibrate(); These routines are wrappers to code that is very similar to what you have written here, with the added benefit of having been more thoroughly debugged by the folks who maintain the OpenCV libraries. while they are convenient, both require quite a bit of preprocessing to achieve the necessary inputs to the routines. There may in fact be a sample program, somewhere deep in the samples directory of the OpenCV distribution, that uses these routines, with examples on how to go from chessboard image to calibration/intrinsics matrix. If you take an in depth look at any of these places, I am sure you will see how you can achieve your goal with advice from the experts.
cv::findFundamentalMat cannot work if the intrinsic parameter matrix associated with your image points is an identity matrix; in other words, it cannot work with unprojected image points.