Difference between HomographyBasedEstimator and findHomography in OpenCV

I am trying to write a program that stitches images using a SURF detector, and I would like to know the difference between the two homography estimators.
I understand that findHomography uses RANSAC; does HomographyBasedEstimator use RANSAC too?
If it doesn't, could someone point me to the paper HomographyBasedEstimator is based on?
Thanks in advance

The main difference between the two is that findHomography, as the name says, is used to find a homography, while HomographyBasedEstimator uses already existing homographies to calculate the rotations of the cameras.
In other words, HomographyBasedEstimator doesn't find the homographies; it uses them to compute the camera motion and the other camera parameters, such as focal lengths and optical centers.
I hope this helps.

Actually, findHomography is called inside BestOf2NearestMatcher.

The documentation doesn't say explicitly, but it suggests that HomographyBasedEstimator estimates a rotation matrix, which corresponds to the special case of a homography induced by a pure camera rotation and requires the focal length. If you're doing stitching, HomographyBasedEstimator is probably the way to go. (My guess is that RANSAC is used internally during the matching step.)
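For reference, here is a minimal sketch of how the two fit together in OpenCV's stitching detail API; the constructor arguments are just the usual defaults and error handling is omitted:

    #include <opencv2/stitching/detail/matchers.hpp>
    #include <opencv2/stitching/detail/motion_estimators.hpp>
    #include <vector>

    using namespace cv::detail;

    // features: one ImageFeatures per input image, filled beforehand by a
    // features finder (e.g. SURF or ORB).
    std::vector<CameraParams> estimateCameras(const std::vector<ImageFeatures>& features)
    {
        // Pairwise matching: this is where findHomography (with RANSAC) is
        // called internally for every image pair.
        std::vector<MatchesInfo> pairwise_matches;
        BestOf2NearestMatcher matcher(false, 0.3f);
        matcher(features, pairwise_matches);

        // Rotation/focal estimation: HomographyBasedEstimator does not compute
        // homographies itself; it consumes the ones stored in pairwise_matches.
        std::vector<CameraParams> cameras;
        HomographyBasedEstimator estimator;
        estimator(features, pairwise_matches, cameras);
        return cameras;
    }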

Related

Image calibration manually with transformation matrices from OpenCV

I need to understand how calibration works for cameras; my basics are not good enough. Can someone please help me understand how I can apply calibration data to an image taken with a camera for which I already have the calibration matrices, such as the camera matrix, the rotation and translation matrices, and the distortion coefficients? I got all of these using OpenCV, but I really do not understand how it works. Reading some tutorials helped, but still not enough. Please help!
Thanks in advance,
Sumit
It depends on what you are trying to do. You can use the calibration parameters to remove lens distortion from an image. With a single calibrated camera you can measure planar objects, or you can do 3-D reconstruction from multiple images, assuming you know the extrinsics. With a calibrated stereo pair of cameras you can do 3-D reconstruction more easily.
I am not sure what you mean by "manually". Generally, this is a rather broad topic that cannot be explained in a few paragraphs. I would recommend starting with the "Learning OpenCV" book by Bradski and Kaehler.
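As a concrete starting point, removing lens distortion is a single call once the calibration data is loaded; a minimal sketch, assuming the camera matrix and distortion coefficients were saved to a YAML file (the file name and key names below are just examples, use whatever you saved them under):

    #include <opencv2/core.hpp>
    #include <opencv2/imgcodecs.hpp>
    #include <opencv2/imgproc.hpp>
    #include <opencv2/calib3d.hpp>

    int main()
    {
        // Image taken with the calibrated camera.
        cv::Mat image = cv::imread("input.jpg");

        // Load the intrinsics produced by calibrateCamera (the key names depend
        // on how the file was written).
        cv::Mat cameraMatrix, distCoeffs;
        cv::FileStorage fs("calibration.yml", cv::FileStorage::READ);
        fs["camera_matrix"] >> cameraMatrix;
        fs["distortion_coefficients"] >> distCoeffs;

        // Remove the lens distortion.
        cv::Mat undistorted;
        cv::undistort(image, undistorted, cameraMatrix, distCoeffs);
        cv::imwrite("undistorted.jpg", undistorted);
        return 0;
    }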

Pose estimation with emgu

I would like to do pose estimation of a chessboard target using emgu. I have already determined the camera intrinsics. However, I can't find the solvePnP function in emgu, which I think should solve my problem.
Does anybody know how I could find this function in emgu?
Is there another way to do pose estimation using emgu? I suppose I could use CalibrateCamera and use the extrinsics in some way... but I think that is more computationally heavy than needed. Or is it?
You should be able to find chessboard corners using emgu; refer to CameraCalibration.FindChessboardCorners. Once you have the corners, you will be able to draw point correspondences between an ideal chessboard and your image.
Although SolvePnP is not available in emgu, you can still compute a homography once you have at least 4 point correspondences on a plane (which you now have); refer to CameraCalibration.FindHomography. Once you have the homography, you can decompose it into a rotation and translation, and hence the camera pose. Take a look at this article.
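Since emgu is a thin wrapper around OpenCV, here is the same idea expressed with the OpenCV C++ API, a rough sketch of recovering the pose from the plane homography H = K [r1 r2 t]; it assumes the chessboard lies in the Z = 0 plane and that K is your calibrated 3x3 (CV_64F) camera matrix:

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <vector>

    // objectPts: chessboard corners in board coordinates (Z = 0 plane),
    //            e.g. (col * squareSize, row * squareSize)
    // imagePts : the corresponding detected corners in the image
    void poseFromPlaneHomography(const cv::Mat& K,
                                 const std::vector<cv::Point2f>& objectPts,
                                 const std::vector<cv::Point2f>& imagePts,
                                 cv::Mat& R, cv::Mat& t)
    {
        cv::Mat H = cv::findHomography(objectPts, imagePts);   // board plane -> image

        cv::Mat Kinv = K.inv();
        cv::Mat h1 = Kinv * H.col(0);
        cv::Mat h2 = Kinv * H.col(1);
        cv::Mat h3 = Kinv * H.col(2);

        double lambda = 1.0 / cv::norm(h1);   // scale so the first rotation column is unit length
        cv::Mat r1 = lambda * h1;
        cv::Mat r2 = lambda * h2;
        cv::Mat r3 = r1.cross(r2);            // third column completes the rotation
        t = lambda * h3;                      // board origin in camera coordinates

        R.create(3, 3, CV_64F);
        r1.copyTo(R.col(0));
        r2.copyTo(R.col(1));
        r3.copyTo(R.col(2));
        // R should be re-orthonormalized afterwards (e.g. via SVD) to be a proper rotation.
    }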

Estimate motion of monocular camera

I have read lectures and topics and have been working on this for weeks, but I can't find a way to describe the motion of my camera. I don't want to reconstruct the 3D world. I'm using OpenCV.
I have a monocular camera and an unknown word. I have the intrinsic and distortion parameters. I have features and correspondences. So I'm looking for the rotation and the translation between two frames. I would like to consider my first image as the origin of the XYZ axes.
I use the fundamental matrix and the essential matrix to find the extrinsic parameters (R, T), but I'm not convinced. I got these results:
R =
[ 0.040437..., 0.116076..., -0.992416...,
  0.076999..., -0.99063..., -0.112731...,
 -0.996211..., -0.071848..., -0.048994...]
T =
[ 0.6924183...; 0.081694...; -0.716885...]
How can I check whether they are good?
I calculated the Euclidean distance to see the distance in 3D, but I got erroneous values.
Please, can anyone give me some details or guide me? I hope I explained myself well.
Regards
By word do you mean world? This question is also not really on topic for Stack Overflow, since it deals with theory rather than code.
https://stackoverflow.com/faq
To answer your question: if you have R and T, then you can compute the 3D coordinates of each point. From those you can reproject each point onto the other camera and compute the residual error between the observed and predicted points. If the error is within a pixel or so, the estimate is probably valid.
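A rough sketch of that check with OpenCV; the function name is mine, and it assumes the matched points are already undistorted and that K, R and T are CV_64F:

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <vector>
    #include <cmath>

    // Consistency check for an estimated (R, T): triangulate the matches,
    // reproject into the second view, and measure the residual in pixels.
    double meanReprojectionError(const cv::Mat& K, const cv::Mat& R, const cv::Mat& T,
                                 const std::vector<cv::Point2f>& pts1,
                                 const std::vector<cv::Point2f>& pts2)
    {
        // Camera 1 at the origin, camera 2 at [R | T].
        cv::Mat P1 = K * cv::Mat::eye(3, 4, CV_64F);
        cv::Mat Rt;
        cv::hconcat(R, T, Rt);
        cv::Mat P2 = K * Rt;

        cv::Mat points4D;
        cv::triangulatePoints(P1, P2, pts1, pts2, points4D);
        points4D.convertTo(points4D, CV_64F);

        // Homogeneous -> Euclidean 3D points (expressed in camera-1 coordinates).
        std::vector<cv::Point3d> points3D;
        for (int i = 0; i < points4D.cols; ++i)
        {
            cv::Mat X = points4D.col(i);
            double w = X.at<double>(3);
            points3D.push_back(cv::Point3d(X.at<double>(0) / w,
                                           X.at<double>(1) / w,
                                           X.at<double>(2) / w));
        }

        // Reproject into the second camera and compare with the observations.
        cv::Mat rvec;
        cv::Rodrigues(R, rvec);
        std::vector<cv::Point2d> reproj;
        cv::projectPoints(points3D, rvec, T, K, cv::noArray(), reproj);

        double err = 0.0;
        for (size_t i = 0; i < reproj.size(); ++i)
        {
            cv::Point2d d = reproj[i] - cv::Point2d(pts2[i]);
            err += std::sqrt(d.x * d.x + d.y * d.y);
        }
        return err / reproj.size();   // should be on the order of a pixel if R, T are plausible
    }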
Basically, this way you will have an unknown scale factor for each consecutive frame, so you can get strange values for R and T. But you can use some initialization, such as a known motion, in order to perform the first triangulation of the scene. After that you can use solvePnP to calculate the next [R|T].
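Once the scene has been triangulated, the solvePnP step for a new frame is short; a minimal sketch (the helper name and argument layout are just for illustration):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // points3D : 3D points triangulated during the initialization step
    // imagePts : the same points tracked/matched in the new frame
    // K, distCoeffs : camera matrix and distortion coefficients
    void nextFramePose(const std::vector<cv::Point3f>& points3D,
                       const std::vector<cv::Point2f>& imagePts,
                       const cv::Mat& K, const cv::Mat& distCoeffs,
                       cv::Mat& R_next, cv::Mat& tvec)
    {
        cv::Mat rvec;
        cv::solvePnP(points3D, imagePts, K, distCoeffs, rvec, tvec);
        cv::Rodrigues(rvec, R_next);   // [R_next | tvec] is the pose of the new frame
    }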
Try reading about PTAMM, which is one of the most interesting implementations of monocular SLAM:
http://www.robots.ox.ac.uk/~bob/research/research_ptamm.html

OpenCV: Camera Pose Estimation

I am trying to match two overlapping images captured with a camera. To do this, I'd like to use OpenCV. I have already extracted the features with SurfFeatureDetector. Now I am trying to compute the rotation and translation vector between the two images.
As far as I know, I should use cvFindExtrinsicCameraParams2(). Unfortunately, this method requires objectPoints as an argument. These objectPoints are the world coordinates of the extracted features, which are not known in the current context.
Can anybody give me a hint how to solve this problem?
The problem of simultaneously computing the relative pose between two images and the unknown 3D world coordinates has been treated here:
Berthold K. P. Horn. Relative Orientation Revisited. Artificial Intelligence Laboratory, Massachusetts Institute of Technology.
EDIT: here is a link to the paper:
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.4700
Please see my answer to a related question where I propose a solution to this problem:
OpenCV extrinsic camera from feature points
EDIT: You may want to take a look at bundle adjustment too:
http://en.wikipedia.org/wiki/Bundle_adjustment
That assumes an initial estimate is available.
EDIT: I found some code resources you might want to take a look at:
Resource I:
http://www.maths.lth.se/vision/downloads/
"Two View Geometry Estimation with Outliers": C++ code for finding the relative orientation of two calibrated cameras in the presence of outliers. The obtained solution is optimal in the sense that the number of inliers is maximized.
Resource II:
http://lear.inrialpes.fr/people/triggs/src/
"Relative orientation from 5 points": a somewhat more polished C routine implementing the minimal solution for the relative orientation of two calibrated cameras from unknown 3D points. 5 points are required, and there can be as many as 10 feasible solutions (but 2-5 is more common). It also requires a few CLAPACK routines for linear algebra. There is also a short technical report on this (included with the source).
Resource III:
http://www9.in.tum.de/praktika/ppbv.WS02/doc/html/reference/cpp/toc_tools_stereo.html
vector_to_rel_pose: computes the relative orientation between two cameras given image point correspondences and known camera parameters, and reconstructs 3D space points.
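For completeness: newer OpenCV versions (3.0 and later) expose calibrated relative-pose estimation, essentially what Resource II implements, directly through findEssentialMat and recoverPose; a minimal sketch, assuming matched points and known intrinsics:

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // pts1, pts2 : matched feature points in the two images
    // focal, pp  : focal length and principal point from the intrinsics
    void relativePose(const std::vector<cv::Point2f>& pts1,
                      const std::vector<cv::Point2f>& pts2,
                      double focal, cv::Point2d pp,
                      cv::Mat& R, cv::Mat& t)
    {
        cv::Mat mask;
        cv::Mat E = cv::findEssentialMat(pts1, pts2, focal, pp, cv::RANSAC, 0.999, 1.0, mask);
        cv::recoverPose(E, pts1, pts2, R, t, focal, pp, mask);   // t is only defined up to scale
    }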
There is a theoretical solution; however, the OpenCV implementation of camera pose estimation lacks the needed tools.
The theoretical approach:
Step 1: extract the homography (the matrix describing the geometric transform between the images) using findHomography().
Step 2: decompose the resulting matrix into a rotation and a translation.
Problem: findHomography() returns a 3x3 matrix that maps one plane onto another, while the camera pose is a 3D rotation plus a translation, and there is no ready-made function for this step (cv::solvePnP() expects 3D-2D point correspondences, not a homography). With some approximations you can do the decomposition yourself, but it requires a fair amount of math and a very good understanding of 3D geometry.
Read more at http://en.wikipedia.org/wiki/Transformation_matrix
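As a side note, more recent OpenCV releases (3.0 and later) added cv::decomposeHomographyMat, which performs exactly this homography-to-pose decomposition when the camera matrix is known; a minimal sketch (it returns several candidate poses that still have to be disambiguated):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // H : 3x3 homography from findHomography, K : 3x3 camera matrix
    void decomposeH(const cv::Mat& H, const cv::Mat& K)
    {
        std::vector<cv::Mat> rotations, translations, normals;
        int n = cv::decomposeHomographyMat(H, K, rotations, translations, normals);
        for (int i = 0; i < n; ++i)
        {
            // rotations[i], translations[i], normals[i] is one candidate pose;
            // pick the physically valid one, e.g. the one whose reconstructed
            // points lie in front of both cameras.
        }
    }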

Head pose estimation with OpenCV

I am using the OpenCV Haar algorithm to track the head and overlay an image over it.
What I am doing is saving the frames generated by the camera and overlaying an image over each frame.
Time is not a constraint, as I am not doing it in real time.
My code works fine for roughly 45 degrees of left and right head rotation,
but I need something that will track up to 90 degrees of rotation.
I have found many references to OpenCV functions and links about head pose estimation.
Please provide me some references; code examples would be great.
Thanks in advance
You can use an algorithm like SURF (there are samples in the OpenCV package) on a picture of the face and on the camera image, and then use the SURF descriptors to match the points and estimate the 3D position of the face in the image.
You can use the same code as the "find_obj" sample, but replace the reference image with the face picture you want to track.
Hope this helps.
There is a function in OpenCV called cvPOSIT that estimates the pose of a 3D object in a single image; it implements the POSIT algorithm. Try having a look at it.
You could check the EHCI project at http://code.google.com/p/ehci/ as it gives a nice overview of POSIT and Lucas-Kanade.
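If you go the POSIT/PnP route, the overall shape of the code looks roughly like the sketch below, here using solvePnP with a handful of generic 3D face-model points; the landmark choice and the model coordinates are purely illustrative assumptions, not a real face model:

    #include <opencv2/core.hpp>
    #include <opencv2/calib3d.hpp>
    #include <vector>

    // imagePts: 2D positions of the same six landmarks detected in the frame,
    //           in the same order as the model points below
    // K       : approximate camera matrix (fx, fy ~ image width; cx, cy ~ image center)
    void headPose(const std::vector<cv::Point2f>& imagePts, const cv::Mat& K,
                  cv::Mat& R, cv::Mat& tvec)
    {
        // Made-up 3D model coordinates for a few facial landmarks (illustrative only).
        std::vector<cv::Point3f> modelPts = {
            {  0.0f,  0.0f,  0.0f },   // nose tip
            {  0.0f, -6.3f, -1.3f },   // chin
            { -4.3f,  3.2f, -2.6f },   // left eye outer corner
            {  4.3f,  3.2f, -2.6f },   // right eye outer corner
            { -2.8f, -2.8f, -2.4f },   // left mouth corner
            {  2.8f, -2.8f, -2.4f }    // right mouth corner
        };

        cv::Mat rvec;
        cv::solvePnP(modelPts, imagePts, K, cv::noArray(), rvec, tvec);
        cv::Rodrigues(rvec, R);   // head orientation; yaw/pitch/roll can be read from R
    }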
