Pose estimation with emgu - opencv

I would like to do pose estimation of a chessboard target using emgu. I have already determined the camera intrinsics. However, I can't find the solvePnP function in emgu, which I think should solve my problem.
Does anybody know how I could find this function in emgu?
Is there another way to do pose estimation using emgu? I suppose I could use CalibrateCamera and use the extrinsics in some way... but I think that is more computationally heavy than needed. Or is it?

You should be able to find chessboard corners using emgu, refer to CameraCalibration.FindChessboardCorners. Once you have the corners, you will be able to draw point correspondences between an ideal chessboard and your image.
Although SolvePnP is not available in emgu, you can still compute a homography once you have at least 4 point correspondences on a plane (which you now have). Refer to CameraCalibration.FindHomography. Once you have the homography, you can decompose this into a rotation and translation, and hence the camera pose. Take a look at this article.
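For reference, the underlying OpenCV calls that Emgu's CameraCalibration wrappers map onto look roughly like this in C++. This is only a minimal sketch of the corner detection and homography-based pose recovery; the 9x6 board, 25 mm square size, file name, and the K values are placeholder assumptions, so substitute your own intrinsics.

```cpp
// Sketch: recover the pose of a planar chessboard from one image via a homography.
// Board dimensions, square size, and the camera matrix K are placeholders.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    cv::Mat image = cv::imread("board.png", cv::IMREAD_GRAYSCALE);
    cv::Size patternSize(9, 6);
    double squareSize = 25.0; // mm

    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(image, patternSize, corners))
        return 1;
    cv::cornerSubPix(image, corners, cv::Size(11, 11), cv::Size(-1, -1),
                     cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));

    // Ideal board points on the Z = 0 plane, in the same order as the detector output.
    std::vector<cv::Point2f> boardPlane;
    for (int r = 0; r < patternSize.height; ++r)
        for (int c = 0; c < patternSize.width; ++c)
            boardPlane.emplace_back(c * squareSize, r * squareSize);

    // Plane-to-image homography from the point correspondences.
    cv::Mat H = cv::findHomography(boardPlane, corners);

    // For a plane at Z = 0, H = s * K * [r1 r2 t]; undo K and the scale to get the pose.
    cv::Mat K = (cv::Mat_<double>(3, 3) << 800, 0, 320, 0, 800, 240, 0, 0, 1); // your intrinsics
    cv::Mat M = K.inv() * H;
    double s = 1.0 / cv::norm(M.col(0)); // negate s if the recovered t has negative Z
    cv::Mat r1 = s * M.col(0), r2 = s * M.col(1), t = s * M.col(2);
    cv::Mat r3 = r1.cross(r2);
    cv::Mat R;
    cv::hconcat(std::vector<cv::Mat>{r1, r2, r3}, R);
    // (R, t) is the chessboard pose in the camera frame; re-orthonormalise R via SVD for accuracy.
    return 0;
}
```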

Related

Use EMGU to get "real world" coordinates of pixel values

There are a number of calibration tutorials for calibrating camera images of chessboards in EMGU (OpenCV). They all end up calibrating and then undistorting an image for display. That's cool and all, but I need to do machine vision where I take an image, identify the location of a corner, blob, or feature in the image, and then translate the location of that feature in pixels into real world X, Y coordinates.
Pixel -> mm.
Is this possible with EMGU? If so, how? I'd hate to spend a bunch of time learning EMGU and then not be able to perform this crucial function.
Yes, it's certainly possible; this is the "bread and butter" of OpenCV.
The calibration you are describing, in terms of removing distortions, is a prerequisite to this process, after which the following applies:
The Intrinsic calibration, or "camera matrix" is the first of two required matrices. The second is the Extrinsic calibration of the camera which is essentially the 6 DoF transform that describes the physical location of the sensor center relative to a coordinate reference frame.
All of the distortion coefficients, intrinsic, and extrinsic calibrations are available from a single function in Emgu.CV: CvInvoke.CalibrateCamera. This process is best explained, I'm sure, by one of the many tutorials you have described.
After that, CvInvoke.ProjectPoints applies the transforms above to map 3D world points to 2D pixel locations; for the reverse direction (pixels to real-world coordinates on a known plane such as the chessboard), you invert that mapping, as in the sketch below.
The key to doing this successfully is providing comprehensive IInputArray objectPoints and IInputArray imagePoints to CvInvoke.CalibrateCamera. Be sure to cause "excitation" by using many images, from many different perspectives.
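A rough C++ sketch of that pipeline (Emgu's CvInvoke calls wrap these same OpenCV functions). The board geometry, number of images, file names, and the test pixel are assumptions, and the pixel-to-mm step assumes the feature lies on the calibration plane (Z = 0):

```cpp
// Sketch: calibrate, then map a pixel back to millimetres on the board plane (Z = 0).
// Assumes chessboard images "board0.png"... with 9x6 inner corners and 25 mm squares.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    cv::Size patternSize(9, 6);
    double square = 25.0; // mm
    std::vector<std::vector<cv::Point3f>> objectPoints;
    std::vector<std::vector<cv::Point2f>> imagePoints;
    cv::Size imageSize;

    std::vector<cv::Point3f> board;
    for (int r = 0; r < patternSize.height; ++r)
        for (int c = 0; c < patternSize.width; ++c)
            board.emplace_back(c * square, r * square, 0.0f);

    for (int i = 0; i < 15; ++i) {                         // many views, many perspectives
        cv::Mat img = cv::imread(cv::format("board%d.png", i), cv::IMREAD_GRAYSCALE);
        std::vector<cv::Point2f> corners;
        if (img.empty() || !cv::findChessboardCorners(img, patternSize, corners)) continue;
        imageSize = img.size();
        objectPoints.push_back(board);
        imagePoints.push_back(corners);
    }

    cv::Mat K, dist;
    std::vector<cv::Mat> rvecs, tvecs;
    cv::calibrateCamera(objectPoints, imagePoints, imageSize, K, dist, rvecs, tvecs);

    // Pixel -> mm for view 0: undistort/normalise the pixel, then intersect
    // its viewing ray with the board plane expressed in the camera frame.
    cv::Mat R; cv::Rodrigues(rvecs[0], R);
    cv::Mat t = tvecs[0];
    std::vector<cv::Point2f> px{{400.f, 300.f}}, norm;
    cv::undistortPoints(px, norm, K, dist);                // normalised camera coordinates
    cv::Mat ray = (cv::Mat_<double>(3, 1) << norm[0].x, norm[0].y, 1.0);
    // Plane Z_world = 0: solve for depth s so that R^T (s*ray - t) has zero Z.
    cv::Mat Rt = R.t();
    double s = Rt.row(2).dot(t.t()) / Rt.row(2).dot(ray.t());
    cv::Mat world = Rt * (s * ray - t);                    // X, Y in mm on the board plane
    return 0;
}
```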

question about camera geometric distortion correction

In the OpenCV implementation, the intrinsic parameters of the camera are used to correct geometric distortion, so camera calibration is performed to obtain the intrinsic parameters from multiple chessboard images.
Recently I learned that geometric distortion can be corrected using only one chessboard image.
I am trying to figure out how this is done, but I still can't find a way to do it.
http://www.imatest.com/docs/distortion-methods-and-modules/
https://www.edmundoptics.com/resources/application-notes/imaging/distortion/
I found the two links above; they describe radial distortion. However, we can't guarantee that the camera is parallel to the chessboard when the image is captured.
I can detect the corners of the chessboard, but some corners are displaced by the distortion, so I can't simply fit straight lines through them, since line fitting only tolerates noise, not systematic curvature.
Any help is appreciated.
Please take a look at this paper and this paper. Moreover, this paper shows that you can correct distortion using a single image without a calibration target, based on identifying straight lines in the image such as the edges of buildings.
I don't know whether this functionality is implemented in OpenCV, but the math in those papers should be relatively easy to implement using OpenCV.
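One pragmatic alternative, when the single image happens to contain a chessboard (as in your case): cv::calibrateCamera will run on one view if the intrinsic model is heavily constrained, and the resulting coefficients can then feed cv::undistort. This is not the line-based method from those papers, just a sketch under stated assumptions (fixed principal point and aspect ratio, k1/k2 only, a crude focal-length guess):

```cpp
// Sketch: estimate radial distortion from ONE chessboard image by constraining the
// intrinsic model, then undistort. Board size, focal-length guess, and flags are assumptions.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgcodecs.hpp>
#include <opencv2/imgproc.hpp>
#include <vector>

int main()
{
    cv::Mat img = cv::imread("board.png", cv::IMREAD_GRAYSCALE);
    cv::Size patternSize(9, 6);

    std::vector<cv::Point2f> corners;
    if (!cv::findChessboardCorners(img, patternSize, corners))
        return 1;

    std::vector<cv::Point3f> board;
    for (int r = 0; r < patternSize.height; ++r)
        for (int c = 0; c < patternSize.width; ++c)
            board.emplace_back((float)c, (float)r, 0.0f); // unit squares; scale doesn't affect distortion

    std::vector<std::vector<cv::Point3f>> objectPoints{board};
    std::vector<std::vector<cv::Point2f>> imagePoints{corners};

    // Rough intrinsic guess, then lock down everything except k1/k2 so a single view is enough.
    double f = img.cols; // crude focal-length guess in pixels
    cv::Mat K = (cv::Mat_<double>(3, 3) << f, 0, img.cols / 2.0, 0, f, img.rows / 2.0, 0, 0, 1);
    cv::Mat dist;
    std::vector<cv::Mat> rvecs, tvecs;
    int flags = cv::CALIB_USE_INTRINSIC_GUESS | cv::CALIB_FIX_ASPECT_RATIO |
                cv::CALIB_FIX_PRINCIPAL_POINT | cv::CALIB_ZERO_TANGENT_DIST | cv::CALIB_FIX_K3;
    cv::calibrateCamera(objectPoints, imagePoints, img.size(), K, dist, rvecs, tvecs, flags);

    cv::Mat corrected;
    cv::undistort(img, corrected, K, dist); // distortion-corrected image
    return 0;
}
```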

Finding the relative pose between two cameras with 2D and 3D correspondences

I have two images obtained by a calibrated camera from two different poses. I also have correspondences of 2D points between the images. Some of the points have depth information, so I also know their 3D coordinates. I want to calculate the relative pose between the images.
I know I can compute a fundamental matrix or an essential matrix from the 2D points. I also know that PnP can find the pose from 2D-3D correspondences, and that it is doable from 3D-3D correspondences alone. However, I don't know of any algorithm that takes advantage of all the available information. Is there one?
There is only one such algorithm: bundle adjustment - everything else is a hack. Get your initial estimates separately, merge them in any reasonable and simple (if hacky) way to get an initial estimate, then bite the bullet and bundle. If you are coding in C++, Google's Ceres is my recommended B.A. library.
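For reference, the heart of such a bundle adjustment in Ceres is a reprojection-error residual. A minimal sketch, assuming calibrated pinhole cameras (known fx, fy, cx, cy) and an angle-axis + translation parameterisation per camera; the names and usage are illustrative, not a complete program:

```cpp
// Sketch: reprojection-error residual for bundle adjustment with Ceres.
// Assumes known intrinsics; camera = [angle-axis rotation (3), translation (3)].
#include <ceres/ceres.h>
#include <ceres/rotation.h>

struct ReprojectionError {
    ReprojectionError(double u, double v, double fx, double fy, double cx, double cy)
        : u(u), v(v), fx(fx), fy(fy), cx(cx), cy(cy) {}

    template <typename T>
    bool operator()(const T* const camera, const T* const point, T* residuals) const {
        T p[3];
        ceres::AngleAxisRotatePoint(camera, point, p);            // rotate the 3D point
        p[0] += camera[3]; p[1] += camera[4]; p[2] += camera[5];  // translate
        T xn = p[0] / p[2];                                       // perspective division
        T yn = p[1] / p[2];
        residuals[0] = T(fx) * xn + T(cx) - T(u);                 // pixel-space error
        residuals[1] = T(fy) * yn + T(cy) - T(v);
        return true;
    }

    static ceres::CostFunction* Create(double u, double v,
                                       double fx, double fy, double cx, double cy) {
        return new ceres::AutoDiffCostFunction<ReprojectionError, 2, 6, 3>(
            new ReprojectionError(u, v, fx, fy, cx, cy));
    }

    double u, v, fx, fy, cx, cy;
};

// Usage sketch: add one residual per observation (camera i sees point j at pixel (u, v)),
// seed cameras/points with your separately obtained initial estimates, then solve:
//   ceres::Problem problem;
//   problem.AddResidualBlock(ReprojectionError::Create(u, v, fx, fy, cx, cy),
//                            new ceres::HuberLoss(1.0), cameras[i], points[j]);
//   ceres::Solver::Options opts;  opts.linear_solver_type = ceres::SPARSE_SCHUR;
//   ceres::Solver::Summary summary;
//   ceres::Solve(opts, &problem, &summary);
```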

OpenCV: Camera Pose Estimation

I am trying to match two overlapping images captured with a camera, using OpenCV. I have already extracted the features with SurfFeatureDetector. Now I am trying to compute the rotation and translation vector between the two images.
As far as I know, I should use cvFindExtrinsicCameraParams2(). Unfortunately, this method requires objectPoints as an argument: the world coordinates of the extracted features, which are not known in this context.
Can anybody give me a hint on how to solve this problem?
The problem of simultaneously computing the relative pose between two images and the unknown 3D world coordinates has been treated here:
Berthold K. P. Horn, "Relative Orientation Revisited," Artificial Intelligence Laboratory, Massachusetts Institute of Technology.
EDIT: here is a link to the paper:
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.64.4700
Please see my answer to a related question where I propose a solution to this problem:
OpenCV extrinsic camera from feature points
EDIT: You may want to take a look at bundle adjustment too:
http://en.wikipedia.org/wiki/Bundle_adjustment
That assumes an initial estimate is available.
EDIT: I found some code resources you might want to take a look at:
Resource I:
http://www.maths.lth.se/vision/downloads/
Two View Geometry Estimation with Outliers: C++ code for finding the relative orientation of two calibrated cameras in the presence of outliers. The obtained solution is optimal in the sense that the number of inliers is maximized.
Resource II:
http://lear.inrialpes.fr/people/triggs/src/
Relative orientation from 5 points: a somewhat more polished C routine implementing the minimal solution for relative orientation of two calibrated cameras from unknown 3D points. 5 points are required and there can be as many as 10 feasible solutions (but 2-5 is more common). Also requires a few CLAPACK routines for linear algebra. There's also a short technical report on this (included with the source).
Resource III:
http://www9.in.tum.de/praktika/ppbv.WS02/doc/html/reference/cpp/toc_tools_stereo.html
vector_to_rel_pose: compute the relative orientation between two cameras given image point correspondences and known camera parameters, and reconstruct 3D space points.
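As a practical note, newer OpenCV releases (3.x onward) expose the five-point relative-orientation machinery directly, so a calibrated relative pose can be estimated from the matched features alone, without known object points. A hedged sketch, assuming matched pixel coordinates (e.g. from your SURF matches) and a known camera matrix K:

```cpp
// Sketch: relative pose between two calibrated views from 2D matches alone.
// Assumes matched pixel coordinates pts1/pts2 and a known camera matrix K;
// requires OpenCV 3.x or newer for findEssentialMat/recoverPose.
#include <opencv2/calib3d.hpp>
#include <vector>

void relativePose(const std::vector<cv::Point2f>& pts1,
                  const std::vector<cv::Point2f>& pts2,
                  const cv::Mat& K,
                  cv::Mat& R, cv::Mat& t)
{
    cv::Mat inlierMask;
    // Five-point algorithm inside a RANSAC loop.
    cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inlierMask);
    // The cheirality check selects the one decomposition with points in front of both cameras.
    cv::recoverPose(E, pts1, pts2, K, R, t, inlierMask);
    // Note: t is recovered only up to scale; 3D points can then be obtained with
    // cv::triangulatePoints and refined with bundle adjustment.
}
```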
There is a theoretical solution; however, the OpenCV implementation of camera pose estimation lacks the needed tools.
The theoretical approach:
Step 1: extract the homography (the matrix describing the geometric transform between the images) using findHomography().
Step 2: decompose that matrix into a rotation and translation.
Problem: findHomography() returns a 3x3 matrix corresponding to a projection from one plane to another, while the pose you want is a 3x4 [R|t] matrix representing the 3D rotation and translation of the camera; solvePnP() works from 3D-2D point correspondences rather than from a homography. With some approximations you can bridge the two, but it requires a lot of math and a very good understanding of 3D geometry.
Read more at http://en.wikipedia.org/wiki/Transformation_matrix
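(In OpenCV 3.x and later, the decomposition step is available directly as cv::decomposeHomographyMat. A brief sketch, assuming your camera matrix K and a homography H from findHomography():)

```cpp
// Sketch: decompose a plane-induced homography into candidate rotations/translations.
// Requires OpenCV 3.x+; H from findHomography(), K from your calibration.
#include <opencv2/calib3d.hpp>
#include <vector>

void poseFromHomography(const cv::Mat& H, const cv::Mat& K)
{
    std::vector<cv::Mat> Rs, ts, normals;
    int n = cv::decomposeHomographyMat(H, K, Rs, ts, normals);
    // Up to 4 candidate (R, t, plane-normal) triples are returned; disambiguate
    // with cheirality (points in front of the camera) or a known plane normal.
    (void)n;
}
```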

how to find 3d position of a point with intrinsic and extrinsic parameters with opencv

I want to find the position of a point with OpenCV. I calibrated two cameras using cvCalibrateCamera2, so I know both the intrinsic and extrinsic parameters. I read that with known intrinsic and extrinsic parameters I can reconstruct 3D by triangulation easily. Is there a function in OpenCV to achieve this? I think cvProjectPoints2 may be useful, but I don't understand exactly how. So how can I find the 3D position of a point?
Thanks.
You first have to find disparities. There are two algorithms implemented in OpenCV - block matching (cvFindStereoCorrespondenceBM) and graph cuts (cvFindStereoCorrespondenceGC). The latter gives better results but is slower. After disparity detection you can reproject the disparities to 3D using cvReprojectImageTo3D. This gives you the distance of each point of the input images that is visible in both camera views.
Also note that the stereo correspondence algorithms require a rectified image pair (use cvStereoRectify, cvInitUndistortRectifyMap and cvRemap).
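Translated to the current C++ API, that pipeline looks roughly like this. A sketch assuming the per-camera intrinsics and the inter-camera R, T from your two-camera calibration, plus already loaded grayscale left/right images; the block-matching parameters are placeholders:

```cpp
// Sketch: rectify, compute block-matching disparity, and reproject to 3D.
// Assumes K1/dist1, K2/dist2 and the inter-camera R, T from calibration.
#include <opencv2/calib3d.hpp>
#include <opencv2/imgproc.hpp>

void depthFromStereo(const cv::Mat& leftGray, const cv::Mat& rightGray,
                     const cv::Mat& K1, const cv::Mat& dist1,
                     const cv::Mat& K2, const cv::Mat& dist2,
                     const cv::Mat& R, const cv::Mat& T)
{
    cv::Size size = leftGray.size();
    cv::Mat R1, R2, P1, P2, Q;
    cv::stereoRectify(K1, dist1, K2, dist2, size, R, T, R1, R2, P1, P2, Q);

    cv::Mat map1x, map1y, map2x, map2y;
    cv::initUndistortRectifyMap(K1, dist1, R1, P1, size, CV_32FC1, map1x, map1y);
    cv::initUndistortRectifyMap(K2, dist2, R2, P2, size, CV_32FC1, map2x, map2y);
    cv::Mat leftRect, rightRect;
    cv::remap(leftGray, leftRect, map1x, map1y, cv::INTER_LINEAR);
    cv::remap(rightGray, rightRect, map2x, map2y, cv::INTER_LINEAR);

    // Block matching (the modern counterpart of cvFindStereoCorrespondenceBM).
    cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(64, 21); // numDisparities, blockSize
    cv::Mat disparity16, disparity;
    bm->compute(leftRect, rightRect, disparity16);
    disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);    // fixed-point -> pixels

    cv::Mat points3d;                                         // per-pixel X, Y, Z
    cv::reprojectImageTo3D(disparity, points3d, Q, true);
}
```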
