How to find relationship between two different stereo calibrations - opencv

I have a question for you.
I have performed single-camera and stereo calibration using 10 different checkerboard poses. I have acquired an image pair, obtained the 3D position of each pixel, and saved the result as a point cloud (PCL).
After that, I have performed another calibration using 60 different checkerboard poses. The resulting calibration parameters differ from those estimated in the previous calibration.
I have used the same image pair to obtain a point cloud and a 3D reconstruction of the scene, and I notice that corresponding 3D points in the two point clouds have different locations in space.
When the two point clouds are displayed in MeshLab, they appear as two separate clouds in space.
I think that the origin of the "reconstructed space" somehow changes with the calibration parameters.
How can I get the transformation between the two coordinate systems so that, knowing it, I can display the first and second point clouds overlapping?
The aim is to find this relationship using only stereo-calibration parameters. I know that the transformation could be computed using the correspondences between the same points displayed in the two point clouds, but I need to find out this relationship using only the calibration parameters.
Thanks!

Related

Is it possible to find camera position using 8-10 non-coplanar points, if their 3D coordinates are unknown?

I have a set of non-coplanar points with unknown 3D positions (I am not limited in the number of points; let's say 8-10 of them), and at least 3 different views of these points in 2D images (the number of views is also not limited). I also have an estimate of the rotation and scale for every point set in the pictures that corresponds to the real points, as well as an estimate of the Euclidean distance between every two camera positions from which the images were taken.
Is this data enough to find the camera pose (as precisely as possible) after taking another picture of these points? If not, what is the minimal additional data needed to achieve this?
UPDATE: In this specific case I needed the function recoverPose() from the calib3d module.
Yes, this is possible. Depending on the algorithms (and the availability of some pre-calibration), you can obtain the relative positions of two cameras using a minimum of 5 to 8 points.
Beware that the point correspondences must be available, i.e. the points must be known in pairs.
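Since the UPDATE above points to recoverPose(), here is a minimal sketch of that route, assuming calibrated cameras (a known 3x3 camera matrix K) and matched point pairs; the function and variable names are illustrative, not from the original question:

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Relative pose between two views from matched image points of a calibrated camera.
    // Needs at least 5 correspondences (the 5-point algorithm runs inside RANSAC).
    void relativePose(const std::vector<cv::Point2f>& pts1,  // matches in image 1
                      const std::vector<cv::Point2f>& pts2,  // matches in image 2
                      const cv::Mat& K,                      // 3x3 camera matrix
                      cv::Mat& R, cv::Mat& t) {
        cv::Mat inlierMask;
        cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, inlierMask);
        // Picks, among the four decompositions of E, the pose that places the
        // triangulated points in front of both cameras; t is recovered only up to scale.
        cv::recoverPose(E, pts1, pts2, K, R, t, inlierMask);
    }

The translation comes back only up to scale; the estimated distances between camera positions mentioned in the question could then be used to fix that scale.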

How to determine distance of objects from camera using Epipolar Plane Image?

I am working on converting 2D images into a 3D environment. The images were collected from a video shot with a lateral camera motion. The images are then stacked one behind the other, which makes it easy to find correspondences between consecutive images. This stack is called a spatiotemporal volume.
Next I take a slice from the spatiotemporal volume. That slice is called the Epipolar Plane Image.
Using the Epipolar Plane Image, I want to calculate the depth of the objects in the scene and build a 3D environment. I have listed the reference below, but I have not been able to figure out the math described in the paper. Can someone help me figure this out? Any help is appreciated.
Reference
Epipolar-Plane Image Analysis: An Approach to Determining Structure from Motion
The math in this situation is easy and straightforward.
First, let's define the coordinate systems for two overlapping images taken by the same camera with a common focal length.
Say the first camera sits at the origin of the object coordinate system, and its orientation, expressed by three Euler angles, is zero, so the corresponding rotation matrix is the identity matrix.
The second camera is shifted along the base, and since its orientation is the same as that of the first camera, all Euler angles remain zero, which means its rotation matrix is also the identity matrix.
If the images overlap and the orientations are the same, a scene point appears at slightly different image coordinates in the two images, and this difference (the parallax) is what is measured, together with its measurement accuracy.
This geometrical situation can be described by using the Intercept Theorem, as sketched below.
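In the usual normal-case notation (the symbol choice is assumed here: c for the focal length, B for the stereo base, and x', x'' for the image x-coordinates of the same point in the two images), the Intercept Theorem gives the object coordinates roughly as:

    \[
      Z = \frac{c \, B}{x' - x''}, \qquad
      X = \frac{x'}{c}\,Z, \qquad
      Y = \frac{y'}{c}\,Z
    \]

The closer a point is to the camera, the larger its parallax x' - x'', which is exactly the similar-triangles relation the Intercept Theorem expresses.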
As you can see, it's not complicated. But be aware that this solution is certainly not the best, since its base assumption that all orientation angles are the same can't be fulfilled in reality.
If you need to be accurate, then you have to perform a bundle adjustment. However, these equations are often used to determine an approximate solution for this geometric situation, whose values are then used to linearize the collinearity equations.

Project a 2D point from one camera view onto the corresponding 2D point in another camera view of the same scene

I'm using OpenCV in C++ in a multi-view scene with two cameras. I have the intrinsic and extrinsic parameters for both cameras.
I would like to map an (X, Y) point in view 1 to the corresponding point in view 2. I'm slightly unsure how I should use the intrinsic and extrinsic matrices in order to lift the point into 3D world coordinates and finally end up with the new 2D point in view 2.
It is (normally) not possible to take a 2D coordinate in one image and map it into another 2D coordinate without some additional information.
The main problem is that a single point in the left image will map to a line in the right image (an epipolar line). There is an infinite number of possible corresponding locations because depth is a free parameter. Secondly, it's entirely possible that the point doesn't exist in the right image at all, i.e. it's occluded. Finally, it may be difficult to determine exactly which point is the right correspondence, e.g. if there is no texture in the scene or if it contains lots of repeating features.
The fundamental matrix (which you get out of cv::stereoCalibrate anyway) gives you a constraint between points in the two cameras, x'^T F x = 0, but for a given x' there is a whole family of x's that satisfy the equation.
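As an illustration of that constraint, here is a small sketch (F is assumed to come from cv::stereoCalibrate or cv::findFundamentalMat) that turns a pixel in one image into the line its match must lie on in the other:

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Epipolar line in the second image corresponding to a pixel in the first image.
    // The line comes back as (a, b, c) with a*u + b*v + c = 0; every candidate match
    // for xLeft must lie on it, which is why depth remains a free parameter.
    cv::Vec3f epipolarLineInSecondImage(const cv::Point2f& xLeft, const cv::Mat& F) {
        std::vector<cv::Point2f> pts{xLeft};
        std::vector<cv::Vec3f> lines;
        cv::computeCorrespondEpilines(pts, 1, F, lines);  // 1 = points are from image 1
        return lines[0];
    }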
Some possible solutions are as follows:
1. You know the 3D location of a 2D point in one image. Provided that 3D point is in a common coordinate system, you just use cv::projectPoints with the calibration parameters of the other camera you want to project into.
2. You do some sparse feature detection and matching using something like SIFT or ORB. Then you can calculate a homography to map the points from one image to the other. This makes a few assumptions about things being planes. If you Google panorama homography, there are plenty of lecture slides detailing this.
3. You calibrate your cameras, perform an epipolar rectification (cv::stereoRectify, cv::initUndistortRectifyMap, cv::remap) and then run them through a stereo matcher (see the sketch after this list). The output is a disparity map which gives you exactly what you want: a per-pixel mapping from one camera to the other. That is, left[y,x] = right[y, x+disparity_map[y,x]].
(1) is by far the easiest, but it's unlikely you have that information already. (2) is often doable and might be suitable, and as another commenter pointed out will be poor where the planarity assumption fails. (3) is the general (ideal) solution, but has its own drawbacks and relies on the images being amenable to dense matching.
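A hedged sketch of option (3), assuming the calibration outputs (K1, D1, K2, D2, R, T, imageSize) come from cv::stereoCalibrate and the images are 8-bit grayscale; the matcher settings below are illustrative defaults, not tuned values:

    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>

    cv::Mat denseMapping(const cv::Mat& leftImg, const cv::Mat& rightImg,
                         const cv::Mat& K1, const cv::Mat& D1,
                         const cv::Mat& K2, const cv::Mat& D2,
                         const cv::Mat& R, const cv::Mat& T, cv::Size imageSize) {
        // Compute rectification transforms so that epipolar lines become horizontal.
        cv::Mat R1, R2, P1, P2, Q;
        cv::stereoRectify(K1, D1, K2, D2, imageSize, R, T, R1, R2, P1, P2, Q);

        // Build and apply the per-pixel undistort/rectify maps.
        cv::Mat map1x, map1y, map2x, map2y;
        cv::initUndistortRectifyMap(K1, D1, R1, P1, imageSize, CV_32FC1, map1x, map1y);
        cv::initUndistortRectifyMap(K2, D2, R2, P2, imageSize, CV_32FC1, map2x, map2y);
        cv::Mat rectL, rectR;
        cv::remap(leftImg, rectL, map1x, map1y, cv::INTER_LINEAR);
        cv::remap(rightImg, rectR, map2x, map2y, cv::INTER_LINEAR);

        // Semi-global matcher: minDisparity = 0, 96 disparities, 5x5 blocks (illustrative).
        cv::Ptr<cv::StereoSGBM> sgbm = cv::StereoSGBM::create(0, 96, 5);
        cv::Mat disparity16;                       // fixed-point output: disparity * 16
        sgbm->compute(rectL, rectR, disparity16);

        cv::Mat disparity;
        disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);
        return disparity;                          // per-pixel mapping between the views
    }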

Camera pose estimation

I'm currently working on a project that deals with reconstruction from a set of images, in a multi-view stereo approach. As such, I need to know the pose of each image in space. I find matching features using SURF, and from the correspondences I compute the essential matrix.
Now comes the problem: It is possible to decompose the essential matrix with SVD, but this can lead to 4 different results, as I read in a book. How can I obtain the correct one, assuming this is possible?
What other algorithms can I use for this?
Wikipedia says:
It turns out, however, that only one of the four classes of solutions can be realized in practice. Given a pair of corresponding image coordinates, three of the solutions will always produce a 3D point which lies behind at least one of the two cameras and therefore cannot be seen. Only one of the four classes will consistently produce 3D points which are in front of both cameras. This must then be the correct solution.
If you have the extrinsic calibration parameters for the camera in the first frame, or if you assume that it lies at a default calibration, say translation of (0,0,0) and rotation of (0,0,0), then you can determine which of the decompositions is the valid one.
Thanks to Zaphod's answer I was able to solve my problem. Here's what I did:
First I calculated the Essential Matrix (E) from a set of point correspondences in both images.
Using SVD, I decomposed it into 2 solutions. Using the negated essential matrix -E (which satisfies the same constraints) I arrived at 2 more solutions, for a total of 4 possible camera positions and orientations.
Then, for every solution I triangulated the point correspondences and determined which intersections lie in front of both cameras, by taking the dot product of each point's coordinates with each camera's viewing direction. If both are positive, that intersection is in front of both cameras.
In the end, the solution that delivers the most intersections in front of both cameras is the one chosen.
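A sketch of that selection step (the naming is mine, since the post shares no code): decompose E into the four candidate poses and keep the one that triangulates the matches in front of both cameras, which is also what cv::recoverPose automates when the points are still in pixel coordinates.

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // pts1/pts2: matched points in normalized camera coordinates (i.e. already
    // multiplied by K^-1), so the projection matrices below need no intrinsics.
    void pickPoseByCheirality(const cv::Mat& E,
                              const std::vector<cv::Point2f>& pts1,
                              const std::vector<cv::Point2f>& pts2,
                              cv::Mat& bestR, cv::Mat& bestT) {
        cv::Mat R1, R2, t;
        cv::decomposeEssentialMat(E, R1, R2, t);
        const cv::Mat Rs[4] = {R1, R1, R2, R2};
        const cv::Mat ts[4] = {t, -t, t, -t};            // the four candidate poses

        const cv::Mat P0 = cv::Mat::eye(3, 4, CV_64F);   // first camera at the origin
        int bestCount = -1;
        for (int i = 0; i < 4; ++i) {
            cv::Mat P1(3, 4, CV_64F);                    // second camera [R | t]
            Rs[i].copyTo(P1(cv::Rect(0, 0, 3, 3)));
            ts[i].copyTo(P1(cv::Rect(3, 0, 1, 3)));

            cv::Mat X4;                                  // 4xN homogeneous 3D points
            cv::triangulatePoints(P0, P1, pts1, pts2, X4);
            X4.convertTo(X4, CV_64F);

            int inFront = 0;
            for (int c = 0; c < X4.cols; ++c) {
                cv::Mat X = X4.col(c) / X4.at<double>(3, c);      // dehomogenize
                double z0 = X.at<double>(2);                      // depth in camera 1
                cv::Mat Xc2 = Rs[i] * X.rowRange(0, 3) + ts[i];   // point in camera 2 frame
                if (z0 > 0 && Xc2.at<double>(2) > 0) ++inFront;
            }
            // Keep the candidate that places the most points in front of both cameras.
            if (inFront > bestCount) { bestCount = inFront; bestR = Rs[i]; bestT = ts[i]; }
        }
    }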

Correspondence between a set of 3D model points and their image projections

I have a set of 3D points and some images with the projections of these points. I also have the focal length of the camera and the principal point of the images (both resulting from a previously performed camera calibration).
Given these parameters, is there any way to find the correspondence between the 3D points and their image projections automatically? I've looked through some OpenCV documentation but haven't found anything suitable so far. I'm looking for a method that automatically labels the projections and thus establishes the correspondence between them and the 3D points.
The question is not very clear, but I think you mean to say that you have the intrinsic calibration of the camera, but not its location and attitude with respect to the scene (the "extrinsic" part of the calibration).
This problem does not have a unique solution for a general 3d point cloud if all you have is one image: just notice that the image does not change if you move the 3d points anywhere along the rays projecting them into the camera.
If you have one or more images, you know everything about the 3D cloud of points (e.g. the points belong to an object of known shape and size and lie at known locations on it), and you have matched them to their images, then it is a standard "camera resectioning" problem: you just solve for the camera extrinsic parameters that make the 3D points project onto their images.
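For that resectioning case, a minimal sketch using OpenCV's solvePnP, assuming the 3D-2D matches are already established (the helper name and variables here are illustrative):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Solve for the extrinsics (rvec, tvec) that reproject known 3D points onto
    // their measured image projections, given the intrinsics from calibration.
    void resection(const std::vector<cv::Point3f>& objectPts,   // known 3D points
                   const std::vector<cv::Point2f>& imagePts,    // their matched projections
                   const cv::Mat& K, const cv::Mat& distCoeffs,
                   cv::Mat& rvec, cv::Mat& tvec) {
        cv::solvePnP(objectPts, imagePts, K, distCoeffs, rvec, tvec);
        // rvec is a Rodrigues rotation vector; cv::Rodrigues(rvec, R) yields the 3x3 matrix.
    }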
If you have multiple images and you know that the scene is static while the camera is moving, and you can match "enough" 3d points to their images in each camera position, you can solve for the camera poses up to scale. You may want to start from David Nister's and/or Henrik Stewenius's papers on solvers for calibrated cameras, and then look into "bundle adjustment".
If you really want to learn about this (vast) subject, Zisserman and Hartley's book is as good as any. For code, look into libmv, vxl, and the ceres bundle adjuster.
