Should stereo projection be internally consistent? - opencv

I'm working on a problem where I have a calibrated stereo pair and am identifying stereo matches. I then project those matches using perspectiveTransform to give me (x, y, z) coordinates.
Later, I'm taking those coordinates and reprojecting them into my original, unrectified image using projectPoints, which takes my left camera's M and D parameters. I was surprised to find that, despite all of this happening within the same calibration, the points do not project onto the correct part of the image (they have about a 5-pixel offset, depending on where they are in the image). This offset seems to change with different calibrations.
My question is: should I expect this, or am I likely doing something wrong? It seems like the calibration ought to be internally consistent.
Here's a screenshot of just a single point being remapped (drawn with the two lines):
(ignore the little boxes, those are something else)

I was doing something slightly wrong. When reprojecting from 3D to 2D, I missed that stereoRectify returns R1, the output rectification rotation matrix. When calling projectPoints, I needed to pass the inverse of that matrix as the second parameter (rvec).
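A minimal sketch of that fix, assuming the usual stereoRectify outputs and the left camera's intrinsics (the variable names here are illustrative, not the poster's):

    // Reproject rectified 3D points back into the original (unrectified) left image.
    // M1/D1: left camera matrix and distortion, R1: rectification rotation from cv::stereoRectify.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    std::vector<cv::Point2f> reprojectToLeftImage(
        const std::vector<cv::Point3f>& points3d,   // e.g. from cv::perspectiveTransform(..., Q)
        const cv::Mat& M1, const cv::Mat& D1, const cv::Mat& R1)
    {
        // The 3D points live in the rectified left camera frame, so undo the
        // rectification rotation before projecting with the original intrinsics.
        cv::Mat rvec;
        cv::Rodrigues(R1.t(), rvec);                // inverse of R1, as a rotation vector
        cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);

        std::vector<cv::Point2f> points2d;
        cv::projectPoints(points3d, rvec, tvec, M1, D1, points2d);
        return points2d;
    }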

Related

How to check whether Essential Matrix is correct or not without decomposing it?

On a very high level, my pose estimation pipeline looks somewhat like this (see the sketch after this list):
Find features in image_1 and image_2 (let's say cv::ORB)
Match the features (let's say using the BruteForce-Hamming descriptor matcher)
Calculate Essential Matrix (using cv::findEssentialMat)
Decompose it to get the proper rotation matrix and translation unit vector (using cv::recoverPose)
Repeat
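A rough sketch of these steps, for reference (K and the RANSAC parameters are assumed values; the functions are the OpenCV ones named above):

    // Sketch of the monocular pose-estimation loop described above.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/features2d.hpp>
    #include <vector>

    int relativePose(const cv::Mat& img1, const cv::Mat& img2, const cv::Mat& K,
                     cv::Mat& R, cv::Mat& t)
    {
        // 1. Find features
        cv::Ptr<cv::ORB> orb = cv::ORB::create(2000);
        std::vector<cv::KeyPoint> kp1, kp2;
        cv::Mat desc1, desc2;
        orb->detectAndCompute(img1, cv::noArray(), kp1, desc1);
        orb->detectAndCompute(img2, cv::noArray(), kp2, desc2);

        // 2. Match them (brute force, Hamming norm for binary descriptors)
        cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
        std::vector<cv::DMatch> matches;
        matcher.match(desc1, desc2, matches);

        std::vector<cv::Point2f> pts1, pts2;
        for (const cv::DMatch& m : matches) {
            pts1.push_back(kp1[m.queryIdx].pt);
            pts2.push_back(kp2[m.trainIdx].pt);
        }

        // 3. Essential matrix with RANSAC
        cv::Mat mask;
        cv::Mat E = cv::findEssentialMat(pts1, pts2, K, cv::RANSAC, 0.999, 1.0, mask);

        // 4. Decompose it; the return value is the number of inliers passing the
        //    cheirality check (0 means the estimate cannot be trusted).
        return cv::recoverPose(E, pts1, pts2, K, R, t, mask);
    }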
I noticed that at some point, the yaw angle (calculated using the output rotation matrix R of cv::recoverPose) suddenly jumps by more than 150 degrees. For that particular frame, the number of inliers is 0 (the return value of cv::recoverPose). So, to understand what exactly that means and what's going on, I asked this question on SO.
As per the answer to my question:
So, if the number of inliers is 0, then something went very wrong. Either your E is wrong, or the point matches are wrong, or both. In this case you simply cannot estimate the camera motion from those two images.
For that particular image pair, based on the visualization and from my understanding, matches look good:
The next step in the pipeline is finding the Essential Matrix. So, now, how can I check whether the calculated Essential Matrix is correct or not without decomposing it, i.e. without calculating roll, pitch, and yaw angles (which can be done by finding the rotation matrix via cv::recoverPose)?
Basically, I want to double-check whether my Essential Matrix is correct or not before I move on to the next component (which is cv::recoverPose) in the pipeline!
The essential matrix maps each point p in image 1 to its epipolar line in image 2. The point q in image 2 that corresponds to p must be very close to the line. You can plot the epipolar lines and the matching points to see if they make sense. Remember that E is defined in normalized image coordinates, not in pixels. To get the epipolar lines in pixel coordinates, you would need to convert E to F (the fundamental matrix).
The epipolar lines must all intersect in one point, called the epipole. The epipole is the projection of the other camera's center in your image. You can find the epipole by computing the nullspace of F.
If you know something about the camera motion, then you can determine where the epipole should be. For example, if the camera is mounted on a vehicle that is moving directly forward, and you know the pitch and roll angles of the camera relative to the ground, then you can calculate exactly where the epipole will be. If the vehicle can turn in-plane, then you can find the horizontal line on which the epipole must lie.
You can get the epipole from F, and see if it is where it is supposed to be.
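A sketch of both checks, assuming you also have the camera matrix K and the matched points in pixel coordinates (names are illustrative, and E and K are assumed to be CV_64F):

    // Sanity-check E by (a) measuring point-to-epipolar-line distances and
    // (b) extracting the epipole in image 2 from F.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <cmath>
    #include <vector>

    void checkEssential(const cv::Mat& E, const cv::Mat& K,
                        const std::vector<cv::Point2f>& pts1,
                        const std::vector<cv::Point2f>& pts2)
    {
        // Convert E (normalized coordinates) to F (pixel coordinates): F = K^-T * E * K^-1
        cv::Mat F = K.inv().t() * E * K.inv();

        // (a) Distance of each point in image 2 to the epipolar line of its match in image 1
        std::vector<cv::Vec3f> lines2;
        cv::computeCorrespondEpilines(pts1, 1, F, lines2);
        double meanDist = 0.0;
        for (size_t i = 0; i < pts2.size(); ++i) {
            const cv::Vec3f& l = lines2[i];   // a*x + b*y + c = 0, normalized so a^2 + b^2 = 1
            meanDist += std::abs(l[0] * pts2[i].x + l[1] * pts2[i].y + l[2]);
        }
        meanDist /= pts2.size();              // should be on the order of a pixel for a good E

        // (b) The epipole in image 2 is the left null vector of F (smallest singular value),
        //     assuming it is finite (third component not close to zero).
        cv::Mat w, u, vt;
        cv::SVD::compute(F, w, u, vt);
        cv::Mat epipole2 = u.col(2) / u.at<double>(2, 2);   // homogeneous -> pixel coordinates
    }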

Image point to point matching using intrinsics, extrinsics and third-party depth

I want to reopen a question similar to one somebody posted a while ago, with one major difference.
The previous post is https://stackoverflow.com/questions/52536520/image-matching-using-intrinsic-and-extrinsic-camera-parameters
and my question is: can I do the matching if I do have the depth?
If it is possible, can someone describe a set of formulas which I have to solve to get the desired matching?
There is also some material on this correspondence on slide 16/43 here:
Depth from Stereo Lecture
What units are all the variables here in? Can someone clarify, please?
Will this formula help me to calculate the desired point-to-point correspondence?
I know Z (mm, cm, m, whatever unit it is) and x_l (I guess this is the y coordinate of the pixel, so both x_l and x_r are on the same horizontal line; correct me if I'm wrong). I'm not sure whether T is in mm (or cm, m, i.e. a distance unit) and whether f is in pixels/mm (a distance unit) or something else.
Thank you in advance.
EDIT:
So, as was said by #fana, the solution is indeed a projection.
To my understanding it is P(v) = K(Rv + t), where R is the 3 x 3 rotation matrix (obtained, for example, from calibration), t is the 3 x 1 translation vector, and K is the 3 x 3 intrinsics matrix.
from the following video:
It can be seen that there is translation only in one dimension (because the images there are parallel, so the translation takes place only on the X-axis), but in other situations, as far as I understand, if the cameras are not on the same parallel line, there is also translation on the Y-axis. What is the translation on the Z-axis which I get through the calibration? Is it some rescale factor due to, for example, different image resolutions? Did I write the projection formula correctly for the general case?
I also want to ask about the whole idea.
Suppose I have 3 cameras: one with a large FOV which gives me color and depth for each pixel (a 3D tensor, color stacked with the corresponding depth); let's call it the first. The other two are the ones I want to do stereo with; let's call them the second and the third.
Instead of calibrating those two cameras against each other, my idea is to use the depth from the first camera to calculate the xyz of pixel (u, v) in its corresponding color frame (which can be done easily), and then to project it onto the second and third images using the R, t found by calibrating the first camera against the second and the third, and using their K intrinsics matrices. So the projection matrix seems to be fully known; am I right?
Assume for this case that the FOV of the color camera is big enough to include everything that can be seen from the second and the third cameras.
That way, by projecting each x, y, z of the first camera, I can know where the corresponding pixels are in the two other cameras. Is that correct?
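If it helps, here is a minimal sketch of the projection described above, assuming K1 and the depth come from the first (color + depth) camera and K2, R, t come from calibrating the second (or third) camera against the first; all names are illustrative:

    // Back-project pixel (u, v) of camera 1 using its depth Z, then project the
    // resulting 3D point into camera 2 with P(v) = K2 * (R * X + t).
    // K1, K2: 3x3 intrinsics (CV_64F); R, t map camera-1 coordinates into the
    // camera-2 frame; Z is in the same unit as t.
    #include <opencv2/core.hpp>

    cv::Point2d projectIntoCamera2(double u, double v, double Z,
                                   const cv::Mat& K1, const cv::Mat& K2,
                                   const cv::Mat& R, const cv::Mat& t)
    {
        // Back-projection: X1 = Z * K1^-1 * [u, v, 1]^T (3D point in the camera-1 frame)
        cv::Mat pix = (cv::Mat_<double>(3, 1) << u, v, 1.0);
        cv::Mat X1 = Z * (K1.inv() * pix);

        // Transform into the camera-2 frame and project with its intrinsics
        cv::Mat x2 = K2 * (R * X1 + t);

        // Perspective divide to get pixel coordinates in image 2
        return cv::Point2d(x2.at<double>(0) / x2.at<double>(2),
                           x2.at<double>(1) / x2.at<double>(2));
    }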

Undistort single point after calibration

I have a question regarding undistortion of a single point using either the Scaramuzza or the Mei model in OpenCV.
I have done the calibration on a dataset and extracted the camera matrix and distortion coefficients (for Mei) and the necessary parameters for Scaramuzza. After getting mapx (map1) and mapy (map2), I want to apply the undistortion to a single point.
For Mei:
We have the position of a point (an intersection on a chessboard) in a fisheye image. I was able to find its position using findChessboardCorners (I know this is normally used for calibration, but I want to know the position of a well-known point in the image). Now I have the undistorted image and I want to know the position of that point after the distortion correction.
I have read many links suggesting to use the undistortPoints method or the remap method, and links describing that dst(x,y) = src(mapx(x,y), mapy(x,y)). I applied them all, but when I drew the resulting point it wasn't on the same intersection of the chessboard; it was even outside the board, closer to its position in the fisheye image.
For Scaramuzza:
I tried to understand the world2cam and cam2world methods, but I still can't get it right.
So:
Is there a method to find the position of a single point after the distortion correction if we have its position before the correction? Also, can someone explain mapx and mapy in more depth? I have read examples about them and how they can be used, but whenever I want to implement the mapping between the distorted point and the undistorted one I get confused. For example, mapx and mapy should have the size of the src (in my case it is a single point), so how can I use the remap method here? Or should I get them from the camera matrix and distortion coefficients and use dst(x,y) = src(map1(x,y), map2(x,y))?
Note:
I have applied estimateNewCameraMatrixForUndistortRectify, initUndistortRectifyMap and remap successfully on images (for Mei's model), and I have also applied the undistortion method implemented by Scaramuzza on images, with a very satisfying result (better than Mei's).
I was able to solve it with the undistortPoints OpenCV function. The problem was that I did not use fisheye::undistortPoints but the original one. The surrounding points are still not exactly in their right positions, but the result was acceptable.
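For reference, a minimal sketch of mapping a single distorted pixel with the fisheye model (K and D are the calibrated fisheye camera matrix and distortion coefficients; Knew is whatever new camera matrix was used to build the undistorted image, e.g. the one from cv::fisheye::estimateNewCameraMatrixForUndistortRectify):

    // Map one distorted pixel to its location in the undistorted image.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    cv::Point2f undistortOnePoint(const cv::Point2f& distorted,
                                  const cv::Mat& K, const cv::Mat& D,
                                  const cv::Mat& Knew)
    {
        std::vector<cv::Point2f> src(1, distorted), dst;
        // R = identity (no rectification); passing P = Knew makes the output pixel
        // coordinates in the undistorted image instead of normalized coordinates.
        cv::fisheye::undistortPoints(src, dst, K, D, cv::Mat::eye(3, 3, CV_64F), Knew);
        return dst[0];
    }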

Camera calibration for single plane

My problem statement is very simple, but I am unable to get the OpenCV calibration to work for me. I am using the code from here: source code.
I have to take images parallel to the camera at a fixed distance. I tried taking test images (about 20 of them), both only parallel to the camera and at different planes. I also changed the size and the number of squares.
What would be the best way to calibrate in this scenario?
The undistorted image is cropped later, that's why it looks smaller.
After going through the images closely, the pincushion distortion seems to have been corrected, but the "trapezoidal" distortion still remains. Since the camera is mounted in a closed box, the planes at which I can take images are limited.
To simplify what Vlad already said: it is theoretically impossible to calibrate your camera with test images that are only parallel to the camera. You have to change your calibration board's orientation. In fact, you should have a different orientation in each test image.
Check out the first two images in the link below to see how the calibration board should be slanted (or tilted):
http://www.vision.caltech.edu/bouguetj/calib_doc/
Think of the calibration problem as finding a projection matrix P:
image_points = P * 3d_points, where P = intrinsic * extrinsic
Now just bear with me:
You are basically interested in the intrinsic part, but the calibration algorithm has to find both the intrinsics and the extrinsics. Now, each column of the projection matrix can be obtained if you select a 3D point at infinity, for example xInf = [1, 0, 0, 0]. This point is at infinity because when you transform it from homogeneous to Cartesian coordinates you get [1/0, 0, 0]. If you multiply the projection matrix with a point at infinity you get its corresponding column (1st for xInf, 2nd for yInf, 3rd for zInf, and 4th for the camera center).
Thus the conclusion is simple: to get a projection matrix (that is, a successful calibration) you have to clearly see points at infinity, or vanishing points, from the converging extensions of lines in your chessboard rig (like the end of railroad tracks at the horizon). Your images don't make it easy to detect vanishing points, since you don't slant your chessboard, nor rotate it, nor scale it by stepping back. Thus your calibration will always fail.
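For completeness, a bare-bones calibration loop (a sketch; the board size and square size below are assumed values). The important part is that the views fed into it show the board tilted, rotated, and at different distances, not all frontal:

    // Standard chessboard calibration loop; 'images' should contain grayscale
    // views of the board in many different orientations.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>
    #include <vector>

    double calibrate(const std::vector<cv::Mat>& images, cv::Mat& K, cv::Mat& dist)
    {
        const cv::Size boardSize(9, 6);     // inner corners per row/column (assumed)
        const float squareSize = 25.0f;     // square size in mm (assumed)

        // One planar reference pattern, reused for every view
        std::vector<cv::Point3f> objectCorners;
        for (int y = 0; y < boardSize.height; ++y)
            for (int x = 0; x < boardSize.width; ++x)
                objectCorners.push_back(cv::Point3f(x * squareSize, y * squareSize, 0.f));

        std::vector<std::vector<cv::Point3f>> objectPoints;
        std::vector<std::vector<cv::Point2f>> imagePoints;
        for (const cv::Mat& gray : images) {
            std::vector<cv::Point2f> corners;
            if (!cv::findChessboardCorners(gray, boardSize, corners))
                continue;                   // board not detected in this view
            cv::cornerSubPix(gray, corners, cv::Size(11, 11), cv::Size(-1, -1),
                             cv::TermCriteria(cv::TermCriteria::EPS + cv::TermCriteria::COUNT, 30, 0.01));
            imagePoints.push_back(corners);
            objectPoints.push_back(objectCorners);
        }

        std::vector<cv::Mat> rvecs, tvecs;  // per-view extrinsics; the rvecs should vary a lot
        return cv::calibrateCamera(objectPoints, imagePoints, images[0].size(),
                                   K, dist, rvecs, tvecs);  // returns the RMS reprojection error
    }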

Distance to the object using stereo camera

Is there a way to calculate the distance to a specific object using a stereo camera?
Is there an equation or something to get distance using disparity or angle?
NOTE: Everything described here can be found in the Learning OpenCV book in the chapters on camera calibration and stereo vision. You should read these chapters to get a better understanding of the steps below.
One approach that does not require you to measure all the camera intrinsics and extrinsics yourself is to use OpenCV's calibration functions. Camera intrinsics (lens distortion/skew etc.) can be calculated with cv::calibrateCamera, while the extrinsics (the relation between the left and right camera) can be calculated with cv::stereoCalibrate. These functions take a number of points in pixel coordinates and try to map them to real-world object coordinates. OpenCV has a neat way to get such points: print out a black-and-white chessboard and use the cv::findChessboardCorners/cv::cornerSubPix functions to extract them. Around 10-15 image pairs of chessboards should do.
The matrices calculated by the calibration functions can be saved to disc so you don't have to repeat this process every time you start your application. You get some neat matrices here that allow you to create a rectification map (cv::stereoRectify/cv::initUndistortRectifyMap) that can later be applied to your images using cv::remap. You also get a neat matrix called Q, which is a disparity-to-depth matrix.
The reason to rectify your images is that once the process is complete for a pair of images (assuming your calibration is correct), every pixel/object in one image can be found on the same row in the other image.
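A sketch of that rectification step, assuming the calibration already produced the two camera matrices/distortion vectors (M1, D1, M2, D2) and the inter-camera rotation and translation (R, T); names are illustrative:

    // From a finished stereo calibration to rectified images plus the Q
    // (disparity-to-depth) matrix.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/imgproc.hpp>

    void rectifyPair(const cv::Mat& leftImg, const cv::Mat& rightImg,
                     const cv::Mat& M1, const cv::Mat& D1,
                     const cv::Mat& M2, const cv::Mat& D2,
                     const cv::Mat& R, const cv::Mat& T,
                     cv::Mat& leftRect, cv::Mat& rightRect, cv::Mat& Q)
    {
        cv::Size size = leftImg.size();
        cv::Mat R1, R2, P1, P2;
        cv::stereoRectify(M1, D1, M2, D2, size, R, T, R1, R2, P1, P2, Q);

        cv::Mat map1x, map1y, map2x, map2y;
        cv::initUndistortRectifyMap(M1, D1, R1, P1, size, CV_32FC1, map1x, map1y);
        cv::initUndistortRectifyMap(M2, D2, R2, P2, size, CV_32FC1, map2x, map2y);

        // After remapping, corresponding points lie on the same image row
        cv::remap(leftImg, leftRect, map1x, map1y, cv::INTER_LINEAR);
        cv::remap(rightImg, rightRect, map2x, map2y, cv::INTER_LINEAR);
    }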
There are a few ways you can go from here, depending on what kind of features you are looking for in the image. One way is to use OpenCV's stereo correspondence functions, such as Stereo Block Matching or Semi-Global Block Matching. This will give you a disparity map for the entire image, which can be transformed into 3D points using the Q matrix (cv::reprojectImageTo3D).
The downside of this is that unless there is a lot of texture information in the image, OpenCV isn't really very good at building a dense disparity map (you will get gaps in it where it couldn't find the correct disparity for a given pixel), so another approach is to find the points you want to match yourself. Say you find the feature/object at x=40, y=110 in the left image and at x=22 in the right image (since the images are rectified, they should have the same y-value). The disparity is calculated as d = 40 - 22 = 18.
Construct a cv::Point3f(x, y, d), in our case (40, 110, 18). Find other interesting points the same way, then send all of the points to cv::perspectiveTransform (with the Q matrix as the transformation matrix; essentially this function is cv::reprojectImageTo3D but for sparse disparity maps), and the output will be points in an XYZ coordinate system with the left camera at the center.
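A small sketch of that last step, assuming Q came from cv::stereoRectify and the pixel coordinates are taken from the rectified images:

    // Turn a few hand-picked correspondences into 3D points using Q.
    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>
    #include <vector>

    std::vector<cv::Point3f> sparseTriangulate(const cv::Mat& Q)
    {
        // (x_left, y, disparity) triplets; the example point from the text above
        std::vector<cv::Point3f> xyd;
        xyd.push_back(cv::Point3f(40.f, 110.f, 40.f - 22.f));   // d = 18

        std::vector<cv::Point3f> xyz;           // output, in left-camera coordinates
        cv::perspectiveTransform(xyd, xyz, Q);  // sparse equivalent of cv::reprojectImageTo3D
        return xyz;
    }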
I am still working on it, so I will not post the entire source code yet. But I will give you a conceptual solution.
You will need the following data as input (for both cameras):
camera position
camera point of interest (point at which camera is looking)
camera resolution (horizontal and vertical)
camera field of view angles (horizontal and vertical)
You can measure the last one yourself by placing the camera on a piece of paper, drawing two lines, and measuring the angle between them.
Cameras do not have to be aligned in any way; you only need to be able to see your object in both cameras.
Now calculate a vector from each camera to your object. You have the (X, Y) pixel coordinates of the object from each camera, and you need to calculate a vector (X, Y, Z). Note that in the simple case, where the object is seen right in the middle of the camera, the solution would simply be (camera.PointOfInterest - camera.Position).
Once you have both vectors pointing at your target, the lines defined by these vectors should cross at one point in an ideal world. In the real world they will not, because of small measurement errors and the limited resolution of the cameras. So use the link below to calculate the distance vector between the two lines.
Distance between two lines
In that link: P0 is your first camera position, Q0 is your second camera position, and u and v are vectors starting at the camera positions and pointing at your target.
You are not interested in the actual distance they calculate there. You need the vector Wc; we can assume that the object is in the middle of Wc. Once you have the position of your object in 3D space, you can also get whatever distance you like.
I will post the entire source code soon.
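A rough sketch of the closest-point computation from that link: the object position is approximated by the midpoint of the shortest segment between the two rays (P0, Q0 are the camera positions; u, v are the direction vectors toward the target):

    // Approximate the 3D target as the midpoint of the shortest segment between
    // two rays: P0 + s*u (camera 1) and Q0 + t*v (camera 2).
    #include <opencv2/core.hpp>

    cv::Point3d intersectRays(const cv::Point3d& P0, const cv::Vec3d& u,
                              const cv::Point3d& Q0, const cv::Vec3d& v)
    {
        cv::Vec3d w0(P0.x - Q0.x, P0.y - Q0.y, P0.z - Q0.z);
        double a = u.dot(u), b = u.dot(v), c = v.dot(v);
        double d = u.dot(w0), e = v.dot(w0);
        double denom = a * c - b * b;        // close to 0 when the rays are nearly parallel
        double s = (b * e - c * d) / denom;
        double t = (a * e - b * d) / denom;

        // Closest point on each ray
        cv::Point3d p1(P0.x + s * u[0], P0.y + s * u[1], P0.z + s * u[2]);
        cv::Point3d p2(Q0.x + t * v[0], Q0.y + t * v[1], Q0.z + t * v[2]);

        // Midpoint of the connecting segment: the estimated object position
        return cv::Point3d((p1.x + p2.x) * 0.5, (p1.y + p2.y) * 0.5, (p1.z + p2.z) * 0.5);
    }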
I have source code for detecting a human face; it returns not only the depth but also the real-world coordinates, with the left camera (or the right camera, I can't remember which) as the origin. It is adapted from the source code of "Learning OpenCV", and I referred to some websites to get it working. The result is generally quite accurate.
