Does stereo calibration still work if the right image is scaled a bit differently than the left, or vice versa?
No, for two reasons:
The triangulation of the 3D point will be affected
Your correspondences will be inaccurate if you are using a scale-variant interest point detector.
Yes, stereo calibration can still work if the two images are scaled differently. You have to make sure the calibration takes the difference into account (so the default OpenCV version won't work), and for best results you should try to make sure the cameras are synchronized.
It will be less accurate (more correspondence errors, as Jacob notes).
The field of view of the stereo pair will be restricted to the smaller of the two images, and then only to the overlapping area between them.
You will probably have to write your own calibration and rectification code. I'm not aware of any libraries that can do it.
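If you do attempt it in OpenCV anyway, here is a hedged sketch of the non-default route: calibrate each camera individually so each keeps its own focal length (a scale difference between the images shows up as different fx/fy), then fix those intrinsics in the stereo step. The object_points / image_points_* variables and image_size are assumed inputs from your own chessboard detection code.

import cv2

# Calibrate each camera on its own so each keeps independent intrinsics
# (object_points, image_points_*, image_size come from your detection step).
_, K_left, dist_left, _, _ = cv2.calibrateCamera(
    object_points, image_points_left, image_size, None, None)
_, K_right, dist_right, _, _ = cv2.calibrateCamera(
    object_points, image_points_right, image_size, None, None)

# Fix the per-camera intrinsics and estimate only the extrinsics (R, T),
# rather than letting stereoCalibrate assume two identical cameras.
ret, K_left, dist_left, K_right, dist_right, R, T, E, F = cv2.stereoCalibrate(
    object_points, image_points_left, image_points_right,
    K_left, dist_left, K_right, dist_right, image_size,
    flags=cv2.CALIB_FIX_INTRINSIC)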
Four cameras are arranged in a ring. How can I calibrate their relative poses, that is, the poses of the other three cameras relative to camera 0? The difficulties are:
When using a calibration plate, all four cameras cannot see it at the same time; at most two cameras see it at once. So cam1 can be calibrated relative to cam0 directly, but cam2 can only be related to cam0 indirectly (for example via cam1), and the indirect calculation accumulates errors;
Even when calibrating only two cameras, such as cam0 and cam1, the calibration plate appears tilted to both cameras and its angle can only be varied over a small range, which also causes errors.
Is there a better way to calibrate? Thank you.
There are many methods, and many papers have been written about this.
The simplest way is to calibrate two cameras at a time, choosing the pair with the largest common FOV; a hedged sketch of how the pairwise results chain together follows at the end of this answer. But there are other methods as well.
You can use a structure-from-motion-based method: move the camera around and jointly optimize for the camera poses. It was first published at CVPR sometime between 2010 and 2016 (I forget the exact year), but it is about camera calibration with minimal or zero overlap.
You can add an IMU and use Kalibr to calibrate them, anchoring all cameras to the IMU: https://github.com/ethz-asl/kalibr/wiki/camera-imu-calibration.
An alternative that I frequently use is the robotics hand-eye calibration system used in VINS-Mono (https://github.com/HKUST-Aerial-Robotics/VINS-Mono). The VINS-Mono one requires no complicated pattern, just moving the rig around.
For my paper, we used the sea-level vanishing line and vanishing point to calibrate cameras that cannot see the same chessboard pattern in a shared view:
Han Wang, Wei Mou, Xiaozheng Mou, Shenghai Yuan, Soner Ulun, Shuai Yang, Bok-Suk Shin, "An Automatic Self-Calibration Approach for Wide Baseline Stereo Cameras Using Sea Surface Images", Unmanned Systems, vol. 3, no. 4, pp. 277-290, 2015.
There are others as well, such as using a Vicon motion-tracking system, among many other methods. Just find one that you think is suitable for your setup and try it out.
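To make the pairwise approach concrete, here is a hedged numpy sketch (all variable names are hypothetical) of chaining two pairwise calibrations to get cam2's pose relative to cam0 when those two cameras share no common view:

import numpy as np

def to_homogeneous(R, t):
    # Pack a 3x3 rotation and a translation vector into a 4x4 transform.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = np.asarray(t).ravel()
    return T

# Suppose pairwise stereo calibration produced:
#   R_01, t_01: pose of cam1 in cam0's frame
#   R_12, t_12: pose of cam2 in cam1's frame
T_0_1 = to_homogeneous(R_01, t_01)
T_1_2 = to_homogeneous(R_12, t_12)

# cam2 relative to cam0 is the composition of the two transforms. The errors
# of both pairwise calibrations accumulate here, which is exactly the
# indirect-calculation error the question describes.
T_0_2 = T_0_1 @ T_1_2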
I want to compute the extrinsic calibration of two cameras with respect to each other and am using the cv::stereoCalibrate() function to do this. However, the result does not correspond to reality. What could be wrong?
Setup: two cameras mounted 7 meters high, facing each other while looking downwards. Their fields of view overlap substantially, and I captured checkerboard images that I used for calibration.
I am not flipping any of the images.
Do I need to flip the images, or do I need to do something else to tell the function that the cameras are actually facing each other?
Note: The same function perfectly calibrates cameras that are next to each other facing in the same direction (like any typical stereo camera).
Thanks
In order to "tell that the cameras are actually facing each other" you have to specify imagePoints1 and imagePoints2 correctly, such that points with matching indices correspond to the same physical point.
If in your case the function works perfectly when the cameras are oriented in the same direction but fails with your configuration, a discrepancy in point indexing is a probable reason (most likely the points are flipped both vertically and horizontally).
One way to debug this is to either draw indices near the points on each of the frames, or color-code them and make sure they match between the images.
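As a hedged sketch of the index-drawing idea (assuming a standard chessboard pipeline; the pattern size and file names below are placeholders):

import cv2

img = cv2.imread("left_01.png")  # placeholder file name
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (9, 6))  # assumed pattern size

if found:
    # Label each detected corner with its index so the ordering can be
    # compared visually between the left and right frames.
    for i, (x, y) in enumerate(corners.reshape(-1, 2)):
        cv2.putText(img, str(i), (int(x), int(y)),
                    cv2.FONT_HERSHEY_PLAIN, 1.0, (0, 0, 255))
    cv2.imwrite("left_01_indexed.png", img)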
One question, though: why do you use cv::stereoCalibrate()? The setting you described doesn't seem to be a good use case for it. If you want to estimate the extrinsic parameters of the cameras, you can use cv::calibrateCamera(). The only downside is that it assumes the intrinsic parameters are the same for all provided views (i.e., all images were taken with the same or very similar cameras). If that is not the case, cv::stereoCalibrate() would indeed be a better fit (but the manual suggests that you still estimate each camera's intrinsic parameters individually using cv::calibrateCamera()).
I want to compute a depth map from stereo images. At present I am working with images from the internet, but I want to capture stereo images myself so I can work on my own data. How do I take good stereo images without much noise? I have a single camera. Is it necessary to do rectification? How much distance must be kept between the cameras?
Not sure I've understood your problem correctly - will try anyway.
I guess you're currently working with images from Middlebury or something similar. If you want to use similar algorithms you have to rectify your images, because those algorithms are based on the assumption that corresponding pixels lie on the same line in all images. If you actually want depth images (!= disparity images) you also need the camera extrinsics.
Your setup should have two cameras, and you have to make sure that they don't change their relative position/orientation - otherwise your rectification will break. In the first step you have to calibrate your system to get the intrinsic and extrinsic camera parameters. For that you can either use some tool or roll your own with (for example) OpenCV (the calib module). Print out a calibration board to calibrate your system. Afterwards you can take images and use the calibration to rectify them.
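A hedged Python sketch of that calibrate-then-rectify pipeline (K1/d1, K2/d2, R, T are assumed to come from cv2.calibrateCamera / cv2.stereoCalibrate as described; the variable names are placeholders):

import cv2

# Compute rectifying rotations and new projection matrices from the
# calibration results.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, d1, K2, d2, image_size, R, T)

# Build the per-pixel remapping tables once...
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)

# ...then apply them to every captured pair; corresponding points now lie
# on the same image row, as the matching algorithms assume.
left_rect = cv2.remap(left_img, map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_img, map2x, map2y, cv2.INTER_LINEAR)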
Regarding color noise:
You could make your aperture very small and use long exposure times. In my opinion this is pointless, because real-world situations have to deal with such noise anyway.
In short, there are plenty of stereo images on the internet that are already rectified. If you want to take your own stereo images you have to follow these three steps:
The relationship between the distance to the object z (mm) and the disparity D in pixels is inverse: z = fb/D, where f is the focal length in pixels and b is the camera separation in mm. Select b such that you get at least several pixels of disparity (a small numeric sketch follows this list);
If you know the camera intrinsic matrix and have compensated for radial distortion, you still have to rectify your images in order to ensure that matches are located in the same row. For this you need to find a fundamental matrix, recover the essential matrix, apply rectifying homographies, and update your intrinsic camera parameters... or use stereo pairs from the Internet.
Image noise is reduced by brightly lit scenes, a large aperture, a large pixel size, etc.; however, depending on your setup you can still end up with a very noisy disparity map. The way to reduce this noise is to trade off accuracy and use larger correlation windows. Another way to clean up a disparity map is to use various validation techniques, such as:
error validation;
uniqueness validation or back-and-forth (left-right consistency) validation;
blob-noise suppression, etc.
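Here is a small numeric sketch of the z = fb/D relationship from the first step, with made-up numbers, to show why the baseline matters:

# Depth from disparity: z = f * b / D (illustrative numbers only).
f = 1200.0  # focal length in pixels (assumed)
b = 100.0   # camera separation (baseline) in mm (assumed)

for D in (2, 8, 32):  # disparity in pixels
    z = f * b / D     # distance to the object in mm
    print(f"disparity {D:2d} px -> depth {z:8.1f} mm")

At 2 px of disparity a 1 px matching error changes the depth estimate by tens of percent, which is why b should be chosen to give at least several pixels of disparity at your working distance.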
In my experience:
- I did the rectification myself, so I had to obtain the fundamental matrix, and this may not be correct for some image pairs.
- Higher camera resolution is better for the matching. I use OpenCV, and its implementation of the BRISK descriptor was useful for me.
- Try to cover the same area in both views and avoid unnecessary rotations.
- Once you understand the theory, OpenCV is a good friend. Here are some results, but I am still working on it:
Depth map: [image]
Rectified images: [image]
I'm currently working on a project that deals with reconstruction from a set of images, in a multi-view stereo approach. As such I need to know the poses of the several images in space. I find matching features using SURF, and from the correspondences I compute the essential matrix.
Now comes the problem: It is possible to decompose the essential matrix with SVD, but this can lead to 4 different results, as I read in a book. How can I obtain the correct one, assuming this is possible?
What other algorithms can I use for this?
Wikipedia says:
It turns out, however, that only one of the four classes of solutions can be realized in practice. Given a pair of corresponding image coordinates, three of the solutions will always produce a 3D point which lies behind at least one of the two cameras and therefore cannot be seen. Only one of the four classes will consistently produce 3D points which are in front of both cameras. This must then be the correct solution.
If you have the extrinsic calibration parameters for the camera in the first frame, or if you assume that it sits at a default pose, say a translation of (0,0,0) and a rotation of (0,0,0), then you can determine which of the decompositions is the valid one.
Thanks to Zaphod's answer I was able to solve my problem. Here's what I did:
First I calculated the Essential Matrix (E) from a set of point correspondences in both images.
Using SVD, I decomposed it into 2 solutions. Using the negated essential matrix -E (which also satisfies the same constraints) I arrived at 2 more solutions, for a total of 4 possible camera positions and orientations.
Then, for all solutions, I triangulated the point correspondences and determined which ones intersected in front of both cameras by taking the dot product of the triangulated point's coordinates with each camera's viewing direction. If both dot products are positive, the intersection is in front of both cameras.
In the end, the solution that delivers the most intersections in front of the cameras is the chosen one.
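For reference, modern OpenCV wraps this pick-the-solution-with-points-in-front-of-both-cameras test (the cheirality check) in a single call. A hedged sketch, assuming pts1/pts2 are matched pixel coordinates from the two views and K is the intrinsic matrix:

import cv2

# Estimate the essential matrix from the matched points (RANSAC guards
# against bad correspondences).
E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)

# recoverPose internally generates the four candidate (R, t) decompositions
# and returns the one with the most triangulated points in front of both
# cameras - the same test described above.
n_good, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)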
I'm currently implementing stereo vision with OpenCV. I'm using the Stereo_Calib sample to remove the distortion and rectify the images. Removing the distortion works fine.
But when I apply rectification, the image is very warped.
This is the code that rectifies the images. The rmap parameters are calculated in the same way as in the Stereo_calib example (see here):
void StereoCalibration::StereoRectify(Mat &imageLeft, Mat &imageRight)
{
    Mat imLeft, imRight;
    // Apply the precomputed undistortion + rectification maps to each image.
    remap(imageLeft, imLeft, DistLeft.rmap[0], DistLeft.rmap[1], CV_INTER_CUBIC);
    remap(imageRight, imRight, DistRight.rmap[0], DistRight.rmap[1], CV_INTER_CUBIC);
    // Replace the inputs with the rectified results.
    imageLeft = imLeft;
    imageRight = imRight;
}
I realise this question is a few years old; however, I recently had a similar issue. Building on morynicz's answer about "bad chessboard" patterns used to calibrate stereo images, I found that even a slight deformation in your chessboard pattern, for example if it isn't flat, can produce large warping in the stereo image pair on rectification. The algorithms in OpenCV, for instance, assume a flat chessboard pattern is being presented, so any physical deformation in that pattern will be wrongly attributed to distortions in the camera optics (or to the relative orientation of the two camera sensors). The algorithms will then try very hard to remove this false distortion, leading to very warped images.
To avoid this problem, where possible, use a tablet (or another electronic screen) to display the chessboard pattern, since it is then guaranteed to be flat.
Additionally, you should check that the images you are using to calibrate the stereo pair are in focus and have no motion blur or image tearing.
If using OpenCV to do the rectification, experiment with the flags passed to the stereoCalibrate function, as this may lead to a more "optimised" rectification for your particular application.
For anyone looking for help on this: I was dealing with very high-resolution images and was getting a very low reprojection error with good calibration images, yet very warped stereo pairs after rectification and a really bad depth map.
One thing to try if your images come out warped is to down-sample them.
Another thing to try is to combine several flags in stereoCalibrate instead of choosing just one.
Something like this worked for me:
(ret, camera_matrix_left, dist_left,
 camera_matrix_right, dist_right,
 R, T, E, F) = cv2.stereoCalibrate(
    object_points, image_points_left, image_points_right,
    camera_matrix_left, dist_left,
    camera_matrix_right, dist_right,
    (5472, 3648),             # image size in pixels
    None, None, None, None,   # output placeholders for R, T, E, F
    cv2.CALIB_FIX_ASPECT_RATIO | cv2.CALIB_ZERO_TANGENT_DIST |
    cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_SAME_FOCAL_LENGTH |
    cv2.CALIB_RATIONAL_MODEL,
    criteria)
I had the same problem, and I think the issue was either a bad chessboard used for calibration or mixing up the maps.
I started working on OpenCV stereo image calibration and rectification recently and was getting similar images. While it is true that the board must be flat and that you need to take multiple images at the corners and in the middle of the frame, at different x, y, z and skew positions, what did the trick for me was the flags in stereoCalibrate. I used all the flags specified in the OpenCV docs except for CALIB_USE_INTRINSIC_GUESS, and it started producing very nicely undistorted and rectified images.