I want to calibrate a stereo camera in C# (with the Emgu library), but the calibration accuracy is very bad. This is an example of the disparity map. Please help me.
Thank you
As you can see in the picture, the two corner positions are not aligned vertically in the right and left images (red lines). Thus the stereo matching would fail. (Your disparity image is not noisy - it is just invalid.)
I think there is a problem producing the rectified images but I have no clue unless you provide more information on how you made the images.
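One cheap way to quantify the misalignment the red lines show is to detect the board corners in both rectified images and compare their row coordinates. A minimal numpy sketch (the helper name is my own; the corner arrays would come from cv2.findChessboardCorners run on your rectified pair):

```python
import numpy as np

def vertical_misalignment(corners_left, corners_right):
    """Mean absolute row difference between matching corner pairs.

    After a correct rectification, matching points lie on the same
    image row, so this value should be well under one pixel. Inputs
    are (N, 1, 2) float arrays in the layout that
    cv2.findChessboardCorners returns.
    """
    dy = np.abs(corners_left[:, 0, 1] - corners_right[:, 0, 1])
    return float(dy.mean())

# Hypothetical usage on a rectified pair (board size (9, 6) assumed):
# ok_l, cl = cv2.findChessboardCorners(rect_left, (9, 6))
# ok_r, cr = cv2.findChessboardCorners(rect_right, (9, 6))
# print(vertical_misalignment(cl, cr))
```

If this prints several pixels or more, the rectification itself is invalid and no block-matching settings will fix the disparity map.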
Normally, the stereo vision algorithm should be able to remove this misalignment. In the attached file 1, the rectified images are shown. The rectified images seem good, but the disparity map is very bad. What is your idea?
Thank you
I am trying to calibrate a camera-projector 3D system. First, I used a Logitech C920 webcam and got acceptable results in terms of calibration accuracy (0.8 reprojection error). However, the colors and resolution were bad.
Now I have a professional camera (Nikon D3400 18-55), but I have not managed to get a calibration result better than 5.5! I did the calibration using exactly the same projector, the same pattern and the same algorithm.
All settings are fixed on my camera, including focus, ISO, aperture, optical zoom and shutter speed.
What did I miss? What are the possible causes of this problem?
I know that my question is a bit broad, but it seems there is a simple mistake I have made, so any clue is appreciated.
I do not think it matters, but I am using the Brown University 3D scanning software, which uses OpenCV 2.4.9.
First, your reprojection error is in pixels. What was the resolution of your webcam and your Nikon? I am guessing that the Nikon has much higher resolution, so the pixel size is much smaller. That would make the error in pixels higher, although 5.5 pixels still seems way too high.
The next thing I would worry about is lens distortion. What does the undistorted Nikon image look like? It may be that you do not have enough calibration points close to the edges of the image, which would mean that you are not estimating the distortion coefficients accurately. Or it may be that you have a wide-angle lens, and the distortion is simply too great for this camera model to handle.
So, what you should do is look at the undistorted Nikon image. If it looks strangely warped, try capturing more calibration images with the pattern close to the edges of the image.
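A practical way to act on this advice is to compute the reprojection error per calibration view instead of relying on the single global number; views whose distortion is poorly modelled (often those near the image edges) stand out immediately. A numpy-only sketch (the function name is mine; `projected` would come from cv2.projectPoints using the estimated pose and intrinsics, `measured` from the detected corners):

```python
import numpy as np

def per_view_rms(projected, measured):
    """RMS reprojection error for a single calibration view, in pixels.

    `projected` are the points predicted by the calibrated camera model
    (e.g. the output of cv2.projectPoints); `measured` are the corners
    actually detected in that image. Both are (N, 2) pixel arrays.
    """
    projected = np.asarray(projected, dtype=float).reshape(-1, 2)
    measured = np.asarray(measured, dtype=float).reshape(-1, 2)
    sq = np.sum((projected - measured) ** 2, axis=1)
    return float(np.sqrt(np.mean(sq)))
```

Re-shooting or discarding the handful of views with the largest per-view error is usually far more effective than collecting more average ones.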
I am also confused by what you wrote about the colors and resolution being bad. Are you talking about undistorted or rectified images? Why would colors be bad?
I'm trying to perform stereo camera calibration, rectification and disparity map generation. It's working fine with normal sample data. However, I'm trying to use the dual cameras on an iPhone 7+, which have different zoom. The telephoto lens has 2X zoom compared to the wide angle camera. I ran the images through the algorithm, and it is succeeding, although with a high error rate. However, when I open up the rectified images, they have a weird spherical look to the edges. The center looks fine. I'm assuming this is due to the cameras having different zoom levels. Is there anything special I need to do to deal with this? Or do I just need to crop any output to the usable undistorted area? Here is what I'm seeing:
EDIT:
I tried using the calibration result from these checkerboard images to rectify an image of some objects, and the rectification was way off, not even close. If I rectify one of my checkerboard images, they are spot on. Any ideas why that happens?
EDIT2:
These are what my input images look like that result in the spherical-looking output image. They were both taken from the exact same position: the iPhone was mounted on a tripod and I used a Bluetooth device to trigger the shutter so the image wouldn't be shaken, and my code automatically takes one image with each lens. I took 19 such pairs from different angles, and all images show the full checkerboard. The more zoomed-in image is the one that rectified to the top spherical-looking image.
EDIT3:
Here is the disparity map using the calibration I got.
How do I recover the correct image from a radially distorted image using OpenCV? For example:
Please provide me with useful links.
Edit
The biggest problem is that I have neither the camera used for taking the picture nor the chessboard image.
Is that even possible?
Well, there is not much you can do if you don't have the camera, or at least its model. As you may know, the usual camera model is the pinhole model, which basically maps 3D world coordinates to 2D coordinates on the camera's image plane.
Camera Resectioning
If you don't have access to the camera, or at least two chessboard images, you can't estimate the focal length, principal point, and distortion coefficients, at least not in the traditional way. If you have more images than the one you showed, or a video from that camera, you could try auto- or self-calibration.
Camera auto-calibration
Another auto-calibration
yet another
Opencv auto-calibration
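For context on what those methods must recover: the usual radial model warps normalized image coordinates by a polynomial in the squared radius. A small numpy sketch of the forward (Brown) model, with coefficient values of my own choosing; cv2.undistort inverts exactly this mapping once k1, k2 and the camera matrix are known, which is precisely what cannot be estimated from a single image alone:

```python
import numpy as np

def radial_distort(xy, k1, k2=0.0):
    """Apply the Brown radial distortion model to normalized coordinates.

    x_d = x * (1 + k1*r^2 + k2*r^4), and likewise for y, where
    r^2 = x^2 + y^2. `xy` is an (..., 2) array of normalized
    (pre-camera-matrix) coordinates.
    """
    xy = np.asarray(xy, dtype=float)
    r2 = np.sum(xy ** 2, axis=-1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)
```

With only the distorted photo, both the coefficients and the camera matrix are unknowns, which is why the auto-calibration approaches above need multiple views to constrain them.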
I'm currently implementing stereo vision with OpenCV. Right now I'm using the Stereo_calib sample to remove the distortion and rectify the images. Removing the distortion works fine.
But when I apply rectification, the image is very warped.
This is the code that rectifies the images. The rmap parameters are calculated in the same way as in the Stereo_calib example (see here):
void StereoCalibration::StereoRectify(Mat &imageLeft, Mat &imageRight)
{
    Mat imLeft, imRight;
    // Apply the precomputed undistort/rectify maps to each image.
    remap(imageLeft,  imLeft,  DistLeft.rmap[0],  DistLeft.rmap[1],  CV_INTER_CUBIC);
    remap(imageRight, imRight, DistRight.rmap[0], DistRight.rmap[1], CV_INTER_CUBIC);
    // Replace the inputs with the rectified results.
    imageLeft  = imLeft;
    imageRight = imRight;
}
I realise this question is a few years old; however, I have recently had a similar issue. Building on morynicz's answer about "bad chessboard" patterns used to calibrate stereo images, I found that even a slight deformation in your chessboard pattern, for example if it isn't flat, can produce large warping in the stereo image pair on rectification. The algorithms in OpenCV assume a flat chessboard pattern is being presented, so any physical deformation in that pattern will be wrongly attributed to distortions in the camera optics (or in the relative orientations of the two camera sensors). The algorithms will then try very hard to remove this false distortion, leading to very warped images.
To avoid this problem, where possible, use a tablet (or other electronic screen) to display the chessboard pattern, as it is then guaranteed to be flat.
Additionally, you should check that the images you are using to calibrate the stereo pair are in focus and have no motion blur or image tearing.
If you are using OpenCV to do the rectification, experiment with the flags passed to the stereoCalibrate function, as this may lead to a more "optimised" rectification for your particular application.
For anyone looking for help on this: I was dealing with very high-resolution images and was getting a very low reprojection error with good calibration images, yet I was getting very warped stereo pairs after rectification and a really bad depth map.
One thing to try: if your images are warped, you might need to down-sample them.
Another thing to try is to combine the flags in stereoCalibrate instead of just choosing one.
Something like this worked for me :
cv2.stereoCalibrate(
    object_points, image_points_left, image_points_right,
    camera_matrix_left, dist_left,
    camera_matrix_right, dist_right,
    (5472, 3648), None, None, None, None,
    cv2.CALIB_FIX_ASPECT_RATIO
        + cv2.CALIB_ZERO_TANGENT_DIST
        + cv2.CALIB_USE_INTRINSIC_GUESS
        + cv2.CALIB_SAME_FOCAL_LENGTH
        + cv2.CALIB_RATIONAL_MODEL,
    criteria)
I had the same problem, and I think the issue was a bad chessboard used for calibration, or mixing up the maps.
I started working on OpenCV stereo image calibration and rectification recently, and I was getting similar images. While it is true that you should make sure the board is flat, and that you need to take multiple images at the corners and in the middle of the frame, at different x, y, z and skew positions, what did the trick for me was the flags in stereoCalibrate. I used all the flags specified in the OpenCV docs except for CALIB_USE_INTRINSIC_GUESS, and it started producing very nicely undistorted and rectified images.
Does stereo calibration still work if the right image is scaled a bit different than the left, or vice versa?
No, for two reasons:
The triangulation of the 3D point will be affected
Your correspondences will be inaccurate if you are using scale-variant interest points.
Yes, stereo calibration can still work if the two images are scaled differently. You have to make sure the calibration takes the difference into account (so the default OpenCV version won't work), and for best results you should try to make sure the cameras are synchronized.
It will be less accurate (more correspondence errors, as Jacob notes).
The field of view of the stereo pair will be restricted to the smaller of the two images, and then to just the overlapping area between them.
You will probably have to write your own calibration and rectification code. I'm not aware of any libraries that can do it.