I am trying to calibrate a camera-projector 3D system. First I used a Logitech C920 webcam and got acceptable results in terms of calibration accuracy (0.8 reprojection error). However, the colors and resolution were bad.
Now I have a professional camera (Nikon D3400 with the 18-55 lens), and I have not managed to get a reprojection error lower than 5.5! I did the calibration using exactly the same projector, the same pattern, and the same algorithm.
All settings on my camera are fixed, including focus, ISO, aperture, optical zoom, and shutter speed.
What did I miss? What are the possible causes of this problem?
I know that my question is a bit broad, but it seems there is some silly mistake I have made, so any clue is appreciated.
I do not think it matters, but I am using the Brown University 3D Scanning Software, which uses OpenCV 2.4.9.
First, your reprojection error is in pixels. What was the resolution of your webcam and your Nikon? I am guessing that the Nikon has much higher resolution, so the pixel size is much smaller. That would make the error in pixels higher, although 5.5 pixels still seems way too high.
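To put rough numbers on that (assuming the C920 was run at 1920x1080 and the D3400 at its full 6000x4000): the Nikon image is about 3x wider in pixels, so the same physical error that gave 0.8 px on the webcam would only scale up to roughly 0.8 * 6000/1920 ≈ 2.5 px. An error of 5.5 px is well beyond what resolution alone explains.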
The next thing I would worry about is lens distortion. What does the undistorted Nikon image look like? It may be that you do not have enough calibration points close to the edges of the image, which would mean that you are not estimating the distortion coefficients accurately. Or it may be that you have a wide-angle lens, and the distortion is simply too great for this camera model to handle.
So, what you should do is look at the undistorted Nikon image. If that looks strangely warped, then try capturing more calibration images with the pattern close to the edges of the image.
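A minimal Python sketch of that check, assuming camera_matrix and dist_coeffs are the outputs of your calibration and the file name is a placeholder:

import cv2

img = cv2.imread("nikon_frame.jpg")  # any Nikon shot of your scene
h, w = img.shape[:2]

# alpha=1 keeps all source pixels, so residual warping at the edges is obvious
new_K, roi = cv2.getOptimalNewCameraMatrix(camera_matrix, dist_coeffs, (w, h), 1)
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs, None, new_K)
cv2.imwrite("nikon_undistorted.jpg", undistorted)

If straight edges in the scene come out curved here, the distortion model is the problem.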
I am also confused by what you wrote about the colors and resolution being bad. Are you talking about undistorted or rectified images? Why would colors be bad?
Related
I use this reference https://automaticaddison.com/how-to-perform-pose-estimation-using-an-aruco-marker/ to estimate the pose of a marker.
When I obtained the camera matrix and distortion coefficients, I used the full camera resolution.
However, when I change the resolution (image size) before pose estimation, I get different results. I am not sure why, or which resolution is the correct one to use.
Should we always use the same resolution as the one used for camera calibration?
I expected the pose to be more or less independent of the image size, apart from minor changes. Any thoughts?
Yes, always use the same resolution.
One could recalculate the camera matrix and distortion coefficients to fit a different resolution, but that is a hassle, and it requires some knowledge of how the camera produced the pictures (binning, cropping). Unless you understand the math behind it, just stick with the same resolution.
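For the simplest case, a uniform resize with no cropping or binning, the recalculation is just a scale on the intrinsics; here is a rough sketch (scale_camera_matrix is my own illustrative helper, not an OpenCV function):

import numpy as np

def scale_camera_matrix(K, scale):
    # fx, fy, cx, cy all scale with a uniform resize; the distortion
    # coefficients live in normalized coordinates and stay unchanged
    K2 = K.astype(float).copy()
    K2[0, 0] *= scale  # fx
    K2[1, 1] *= scale  # fy
    K2[0, 2] *= scale  # cx
    K2[1, 2] *= scale  # cy
    return K2

# e.g. calibrated at 1920x1080 but running pose estimation at 960x540
K_small = scale_camera_matrix(K, 0.5)

Anything involving cropping or sensor binning breaks this simple scaling, which is why sticking to the calibration resolution is the safe choice.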
I used the OpenCV sample code for stereo camera calibration to get the intrinsics and extrinsics of my stereo camera. I used 149 image pairs, and the program detected the pattern in 114 of them.
Result of my Calibration:
..... 114 pairs have been successfully detected.
Running stereo calibration ...
done with RMS error = 1.60208
average epipolar error = 1.15512
I know the error should be below 1, but I only get an error below 1 with a small number of image pairs, so I am not sure if my result is good or bad.
You should be able to get an error below 1, but yours is not so bad. I also do the calibration with around 100 images, and I often get a few images to discard in which the detection was not reliable.
If you decreased the number of images down to 10, the calibration might overfit to those cases, and the error would then not be reliable.
In the calibration process, the problems I faced came from the calibration setup. My recommendations are the following:
Check that your calibration pattern is perfectly flat. In my case I printed it on adhesive paper and glued it onto a piece of glass.
Check that your calibration pattern is not rotationally symmetric, otherwise the pose estimation could be ambiguous.
Check the intermediate pattern point detection. There are examples in OpenCV that draw the detected corners or circle centers.
The error can also be displayed for each frame. This can help you understand for which images you have a problem. If you see that those images actually have a detection problem, you can discard them.
If you acquire videos and not images, both cameras should be synchronized with a hardware connection. In my case I could not have such a link, so I built some kind of holder for the calibration target to keep it still, and I acquired only images, not videos.
This won't reduce your calibration error, but use very different pattern positions to cover as much of the field of view as possible.
If your depth of field is small and you get blurry images in front of and behind the focal plane because of it, switch from the chessboard pattern to a circles pattern (the detection functions are also available in OpenCV).
If you don't have strong distortion in your images (e.g. a photo taken with an iPhone doesn't really show a strong fisheye-like distortion), consider forcing K3 = 0.
In my case, I fixed the principal point at the middle of the image, because the algorithm always found implausible values for that parameter, just as it did for K3 (see the sketch after this list for how to force both in OpenCV).
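As a rough Python illustration of the per-frame errors and the fixed parameters (assuming object_points and image_points come from your pattern detection on images of size (w, h), and OpenCV 3+ for calibrateCameraExtended):

import cv2

flags = cv2.CALIB_FIX_K3 | cv2.CALIB_FIX_PRINCIPAL_POINT

rms, K, dist, rvecs, tvecs, _, _, per_view_errors = cv2.calibrateCameraExtended(
    object_points, image_points, (w, h), None, None, flags=flags)

# Per-frame errors make outlier frames easy to spot and discard
for i, err in enumerate(per_view_errors.ravel()):
    print("frame %d: reprojection error %.3f px" % (i, err))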
Hope this helps a bit. Good luck!
I want to calibrate a stereo camera in C# (with the Emgu library), but the calibration accuracy is very bad. This is an example of the disparity map. Please help me.
Thank you
As you can see in the picture, the two corner positions are not vertically aligned between the right and left images (red lines), so the stereo matching would fail. (Your disparity image is not noisy - it is just invalid.)
I think there is a problem in producing the rectified images, but I have no clue unless you provide more information on how you made them.
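If it helps, a quick way to check the alignment yourself is to put the rectified pair side by side and draw horizontal lines; corresponding features should sit on the same line. A small illustrative Python snippet (file names are placeholders, and both images are assumed to be the same size):

import cv2
import numpy as np

left = cv2.imread("rect_left.png")
right = cv2.imread("rect_right.png")
both = np.hstack([left, right])

# Draw a horizontal line every 40 rows across both images
for y in range(0, both.shape[0], 40):
    cv2.line(both, (0, y), (both.shape[1] - 1, y), (0, 0, 255), 1)

cv2.imwrite("rectification_check.png", both)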
Normally, the rectification step should remove this misalignment. In attached file 1 the rectified images are shown. The rectified images seem good, but the disparity map is very bad. What is your idea?
Thank you
I want to find the depth map for stereo images. At present I am working with images from the internet, but I want to take stereo images myself so that I can work on my own data. How do I take good stereo images without much noise? I have a single camera. Is it necessary to do rectification? How much distance must be kept between the cameras?
Not sure I've understood your problem correctly - will try anyway.
I guess you're currently working with images from Middlebury or something similar. If you want to use similar algorithms, you have to rectify your images, because those algorithms assume that corresponding pixels are on the same line in all images. If you actually want depth images (not just disparity images), you also need the camera extrinsics.
Your setup should have two cameras, and you have to make sure that they don't change their relative position/orientation - otherwise your rectification will break down. In the first step you have to calibrate your system to get the intrinsic and extrinsic camera parameters. For that you can either use some tool or roll your own with (for example) OpenCV (the calib3d module). Print out a calibration board to calibrate your system. Afterwards you can take images and use the calibration to rectify them.
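A rough outline of that pipeline with OpenCV's Python bindings might look like this; all the names here are illustrative (object_points and image_points_left/right from chessboard detection, (w, h) the image size, left/right one captured pair):

import cv2

# 1. Calibrate each camera individually...
_, K1, d1, _, _ = cv2.calibrateCamera(object_points, image_points_left, (w, h), None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(object_points, image_points_right, (w, h), None, None)

# 2. ...then estimate the extrinsics (R, T) between them
ret, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    object_points, image_points_left, image_points_right,
    K1, d1, K2, d2, (w, h), flags=cv2.CALIB_FIX_INTRINSIC)

# 3. Compute rectifying transforms and remap tables, then rectify each pair
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, (w, h), R, T)
map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, (w, h), cv2.CV_16SC2)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, (w, h), cv2.CV_16SC2)
rect_left = cv2.remap(left, map1x, map1y, cv2.INTER_LINEAR)
rect_right = cv2.remap(right, map2x, map2y, cv2.INTER_LINEAR)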
Regarding color noise:
You could make your aperture very small and use long exposure times. In my own opinion this is useless, because real-world situations have to deal with such things anyway.
In short, there are plenty of stereo images on the internet that are already rectified. If you want to take your own stereo images you have to follow these three steps:
The relationship between the distance to the object z (mm) and the disparity in pixels D is inverse: z = f*b/D, where f is the focal length in pixels and b is the camera separation in mm. Select b such that you have at least several pixels of disparity (a tiny worked example follows this list);
If you know the camera intrinsic matrix and have compensated for radial distortion, you still have to rectify your images in order to ensure that matches are located in the same row. For this you need to find a fundamental matrix, recover the essential matrix, apply rectifying homographies, and update your intrinsic camera parameters... or use stereo pairs from the Internet;
A low level of noise in the camera image is helped by brightly illuminated scenes, a large aperture, a large pixel size, etc.; however, depending on your setup you can still end up with a very noisy disparity map. The way to reduce this noise is to trade off accuracy and use larger correlation windows. Another way to clean up a disparity map is to use various validation techniques such as
error validation;
uniqueness validation or back-and-forth (left-right) validation;
blob-noise suppression, etc.
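Here is the z = f*b/D relation with made-up numbers, purely for illustration:

f = 700.0  # focal length in pixels
b = 60.0   # camera separation (baseline) in mm
D = 10.0   # disparity in pixels
z = f * b / D  # = 4200 mm, i.e. an object ~4.2 m away gives 10 px of disparity

Doubling the distance halves the disparity, which is why b has to be chosen for your working distance.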
In my experience:
- I did the rectification, so I had to obtain the fundamental matrix, and this may not be correct with some image pairs.
- Higher camera resolution is better for the matching. I use OpenCV, and its implementation of the BRISK descriptor was useful for me.
- Try to cover the same area with both views and avoid unnecessary rotations.
- Once you understand the theory, OpenCV is a good friend. Here are some results, but I am still working on it:
Depth map:
Rectified images:
I'm currently implementing stereo vision with OpenCV. I'm using the stereo_calib sample to remove the distortion and rectify the images. Removing the distortion works fine.
But when I apply rectification, the image is very warped.
This is the code to rectify the images. The rmap parameters are calculated in the same way as in the stereo_calib example (see here):
void StereoCalibration::StereoRectify(Mat &imageLeft, Mat &imageRight)
{
    Mat imLeft, imRight;
    // Apply the precomputed rectification maps (from initUndistortRectifyMap)
    remap(imageLeft,  imLeft,  DistLeft.rmap[0],  DistLeft.rmap[1],  CV_INTER_CUBIC);
    remap(imageRight, imRight, DistRight.rmap[0], DistRight.rmap[1], CV_INTER_CUBIC);
    // Replace the inputs with their rectified versions
    imageLeft  = imLeft;
    imageRight = imRight;
}
I realise this question is a few years old; however, I have recently had a similar issue. Building on morynicz's answer about "bad chessboard" patterns used to calibrate stereo images, I found that even a slight deformation in your chessboard pattern, for example if it isn't flat, can produce large warping in the stereo image pair on rectification. The algorithms in OpenCV, for instance, assume a flat chessboard pattern is being presented, such that any physical deformation in that pattern will be wrongly attributed to distortions in the camera optics (or in the relative orientations of the two camera sensors). The algorithms will then try very hard to remove this false distortion, leading to very warped images.
To avoid this problem, where possible, use a tablet (or another electronic screen) to display the chessboard pattern, as it is then guaranteed to be flat.
Additionally, you should check that the images you are using to calibrate the stereo pair are in focus and have no motion blur or image tearing.
If you are using OpenCV to do the rectification, experiment with the flags passed to the stereoCalibrate function, as this may lead to a more "optimised" rectification for your particular application.
For anyone looking for help on this: I was dealing with very high resolution images and was getting a very low reprojection error with good calibration images, yet very warped stereo pairs after rectification and a really bad depth map.
One thing to try, if your images are warped, is to down-sample them.
Another thing to try is to combine the flags in stereoCalibrate instead of choosing just one.
Something like this worked for me:
cv2.stereoCalibrate(
    object_points, image_points_left, image_points_right,
    camera_matrix_left, dist_left,
    camera_matrix_right, dist_right,
    (5472, 3648), None, None, None, None,
    cv2.CALIB_FIX_ASPECT_RATIO + cv2.CALIB_ZERO_TANGENT_DIST +
    cv2.CALIB_USE_INTRINSIC_GUESS + cv2.CALIB_SAME_FOCAL_LENGTH +
    cv2.CALIB_RATIONAL_MODEL,
    criteria)
I had the same problem, and I think the issue was either a bad chessboard used for calibration or mixing up the maps.
I started working on OpenCV stereo image calibration and rectification recently, and I was getting similar images. While it is true that the board must be flat, and that you need to take multiple images at the corners and in the middle of the field of view at different x, y, z and skew positions, what did the trick for me was the flags in stereoCalibrate. I used all the flags specified in the OpenCV docs except for INTRINSIC_GUESS, and it started producing very nice undistorted and rectified images.