In the OpenCV API, there are cv::stereoCalibrate() and cv::fisheye::stereoCalibrate() for calibrating normal stereo cameras and fisheye stereo cameras respectively. For both APIs, the two cameras of the stereo pair must use the same camera model. In other words, both cameras must use the normal camera model (6 radial distortion coefficients + 2 tangential ones) for cv::stereoCalibrate(), or the fisheye camera model (4 fisheye distortion coefficients) for cv::fisheye::stereoCalibrate().
Is there any way to calibrate a stereo camera pair where one camera uses the normal camera model and the other uses the fisheye camera model via OpenCV?
For anyone who finds this question helpful: OpenCV doesn't natively support stereo calibration (i.e., via the stereoCalibrate() API) for mixed camera models (e.g., one normal and the other fisheye). However, this can be done with the following steps:
Calibrate each camera's intrinsics via API cv::calibrateCamera() or cv::fisheye::calibrate().
Undistort all image points for both cameras using the intrinsics from the previous step, via cv::undistortPoints() or cv::fisheye::undistortPoints().
Calibrate the extrinsics of the stereo pair using the undistorted image points from the previous step and "perfect intrinsics" for both cameras, via cv::stereoCalibrate() or cv::fisheye::stereoCalibrate() (with the CALIB_FIX_INTRINSIC flag so that only the extrinsics are computed). A camera with perfect intrinsics has the identity camera matrix [1 0 0; 0 1 0; 0 0 1] and zero distortion (all distortion coefficients are 0).
Note: It doesn't matter whether you use cv::stereoCalibrate() or cv::fisheye::stereoCalibrate() in this step; both yield the same extrinsics. A rough sketch of the whole procedure is shown below.
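The following is only a minimal, untested sketch of these steps; the variable names (objectPoints, imagePoints1 for the normal camera, imagePoints2 for the fisheye camera, and the image sizes) are my own and are assumed to hold chessboard detections from the same views:

```cpp
#include <opencv2/opencv.hpp>

// Assumed inputs: chessboard corners detected in the same views by both cameras.
std::vector<std::vector<cv::Point3f>> objectPoints;  // board corners in board coordinates
std::vector<std::vector<cv::Point2f>> imagePoints1;  // detections in the normal camera
std::vector<std::vector<cv::Point2f>> imagePoints2;  // detections in the fisheye camera
cv::Size imageSize1, imageSize2;

// Step 1: intrinsics of each camera with its own model.
cv::Mat K1, D1, K2, D2;
std::vector<cv::Mat> rvecs1, tvecs1, rvecs2, tvecs2;
cv::calibrateCamera(objectPoints, imagePoints1, imageSize1, K1, D1, rvecs1, tvecs1);
cv::fisheye::calibrate(objectPoints, imagePoints2, imageSize2, K2, D2, rvecs2, tvecs2,
                       cv::fisheye::CALIB_RECOMPUTE_EXTRINSIC | cv::fisheye::CALIB_FIX_SKEW);

// Step 2: undistort the image points into normalized coordinates.
std::vector<std::vector<cv::Point2f>> undist1(imagePoints1.size()), undist2(imagePoints2.size());
for (size_t i = 0; i < imagePoints1.size(); ++i) {
    cv::undistortPoints(imagePoints1[i], undist1[i], K1, D1);
    cv::fisheye::undistortPoints(imagePoints2[i], undist2[i], K2, D2);
}

// Step 3: extrinsics only, with "perfect intrinsics" (identity K, zero distortion).
cv::Mat I1 = cv::Mat::eye(3, 3, CV_64F), I2 = cv::Mat::eye(3, 3, CV_64F);
cv::Mat zero1 = cv::Mat::zeros(1, 5, CV_64F), zero2 = cv::Mat::zeros(1, 5, CV_64F);
cv::Mat R, T, E, F;
cv::stereoCalibrate(objectPoints, undist1, undist2,
                    I1, zero1, I2, zero2,
                    imageSize1,  // not used for optimization since intrinsics are fixed
                    R, T, E, F, cv::CALIB_FIX_INTRINSIC);
```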
Related
I'm using a physical camera in Unity where I set the focal length f and sensor size sx and sy. Can these parameters and image resolution be used to create a camera calibration matrix? I probably need the focal length in terms of pixels and the cx and cy parameters that denote the deviation of the image plane center from the camera's optical axis. Is cx = w/2 and cy = h/2 correct in this case (w: width, h: height)?
I need the calibration matrix to compute a homography in OpenCV using the camera pose from Unity.
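For reference, this is roughly how I imagine building the matrix (a sketch with made-up example values; it assumes the lens shift in Unity is zero, so cx = w/2 and cy = h/2 holds):

```cpp
// Sketch: OpenCV camera matrix from Unity's physical-camera parameters.
double f  = 35.0;             // focal length in mm (example value)
double sx = 36.0, sy = 24.0;  // sensor size in mm (example values)
int    w  = 1920, h = 1080;   // rendered image resolution in pixels (example values)

double fx = f * w / sx;       // focal length in pixels (horizontal)
double fy = f * h / sy;       // focal length in pixels (vertical)
double cx = w / 2.0;          // principal point x (assumes no lens shift)
double cy = h / 2.0;          // principal point y (assumes no lens shift)

cv::Mat K = (cv::Mat_<double>(3, 3) << fx, 0, cx,
                                        0, fy, cy,
                                        0,  0,  1);
```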
Yes, that's possible. I have done that with multiple different camera models (pinhole model, fisheye lens, polynomial lens model, etc.).
Calibrate your camera with OpenCV and pass the calibration parameters into the shader. You need to write a custom shader. Have a look at my previous question:
Camera lens distortion in OpenGL
You don't need homography here.
@Tuebel gave me a nice piece of code and I have successfully adapted it to real camera models.
The hardest part will be managing the difference between the OpenGL camera coordinate system and the OpenCV camera coordinate system. The camera calibration parameters are, of course, expressed in the OpenCV camera coordinate system.
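A minimal sketch of that axis flip (assuming R_cv and t_cv come from an OpenCV calibration, where x points right, y down and z forward, while OpenGL has y up and z toward the viewer):

```cpp
// Sketch: convert an OpenCV camera pose into OpenGL's convention by flipping y and z.
cv::Mat R_cv = cv::Mat::eye(3, 3, CV_64F);   // rotation from calibrateCamera / solvePnP
cv::Mat t_cv = cv::Mat::zeros(3, 1, CV_64F); // translation from the same calibration

cv::Mat cvToGl = (cv::Mat_<double>(3, 3) << 1,  0,  0,
                                            0, -1,  0,
                                            0,  0, -1);
cv::Mat R_gl = cvToGl * R_cv;
cv::Mat t_gl = cvToGl * t_cv;
```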
I am trying to get the depth of each pixel of an RGB camera.
So I use a ToF camera and a Lidar (SICK) to get depth data through PCL and OpenNI.
In order to project the depth data onto the RGB image correctly, I need to know the rotation and translation (the so-called pose) of the ToF camera or Lidar relative to the RGB camera.
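For reference, once R and t are known I would project the points roughly like this (just a sketch; it assumes R and t map points from the depth sensor's frame into the RGB camera's frame, and K_rgb / dist_rgb are the RGB camera's intrinsics):

```cpp
// Sketch only: project depth-sensor points into the RGB image once the pose is known.
std::vector<cv::Point3f> pointsInDepthFrame;   // 3D points from the ToF camera / Lidar
cv::Mat R = cv::Mat::eye(3, 3, CV_64F);        // rotation depth sensor -> RGB camera (unknown yet)
cv::Mat t = cv::Mat::zeros(3, 1, CV_64F);      // translation depth sensor -> RGB camera (unknown yet)
cv::Mat K_rgb, dist_rgb;                       // RGB camera intrinsics from calibrateCamera

cv::Mat rvec;
cv::Rodrigues(R, rvec);                        // projectPoints expects a rotation vector
std::vector<cv::Point2f> pixelsInRgbImage;
cv::projectPoints(pointsInDepthFrame, rvec, t, K_rgb, dist_rgb, pixelsInRgbImage);
```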
The OpenCV module provides stereo calibration to get the pose between two RGB cameras.
But I cannot use the same approach because the depth sensor only provides depth data, so chessboard corner detection for calibration will fail.
So... what should I do if I want to get the depth of each pixel of the RGB camera?
Thanks for any suggestions~~
I have two cameras with different resolutions, but the stereoCalibrate function has only one imageSize parameter.
If I understand correctly, stereoCalibrate computes the rigid transformation from cam1 to cam2. If that is true, then which camera's image size should I pass to stereoCalibrate?
The imageSize parameter is used only to initialize the intrinsic camera matrices.
I suggest calibrating each camera independently using cv::calibrateCamera() to get the camera matrix and distortion coefficients for each camera, and then estimating the transformation between the camera coordinate systems (rotation R and translation t) using cv::stereoCalibrate() with the CV_CALIB_FIX_INTRINSIC flag enabled (passing in the pre-estimated camera matrices and distortion coefficients).
That way the imageSize parameter doesn't matter anymore.
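A minimal sketch of that workflow (assuming objectPoints and the per-camera image points have already been collected; CALIB_FIX_INTRINSIC is the flag's C++ name):

```cpp
// Sketch: per-camera intrinsics first, then extrinsics only.
std::vector<std::vector<cv::Point3f>> objectPoints;
std::vector<std::vector<cv::Point2f>> imagePoints1, imagePoints2;
cv::Size size1, size2;  // the two (different) resolutions

cv::Mat K1, D1, K2, D2;
std::vector<cv::Mat> r1, t1, r2, t2;
cv::calibrateCamera(objectPoints, imagePoints1, size1, K1, D1, r1, t1);
cv::calibrateCamera(objectPoints, imagePoints2, size2, K2, D2, r2, t2);

cv::Mat R, T, E, F;
cv::stereoCalibrate(objectPoints, imagePoints1, imagePoints2,
                    K1, D1, K2, D2,
                    size1,   // only used to initialize intrinsics, which are fixed here
                    R, T, E, F, cv::CALIB_FIX_INTRINSIC);
```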
How do you calibrate stereo cameras so that the output of the triangulation is in a real-world coordinate system that is defined by known points?
OpenCV stereo calibration returns results based on the pose of the left hand camera being the reference coordinate system.
I am currently doing the following:
Intrinsically calibrating both the left and right camera using a chessboard. This gives the camera matrix A and the distortion coefficients for each camera.
Running stereo calibrate, again using the chessboard, for both cameras. This returns the extrinsic parameters, but they are relative to the cameras and not the coordinate system I would like to use.
How do I calibrate the cameras in such a way that known 3D point locations, with their corresponding 2D pixel locations in both images, provide the extrinsic calibration, so that the output of triangulation will be in my coordinate system?
Calculate a disparity map from the stereo camera - you may use cvFindStereoCorrespondenceBM.
After finding the disparity map, refer to this: OpenCv depth estimation from Disparity map
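For example, with the C++ counterpart of cvFindStereoCorrespondenceBM (a sketch assuming leftGray and rightGray are rectified 8-bit grayscale images):

```cpp
// Sketch: block-matching disparity from a rectified stereo pair.
cv::Mat leftGray, rightGray;   // rectified grayscale stereo pair
cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(/*numDisparities=*/64, /*blockSize=*/21);
cv::Mat disparity16;
bm->compute(leftGray, rightGray, disparity16);         // fixed-point disparity, scaled by 16
cv::Mat disparity;
disparity16.convertTo(disparity, CV_32F, 1.0 / 16.0);  // disparity in pixels
```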
I've 4 ps3eye cameras, and I've calibrated camera1 and camera2 using the cvStereoCalibrate() function of the OpenCV library, using a chessboard pattern: finding the corners and passing their 3D coordinates into this function.
Also I've calibrated camera2 and camera3 using another set of chessboard images viewed by camera2 and camera3.
Using the same method I've calibrated camera3 and camera4.
So now I've extrinsic and intrinsic parameters of camera1 and camera2,
extrinsic and intrinsic parameters of camera2 and camera3,
and extrinsic and intrinsic parameters of camera3 and camera4.
where the extrinsic parameters are the rotation and translation matrices, and the intrinsic parameters are the matrices of focal length and principal point.
Now suppose there's a 3D point (in world coordinates; I know how to find 3D coordinates from stereo cameras) that is viewed by camera3 and camera4 but not by camera1 and camera2.
The question I have is: how do you take this 3D world-coordinate point that is viewed by camera3 and camera4 and transform it into camera1 and camera2's coordinate system using the rotation, translation, focal length and principal point parameters?
OpenCV's stereo calibration gives you only the relative extrinsic matrix between two cameras.
According to its documentation, you don't get the transformations in world coordinates (i.e., relative to the calibration pattern). It does suggest, though, running a regular camera calibration on one of the images so that at least its transformation is known. See cv::stereoCalibrate.
If the calibrations were perfect, you could use your daisy-chain setup to derive the world transformation of any of the cameras.
As far as I know this is not very stable, because the fact that you have multiple cameras should be considered when running the calibration.
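For completeness, a minimal sketch of the chaining itself, assuming R12/T12 and R23/T23 are the pairwise stereoCalibrate results (where a point X in the first camera's frame maps to R*X + T in the second camera's frame):

```cpp
// Sketch: map a point expressed in camera3's frame back into camera1's frame.
cv::Mat R12 = cv::Mat::eye(3, 3, CV_64F), T12 = cv::Mat::zeros(3, 1, CV_64F); // from stereoCalibrate(cam1, cam2)
cv::Mat R23 = cv::Mat::eye(3, 3, CV_64F), T23 = cv::Mat::zeros(3, 1, CV_64F); // from stereoCalibrate(cam2, cam3)

cv::Mat pointInCam3 = (cv::Mat_<double>(3, 1) << 1.0, 2.0, 3.0);  // e.g. from triangulation with cams 3/4
cv::Mat pointInCam2 = R23.t() * (pointInCam3 - T23);              // invert the cam2 -> cam3 transform
cv::Mat pointInCam1 = R12.t() * (pointInCam2 - T12);              // invert the cam1 -> cam2 transform
```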
Multi-camera calibration is not the most trivial of problems. Have a look at:
Multi-Camera Self-Calibration
GML C++ Camera Calibration Toolbox
I'm also looking for a solution to this, so if you find out more regarding this and OpenCV, let me know.