Is there a simple function in OpenCV to get the 3D position and pose of an object from a stereo camera pair?
I have the cameras and baseline calibrated with the chessboard. I now want to take a known object, like the same chessboard, with known 3D points in its own coordinates, and find its real-world position (in camera coordinates).
There are functions to do this for a single camera (POSIT) and functions to find the 3D disparity image for the entire scene.
It should be almost the same process as camera calibration: find the chessboard in the camera pair. But I can't find any function that takes object + image coordinates and returns camera coordinates for a stereo pair.
Thank you
After calibrating your stereo camera system, you have the relative pose (translation + orientation) between the two cameras. If you then find the relative pose between one of the cameras and the object using solvePnP/solvePnPRansac, you consequently have the relative pose between the object and the other camera as well. For example, in stereo systems used for robot navigation, reconstructed 3D points from previous frames are usually matched against only one of the cameras, and the relative camera pose is estimated from those 3D points. The stereo system just eases and improves the quality of triangulation/structure reconstruction.
Yes, StereoBM_create():
import numpy as np
import cv2 as cv
from matplotlib import pyplot as plt
# Load the pair as grayscale; StereoBM expects 8-bit single-channel input.
imgL = cv.imread('tsukuba_l.png', cv.IMREAD_GRAYSCALE)
imgR = cv.imread('tsukuba_r.png', cv.IMREAD_GRAYSCALE)
# numDisparities must be a multiple of 16; blockSize must be odd.
stereo = cv.StereoBM_create(numDisparities=16, blockSize=15)
disparity = stereo.compute(imgL, imgR)
plt.imshow(disparity, 'gray')
plt.show()
https://docs.opencv.org/4.x/dd/d53/tutorial_py_depthmap.html
Related
I have a laser giving out range data and a monocular camera attached on top of it, which is used for detection and tracking. I have the intrinsic calibration parameters of the camera. I want to establish a correspondence between the camera data and the laser data. Is there any known method to get the extrinsic calibration matrix? The end goal is to use x, y of the detected object from the camera and z (the depth) of the detected object from the laser.
Thank you in advance.
Not sure if the question is still open; in this repo you'll find some Matlab code to get the extrinsics between a 1D laser range finder (or altimeter) and a monocular camera:
https://github.com/RiccardoGiubilato/1d-lidar-cam-calib
Required are pairs of images of a plane with a printout of a checkerboard, together with the associated "1-D" ranges from the altimeter.
I have R|t between two cameras, estimated using OpenCV's stereoCalibrate() function. From stereoCalibrate() we get R1, t1 and R2, t2 for each camera respectively. We also get the R, t between the two cameras, plus two intrinsic matrices, K1 and K2, one for each camera.
I tried to map points from one camera to the other using the estimated R|t (between the two cameras). However, I failed to map even the points which I used for estimating R|t. I also tried mapping using depth data, but failed. Any idea how to map points from one camera to another?
I tried "Pose estimation of 2nd camera of a calibrated stereo rig, given 1st camera pose", but didn't get success.
The "mapping" you seek requires knowledge of the 3D geometry of the scene. This can be inferred from a depth map, i.e. an image associated to a camera, whose pixel values equal the distance from the camera of the scene object seen through each pixel. The depth map itself can be computed from a stereo algorithm.
In some special cases the mapping can be computed without knowledge of the scene geometry. These include:
The camera displacement is a pure rotation (or, more generally, the translation between the cameras is very small compared to the distance of the scene objects from the cameras). In this case the image mapping is a homography.
The scene lies in a plane. In this case also the image mapping is a homography.
I'm currently trying to discover the 3D position of a projector within a real-world coordinate system. The origin of such a system is, for example, the corner of a wall. I've used an openFrameworks addon called ofxCvCameraProjectorCalibration
that is based on OpenCV functions, namely calibrateCamera and stereoCalibrate methods. The application output is the following:
camera intrinsic matrix (distortion coefficients included);
projector intrinsic matrix (distortion coefficients included);
camera->projector extrinsic matrix;
My initial idea was, while calibrating the camera, to place the chessboard pattern at the corner of the wall and extract the extrinsic parameters ([R|T] matrix) for that particular calibration.
After calibrating both camera and projector do I have all the necessary data to discover the position of the projector in real world coordinates? If so, what's the matrix manipulation required to get it?
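One possible matrix manipulation, sketched under assumptions: R_wc, t_wc are the world->camera extrinsics from the chessboard placed at the wall corner, and R_cp, t_cp are the camera->projector extrinsics, with the convention X_b = R @ X_a + t for each transform. The function name and inputs are placeholders.

```python
import numpy as np

# Hedged sketch: compose world -> camera -> projector, then recover the
# projector's optical centre in world coordinates.
def projector_position_in_world(R_wc, t_wc, R_cp, t_cp):
    # Compose world -> projector: X_p = R_cp @ (R_wc @ X_w + t_wc) + t_cp
    R_wp = R_cp @ R_wc
    t_wp = R_cp @ t_wc + t_cp
    # The projector's optical centre is the world point that maps to the
    # projector-frame origin: solve 0 = R_wp @ X + t_wp.
    return -R_wp.T @ t_wp
```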
How do you calibrate stereo cameras so that the output of the triangulation is in a real-world coordinate system defined by known points?
OpenCV stereo calibration returns results based on the pose of the left hand camera being the reference coordinate system.
I am currently doing the following:
Intrinsically calibrating both the left and right camera using a chess board. This gives the Camera Matrix A, and the distortion coefficients for the camera.
Running stereo calibrate, again using the chessboard, for both cameras. This returns the extrinsic parameters, but they are relative to the cameras and not the coordinate system I would like to use.
How do I calibrate the cameras in such a way that known 3D point locations, with their corresponding 2D pixel locations in both images provides a method of extrinsically calibrating so the output of triangulation will be in my coordinate system?
Calculate a disparity map from the stereo camera; you may use cvFindStereoCorrespondenceBM.
After finding the disparity map, refer to this: OpenCv depth estimation from Disparity map
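The disparity-to-depth step can be sketched as follows. For a rectified pair, depth = f * B / d, with f the focal length in pixels and B the baseline; the numbers below are made up for illustration.

```python
import numpy as np

# Illustrative values: 700 px focal length, 12 cm baseline.
f, B = 700.0, 0.12
disparity = np.array([[16.0, 32.0], [56.0, 8.0]])
# Guard against zero or invalid disparities before dividing.
depth = np.where(disparity > 0, f * B / np.maximum(disparity, 1e-6), 0.0)
```

With a rectification from stereoRectify, cv.reprojectImageTo3D and the Q matrix perform this conversion for a whole image at once.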
I have 4 PS3 Eye cameras. I've calibrated camera1 and camera2 using the cvStereoCalibrate() function of the OpenCV library, using a chessboard pattern, by finding the corners and passing their 3D coordinates into this function.
Also I've calibrated camera2 and camera3 using another set of chessboard images viewed by camera2 and camera3.
Using the same method I've calibrated camera3 and camera4.
So now I have the extrinsic and intrinsic parameters of camera1 and camera2,
the extrinsic and intrinsic parameters of camera2 and camera3,
and the extrinsic and intrinsic parameters of camera3 and camera4,
where the extrinsic parameters are the rotation and translation matrices, and the intrinsic parameters are the focal length and principal point.
Now suppose there's a 3D point (in world coordinates; I know how to find 3D coordinates from stereo cameras) that is viewed by camera3 and camera4 but not by camera1 and camera2.
The question I have is: how do you take this 3D world-coordinate point viewed by camera3 and camera4 and transform it with respect to camera1 and camera2's
world coordinate system, using the rotation, translation, focal-length and principal-point parameters?
OpenCV's stereo calibration gives you only the relative extrinsic matrix between two cameras.
According to its documentation, you don't get the transformations in world coordinates (i.e. in relation to the calibration pattern). It suggests, though, running a regular camera calibration on one of the images to at least know its transformations. cv::stereoCalibrate
If the calibrations were perfect, you could use your daisy-chain setup to derive the world transformation of any of the cameras.
As far as I know this is not very stable, because the fact that you have multiple cameras should be taken into account jointly when running the calibration.
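The daisy-chain derivation mentioned above can be sketched like this. The function names are placeholders, and the convention X_b = R_ab @ X_a + t_ab is assumed for each calibrated pair.

```python
import numpy as np

# Compose two pairwise stereo extrinsics into one transform.
def compose(R_ab, t_ab, R_bc, t_bc):
    # X_c = R_bc @ (R_ab @ X_a + t_ab) + t_bc
    return R_bc @ R_ab, R_bc @ t_ab + t_bc

# Bring a point expressed in camera3 coordinates into camera1 coordinates.
def point_to_cam1(X3, R12, t12, R23, t23):
    # Compose camera1 -> camera3, then invert: X_1 = R.T @ (X_3 - t)
    R13, t13 = compose(R12, t12, R23, t23)
    return R13.T @ (X3 - t13)
```

Note that calibration errors accumulate along the chain, which is exactly why a joint multi-camera calibration tends to be more stable.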
Multi-camera calibration is not the most trivial of problems. Have a look at:
Multi-Camera Self-Calibration
GML C++ Camera Calibration Toolbox
I'm also looking for a solution to this, so if you find out more regarding this and OpenCV, let me know.