Monocular camera and 1D laser rangefinder calibration - opencv

I have a laser giving out range data and a monocular camera mounted on top of it, which is used for detection and tracking. I have the intrinsic calibration parameters of the camera. I want to establish a correspondence between the camera data and the laser data. Is there any known method to get the extrinsic calibration matrix? The end goal is to use the x,y of the detected object from the camera and the z (or depth) of the detected object from the laser.
Thank you in advance.

Not sure if the question is still open, but in this repo you'll find some Matlab code to estimate the extrinsics between a 1D laser rangefinder (or altimeter) and a monocular camera:
https://github.com/RiccardoGiubilato/1d-lidar-cam-calib
Required are pairs of images of a plane with a checkerboard printout and the associated "1-D" ranges from the altimeter.
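Roughly, the idea is to recover the checkerboard plane in camera coordinates for each image and then fit the laser's origin and direction so that the ray meets each plane at the measured range. Below is a minimal Python/OpenCV sketch of that idea only (not the repo's actual Matlab code; the pattern size, square size, and ray parametrization are assumptions):

```python
import numpy as np
import cv2
from scipy.optimize import least_squares

def board_planes(images, K, dist, pattern=(9, 6), square=0.025):
    """For each checkerboard image return the board plane (n, d): n . X = d in camera frame."""
    objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square
    planes = []
    for img in images:
        found, corners = cv2.findChessboardCorners(img, pattern)
        if not found:
            continue
        _, rvec, tvec = cv2.solvePnP(objp, corners, K, dist)
        R, _ = cv2.Rodrigues(rvec)
        n = R[:, 2]                      # board normal in camera coordinates
        d = float(n @ tvec.ravel())      # plane offset: n . X = d
        planes.append((n, d))
    return planes

def residuals(params, planes, ranges):
    """params: laser origin o (3 values) + direction angles (azimuth, elevation), camera frame."""
    o, (az, el) = params[:3], params[3:]
    v = np.array([np.sin(el) * np.cos(az), np.sin(el) * np.sin(az), np.cos(el)])
    res = []
    for (n, d), r in zip(planes, ranges):
        t = (d - n @ o) / (n @ v)        # distance along the laser ray to the board plane
        res.append(t - r)                # should equal the measured range
    return res

# Initial guess: laser at the camera origin, pointing along the optical axis (+z).
# fit = least_squares(residuals, x0=np.zeros(5), args=(planes, ranges))
```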

Related

How to map points from left camera to right camera using rotation (R) and translation (t) between two cameras obtained from stereoCalibrate()?

I have R|t between two cameras, estimated using the stereoCalibrate() function from OpenCV. From the calibration we get R1, t1 and R2, t2 for each camera respectively, as well as the R, t between the two cameras, and two intrinsic matrices K1 and K2, one for each camera.
I tried to map points from one camera to the other using the estimated R|t (between the two cameras). However, I failed to map even the points I used for estimating R|t. I also tried mapping with depth data, but that failed too. Any idea how to map the points from one camera to another?
I tried "Pose estimation of 2nd camera of a calibrated stereo rig, given 1st camera pose" but didn't succeed.
The "mapping" you seek requires knowledge of the 3D geometry of the scene. This can be inferred from a depth map, i.e. an image associated to a camera, whose pixel values equal the distance from the camera of the scene object seen through each pixel. The depth map itself can be computed from a stereo algorithm.
In some special cases the mapping can be computed without knowledge of the scene geometry. These include:
The camera displacement is a pure rotation (or, more generally, the translation between the cameras is very small compared to the distance of the scene objects from the cameras). In this case the image mapping is a homography.
The scene lies in a plane. In this case also the image mapping is a homography.
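For the general case, here is a minimal sketch of the depth-based mapping, assuming OpenCV's stereoCalibrate() convention that R, t take camera-1 points into camera-2 coordinates, and ignoring lens distortion:

```python
import numpy as np

def map_pixel_cam1_to_cam2(u, v, depth, K1, K2, R, t):
    """Map pixel (u, v) of camera 1, with known depth (z in camera-1 frame), into camera 2."""
    # Back-project the pixel to a 3D point in camera-1 coordinates.
    X1 = depth * (np.linalg.inv(K1) @ np.array([u, v, 1.0]))
    # Express the same point in camera-2 coordinates using the stereoCalibrate() R, t.
    X2 = R @ X1 + t.ravel()
    # Project into camera 2 (lens distortion ignored here).
    x2 = K2 @ X2
    return x2[:2] / x2[2]
```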

Finding a Projector real world position (using OpenCV)

I'm currently trying to discover the 3D position of a projector within a real-world coordinate system. The origin of such a system is, for example, the corner of a wall. I've used an Open Frameworks addon called ofxCvCameraProjectorCalibration,
which is based on OpenCV functions, namely the calibrateCamera and stereoCalibrate methods. The application output is the following:
camera intrinsic matrix (distortion coefficients included);
projector intrinsic matrix (distortion coefficients included);
camera->projector extrinsic matrix;
My initial idea was, while calibrating the camera, to place the chessboard pattern at the corner of the wall and extract the extrinsic parameters ([R|T] matrix) for that particular calibration.
After calibrating both camera and projector, do I have all the necessary data to discover the position of the projector in real-world coordinates? If so, what's the matrix manipulation required to get it?
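One possible manipulation, assuming (R_wc, T_wc) maps wall-corner (world) points into camera coordinates and (R_cp, T_cp) maps camera points into projector coordinates (OpenCV's stereoCalibrate convention with the camera as the first device), is to chain and invert the two transforms to get the projector's optical centre in world coordinates:

```python
import numpy as np

def projector_position_in_world(R_wc, T_wc, R_cp, T_cp):
    # Projector optical centre, expressed in camera coordinates (it is Pp = 0 in its own frame).
    C_cam = -R_cp.T @ T_cp.ravel()
    # Same point expressed in world (wall-corner) coordinates.
    C_world = R_wc.T @ (C_cam - T_wc.ravel())
    return C_world
```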

OpenCV Stereo Calibration and triangulation in a user defined coordinate system

How do you calibrate stereo cameras so that the output of the triangulation is in a real-world coordinate system that is defined by known points?
OpenCV stereo calibration returns results based on the pose of the left-hand camera being the reference coordinate system.
I am currently doing the following:
Intrinsically calibrating both the left and right camera using a chess board. This gives the Camera Matrix A, and the distortion coefficients for the camera.
Running stereo calibrate, again using the chessboard, for both cameras. This returns the extrinsic parameters, but they are relative to the cameras and not the coordinate system I would like to use.
How do I calibrate the cameras in such a way that known 3D point locations, with their corresponding 2D pixel locations in both images provides a method of extrinsically calibrating so the output of triangulation will be in my coordinate system?
Calculate the disparity map from the stereo camera; you may use cvFindStereoCorrespondenceBM.
After finding the disparity map, refer to this: OpenCV depth estimation from Disparity map
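In current OpenCV the block-matching API is cv2.StereoBM_create (cvFindStereoCorrespondenceBM is the legacy C API). A minimal sketch of the suggested disparity-then-depth route, assuming an already rectified pair (the file names and matcher parameters below are placeholders):

```python
import cv2
import numpy as np

# "left.png" / "right.png" are placeholder names for a rectified stereo pair.
left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0  # fixed point -> pixel disparities

# Q is the 4x4 disparity-to-depth matrix returned by cv2.stereoRectify();
# the resulting 3D points are expressed in the (rectified) left camera's frame.
# points_3d = cv2.reprojectImageTo3D(disparity, Q)
```

Note that those 3D points are still relative to the left camera; putting them into a user-defined world frame additionally requires the world-to-left-camera pose, e.g. from the known 3D/2D correspondences mentioned in the question.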

how to obtain the world coordinates of an image

After calibrating a camera using Jean-Yves Bouguet's Camera Calibration Toolbox and checkerboard patterns printed on cardboard, I've obtained the extrinsic and intrinsic parameters. I can use this information to find camera coordinates:
Pc = R * Pw + T
After that, how do I obtain the world coordinates of an image using Pc and the calibration parameters?
Thanks in advance.
EDIT
The goal is to use the calibrated camera parameters to measure planar objects with a calibrated camera. To perform this task I don't know how to use the camera parameters. In other words, I have to convert the pixel coordinates of the image to world coordinates using the calibrated parameters. I already have the parameters and the new image. How can I do this conversion?
Thanks in advance.
I was thinking about the problem and came to this conclusion:
You can't find the object size. The problem is that from a single shot, when you have no idea how far the object is from your camera, you can't say anything about its size. The calibration just tells you how far the image plane is from the camera (focal length) and the opening angles of the lens. When the focal length changes, the calibration changes too.
But there are some possibilities:
How to get the real life size of an object from an image, when not knowing the distance between object and the camera?
So, as I understand it, you can approximate the size of the objects.
Your problem can be solved if (and only if) you can express the plane of your object in calibrated camera coordinates.
The calibration procedure outputs, along with the camera intrinsic parameters K, a coordinate transform matrix Qwc_i = [Rwc_i | Twc_i] for every calibration image, which expresses the location and pose of a particular scene coordinate frame in camera coordinates at that calibration image. IIRC, in Jean-Yves' toolbox this is the frame attached to the top-left corner of the calibration checkerboard.
So, if your planar object is on the same plane as the checkerboard in one of the calibration images, all you have to do in order to find its location in space is intersect the checkerboard plane with the camera rays cast from the camera center (0,0,0) through the pixels into which the object is imaged.
If your object is NOT in one of those planes, all you can do is infer the object's own plane from additional information, if available, e.g. from a feature of known size and shape.
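A minimal sketch of that ray/plane intersection, assuming K is the intrinsic matrix and (R, T) the extrinsics of the calibration image whose checkerboard plane contains the object (Pc = R * Pw + T, as above):

```python
import numpy as np

def pixel_to_world_on_board_plane(u, v, K, R, T):
    """Return the world (checkerboard-frame) point imaged at pixel (u, v)."""
    # Ray through the pixel, in camera coordinates (the camera centre is the origin).
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Checkerboard plane in camera coordinates: normal n, passing through T.
    n = R[:, 2]
    T = T.ravel()
    s = (n @ T) / (n @ ray)          # scale at which the ray meets the plane
    Pc = s * ray                     # intersection point in camera coordinates
    Pw = R.T @ (Pc - T)              # back to the checkerboard (world) frame
    return Pw                        # Pw[2] should be ~0, i.e. on the board plane
```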

OpenCV calibration parameters and a 3d point transformation from stereo cameras

I have 4 ps3eye cameras. I've calibrated camera1 and camera2 using the cvStereoCalibrate() function of the OpenCV library,
using a chessboard pattern, by finding the corners and passing their 3D coordinates into this function.
I've also calibrated camera2 and camera3 using another set of chessboard images viewed by camera2 and camera3.
Using the same method I've calibrated camera3 and camera4.
So now I have the extrinsic and intrinsic parameters of camera1 and camera2,
the extrinsic and intrinsic parameters of camera2 and camera3,
and the extrinsic and intrinsic parameters of camera3 and camera4,
where the extrinsic parameters are the rotation and translation matrices and the intrinsic parameters are the focal length and principal point.
Now suppose there's a 3D (world coordinate) point (and I know how to find 3D coordinates from stereo cameras) that is viewed by camera3 and camera4 but is not viewed by camera1 and camera2.
The question I have is: how do you take this 3D world coordinate point that is viewed by camera3 and camera4 and transform it into camera1 and camera2's
world coordinate system, using the rotation, translation, focal length, and principal point parameters?
OpenCV's stereo calibration gives you only the relative extrinsic matrix between two cameras.
According to its documentation, you don't get the transformations in world coordinates (i.e. in relation to the calibration pattern). It does suggest, though, running a regular camera calibration on one of the images so that you at least know its transformation: cv::stereoCalibrate
If the calibrations were perfect, you could use your daisy-chain setup to derive the world transformation of any of the cameras.
As far as I know this is not very stable, because the fact that you have multiple cameras should be considered when running the calibration.
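For completeness, the daisy-chain step itself is just composing and inverting the pairwise transforms; a sketch assuming each stereoCalibrate() call returned (R, T) such that P_second = R * P_first + T:

```python
import numpy as np

def cam3_point_to_cam1(P_c3, R12, T12, R23, T23):
    """Express a point given in camera3's frame in camera1's frame via the pairwise extrinsics."""
    # Undo camera2 -> camera3, then camera1 -> camera2.
    P_c2 = R23.T @ (P_c3 - T23.ravel())
    P_c1 = R12.T @ (P_c2 - T12.ravel())
    return P_c1
```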
Multi-camera calibration is not the most trivial of problems. Have a look at:
Multi-Camera Self-Calibration
GML C++ Camera Calibration Toolbox
I'm also looking for a solution to this, so if you find out more regarding this and OpenCV, let me know.
