camera calibration for single plane - opencv

My problem statement is very simple, but I am unable to get the OpenCV calibration to work for me. I am using the code from here: source code.
I have to take images parallel to the camera at a fixed distance. I tried taking test images (about 20 of them), both only parallel to the camera and at different planes. I also changed the size and the number of squares.
What would be the best way to calibrate in this scenario?
The undistorted image is cropped later, that's why it looks smaller.
After going through the images closely, the pincushion distortion seems to have been corrected, but the "trapezoidal" distortion still remains. Since the camera is mounted in a closed box, the planes at which I can take images are limited.

To simplify what Vlad already said: it is theoretically impossible to calibrate your camera with test images that are only parallel to the camera. You have to change your calibration board's orientation; in fact, you should have a different orientation in each test image.
Check out the first two images in the link below to see how the calibration board should be slanted (or tilted):
http://www.vision.caltech.edu/bouguetj/calib_doc/

Think about the calibration problem as finding a projection matrix P:
image_points = P * 3d_points, where P = intrinsic * extrinsic
Now just bear with me:
You are basically interested in the intrinsic part, but the calibration algorithm has to find both intrinsic and extrinsic. Now, each column of the projection matrix can be obtained if you select a 3D point at infinity, for example xInf = [1, 0, 0, 0]. This point is at infinity because when you transform it from homogeneous to Cartesian coordinates you get [1/0, 0, 0]. If you multiply the projection matrix with a point at infinity you get its corresponding column (1st for xInf, 2nd for yInf, 3rd for zInf, and 4th for the camera center).
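To see this concretely, here is a tiny numpy sketch (the matrices are made-up numbers, purely for illustration):

import numpy as np

K = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])                          # made-up intrinsics
Rt = np.hstack([np.eye(3), [[0.1], [0.0], [2.0]]])    # made-up extrinsics [R|t]
P = K @ Rt                                            # 3x4 projection matrix

xInf = np.array([1., 0., 0., 0.])                     # point at infinity along X
print(P @ xInf)                                       # equals the first column of P
print(P[:, 0])                                        # same thing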
Thus the conclusion is simple: to get a projection matrix (that is, a successful calibration) you have to clearly see points at infinity, or vanishing points, from the converging extensions of lines in your chessboard rig (like the end of railroad tracks at the horizon). Your images don't make it easy to detect vanishing points, since you don't slant your chessboard, nor rotate it, nor scale it by stepping back. Thus your calibration will always fail.

Related

Opencv get accurate real world coordinates from 2 known parallel planes

So I have been tinkering a little bit with OpenCV, and I want to be able to use a camera image to get the position of certain objects that are lying flat on a plane. These objects are simple shapes such as circles, squares, etc. They all have the same height of 5 cm. To be able to relate real-world points to pixels on the camera, I painted 4 white squares on the plane with known distances between them.
So the steps I have been taking are:
Initialization (a rough code sketch of these steps follows the list):
Calibrate my camera using a checkerboard image and save the calibration data.
Get the input image and call cv::undistort with the calibration data for my camera.
Find the center points of the 4 squares in the image and pass that data and the real-world coordinates of the squares to the cv::solvePnP function. Save the rvec and tvec return parameters.
Warp the perspective of the image so you can get a top-down view of it. This essentially follows this tutorial: https://docs.opencv.org/3.4.1/d9/dab/tutorial_homography.html
Use the resulting image to again find the 4 white squares, and then calculate a "pixels per meter" translation constant which can relate a certain difference in pixels between points to the real-world distance on the plane where the 4 squares are.
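Roughly, the initialization steps above look like this in Python (all file names, point coordinates and the scale factor below are placeholders):

import cv2
import numpy as np

# Calibration data from a previous cv::calibrateCamera run (placeholder file names).
camera_matrix = np.load('camera_matrix.npy')
dist_coeffs   = np.load('dist_coeffs.npy')

img = cv2.imread('input.jpg')
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)

# Centers of the 4 painted squares found in the (undistorted) image (made-up values).
img_pts = np.array([[100, 120], [520, 115], [530, 400], [95, 410]], dtype=np.float32)
# Their known real-world coordinates on the table plane, in metres (Z = 0).
obj_pts = np.array([[0, 0, 0], [1.0, 0, 0], [1.0, 0.8, 0], [0, 0.8, 0]], dtype=np.float32)

# Pose of the table plane relative to the camera.
# (Whether to pass dist_coeffs or zeros here, given the image is already undistorted,
#  is exactly question 1 below.)
ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, camera_matrix, dist_coeffs)

# Top-down view: homography from image points to a scaled version of the plane coordinates.
# Choosing the output scale up front makes the "pixels per metre" constant simply px_per_m.
px_per_m = 400.0
dst_pts = obj_pts[:, :2] * px_per_m
H, _ = cv2.findHomography(img_pts, dst_pts)
top_down = cv2.warpPerspective(undistorted, H, (600, 500))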
Finding the object (this is done after initialization):
Get the input image and call cv::undistort with the calibration data for my camera.
Warp the perspective of the image so you can get a top-down view of it. This is the same as step 4 during initialization.
Find the center point of the object to detect.
Since the center point of the object is on a higher plane than where I calibrated, I use the following formula to correct this (x is the measured pixel offset from the center of the image, d is the corrected offset, camHeight is the camera height I measured with a tape measure, h is the height of the object):
d = x - (h * (x / camHeight))
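Expressed as a tiny helper (my own naming, reading x as the measured offset and d as the corrected one):

def correct_for_height(pixel_offset, obj_height, cam_height):
    """Scale a pixel offset measured at the table plane down to the object's top plane.

    pixel_offset: distance (in pixels) of the detected point from the image centre
    obj_height:   height of the object above the calibration plane (same unit as cam_height)
    cam_height:   height of the camera above the calibration plane
    """
    return pixel_offset - obj_height * (pixel_offset / cam_height)

# e.g. a point 200 px from the centre, a 5 cm tall object, camera 100 cm above the table:
corrected = correct_for_height(200, 5, 100)   # -> 190.0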
Here is an illustration of how I got this formula:
But still the coordinates are not matching up...
So I am wondering whether this approach is correct at all. Specifically, I have the following questions:
Is using cv::undistort before using cv::solvePnP correct? cv::solvePnP also takes the camera calibration data as input, so I'm not sure whether I have to pass an undistorted image to it or not.
Similar to 1: during object finding I call cv::undistort -> cv::warpPerspective. Is the undistort necessary here?
Is my calculation to correct for the parallel planes in step 4 correct? I feel like I am missing something, but I can't see what. One thing I am wondering is whether I can get the camera height from OpenCV once solvePnP is done.
I am a newbie to CV, so if anything else is totally wrong, please also point it out to me.
Thank you for reading this wall of text!

stereo rectification with measured extrinsic parameters

I am trying to rectify two sequences of images for stereo matching. The usual approach of using stereoCalibrate() with a checkerboard pattern is not of use to me, since I am only working with the footage.
What I have is the correct calibration data of the individual cameras (camera matrix and distortion parameters) as well as measurements of their distance and angle between each other.
How can I construct the rotation matrix and translation vector needed for stereoRectify()?
The naive approach of using
Mat T = (Mat_<double>(3,1) << distance, 0, 0);
Mat R = (Mat_<double>(3,3) << cos(angle), 0, sin(angle), 0, 1, 0, -sin(angle), 0, cos(angle));
resulted in a heavily warped image. Do these matrices need to relate to a different origin point I am not aware of? Or do I need to convert the distance/angle values somehow to depend on the pixel size?
Any help would be appreciated.
It's not clear whether you have enough information about the camera poses to perform an accurate rectification.
Both T and R are measured in 3D, but in your case:
T is one-dimensional (along the x-axis only), which means that you are confident that the two cameras are perfectly aligned along the other axes (in particular, that you have less than 1 pixel of error on the y-axis, i.e. a few microns by today's standards);
R leaves the y-coordinates untouched. Thus, all you have is a rotation around this axis; does that match your experimental setup?
Finally, you need to check that the units you are using for the translation and rotation are consistent with the units of the intrinsic data.
If it is feasible, you can check your results by finding some matching points between the two cameras and proceeding to a projective calibration: the accurate knowledge of the 3D position of the calibration points is required for metric reconstruction only. Other tasks rely on the essential or fundamental matrices, that can be computed from image-to-image point correspondences.
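As a rough illustration of that last suggestion, an uncalibrated rectification from point correspondences might look like this in Python (the point arrays, file names and image size are placeholders):

import cv2
import numpy as np

# pts_left / pts_right: Nx2 arrays of matching points between the two views,
# obtained e.g. from feature matching (placeholder file names).
pts_left  = np.load('pts_left.npy').astype(np.float32)
pts_right = np.load('pts_right.npy').astype(np.float32)

# Fundamental matrix from correspondences alone (no extrinsics needed).
F, inlier_mask = cv2.findFundamentalMat(pts_left, pts_right, cv2.FM_RANSAC)

# Uncalibrated rectification: homographies H1, H2 that make epipolar lines horizontal.
img_size = (1280, 720)                      # (width, height) of the images
ok, H1, H2 = cv2.stereoRectifyUncalibrated(pts_left, pts_right, F, img_size)

left_rect  = cv2.warpPerspective(cv2.imread('left.png'),  H1, img_size)
right_rect = cv2.warpPerspective(cv2.imread('right.png'), H2, img_size)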
If the intrinsics and extrinsics are known, I recommend this method: http://link.springer.com/article/10.1007/s001380050120#page-1
It is easy to implement. Basically you rotate the right camera until both cameras have the same orientation, meaning both share a common R. The epipoles are then transformed to infinity and you have epipolar lines parallel to the image x-axis.
The first row of the new R (x) is simply the baseline, i.e. the difference of the two camera centers. The second row (y) is the cross product of the baseline with the old left z-axis. The third row (z) equals the cross product of the first two rows.
Finally you need to calculate the 3x3 homography described in the link above and use warpPerspective() to get a rectified version.
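A minimal numpy sketch of that construction, assuming you already have each camera's intrinsics K, world-to-camera rotation R and camera centre C (all names and file names below are placeholders):

import cv2
import numpy as np

K_left, K_right = np.load('K_left.npy'), np.load('K_right.npy')
R_left, R_right = np.load('R_left.npy'), np.load('R_right.npy')
C_left, C_right = np.load('C_left.npy'), np.load('C_right.npy')

# New common rotation, built as described above (its rows are the new camera axes).
x = C_right - C_left
x = x / np.linalg.norm(x)            # new x-axis: the baseline direction
y = np.cross(R_left[2, :], x)        # new y-axis: orthogonal to the old left z-axis and
y = y / np.linalg.norm(y)            # the baseline (the cross-product order only flips
z = np.cross(x, y)                   # the image vertically); new z completes the frame
R_new = np.vstack([x, y, z])

# 3x3 rectifying homography per image: re-project the old view with the new rotation.
H_left  = K_left  @ R_new @ R_left.T  @ np.linalg.inv(K_left)
H_right = K_right @ R_new @ R_right.T @ np.linalg.inv(K_right)

img_size = (1280, 720)               # placeholder image size
left_rect  = cv2.warpPerspective(cv2.imread('left.png'),  H_left,  img_size)
right_rect = cv2.warpPerspective(cv2.imread('right.png'), H_right, img_size)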

Calibrate camera with opencv, how does it work and how do i have to move my chessboard

I'm using the OpenCV calibrateCamera function to calibrate my camera. I started from the tutorial implementation, but there seems to be something wrong.
The camera is looking down on a table and I use a chessboard with an area that covers about 1/2 or 1/4 of my total image. Since I aim to track a flat object that slides over this table, I also slide my chessboard over this table.
So my first question is: is it OK that I move my chessboard over this table, or do I have to make some 3D movements in order to get good results?
Because I was wondering: how does the function guess the distance between the table and the camera? It only has a guess of its focal length, and it has only one "eye", so there is no depth vision.
My second question: how does the bloody thing work? :p Can anyone show me some implementation of this function?
Thx!
Camera calibration needs a set of points to calculate the camera matrix, the position of the camera's principal point, and the distortion coefficients. If you want to use a chessboard you have to take its dimensions into consideration (I never used the circles function because chessboard detection is easier). The chessboard should have an even-by-odd number of squares so you can get a correct rotation matrix. The calibration function needs a minimum of 8 sets of chessboard corners (I use 30 to 50); it depends on how precise you want to be. The return value of the calibration function is the re-projection error, which should be near zero if the calibration is good.
calibrateCamera takes the sizes of the chessboards used (you can use different chessboard sizes) and their dimensions (in mm, cm, or even m); your results will depend on the dimensions you give.
By the way, after getting the chessboard corners you have to refine them with the cornerSubPix function; you can set how fine the refinement is through its parameters.
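A minimal Python sketch of that pipeline (the board size, square size and file pattern are placeholders):

import cv2
import numpy as np
import glob

pattern_size = (9, 6)                       # inner corners per row/column of the board
square_size  = 25.0                         # in mm; results come out in whatever unit you give

# Real-world corner coordinates of one board (Z = 0 plane).
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001)

for fname in glob.glob('calib_*.png'):      # 30-50 views of the board
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
        obj_points.append(objp)
        img_points.append(corners)

# rms is the re-projection error mentioned above; it should be close to zero.
rms, camera_matrix, dist_coeffs, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print('re-projection error:', rms)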
On the internet you can find a lot of documentation about this subject.
http://www.ics.uci.edu/~majumder/vispercep/cameracalib.pdf
I hope it helps !
Regarding the chessboard positions, I got the best results with 25-30 images.
First I take 3-4 images that show the chessboard at different distances: full frame, half, 1/3, and 1/4 of the frame.
Then I make sure to cover each corner and the center of each edge, plus 4 rotations around each axis (X, Y, Z). When using a 640x480 sensor my reprojection error was mostly around 0.1 or even better.
Here are a few links that got me in the right direction:
How to verify the correctness of calibration of a webcam?

Warping Perspective using arbitary rotation angle

I have an image of a chessboard taken at some angle. Now I want to warp the perspective so that the chessboard image looks as if it was taken directly from above.
I know that I could try to use 'findHomography' between matched points, but I wanted to avoid that and use, for example, rotation data from the mobile sensors to build the homography matrix on my own. I calibrated my camera to get the intrinsic parameters. Then let's say the following image was taken at a ~60 degree angle around the x-axis. I thought that all I had to do was multiply the camera matrix with the rotation matrix to obtain the homography matrix. I tried the following code, but it looks like I'm not understanding something correctly because it doesn't work as expected (the resulting image is completely black or white).
import cv2
import numpy as np
import math

camera_matrix = np.array([[5.7415988502105745e+02, 0., 2.3986181527877352e+02],
                          [0., 5.7473682183375217e+02, 3.1723734404756237e+02],
                          [0., 0., 1.]])
distortion_coefficients = np.array([1.8662919398453856e-01, -7.9649812697463640e-01,
                                    1.8178068172317731e-03, -2.4296638847737923e-03,
                                    7.0519002388825025e-01])

theta = math.radians(60)
rotx = np.array([[1, 0, 0],
                 [0, math.cos(theta), -math.sin(theta)],
                 [0, math.sin(theta), math.cos(theta)]])
homography = np.dot(camera_matrix, rotx)

im = cv2.imread('data/chess1.jpg')
gray = cv2.cvtColor(im, cv2.COLOR_BGR2GRAY)
im_warped = cv2.warpPerspective(gray, homography, (480, 640), flags=cv2.WARP_INVERSE_MAP)
cv2.imshow('image', im_warped)
cv2.waitKey()
I also have distortion_coefficients after calibration. How can those be incorporated into the code to improve results?
This answer is awfully late by several years, but here it is ...
(Disclaimer: my use of terminology in this answer may be imprecise or incorrect. Please do look up on this topic from other more credible sources.)
Remember:
Because you only have one image (view), you can only compute 2D homography (perspective correspondence between one 2D view and another 2D view), not the full 3D homography.
Because of that, the nice intuitive understanding of the 3D homography (rotation matrix, translation matrix, focal distance, etc.) are not available to you.
What we say is that with 2D homography you cannot factorize the 3x3 matrix into those nice intuitive components like 3D homography does.
You have one matrix - (which is the product of several matrices unknown to you) - and that is it.
However,
OpenCV provides a getPerspectiveTransform function which solves for the 3x3 perspective matrix (using a homogeneous coordinate system) of a 2D homography between two planar quadrilaterals.
Link to documentation
To use this function,
Find the four corners of the chessboard on the image. These will be your source coordinates.
Supply four rectangle corners of your choice. These will be your destination coordinates.
Pass the source coordinates and destination coordinates into the getPerspectiveTransform to generate a 3x3 matrix that is able to dewarp your chessboard to an upright rectangle.
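A short Python sketch, with made-up corner coordinates, of what those three steps look like:

import cv2
import numpy as np

img = cv2.imread('chessboard_tilted.jpg')

# Source: the four outer corners of the chessboard in the image (clockwise here),
# picked by hand or taken from a corner detector. The values are placeholders.
src = np.float32([[152, 88], [505, 114], [470, 410], [120, 380]])

# Destination: corners of an upright rectangle, in the same (clockwise) order.
dst = np.float32([[0, 0], [400, 0], [400, 400], [0, 400]])

M = cv2.getPerspectiveTransform(src, dst)           # 3x3 perspective matrix
top_down = cv2.warpPerspective(img, M, (400, 400))  # dewarped chessboard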
Notes to remember:
Mind the ordering of the four corners.
If the source coordinates are picked in clockwise order, the destination also needs to be picked in clockwise order.
Likewise, if counter-clockwise order is used, do it consistently.
Likewise, if z-order (top left, top right, bottom left, bottom right) is used, do it consistently.
Failure to order the corners consistently will generate a matrix that executes the point-to-point correspondence exactly (mathematically speaking), but will not generate a usable output image.
The aspect ratio of the destination rectangle can be chosen arbitrarily. In fact, it is not possible to deduce the "original aspect ratio" of the object in world coordinates, because "this is 2D homography, not 3D".
One problem is that to multiply by a camera matrix you need some concept of a z coordinate. You should start by getting basic image warping given Euler angles to work before you think about distortion coefficients. Have a look at this answer for a slightly more detailed explanation and try to duplicate my result. The idea of moving your image down the z-axis and then projecting it with your camera matrix can be confusing; let me know if any part of it does not make sense.
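For reference, one common way to do the "basic warping given Euler angles" part is the pure-rotation homography H = K * R * inv(K) (note the inv(K), which the code in the question leaves out). A sketch using the question's intrinsics, rounded, and not necessarily the exact recipe of the linked answer:

import cv2
import numpy as np
import math

# Intrinsics from the question (rounded); theta is the assumed tilt about the x-axis.
K = np.array([[574.16, 0., 239.86],
              [0., 574.74, 317.24],
              [0., 0., 1.]])
theta = math.radians(60)
R_x = np.array([[1, 0, 0],
                [0, math.cos(theta), -math.sin(theta)],
                [0, math.sin(theta),  math.cos(theta)]])

# Homography induced by rotating the camera about its own centre.
H = K @ R_x @ np.linalg.inv(K)

img = cv2.imread('data/chess1.jpg')
# With WARP_INVERSE_MAP, H is interpreted as the output-to-input mapping;
# drop the flag (or negate theta) to apply H the other way round.
warped = cv2.warpPerspective(img, H, (480, 640), flags=cv2.WARP_INVERSE_MAP)
cv2.imshow('warped', warped)
cv2.waitKey()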
You do not need to calibrate the camera nor estimate the camera orientation (the latter, however, in this case would be very easy: just find the vanishing points of those orthogonal bundles of lines, and take their cross product to find the normal to the plane, see Hartley & Zisserman's bible for details).
The only thing you need to do is estimate the homography that maps the checkers to squares, then apply it to the image.

Distance to the object using stereo camera

Is there a way to calculate the distance to specific object using stereo camera?
Is there an equation or something to get distance using disparity or angle?
NOTE: Everything described here can be found in the Learning OpenCV book in the chapters on camera calibration and stereo vision. You should read these chapters to get a better understanding of the steps below.
One approach that does not require you to measure all the camera intrinsics and extrinsics yourself is to use OpenCV's calibration functions. Camera intrinsics (lens distortion/skew etc.) can be calculated with cv::calibrateCamera, while the extrinsics (the relation between the left and right camera) can be calculated with cv::stereoCalibrate. These functions take a number of points in pixel coordinates and try to map them to real-world object coordinates. OpenCV has a neat way to get such points: print out a black-and-white chessboard and use the cv::findChessboardCorners/cv::cornerSubPix functions to extract them. Around 10-15 image pairs of chessboards should do.
The matrices calculated by the calibration functions can be saved to disk so you don't have to repeat this process every time you start your application. You get some neat matrices here that allow you to create a rectification map (cv::stereoRectify/cv::initUndistortRectifyMap) that can later be applied to your images using cv::remap. You also get a neat matrix called Q, which is a disparity-to-depth matrix.
The reason to rectify your images is that once the process is complete for a pair of images (assuming your calibration is correct), every pixel/object in one image can be found on the same row in the other image.
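In Python, that rectification setup might look roughly like this (the file names and image size are placeholders; the matrices come from the calibration step above):

import cv2
import numpy as np

# Outputs of cv2.calibrateCamera / cv2.stereoCalibrate, loaded from disk.
M1, d1 = np.load('M1.npy'), np.load('d1.npy')   # left camera matrix + distortion
M2, d2 = np.load('M2.npy'), np.load('d2.npy')   # right camera matrix + distortion
R, T   = np.load('R.npy'),  np.load('T.npy')    # rotation/translation between the cameras
image_size = (640, 480)

# Rectification transforms; Q is the disparity-to-depth matrix mentioned above.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(M1, d1, M2, d2, image_size, R, T)

map1x, map1y = cv2.initUndistortRectifyMap(M1, d1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(M2, d2, R2, P2, image_size, cv2.CV_32FC1)

left_rect  = cv2.remap(cv2.imread('left.png'),  map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(cv2.imread('right.png'), map2x, map2y, cv2.INTER_LINEAR)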
There are a few ways you can go from here, depending on what kind of features you are looking for in the image. One way is to use OpenCV's stereo correspondence functions, such as Stereo Block Matching or Semi-Global Block Matching. This will give you a disparity map for the entire image, which can be transformed to 3D points using the Q matrix (cv::reprojectImageTo3D).
The downside of this is that unless there is a lot of texture information in the image, OpenCV isn't really very good at building a dense disparity map (you will get gaps in it where it couldn't find the correct disparity for a given pixel), so another approach is to find the points you want to match yourself. Say you find the feature/object at x=40, y=110 in the left image and x=22 in the right image (since the images are rectified, they should have the same y-value). The disparity is calculated as d = 40 - 22 = 18.
Construct a cv::Point3f(x,y,d), in our case (40,110,18). Find other interesting points the same way, then send all of the points to cv::perspectiveTransform (with the Q matrix as the transformation matrix; essentially this function is cv::reprojectImageTo3D but for sparse disparity maps) and the output will be points in an XYZ coordinate system with the left camera at the center.
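A small Python sketch of that sparse approach, assuming Q was saved from the rectification step (the values reuse the example above):

import cv2
import numpy as np

Q = np.load('Q.npy')                    # 4x4 disparity-to-depth matrix from stereoRectify

# Matched feature in the rectified pair: (x, y) in the left image and x in the right one.
x_left, y, x_right = 40, 110, 22
disparity = x_left - x_right            # 18 in the example above

# perspectiveTransform expects an array of 3D points of shape (1, N, 3).
pts = np.float32([[[x_left, y, disparity]]])
xyz = cv2.perspectiveTransform(pts, Q)  # 3D coordinates, left camera at the origin
print(xyz[0, 0])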
I am still working on it, so I will not post entire source code yet. But I will give you a conceptual solution.
You will need the following data as input (for both cameras):
camera position
camera point of interest (point at which camera is looking)
camera resolution (horizontal and vertical)
camera field of view angles (horizontal and vertical)
You can measure the last one yourself, by placing the camera on a piece of paper and drawing two lines and measuring an angle between these lines.
Cameras do not have to be aligned in any way, you only need to be able to see your object in both cameras.
Now calculate a vector from each camera to your object. You have (X,Y) pixel coordinates of the object from each camera, and you need to calculate a vector (X,Y,Z). Note that in the simple case, where the object is seen right in the middle of the camera, the solution would simply be (camera.PointOfInterest - camera.Position).
Once you have both vectors pointing at your target, the lines defined by these vectors should cross at one point in an ideal world. In the real world they will not, because of small measurement errors and the limited resolution of the cameras. So use the link below to calculate the distance vector between the two lines.
Distance between two lines
In that link: P0 is your first cam position, Q0 is your second cam position and u and v are vectors starting at camera position and pointing at your target.
You are not interested in the actual distance they calculate; you need the vector Wc, and we can assume that the object is in the middle of Wc. Once you have the position of your object in 3D space you also get whatever distance you like.
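For illustration, a minimal numpy sketch of that midpoint computation (my own notation, following the link above):

import numpy as np

def triangulate_midpoint(P0, u, Q0, v):
    """Midpoint of the shortest segment between lines P0 + s*u and Q0 + t*v.

    P0, Q0: camera positions; u, v: direction vectors towards the target.
    """
    w0 = P0 - Q0
    a, b, c = np.dot(u, u), np.dot(u, v), np.dot(v, v)
    d, e = np.dot(u, w0), np.dot(v, w0)
    denom = a * c - b * b                 # ~0 means the rays are (nearly) parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    closest_on_line1 = P0 + s * u
    closest_on_line2 = Q0 + t * v
    return (closest_on_line1 + closest_on_line2) / 2.0

# e.g. two cameras 1 m apart, both looking at a point roughly 2 m in front of them:
obj = triangulate_midpoint(np.array([0., 0., 0.]), np.array([0.1, 0., 2.0]),
                           np.array([1., 0., 0.]), np.array([-0.9, 0., 2.0]))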
I will post the entire source code soon.
I have source code for detecting a human face that returns not only the depth but also real-world coordinates, with the left camera (or the right camera, I can't remember) as the origin. It is adapted from the source code of "Learning OpenCV" and refers to some websites to get it working. The results are generally quite accurate.
