I am attempting camera calibration from a single RGB image (a panorama), given a 3D point cloud for that image.
All of the methods I have considered require an intrinsic parameter matrix, which I have no access to.
The intrinsic matrix can be estimated using Bouguet's Camera Calibration Toolbox, but as I said, I have only a single image and the single point cloud that belongs to it.
So, knowing the 2D image coordinates, the extrinsic parameters, and the 3D world coordinates, how can the intrinsic parameters be estimated?
It would seem that the initCameraMatrix2D function from OpenCV (https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html) works in the same way as Bouguet's toolbox and also requires multiple images of the same object.
I am looking into the direct linear transformation (DLT) and the Levenberg–Marquardt algorithm, with implementations at https://drive.google.com/file/d/1gDW9zRmd0jF_7tHPqM0RgChBWz-dwPe1,
but it seems that both assume the pinhole camera model and therefore find a linear transformation between the 3D and 2D points.
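For what it's worth, a DLT-style estimate is possible from a single view if you can pick at least six well-spread 2D-3D correspondences between the image and the point cloud, and only for a perspective crop of the panorama, since a full panorama breaks the pinhole model. A minimal Python sketch, with synthetic data standing in for real correspondences:

import numpy as np
import cv2

def dlt_projection(world_pts, image_pts):
    # Build the standard DLT system: two equations per correspondence
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u*X, -u*Y, -u*Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v*X, -v*Y, -v*Z, -v])
    # P is the right singular vector with the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    return Vt[-1].reshape(3, 4)

# Synthetic stand-in data: a made-up camera projecting random points
rng = np.random.default_rng(0)
K_true = np.array([[800., 0., 320.], [0., 800., 240.], [0., 0., 1.]])
Rt = np.hstack([np.eye(3), np.array([[0.], [0.], [5.]])])
world_pts = rng.uniform(-1, 1, (10, 3))
h = (K_true @ Rt @ np.vstack([world_pts.T, np.ones(10)])).T
image_pts = h[:, :2] / h[:, 2:]

P = dlt_projection(world_pts, image_pts)
K, R, t, *_ = cv2.decomposeProjectionMatrix(P)
print(K / K[2, 2])   # should recover K_true up to scale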
I can't find my half-year-old source code, but from the top of my head:
cx, cy is the optical centre, which is width/2, height/2 in pixels.
fx = fy is the focal length in pixels (the distance from the camera to the image plane).
If you know that the distance from the camera to the image plane is, for example, 30 cm, and it captures an image of 16 x 10 cm at 1920 x 1200 pixels, then the pixel size is 100 mm / 1200 = 1/12 mm, the focal length (fx, fy) is 300 mm * 12 px/mm = 3600 px, and the image centre is cx = 1920/2 = 960, cy = 1200/2 = 600. I assume that the pixels are square and the camera sensor is centred on the optical axis.
You can also get the focal length from the image size in pixels and a measured angle of view, as in the sketch below.
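A minimal sketch of that relation (the 60-degree FOV is a made-up example value):

import math

def focal_px(image_size_px, fov_deg):
    # Focal length in pixels from the angle of view along the same axis
    return (image_size_px / 2) / math.tan(math.radians(fov_deg) / 2)

print(focal_px(1920, 60.0))   # 1920 px wide, 60 deg horizontal FOV -> ~1663 px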
I'm using a physical camera in Unity where I set the focal length f and the sensor size sx and sy. Can these parameters and the image resolution be used to create a camera calibration matrix? I probably need the focal length in terms of pixels, and the cx and cy parameters that denote the deviation of the image-plane centre from the camera's optical axis. Is cx = w/2 and cy = h/2 correct in this case (w: width, h: height)?
I need the calibration matrix to compute a homography in OpenCV using the camera pose from Unity.
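For reference, a hedged sketch of how such a matrix could be assembled from Unity's physical-camera settings, assuming zero lens shift (so the principal point is exactly the image centre):

import numpy as np

def unity_to_intrinsics(f_mm, sensor_mm, resolution_px):
    # Pinhole intrinsics from focal length (mm) and sensor size (mm)
    sx, sy = sensor_mm
    w, h = resolution_px
    fx = f_mm * w / sx        # focal length converted to pixels
    fy = f_mm * h / sy
    return np.array([[fx, 0., w / 2],
                     [0., fy, h / 2],
                     [0., 0., 1.]])

# e.g. a 50 mm lens on a 36 x 24 mm sensor rendered at 1920 x 1080
K = unity_to_intrinsics(50.0, (36.0, 24.0), (1920, 1080))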
Yes, that's possible. I have done that with several different camera models (pinhole model, fisheye lens, polynomial lens model, etc.).
Calibrate your camera with OpenCV and pass the calibration parameters to the shader. You need to write a custom shader. Have a look at my previous question:
Camera lens distortion in OpenGL
You don't need a homography here.
@Tuebel gave me a nice piece of code and I have successfully adapted it to real camera models.
The hardest part will be managing the difference between the OpenGL and OpenCV camera coordinate conventions; the calibration parameters are, of course, expressed in OpenCV camera coordinates.
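For context, a minimal sketch of that axis difference (OpenCV looks down +z with +y down, OpenGL looks down -z with +y up, so flipping y and z converts a pose between the two):

import numpy as np

# Flipping the y and z axes maps OpenCV camera coords to OpenGL ones
FLIP_YZ = np.diag([1.0, -1.0, -1.0])

def cv_pose_to_gl(R_cv, t_cv):
    # Convert an OpenCV world-to-camera pose [R|t] to OpenGL conventions
    return FLIP_YZ @ R_cv, FLIP_YZ @ t_cv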
I am using OpenCV to calibrate my webcam. I fixed the webcam to a rig so that it stays static, moved a chessboard calibration pattern in front of it, and used the detected points to compute the calibration, as in many OpenCV examples (https://docs.opencv.org/3.1.0/dc/dbb/tutorial_py_calibration.html).
This gives me the camera intrinsic matrix, plus a rotation and translation for mapping each chessboard view from chessboard space to world space.
However, what I am interested in is the global extrinsic matrix: once I have removed the checkerboard, I want to be able to specify a point in the image (x, y) and its height, and get its position in world space. As far as I understand, I need both the intrinsic and the extrinsic matrix for this. How should one proceed to compute the extrinsic matrix from here? Can I use the measurements I have already gathered from the chessboard calibration step to compute the extrinsic matrix as well?
Let me give some context. Consider the following picture (from https://docs.opencv.org/2.4/modules/calib3d/doc/camera_calibration_and_3d_reconstruction.html):
The camera has "attached" a rigid reference frame (Xc,Yc,Zc). The intrinsic calibration that you successfully performed allows you to convert a point (Xc,Yc,Zc) into its projection on the image (u,v), and a point (u,v) in the image to a ray in (Xc,Yc,Zc) (you can only get it up to a scaling factor).
In practice, you want to place the camera in an external "world" reference frame, let's call it (X,Y,Z). Then there is a rigid transformation, represented by a rotation matrix, R, and a translation vector T, such that:
|Xc|       |X|
|Yc| = R * |Y| + T
|Zc|       |Z|
That's the extrinsic calibration (which can also be written as a 4x4 matrix; that's what you call the extrinsic matrix).
Now, the answer. To obtain R and T, you can do the following:
Fix your world reference frame, for example the ground can be the (x,y) plane, and choose an origin for it.
Set some points with known coordinates in this reference frame, for example, points in a square grid in the floor.
Take a picture and get the corresponding 2D image coordinates.
Use solvePnP to obtain the rotation and translation, with the following parameters:
objectPoints: the 3D points in the world reference frame.
imagePoints: the corresponding 2D points in the image in the same order as objectPoints.
cameraMatrix: the intrinsic matrix you already have.
distCoeffs: the distortion coefficients you already have.
rvec, tvec: these will be the outputs.
useExtrinsicGuess: false
flags: you can use CV_ITERATIVE (cv2.SOLVEPNP_ITERATIVE in the Python API).
Finally, get R from rvec with the Rodrigues function. A sketch of these calls follows.
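A minimal Python sketch of those two steps, with made-up point data and placeholder intrinsics standing in for your calibrated values:

import numpy as np
import cv2

# Hypothetical 3D points on the floor grid (world frame, z = 0) and
# their detected 2D pixel positions, in the same order
object_points = np.array([[0., 0., 0.], [1., 0., 0.],
                          [0., 1., 0.], [1., 1., 0.]])
image_points = np.array([[320., 400.], [520., 410.],
                         [310., 250.], [500., 245.]])

camera_matrix = np.array([[800., 0., 320.],
                          [0., 800., 240.],
                          [0., 0., 1.]])   # replace with your calibrated K
dist_coeffs = np.zeros(5)                  # replace with your coefficients

ok, rvec, tvec = cv2.solvePnP(object_points, image_points,
                              camera_matrix, dist_coeffs,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)   # 3x3 rotation from world to camera
# Now |Xc Yc Zc|^T = R * |X Y Z|^T + tvec, as in the equation above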
You will need at least 3 non-collinear points with corresponding 3D-2D coordinates for solvePnP to work, but more are better. To get good-quality points, you could print a big chessboard pattern, put it flat on the floor, and use it as a grid (see the sketch below). What's important is that the pattern is not too small in the image (the larger, the more stable your calibration will be).
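If you take the chessboard-on-the-floor route, a hedged sketch of collecting those point pairs with OpenCV (the file name, the 7x6 pattern size, and the 10 cm square size are all assumptions):

import numpy as np
import cv2

img = cv2.imread("floor_chessboard.jpg")            # hypothetical photo
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
found, corners = cv2.findChessboardCorners(gray, (7, 6))
if found:
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
    square = 0.10                            # assumed square size in metres
    objp = np.zeros((7 * 6, 3), np.float32)  # matching 3D floor points, z = 0
    objp[:, :2] = (square * np.mgrid[0:7, 0:6]).T.reshape(-1, 2)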
And, very important: for the intrinsic calibration you used a chess pattern with squares of a certain size, but you told the algorithm (which performs a kind of solvePnP for each pattern) that the size of each square is 1. This is not explicit, but it is done in line 10 of the sample code, where the grid is built with coordinates 0, 1, 2, ...:
objp[:,:2] = np.mgrid[0:7,0:6].T.reshape(-1,2)
And the scale of the world for the extrinsic calibration must match this, so you have several possibilities:
Use the same scale, for example by using the same grid or by measuring the coordinates of your "world" plane in the same scale. In this case, your "world" won't be at the right scale.
Recommended: redo the intrinsic calibration with the right scale, something like:
objp[:,:2] = (size_of_a_square*np.mgrid[0:7,0:6]).T.reshape(-1,2)
Where size_of_a_square is the real size of a square.
(I haven't done this, but it is theoretically possible; do it if you can't do option 2.) Reuse the intrinsic calibration by scaling fx and fy. This is possible because the camera sees everything up to a scale factor, and the declared size of a square only changes fx and fy (and the T in the pose for each square, but that's another story). If the actual size of a square is L, then replace fx and fy with L*fx and L*fy before calling solvePnP.
After calibrating a camera using Jean-Yves Bouguet's Camera Calibration Toolbox and checkerboard patterns printed on cardboard, I've obtained the extrinsic and intrinsic parameters, and I can use this information to find camera coordinates:
Pc = R * Pw + T
After that, how can I obtain the world coordinates of an image point using Pc and the calibration parameters?
Thanks in advance.
EDIT
The goal is to use the calibrated camera parameters to measure planar objects with a calibrated camera. To perform this task I don't know how to use the camera parameters; in other words, I have to convert the pixel coordinates of the image to world coordinates using the calibrated parameters. I already have the parameters and the new image. How can I do this conversion?
I was thinking about the problem and came to the following conclusion:
You can't find the object's size from a single shot: when you have no idea how far the object is from your camera, you can't say anything about its size. The calibration just tells you how far the image plane is from the camera (the focal length) and the opening angles of the lens. When the focal length changes, the calibration changes too.
But there are some possibilities:
How to get the real life size of an object from an image, when not knowing the distance between object and the camera?
So, as I understand it, you can only approximate the size of the objects.
Your problem can be solved if (and only if) you can express the plane of your object in calibrated camera coordinates.
The calibration procedure outputs, along with the camera intrinsic parameters K, a coordinate transform matrix Qwc_i = [Rwc_i | Twc_i] for every calibration image, which expresses the location and pose of a particular scene coordinate frame in camera coordinates at that calibration image. IIRC, in Jean-Yves' toolbox this is the frame attached to the top-left corner of the calibration checkerboard.
So, if your planar object is in the same plane as the checkerboard in one of the calibration images, all you have to do in order to find its location in space is intersect the checkerboard plane with the camera rays cast from the camera centre (0, 0, 0) through the pixels into which the object is imaged, as sketched below.
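A minimal sketch of that ray-plane intersection, assuming (rvec, tvec) is the checkerboard pose from the calibration and (u, v) is an already-undistorted pixel:

import numpy as np
import cv2

def pixel_to_board_plane(u, v, K, rvec, tvec):
    # Intersect the ray through pixel (u, v) with the z = 0 plane
    # of the checkerboard frame given by (rvec, tvec)
    R, _ = cv2.Rodrigues(rvec)
    t = np.asarray(tvec, dtype=np.float64).reshape(3)
    n = R[:, 2]                                   # board normal in camera coords
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction
    lam = (n @ t) / (n @ d)                       # intersection depth along the ray
    return lam * d                                # 3D point in camera coordinates

# Made-up pose and intrinsics, for illustration only
K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
print(pixel_to_board_plane(400, 300, K,
                           np.array([0.1, -0.2, 0.]), np.array([0., 0., 1.])))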
If your object is NOT in one of those planes, all you can do is infer the object's own plane from additional information, if available, e.g. from a feature of known size and shape.
I'd like to get the homography matrix for a bird's-eye view, and I know the projection matrix of the camera. Is there any relation between them?
Thanks.
A projection matrix is defined as the product of the camera's intrinsic matrix (focal length, principal point, etc.) and its extrinsic matrix (rotation and translation). The question is: with respect to what are your rotation and translation defined? For example, I can imagine another camera or an object in 3D with respect to which you have these rotations and translations. Otherwise your projection is just an intrinsic matrix.
Think first about the pieces of information you need to obtain a bird's-eye view: you need to know at least how your camera is oriented w.r.t. the ground surface. If you also know the camera elevation, you can create a metric reconstruction. But since you mentioned a homography, I assume you consider a bird's-eye view of a flat surface, since a homography maps points between two flat surfaces, in your case points on the flat ground to points on your flat sensor.
Let's consider the pinhole camera equation. It basically says that
[u, v, 1]^T ~ A * [R|t] * [x, y, z, 1]^T,
where A is the camera intrinsic matrix. Now, since you deal with a ground plane, you can align a new coordinate system with it by setting z = 0; [R|t] are the rotation and translation from this coordinate system into your camera-aligned system.
Next, note that [R|t] is a 3x4 matrix and it loses one column since z = 0; it becomes a 3x3 matrix, that is, a homography, which is now equal to H = A * [r1 r2 t] (r1 and r2 being the first two columns of R). OK, all we did was prove that a homography mapping exists between the ground and your sensor.
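A quick numeric sketch of that construction (A, R, and t here are made-up values, not from the question):

import numpy as np

A = np.array([[800., 0., 320.],
              [0., 800., 240.],
              [0., 0., 1.]])        # made-up intrinsics
R = np.eye(3)                       # ground-to-camera rotation
t = np.array([0., 1.5, 4.0])        # ground-to-camera translation

# z = 0 drops the third column of R: H = A * [r1 r2 t]
H = A @ np.column_stack([R[:, 0], R[:, 1], t])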
Now, you want another kind of homography: the one that arises during pure camera rotations and zooms, between points on the sensor before and after the rotation/zoom; that is, you want to rotate the camera down and possibly zoom out. Again, think in terms of the pinhole camera equation: originally you had H1 = A (here I threw out [R|t] as irrelevant for now), and after rotating the camera you have H2 = A*R. In other words, H1 is how you make your image now, and H2 is how you want your image to look.
The relation between the two, H12, is what you want to find, and it is also a homography, since homographies form a family of transformations (use this simple heuristic: what happens in a family stays in the family). Since the same surface can generate images with either H1 or H2, we can assemble H12 by undoing H1 (back to the ground plane) and applying H2 (from the ground to the sensor's bird's-eye view); in a way this resembles operations with vectors, you just have to respect the order of matrix application from right to left:
H12 = H2 * H1^-1 = A*R*A^-1 = P*A^-1,
where we substituted the expressions for H1 and H2, and finally for the projection matrix (in case you do have it).
This is your answer; if the rotation R is unknown, it can be guessed from the camera orientation w.r.t. the ground or calculated using solvePnP() from the OpenCV library. Finally, when I do this on a cell phone I just use its accelerometer readings as a good approximation, since when a cell phone is not accelerating the readings represent the gravity vector, which gives the rotation w.r.t. the flat horizontal ground.
When you plot your bird's-eye view as an image, you will notice that its boundary has turned from a rectangle into a kind of trapezoid (due to the camera frustum shape) and that there are holes at distant locations (due to the insufficient sampling rate). You can interpolate inside the holes using warpPerspective().
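A minimal sketch putting this together (the intrinsics and the 30-degree pitch are made-up values, and "road.jpg" is a hypothetical input):

import numpy as np
import cv2

A = np.array([[800., 0., 640.],
              [0., 800., 360.],
              [0., 0., 1.]])        # made-up intrinsics
R, _ = cv2.Rodrigues(np.array([np.radians(-30.0), 0., 0.]))  # pitch down 30 deg

H12 = A @ R @ np.linalg.inv(A)      # H12 = A * R * A^-1
img = cv2.imread("road.jpg")        # hypothetical input image
birdseye = cv2.warpPerspective(img, H12, (img.shape[1], img.shape[0]))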
I have used OpenCV to calculate the homography relating two views of the same plane by detecting and matching features. Is there any way to recover the plane itself, or the plane normal, from this homography? (I am looking for an equation where H is the input and the normal n is the output.)
If you have the calibration of the cameras, you can extract the normal of the plane, but not the distance to the plane (i.e. the transformation you obtain is up to scale), as Wikipedia explains. I don't know of any implementation that does it, but here are a couple of papers that deal with the problem (I warn you, it is not straightforward): Faugeras & Lustman 1988, Vargas & Malis 2005.
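Since this answer was written, OpenCV has gained decomposeHomographyMat (based on the Malis and Vargas report), which does exactly this given the intrinsic matrix. A sketch with a synthetic homography standing in for a real one:

import numpy as np
import cv2

K = np.array([[700., 0., 320.], [0., 700., 240.], [0., 0., 1.]])
Rt, _ = cv2.Rodrigues(np.array([0., 0.2, 0.]))
tv = np.array([[0.1], [0.0], [0.0]])
nv = np.array([[0.], [0.], [1.]])            # plane normal, distance d = 1
H = K @ (Rt + tv @ nv.T) @ np.linalg.inv(K)  # synthetic homography for the demo

num, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)
# Up to four candidate (R, t/d, n) solutions; prune them, e.g. by requiring
# that the reconstructed points lie in front of both cameras
for n in normals:
    print(n.ravel())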
You can recover the real translation of the transformation (i.e. the distance to the plane) if you have at least one real distance between two points on the plane. If that is the case, the easiest way to go with OpenCV is to first calculate the homography, then obtain four points on the plane with their 2D coordinates and the real 3D ones (you should be able to obtain them if you have a real measurement on the plane), and finally use PnP. PnP will give you a real-scale transformation.
Rectifying an image means making the epipolar lines horizontal so that they lie in the same rows in both images. From your description I gather that you simply want to warp the plane so that it is parallel to the camera sensor, or image plane. This has nothing to do with rectification; I'd rather call it obtaining a bird's-eye view or a top view.
I see the source of confusion, though. Rectification of images usually involves multiplying each image by a homography matrix. In your case, each point Xa in sensor plane a maps to sensor plane b as:
Xb = Hab * Xa = (Hb * Ha^-1) * Xa,
where Ha is the homography from the world plane to sensor a. Ha together with the intrinsic camera matrix would give you the plane orientation, but I don't see an easy way to decompose Hab into Ha and Hb.
A classic (and hard) way is to find a Fundamental matrix, recover the Essential matrix from it, decompose the Essential matrix into camera rotation and translation (up to scale), rectify both images, perform dense stereo, and then fit a plane equation to the 3D points you reconstruct.
If you are interested in the ground plane and you operate an embedded device, though, you don't even need two frames: a top view can easily be recovered from a single photo, the camera elevation above the ground (H), and gyroscope (or orientation vector) readings. A simple diagram explains the process in the 2D case: the first picture shows how to restore the Z (depth) coordinate of every point on the ground plane; the second picture shows a plot of the top view, with the vertical axis being z and the horizontal axis x = (img.col - w/2) * Z / focal, where img.col is the image column, w the image width, and focal the camera focal length. Note that the camera frustum looks like a trapezoid in a bird's-eye view. A sketch of this back-projection follows.
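A simplified sketch of that single-photo back-projection, assuming zero roll and a flat horizontal ground (the downward pitch would come from your gyroscope, and pixel coordinates are assumed undistorted):

import numpy as np

def ground_point(col, row, K, cam_height_m, pitch_rad):
    # Back-project a ground-plane pixel to (x, z) on the ground, given
    # the camera height and its downward pitch
    fx, fy = K[0, 0], K[1, 1]
    cx, cy = K[0, 2], K[1, 2]
    phi = pitch_rad + np.arctan2(row - cy, fy)  # ray angle below the horizon
    z = cam_height_m / np.tan(phi)              # depth along the ground
    x = (col - cx) * z / fx                     # lateral offset, as in the text
    return x, z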