Calculating homography matrix using arbitrary known geometrical relations - opencv

I am using OpenCV for an optical measurement system. I need to carry out a perspective transformation between two images captured by a digital camera. In the field of view of the camera I placed a set of markers (which lie in a common plane), which I use as corresponding points in both images. Using the markers' positions I can calculate the homography matrix. The problem is that the measured object, whose images I actually want to transform, is positioned at a small distance from the markers and parallel to the markers' plane. I can measure this distance.
My question is, how to take that distance into account when calculating the homography matrix, which is necessary to perform the perspective transformation.
In my solution it is a strong requirement not to use the measured object points for calculation of homography (and that is why I need other markers in the field of view).
Please let me know if the description is not precise.
The figure shows an example image.
The red rectangle is the measured object. It is physically placed a small distance behind the circular markers.
I capture images of the object from different camera positions. The measured object can deform between acquisitions. Using the circular markers, I want to transform the object's image to the same coordinates. I can measure the distance between the object and the markers, but I do not know how to modify the homography matrix so that it works on the measured object (instead of the markers).

This question is quite old, but it is interesting and it might be useful to someone.
First, here is how I understood the problem presented in the question:
You have two images I1 and I2 acquired by the same digital camera at two different positions. Both images show a set of markers which all lie in a common plane pm. There is also a measured object, whose visible surface lies in a plane po parallel to the markers' plane but with a small offset. You computed the homography Hm12 mapping the marker positions in I1 to the corresponding marker positions in I2, and you measured the offset dm-o between the planes po and pm. From that, you would like to calculate the homography Ho12 mapping points on the measured object in I1 to the corresponding points in I2.
A few remarks on this problem:
First, notice that a homography is a relation between image points, whereas the distance between the markers' plane and the object's plane is a distance in world coordinates. Using the latter to infer something about the former requires a metric estimation of the camera poses, i.e. you need to determine the Euclidean and correctly scaled relative position & orientation of the camera for each of the two images. The Euclidean requirement implies that the digital camera must be calibrated, which should not be a problem for an "optical measurement system". The scale requirement implies that the true 3D distance between two given 3D points must be known. For instance, you need to know the true distance l0 between two arbitrary markers.
Since we only need the relative pose of the camera for each image, we may choose to use a 3D coordinate system centered and aligned with the coordinate system of the camera for I1. Hence, we will denote the projection matrix for I1 by P1 = K1 * [ I | 0 ]. Then, we denote the projection matrix for I2 (in the same 3D coordinate system) by P2 = K2 * [ R2 | t2 ]. We will also denote by D1 and D2 the coefficients modeling lens distortion respectively for I1 and I2.
As a single digital camera acquired both I1 and I2, you may assume that K1 = K2 = K and D1 = D2 = D. However, if I1 and I2 were acquired with a long delay between the acquisitions (or with a different zoom, etc), it will be more accurate to consider that two different camera matrices and two sets of distortion coefficients are involved.
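For concreteness, here is a minimal numpy sketch of how these projection matrices could be assembled once K, R2 and t2 are known (the estimation of R2 and t2 is described in the steps below; all variable names are placeholders):

import numpy as np

# K: shared 3x3 intrinsic matrix (K1 = K2 = K assumed); R2, t2: relative pose of camera 2
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])   # P1 = K1 * [ I | 0 ]
P2 = K @ np.hstack([R2, t2.reshape(3, 1)])          # P2 = K2 * [ R2 | t2 ]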
Here is how you could approach such a problem:
The steps in order to estimate P1 and P2 are as follows:
Estimate K1, K2 and D1, D2 via calibration of the digital camera
Use D1 and D2 to correct images I1 and I2 for lens distortion, then determine the marker positions in the corrected images
Compute the fundamental matrix F12 (mapping points in I1 to epilines in I2) from the corresponding marker positions and infer the essential matrix E12 = K2^T * F12 * K1
Infer R2 and t2 from E12 and one point correspondence (see this answer to a related question). At this point, you have the relative camera pose, but only up to scale, since t2 is recovered with unit norm.
Use the measured distance l0 between two arbitrary markers to infer the correct norm for t2.
For the best accuracy, you may refine P1 and P2 using a bundle adjustment, with K1 and ||t2|| fixed, based on the corresponding marker positions in I1 and I2.
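As an illustration, a minimal OpenCV sketch of steps 3-4 could look like the following, assuming pts1 and pts2 are Nx2 arrays of corresponding, already undistorted marker positions and that a single intrinsic matrix K is used for both images (all names are placeholders):

import cv2
import numpy as np

# pts1, pts2: Nx2 float arrays of corresponding (undistorted) marker positions
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC)
E = K.T @ F @ K                                 # E12 = K2^T * F12 * K1, with K1 = K2 = K

# recoverPose decomposes E and resolves the four-fold ambiguity with a cheirality check;
# the returned t2 has unit norm, so it still has to be rescaled using the known
# distance l0 between two markers (step 5), e.g. by triangulating them and comparing.
n_inliers, R2, t2, pose_mask = cv2.recoverPose(E, pts1, pts2, K)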
At this point, you have an accurate metric estimation of the camera poses P1 = K1 * [ I | 0 ] and P2 = K2 * [ R2 | t2 ]. Now, the steps to estimate Ho12 are as follows:
Use D1 and D2 to correct images I1 and I2 for lens distortion, then determine the marker positions in the corrected images (same as 2. above, no need to re-do that) and estimate Hm12 from these corresponding positions
Compute the 3x1 vector v describing the markers' plane pm by solving this linear equation: Z * Hm12 = K2 * ( R2 - t2 * v^T ) * K1^-1 (see HZ00 chapter 13, result 13.5 and equation 13.2 for a reference on that), where Z is a scaling factor. Since v = n / dm, infer the distance to origin dm = 1 / ||v|| and the normal n = v / ||v||, which describe the markers' plane pm in 3D.
Since the object plane po is parallel to pm, they have the same normal n. Hence, you can infer the distance to origin do for po from the distance to origin dm for pm and from the measured plane offset dm-o, as follows: do = dm ± dm-o (the sign depends on the relative position of the planes: positive if pm is closer to the camera for I1 than po, negative otherwise).
From n and do describing the object plane in 3D, infer the homography Ho12 = K2 * ( R2 - t2 * n^T / do ) * K1^-1 (see HZ00 chapter 13, equation 13.2)
The homography Ho12 maps points on the measured object in I1 to the corresponding points in I2, where both I1 and I2 are assumed to be corrected for lens distortion. If you need to map points from and to the original distorted image, don't forget to use the distortion coefficients D1 and D2 to transform the input and output points of Ho12.
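To make steps 2-3 of this second part concrete, here is a hedged numpy sketch: it solves the linear equation above for (Z, v) in a least-squares sense and then builds the plane-induced homography of HZ00 equation 13.2 for the object plane. All inputs (K1, K2, R2, t2, Hm12 and the measured offset) are assumed to come from the previous steps:

import numpy as np

def plane_from_homography(Hm12, K1, K2, R2, t2):
    # Solve Z * Hm12 = K2 * (R2 - t2 * v^T) * K1^-1 for the unknowns (Z, v),
    # i.e. Z * A[i, j] + t2[i] * v[j] = R2[i, j] with A = K2^-1 * Hm12 * K1.
    A = np.linalg.inv(K2) @ Hm12 @ K1
    t = t2.reshape(3)
    M = np.zeros((9, 4))
    b = np.zeros(9)
    for i in range(3):
        for j in range(3):
            r = 3 * i + j
            M[r, 0] = A[i, j]        # coefficient of the scale factor Z
            M[r, 1 + j] = t[i]       # coefficient of v[j]
            b[r] = R2[i, j]
    sol = np.linalg.lstsq(M, b, rcond=None)[0]
    v = sol[1:]
    n = v / np.linalg.norm(v)        # plane normal
    d = 1.0 / np.linalg.norm(v)      # distance of the plane to the camera-1 origin
    return n, d

def plane_induced_homography(K1, K2, R2, t2, n, d):
    # H = K2 * (R2 - t2 * n^T / d) * K1^-1   (HZ00, equation 13.2)
    return K2 @ (R2 - np.outer(t2.reshape(3), n) / d) @ np.linalg.inv(K1)

n, dm = plane_from_homography(Hm12, K1, K2, R2, t2)
do = dm + offset                     # offset = measured d_m-o; use "-" if po is closer to the camera
Ho12 = plane_induced_homography(K1, K2, R2, t2, n, do)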
The reference I used:
[HZ00] "Multiple view geometry for computer vision", by R.Hartley and A.Zisserman, 2000.

Related

How to obtain the orientation of a square in the real world from an image knowing the centroid position in the real world and the image projection

I'm trying to obtain the orientation of a square in the real world from an image. I know the projection of each vertex in the image and with this and a depth camera I can obtain the position of the centroid in the real world.
I need the orientation of the square (actually, the normal vector to the plane), and the depth camera does not have enough resolution for that. The camera parameters are also known.
I've searched and only found estimation algorithms that seem overkill for problems with much less information. In this case I have plenty of data about the shape, distance, camera, image, etc., but I am still not able to get the orientation.
Thanks in advance.
I assume the image is captured with an ordinary camera, and that your "square" is well approximated by an actual geometrical rectangle, with parallel opposite sides and orthogonal adjacent ones.
If you only need the square's normal, and the camera is calibrated (in particular, the nonlinear lens distortion is removed from the image), then it can trivially be obtained from the vanishing points and the center. The algorithm is as follows:
Express the images of the four vertices p_i, i=1..4, in homogeneous coordinates: p_i = (u_i, v_i, 1). The ordering of i is unimportant, but in the following I assume it's clockwise starting from any one vertex. Also, for convenience, where in the following I write, say, i + n, it's assumed that the addition is modulo 4, so that, e.g., i + 1 = 1 when i = 4.
Compute the equations of the lines covering the square sides: l_i = p_(i+1) X p_i, where X represents the cross product.
Compute the equations of the diagonals: d_13 = p_1 X p_3, d_24 = p_2 X p_4.
Compute the center: c = d_13 X d_24.
Compute the vanishing points of the pairs of parallel sides: v_13 = l_1 X l_3, v_24 = l_2 X l_4. They represent the directions of the images of two lines which, in 3D, are orthogonal to each other.
Back-project the vanishing points through the intrinsic matrix K (the camera is calibrated) to get the 3D directions of the two axes of the orthogonal coordinate frame rooted at the square center, with two of the axes parallel to the square sides: x = K^-1 * v_13, y = K^-1 * v_24.
Lastly, the plane normal, in the 3D camera coordinate frame, is their cross product: z = x X y.
Note that removing the distortion is important, because even a small amount of distortion can greatly affect the location of the vanishing points when the square sides are nearly parallel in the image.
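For reference, here is a small numpy sketch of the construction, following the steps above; the normal is taken as the cross product of the two 3D side directions obtained by back-projecting the vanishing points through the (assumed known) intrinsic matrix K:

import numpy as np

def square_normal(vertices, K):
    # vertices: 4x2 array of undistorted vertex pixels, in clockwise order; K: 3x3 intrinsics
    p = np.hstack([np.asarray(vertices, float), np.ones((4, 1))])   # homogeneous p_i
    l = [np.cross(p[(i + 1) % 4], p[i]) for i in range(4)]          # side lines l_i
    d13, d24 = np.cross(p[0], p[2]), np.cross(p[1], p[3])           # diagonals
    c = np.cross(d13, d24)                                          # image of the center
    v13, v24 = np.cross(l[0], l[2]), np.cross(l[1], l[3])           # vanishing points
    Kinv = np.linalg.inv(K)
    z = np.cross(Kinv @ v13, Kinv @ v24)   # cross product of the two back-projected directions
    return z / np.linalg.norm(z), c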
If you want to know why this works, the following excerpt from Hartley and Zisserman's "Multiple View Geometry in Computer Vision" should suffice:

Merging depth maps for trinocular stereo

I have a parallel trinocular setup where all 3 cameras are aligned in a collinear fashion as depicted below.
Left-Camera------------Centre-Camera---------------------------------Right-Camera
The baseline (distance between cameras) between left and centre camera is the shortest and the baseline between left and right camera is the longest.
In theory I can obtain 3 sets of disparity images using different camera combinations (L-R, L-C and C-R). I can generate depth maps (3D points) for each disparity map using triangulation. I now have 3 depth maps.
The L-C combination has higher depth accuracy (the measured distance is more accurate) for objects that are near (since the baseline is short), whereas the L-R combination has higher depth accuracy for objects that are far (since the baseline is long). Similarly, the C-R combination is accurate for objects at medium distance.
In stereo setups, normally we define the left (RGB) image as the reference image. In my project, by thresholding the depth values, I obtain an ROI on the reference image. For example I find all the pixels that have a depth value between 10-20m and find their respective pixel locations. In this case, I have a relationship between 3D points and their corresponding pixel location.
Since in normal stereo setups, we can have higher depth accuracy only for one of the two regions depending upon the baseline (near and far), I plan on using 3 cameras. This helps me to generate 3D points of higher accuracy for three regions (near, medium and far).
I now want to merge the 3 depth maps to obtain a global map. My problems are as follows -
How to merge the three depth maps ?
After merging, how do I know which depth value corresponds to which pixel location in the reference (Left RGB) image ?
Your help will be much appreciated :)
1) I think that simple "merging" of depth maps (as matrices of values) is not possible, if you are thinking of a global 2D depth map as an image or a matrix of depth values. You can instead consider merging the 3 sets of 3D points with some similarity criterion such as the distance (refining your point cloud). If two points are too close, delete one of them (pseudocode):
import numpy as np

# points: a list of 3D points (numpy arrays); threshold: minimum allowed distance.
# Iterate backwards over j so that deleting elements is safe.
for i in range(len(points)):
    for j in range(len(points) - 1, i, -1):
        if np.linalg.norm(points[i] - points[j]) < threshold:
            del points[j]
or delete both points and add a new point with their average coordinates.
2) Given point 1, this question becomes "how to connect a 3D point to the related pixel in the left image" (that is the only interpretation I see).
The answer simply is: use the projection equation. If you have K (intrinsic matrix), R (rotation matrix) and t (translation vector) from calibration of the left camera, join R and t in a 3x4 matrix
[R|t]
and then project the 3D point M, written in homogeneous coordinates (X, Y, Z, 1), to an image point m = (u, v, w):
m = K*[R|t]*M
divide m by its third coordinate w and you obtain
m = (u', v', 1)
u' and v' are the pixel coordinates in the left image.
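In numpy, the whole projection could be sketched like this, assuming K, R, t come from the calibration of the left camera and M is the 3D point as a length-3 array (names are placeholders):

import numpy as np

Rt = np.hstack([R, t.reshape(3, 1)])       # [R|t], a 3x4 matrix
M_h = np.append(M, 1.0)                    # M = (X, Y, Z) in homogeneous coordinates
m = K @ Rt @ M_h                           # m = K * [R|t] * M = (u, v, w)
u_p, v_p = m[0] / m[2], m[1] / m[2]        # (u', v'): pixel coordinates in the left image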

OpenCV: Is there any way to extract the real world x coordinate of a point from a depth map

Is there any way to extract the real-world X coordinate of a point from the depth map of a calibrated stereo camera rig, given only the baseline B, the focal length f and the real-world depth Z computed from Z = f * B / (x - x'), where x - x' is the disparity computed from the column indices of matched details in the left and right images?
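A minimal sketch of the relation stated in the question, plus the usual pinhole step for X (which additionally assumes the principal point column c_x is known from calibration; x_r stands for x'):

# B: baseline, f: focal length in pixels, x, x_r: matched column indices (left, right image)
Z = f * B / (x - x_r)            # depth from the disparity, Z = f * B / (x - x')
X = (x - c_x) * Z / f            # real-world X by similar triangles (needs c_x as well)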

finding the real world coordinates of an image point

I have been searching lots of resources on the internet for many days but I couldn't solve the problem.
I have a project in which I am supposed to detect the position of a circular object on a plane. Since the object is on a plane, all I need is its x and y position (not z). For this purpose I have chosen to go with image processing. The camera (single view, not stereo) position and orientation are fixed with respect to a reference coordinate system on the plane and are known.
I have detected the image pixel coordinates of the centers of the circles using OpenCV. All I need now is to convert these coordinates to the real world.
http://www.packtpub.com/article/opencv-estimating-projective-relations-images
On this site and other sites as well, a homographic transformation is written as:
p = C[R|T]P, where P is the real-world coordinate and p is the pixel coordinate (in homogeneous coordinates). C is the camera matrix representing the intrinsic parameters, R is the rotation matrix and T is the translation vector. I have followed a tutorial on calibrating the camera in OpenCV (applied the cameraCalibration source file); I have 9 good chessboard images, and as an output I have the intrinsic camera matrix and the translation and rotation parameters for each of the images.
I have the 3x3 intrinsic camera matrix (focal lengths and center pixels) and a 3x4 extrinsic matrix [R|T], in which R is the left 3x3 and T is the right 3x1. According to the p = C[R|T]P formula, I assume that by multiplying these parameter matrices with P (world) we get p (pixel). But what I need is to project the p (pixel) coordinates to P (world coordinates) on the ground plane.
I am studying electrical and electronics engineering. I did not take image processing or advanced linear algebra classes. As I remember from the linear algebra course, we can manipulate a transformation as P = [R|T]^-1 * C^-1 * p. However, this is in a Euclidean coordinate system; I don't know whether such a thing is possible with homogeneous coordinates. Moreover, the 3x4 matrix [R|T] is not invertible, and I don't know whether this is the correct way to go.
The intrinsic and extrinsic parameters are known; all I need is the real-world coordinate of the point projected onto the ground plane. Since the point is on a plane, its coordinates are 2-dimensional (depth does not matter, as opposed to general single-view geometry). The camera is fixed (position, orientation). How should I find the real-world coordinate of a point in an image captured by a single-view camera?
EDIT
I have been reading "Learning OpenCV" by Gary Bradski & Adrian Kaehler. On page 386, under the Calibration -> Homography section, it is written: q = s*M*W*Q, where M is the camera intrinsic matrix, W is the 3x4 [R|T], s is an "up to scale" factor (related to the homography concept, I assume), q is the pixel coordinate and Q is the real-world coordinate. It is said that, in order to get the real-world coordinate (on the chessboard plane) of an object detected in the image plane, one sets Z = 0, so the third column of W (the Z-axis rotation column, I assume) can be dropped; after trimming these unnecessary parts, W becomes a 3x3 matrix and H = M*W is a 3x3 homography matrix. Now we can invert the homography matrix and left-multiply it with q to get Q = [X Y 1], where the Z coordinate was trimmed.
I applied the mentioned algorithm and got some results that cannot possibly lie between the image corners (the chessboard plane was parallel to the image plane, just ~30 cm in front of the camera, and I got results like 3000; the chessboard square sizes were entered in millimetres, so I assume the output real-world coordinates are again in millimetres). Anyway, I am still trying things. By the way, the results are at first very large, but I divide all values in Q by the third component of Q to get (X, Y, 1).
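For reference, the procedure described in the book excerpt above could be sketched in numpy like this (M is the intrinsic matrix; R, t are the extrinsics of the chessboard view; u, v is the detected pixel; all names are placeholders):

import numpy as np

W = np.column_stack([R[:, 0], R[:, 1], t.reshape(3)])   # drop the third column since Z = 0
H = M @ W                                               # 3x3 homography, q ~ s * H * Q
Q = np.linalg.inv(H) @ np.array([u, v, 1.0])            # back-project the pixel q = (u, v, 1)
Q /= Q[2]                                               # normalize to obtain (X, Y, 1) on the plane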
FINAL EDIT
I could not get the camera calibration methods to work. Anyway, I should have started with perspective projection and transformation. This way I obtained very good estimates using a perspective transform between the image plane and the physical plane (generating the transform from 4 pairs of corresponding coplanar points on the two planes), and then simply applied the transform to the image pixel points.
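In OpenCV, this four-point approach could look roughly like the following sketch, where the image points, their physical-plane counterparts (e.g. in millimetres) and circle_centers are placeholders:

import cv2
import numpy as np

img_pts = np.float32([[u1, v1], [u2, v2], [u3, v3], [u4, v4]])       # pixels
world_pts = np.float32([[X1, Y1], [X2, Y2], [X3, Y3], [X4, Y4]])     # physical plane
T = cv2.getPerspectiveTransform(img_pts, world_pts)

# map the detected circle centers from the image to the physical plane
centers = np.float32(circle_centers).reshape(-1, 1, 2)
plane_xy = cv2.perspectiveTransform(centers, T).reshape(-1, 2)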
You said "i have the intrinsic camera matrix, and translational and rotational params of each of the image.” but these are translation and rotation from your camera to your chessboard. These have nothing to do with your circle. However if you really have translation and rotation matrices then getting 3D point is really easy.
Apply the inverse intrinsic matrix to your screen points in homogeneous notation: C^-1 * [u, v, 1]^T, where u = col - w/2 and v = h/2 - row; col, row are the image column and row and w, h are the image width and height. As a result you obtain a 3D point in so-called camera-normalized coordinates, p = [x, y, z]^T. All you need to do now is to subtract the translation and apply the transposed rotation: P = R^T * (p - T). The order of operations is the inverse of the original (rotate, then translate); note that the transposed rotation performs the inverse of the original rotation but is much faster to compute than R^-1.
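A direct numpy transcription of this recipe (col, row are the pixel indices, w, h the image size, and C, R, T come from the calibration; all names are placeholders) might look like:

import numpy as np

u = col - w / 2.0
v = h / 2.0 - row
p = np.linalg.inv(C) @ np.array([u, v, 1.0])   # camera-normalized coordinates [x, y, z]
P = R.T @ (p - T.reshape(3))                   # P = R^T * (p - T)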

opencv giving camera pose - rvec and tvec - with mismatched coordinate system

I need to find the pose (rotation matrix + translation vector) for a camera, and for that I am using cv2.solvePnP(), but the results I get from photos don't match.
In order to debug, I created (with numpy) a "debugging 3d scene" composed of some object points (four corners of a square), some camera points (focal point, principal point and four corners of the virtual projection plane) and parameters (focal distance, initial orientation).
Then, I construct a general rotation matrix by multiplying three axis rotation matrices, apply this general rotation to the camera (numpy.dot()), project the object points in the virtual projection plane (line-plane intersection algorithm), and calculate the in-plane 2D coordinates (point-line distance) to the projection plane axes.
After doing this (object points to image points via the rotation matrix), I feed the image points and the object points to cv2.solvePnP() and convert the returned rvec with cv2.Rodrigues(), and get a matrix "not quite identical" to the one I used, differing only by a transposition and by opposite signs in some elements (negative vs. positive), according to this relation:
solvepnp_rotmatrix = my_original_matrix.transpose() * [  1  1  1 ]
                                                      [  1  1 -1 ]
                                                      [ -1 -1  1 ]
(i.e., an element-wise sign flip of the transposed matrix)
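For what it's worth, the relation can be checked numerically as an element-wise product (a small sketch; objectpoints, imagepoints, K and my_original_matrix are assumed to come from the debugging scene described above):

import cv2
import numpy as np

sign_flip = np.array([[ 1,  1,  1],
                      [ 1,  1, -1],
                      [-1, -1,  1]])
ok, rvec, tvec = cv2.solvePnP(objectpoints, imagepoints, K, None)
R_pnp, _ = cv2.Rodrigues(rvec)
print(np.allclose(R_pnp, my_original_matrix.T * sign_flip, atol=1e-6))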
Although the rotation matrix mismatch is "solvable" with this hack, the TRANSLATION VECTOR gives coordinates that don't make sense to me.
I suspect there are mismatches between my 3D model (handedness, axes orientation, order of rotations) and the model used by opencv:
I use an OpenGL-like coordinate system (X increases to the right, Y increases upwards, and Z increases toward the observer);
I applied the rotations in the order that made more sense to me (all right-handed, first around global Z, then around global X, then around global Y);
The image plane is between object and camera focal point (virtual projection plane, instead of real/CCD);
The origin of my image plane (virtual CCD) is the lower-left corner (Xpix increases to the right, Ypix increases upwards).
My questions are:
Given that the terms of the rotation matrix are identical, only transposed and with a different sign in some terms, is it possible that I am confusing some of OpenCV's conventions (handedness, order of rotations, axis direction)? And how can I discover which one(s)?
Also, is there a way to relate my handmade translation vector to the tvec returned by solvePnP? (of course, ideally, the best would be to make the coordinate systems to match, in the first place).
Any help will be most welcome!
