How to calculate orientation when a ray hits a quad? - augmented-reality

I know how to determine where a ray hits a quad, but I don't know how to determine the orientation at the hit position.
ArCore describes the HitResult pose for a plane as:
HitResult.getHitPose()
Returns the pose of the intersection between a ray and detected real-world geometry. The position is the location in space where the ray intersected the geometry. The orientation is a best effort to face the user's device, and its exact definition differs depending on the Trackable that was hit.
Plane: X+ is perpendicular to the cast ray and parallel to the plane, Y+ points along the plane normal (up, for HORIZONTAL_UPWARD_FACING planes), and Z+ is parallel to the plane, pointing roughly toward the user's device.
Note that a Pose is a position (Vector3) and an orientation (quaternion).
I'm currently using DirectX::TriangleTests::Intersects to calculate the position where the ray hits the quad. Calculating Y+ is easy using the plane's normal, but I don't know how to calculate X+ and Z+.
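For illustration only (this is not from the ArCore SDK), here is one way the basis described in the quote could be constructed with DirectXMath, assuming the plane normal n and the normalized ray direction rayDir are already known: Y+ is the normal, Z+ is the in-plane component of the reversed ray (so it points roughly back toward the device), and X+ = Y+ x Z+ completes a right-handed frame, which makes it perpendicular to the cast ray and parallel to the plane. The function name is just a placeholder.

#include <DirectXMath.h>
using namespace DirectX;

// Returns the orientation quaternion of the hit pose.
// Note: the degenerate case where the ray is parallel to the normal is not handled here.
XMVECTOR HitPoseOrientation(FXMVECTOR n, FXMVECTOR rayDir)
{
    // Y+ is the plane normal.
    XMVECTOR yAxis = XMVector3Normalize(n);

    // Z+ is the part of the reversed ray that lies in the plane,
    // i.e. it points roughly back toward the device.
    XMVECTOR toDevice = XMVectorNegate(rayDir);
    XMVECTOR zAxis = XMVectorSubtract(
        toDevice, XMVectorMultiply(XMVector3Dot(toDevice, yAxis), yAxis));
    zAxis = XMVector3Normalize(zAxis);

    // X+ completes the right-handed basis; it is perpendicular to the
    // cast ray and parallel to the plane, as in the quoted description.
    XMVECTOR xAxis = XMVector3Cross(yAxis, zAxis);

    // Pack the basis vectors into a rotation matrix (rows are the local axes
    // in world space for DirectXMath's row-vector convention) and convert
    // it to a quaternion.
    XMMATRIX rot(xAxis, yAxis, zAxis, XMVectorSet(0, 0, 0, 1));
    return XMQuaternionRotationMatrix(rot);
}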

Related

Real Distance of object from camera using camera matrix

How can I calculate the distance of an object of known size (e.g. an ArUco marker of 0.14 m printed on paper) from the camera? I know the camera matrix (camMatx) and my fx, fy ≈ 600 px, assuming no distortion. From this data I am able to calculate the pose of the ArUco marker and have obtained [R|t]. Now the task is to get the distance of the marker from the camera. I also know the height of the camera above the ground plane (15 m).
How should I go about solving this problem? Any help would be appreciated. Please note that I have also seen the similar-triangles approach, but that relies on knowing the distance of the object, which doesn't apply in my case since the distance is exactly what I have to calculate.
N.B.: I don't know the camera sensor height, but I do know how high the camera is located above the ground.
I know the dimensions of the area in which my object is moving (70 m x 45 m). In the end I would like to plot the coordinates of the moving object on a 2D map drawn to scale.
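For what it's worth, a minimal sketch of the last step in OpenCV/C++: once solvePnP has produced [R|t] for the marker, the translation vector is already the marker's position in camera coordinates, so its Euclidean norm is the camera-to-marker distance (in the same units as the marker corner coordinates). The names markerDistance, camMatx and the parameter list are placeholders for your own data.

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

// objectPoints: the 4 marker corners in metres, centred on the marker (Z = 0)
// imageCorners: the corresponding detected corners in pixels
double markerDistance(const std::vector<cv::Point3f>& objectPoints,
                      const std::vector<cv::Point2f>& imageCorners,
                      const cv::Mat& camMatx,
                      const cv::Mat& distCoeffs)
{
    cv::Vec3d rvec, tvec;
    cv::solvePnP(objectPoints, imageCorners, camMatx, distCoeffs, rvec, tvec);
    return cv::norm(tvec);   // straight-line distance from the camera centre to the marker centre
}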

How to zoom to fit 3D points in the scene to screen?

I store my 3D points (many points) in a TGLPoints object. There are no objects in the scene other than the points. When drawing the points, I would like to fit them to the screen so they do not look too far away or too close. I tried TGLCamera.ZoomAll, but with no success, and also the solution given here, which adjusts the camera location, depth of view and scene scale:
objSize := YourCamera.TargetObject.BoundingSphereRadius;
if objSize > 0 then begin
  if objSize < 1 then begin
    GLCamera.SceneScale := 1 / objSize;
    objSize := 1;
  end else
    GLCamera.SceneScale := 1;
  GLCamera.AdjustDistanceToTarget(objSize * 0.27);
  GLCamera.DepthOfView := 1.5 * GLCamera.DistanceToTarget + 2 * objSize;
end;
The points did not appear on the screen this time.
What should I do to fit the 3D points to screen?
For each point, build a scale factor by taking the length of the vector from the point's position to the camera position. Then, using this scale, build the transformation matrix that you will apply to the camera matrix. If the scale is large, the point is farther away, and you apply a reverse translation to bring that point into close proximity. I hope this is clear. To compute the translation vector, use the following formula:
translation vector = translation vector +/- (abs(scale)/2)
Whether you use + or - is decided by the scale magnitude: if the point is too far from the camera, choose - in the equation above.
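A different, commonly used way to obtain a fit-to-screen distance (a library-agnostic sketch, not GLScene-specific) is to place the camera so that the points' bounding sphere just fills the narrower field of view; in GLScene terms the result would be used as the distance to the target, with DepthOfView set somewhat larger than distance + radius.

#include <algorithm>
#include <cmath>

// radius      : bounding-sphere radius of the point cloud
// fovYRadians : vertical field of view of the camera
// aspect      : viewport width / height
// Returns the camera-to-target distance at which the sphere just fits on screen.
double fitDistance(double radius, double fovYRadians, double aspect)
{
    double fovX = 2.0 * std::atan(std::tan(fovYRadians * 0.5) * aspect); // horizontal FOV
    double fov  = std::min(fovYRadians, fovX);    // the narrower direction limits the fit
    return radius / std::sin(fov * 0.5);          // sphere tangent to the frustum planes
}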

Camera pose and reflections using OpenCV's solvePnP

I'm trying to use the function solvePnP to estimate the relative position of a camera. My question is this: when choosing world coordinates, do I need to be careful to choose them so that there is no reflection when transforming them to camera coordinates? Or will OpenCV correct that for me?
Details: I'm filming a tennis court and was originally setting the world coordinate origin to be the centre of the court, with the x-axis pointing parallel to the net towards the left, the y-axis pointing forwards along the court, and the z-axis pointing upwards. If I've understood correctly, solvePnP will transform these coordinates to a system with its origin at some point behind the top left corner of an image, with the x-axis pointing downwards in the image, the y-axis pointing to the right, and the z-axis pointing forwards into the scene. However, this transformation would definitely involve a reflection. Must I swap the x and y axes of my world coordinates to avoid this, or is it fine to leave them as they are? (Also, let me know if I'm making a big mistake and solvePnP actually puts the origin at a point behind the centre of the image rather than at the top left corner...)
Assuming that you have a camera calibration matrix (and that the calibration was done assuming a right-handed coordinate system all along), and correct correspondences between the tennis-court features in the image and the CAD features:
You need to select the reference frame on the tennis court such that it is a right-handed coordinate system, so that your solution from solvePnP provides the pose and position of the tennis-court reference frame with respect to the camera coordinate system (which is by default a right-handed coordinate system).
Hope it helps
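For illustration, a minimal sketch in OpenCV/C++ under those assumptions. The corner coordinates below use the standard doubles-court half-dimensions purely as an example of a right-handed, Z-up world frame with X parallel to the net; the function name is a placeholder, and the order of worldPts must match the order of the detected image corners.

#include <opencv2/calib3d.hpp>
#include <vector>

void estimateCourtPose(const std::vector<cv::Point2f>& imagePts, // detected court corners (pixels)
                       const cv::Mat& cameraMatrix,
                       const cv::Mat& distCoeffs,
                       cv::Mat& rvec, cv::Mat& tvec)
{
    // Court corners in metres, right-handed world frame: origin at court centre,
    // X parallel to the net, Y along the court, Z up (Z = 0 on the ground).
    std::vector<cv::Point3f> worldPts = {
        {-5.485f, -11.885f, 0.f}, { 5.485f, -11.885f, 0.f},
        { 5.485f,  11.885f, 0.f}, {-5.485f,  11.885f, 0.f}
    };
    cv::solvePnP(worldPts, imagePts, cameraMatrix, distCoeffs, rvec, tvec);
    // rvec/tvec express the world frame in camera coordinates:
    // x_cam = R * x_world + t, with R = Rodrigues(rvec).
}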

Converting a 2D image point to a 3D world point

I know that in the general case, making this conversion is impossible since depth information is lost going from 3D to 2D.
However, I have a fixed camera and I know its camera matrix. I also have a planar calibration pattern of known dimensions - let's say that in world coordinates it has corners (0,0,0) (2,0,0) (2,1,0) (0,1,0). Using OpenCV I can estimate the pattern's pose, giving the translation and rotation matrices needed to project a point on the object to a pixel in the image.
Now: this 3d to image projection is easy, but how about the other way? If I pick a pixel in the image that I know is part of the calibration pattern, how can I get the corresponding 3d point?
I could iteratively choose some random 3d point on the calibration pattern, project to 2d, and refine the 3d point based on the error. But this seems pretty horrible.
Given that this unknown point has world coordinates something like (x,y,0) -- since it must lie on the z=0 plane -- it seems like there should be some transformation that I can apply, instead of doing the iterative nonsense. My maths isn't very good though - can someone work out this transformation and explain how you derive it?
Here is a closed-form solution that I hope can help someone. Using the conventions in the image from your comment above, you can use the centered, normalized pixel coordinates u and v (usually obtained after distortion correction) together with the extrinsic calibration data, like this:
|Tx|   |r11 r21 r31|   |-t1|
|Ty| = |r12 r22 r32| . |-t2|
|Tz|   |r13 r23 r33|   |-t3|

|dx|   |r11 r21 r31|   |u|
|dy| = |r12 r22 r32| . |v|
|dz|   |r13 r23 r33|   |1|
With these intermediate values, the coordinates you want are:
X = (-Tz/dz)*dx + Tx
Y = (-Tz/dz)*dy + Ty
Explanation:
The vector [t1, t2, t3]^T is the position of the origin of the world coordinate system (the (0,0) of your calibration pattern) with respect to the camera optical center; by reversing the signs and inverting the rotation transformation we obtain the vector T = [Tx, Ty, Tz]^T, which is the position of the camera center in the world reference frame.
Similarly, [u, v, 1]^T is the direction along which the observed point lies in the camera reference frame (starting from the camera center). By inverting the rotation transformation we obtain the vector d = [dx, dy, dz]^T, which represents the same direction in the world reference frame.
To invert the rotation transformation we take advantage of the fact that the inverse of a rotation matrix is its transpose.
Now we have a line starting from point T with direction vector d; its intersection with the plane Z=0 is given by the equations for X and Y above. Note that it would be similarly easy to find the intersection with the X=0 or Y=0 planes, or with any plane parallel to them.
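Here is a minimal sketch of that closed form in OpenCV/C++, assuming rvec and tvec are the 3x1 CV_64F outputs of solvePnP (or of the pattern pose estimation) and that (u, v) have already been undistorted and normalized, i.e. u = (x - cx)/fx and v = (y - cy)/fy. The function name is illustrative.

#include <opencv2/calib3d.hpp>

cv::Point3d backprojectToZ0(const cv::Mat& rvec, const cv::Mat& tvec,
                            double u, double v)
{
    cv::Mat R;
    cv::Rodrigues(rvec, R);                  // 3x3 rotation, world -> camera
    cv::Mat Rt = R.t();                      // transpose = inverse for a rotation

    cv::Mat T = -Rt * tvec;                  // camera center in world coordinates
    cv::Mat uv1 = (cv::Mat_<double>(3, 1) << u, v, 1.0);
    cv::Mat d = Rt * uv1;                    // viewing direction in world coordinates

    double s = -T.at<double>(2) / d.at<double>(2);   // scale that reaches the Z = 0 plane
    return { T.at<double>(0) + s * d.at<double>(0),
             T.at<double>(1) + s * d.at<double>(1),
             0.0 };
}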
Yes, you can. If you have a transformation matrix that maps a point in the 3D world to the image plane, you can just use the inverse of this transformation matrix to map an image-plane point back to a 3D world point. If you already know that z = 0 for the 3D world point, this results in a single solution for the point. There is no need to iteratively choose some random 3D point. I had a similar problem where I had a camera mounted on a vehicle with a known position and camera calibration matrix, and I needed to know the real-world location of a lane marking captured on the image plane of the camera.
If you have Z=0 for your points in world coordinates (which should be true for a planar calibration pattern), then instead of inverting the rotation transformation you can calculate a homography between your camera image and the calibration pattern.
Once you have the homography, you can select a point in the image and then get its location in world coordinates using the inverse homography.
This holds as long as the point in world coordinates lies on the same plane as the points used for calculating the homography (in this case Z=0).
This approach to this problem was also discussed below this question on SO: Transforming 2D image coordinates to 3D world coordinates with z = 0
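For completeness, a minimal sketch of the homography variant in OpenCV/C++; the names imageToPlane, imagePts and worldPts are illustrative. Computing the homography directly from image pixels to the pattern's (X, Y) plane coordinates avoids an explicit matrix inversion.

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>
#include <vector>

cv::Point2f imageToPlane(const std::vector<cv::Point2f>& imagePts,  // detected pattern corners (pixels)
                         const std::vector<cv::Point2f>& worldPts,  // same corners as (X, Y) on the Z = 0 plane
                         const cv::Point2f& pixel)                  // query pixel on the pattern plane
{
    // Homography that maps image pixels directly to plane coordinates.
    cv::Mat H = cv::findHomography(imagePts, worldPts);

    std::vector<cv::Point2f> src{pixel}, dst;
    cv::perspectiveTransform(src, dst, H);
    return dst[0];                           // world (X, Y); Z is 0 on the pattern plane
}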

How do you counter a rotated camera?

We are currently using OpenCV to track a planar rectangular target. When the camera is looking straight at it (no pitch), this works perfectly using findContours with solvePnP and returns a very accurate location of the target.
The problem is that we obviously get different results once we increase the pitch. We know the pitch of the camera at all times.
How would I "cancel out" the pitch of the camera, and obtain coordinates as if the camera was facing straight ahead?
In the general case you can use an affine transform to map the quadrilateral seen by the camera back to the original rectangle. In your case the quadrilateral seen by the camera may be a good approximation of a parallelogram since only one angle is changing, but in real-world applications you can generally assume that the camera can have non-zero values for each of the three rotations (e.g. in pitch, yaw, and roll).
http://opencv.itseez.com/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html
The transform allows you to calculate the matching coordinates (x,y) within the rectangle's plane given coordinates (x', y') in the image of the rectangle.
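As a sketch of that mapping in OpenCV/C++ (function and variable names are illustrative): the linked tutorial covers the affine case, but if all four corners of the target are detected, cv::getPerspectiveTransform handles the fully general rotation (pitch, yaw and roll). The corner order of the two point sets must match.

#include <opencv2/imgproc.hpp>
#include <vector>

cv::Mat rectifyTarget(const cv::Mat& frame,
                      const std::vector<cv::Point2f>& detectedCorners, // 4 corners of the target in the image
                      const cv::Size& targetSizePx)                    // desired size of the rectified target
{
    // Corners of the output rectangle, in the same order as detectedCorners.
    std::vector<cv::Point2f> rectCorners = {
        {0.f, 0.f},
        {static_cast<float>(targetSizePx.width), 0.f},
        {static_cast<float>(targetSizePx.width), static_cast<float>(targetSizePx.height)},
        {0.f, static_cast<float>(targetSizePx.height)}
    };
    cv::Mat M = cv::getPerspectiveTransform(detectedCorners, rectCorners);
    cv::Mat rectified;
    cv::warpPerspective(frame, rectified, M, targetSizePx);  // view of the target as if seen head-on
    return rectified;
}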