How to calculate the translation matrix? - opencv

I have 2D image data with the respective camera location in latitude and longitude. I want to translate pixel coordinates to 3D world coordinates. I have access to the intrinsic calibration parameters and to yaw, pitch and roll. From yaw, pitch and roll I can derive the rotation matrix, but I don't see how to calculate the translation matrix. As I am working on a data set, I don't have physical access to the camera. Please help me derive the translation matrix.

This cannot be done at all if you don't have the elevation of the camera with respect to the ground (AGL or ASL) or another way to resolve the scale from the image (e.g. by identifying in the image an object of known size, such as a soccer stadium in an aerial image).
Assuming you can resolve the scale, the next question is how precisely you can (or want to) model the terrain. For a first approximation you can use a standard geodetic ellipsoid (e.g. WGS-84). For higher precision - especially for images shot from lower altitudes - you will need to use a DTM and register it to the images. Either way, it is a standard back-projection problem: you compute the ray from the camera centre to the pixel, transform it into world coordinates, then intersect it with the ellipsoid or DTM.
There are plenty of open source libraries to help you do that in various languages (e.g. GeographicLib).
Edited to add suggestions:
Express your camera location in ECEF.
Transform the ray from the camera into ECEF as well, taking into account the camera rotation. You can do both transformations using a library, e.g. nVector.
Then proceed to intersect the ray with the ellipsoid, as explained in this answer; a sketch follows below.
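For concreteness, here is a minimal NumPy sketch of that back-projection. It assumes you already have the intrinsic matrix K, a rotation R_cam_to_ecef that takes camera-frame rays into ECEF (built from your yaw/pitch/roll plus the local ENU-to-ECEF rotation), and the camera centre in ECEF; it intersects the pixel ray with the WGS-84 ellipsoid:

```python
import numpy as np

# WGS-84 semi-axes (metres)
A = 6378137.0          # equatorial radius
B = 6356752.314245     # polar radius

def pixel_to_ecef(u, v, K, R_cam_to_ecef, cam_ecef):
    """Back-project pixel (u, v) onto the WGS-84 ellipsoid.

    K             : 3x3 intrinsic matrix
    R_cam_to_ecef : 3x3 rotation taking camera-frame rays into ECEF
    cam_ecef      : camera centre in ECEF (metres)
    Returns the ECEF intersection point, or None if the ray misses.
    """
    # Ray direction in the camera frame, then rotated into ECEF
    d = R_cam_to_ecef @ np.linalg.inv(K) @ np.array([u, v, 1.0])
    d /= np.linalg.norm(d)
    c = np.asarray(cam_ecef, dtype=float)

    # Scale coordinates so the ellipsoid becomes a unit sphere,
    # then solve the quadratic |c_s + t * d_s|^2 = 1 for t.
    s = np.array([1.0 / A, 1.0 / A, 1.0 / B])
    cs, ds = c * s, d * s
    a = ds @ ds
    b = 2.0 * cs @ ds
    k = cs @ cs - 1.0
    disc = b * b - 4.0 * a * k
    if disc < 0:
        return None                       # ray does not hit the ellipsoid
    t = (-b - np.sqrt(disc)) / (2.0 * a)  # nearer of the two intersections
    return c + t * d
```

This is only a sketch under the stated assumptions; a library such as GeographicLib or nVector will handle the ECEF conversions and datum details more robustly.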

Related

Finding the real world coordinates of an object from a camera

I am trying to find the coordinates of an object detected from a single camera, using OpenCV. The camera will be mounted on a drone, looking straight down at the surface.
I have:
-Camera's coordinates from GPS sensor on the drone.
-Camera's height.
-Camera's intrinsic parameters.
3D Reconstruction formula
According to this formula, I need to find the extrinsic parameters to get the real-world coordinates. I suppose I should use OpenCV's solvePnP method to find the extrinsic parameters. As far as I know, the extrinsic parameters describe the camera location, but my camera will be on the drone and its location will change. Are the extrinsic parameters constant just like the intrinsic parameters?
Is there any other way to do this calculation?
What you're trying to do is called monoplotting. In order to estimate XY real-world coordinates from a single image you need to know the following:
X0, Y0, Z0 real world coordinates of your camera
Exterior orientation of your camera (Pitch, Roll and Yaw)
Interior orientation of your camera (focal length, principal point in x and y direction, radial and tangential distortion parameters)
The XYZ of your camera and the exterior orientation are typically stored within the EXIF data of your image; this does, however, depend on the drone. There are some good Python modules for extracting this information from the images. I use exifread.
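For example, a minimal exifread sketch; the 'GPS GPSLatitude' style names are the common EXIF GPS tags, while attitude angles are often stored in manufacturer-specific fields instead:

```python
import exifread

# Read GPS-related tags from an image's EXIF block.
with open("drone_frame.jpg", "rb") as f:   # placeholder filename
    tags = exifread.process_file(f)

lat = tags.get("GPS GPSLatitude")
lon = tags.get("GPS GPSLongitude")
alt = tags.get("GPS GPSAltitude")
print(lat, lon, alt)
```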
The interior orientation can be more difficult to get, as you need to perform a camera calibration, which you can do with OpenCV. You should be able to use the tutorial Yunus linked in his comment. However, there is a shortcut: Pix4D's photogrammetry software ships with a camera database that stores the interior orientations of many different drones. The values are not perfect but should be alright for many use cases, see link.
When you have all of these parameters, you need to do the following:
Undistort your images
Find the image coordinates (in the undistorted image) of the points whose real-world coordinates you want
Create rotation matrices for the rotations about the X, Y and Z axes and multiply them together (R = Rx·Ry·Rz)
Apply the collinearity equations
Regarding 1: you can use cv2.undistort for this; the link Yunus provided has a tutorial for this as well.
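A minimal sketch of that undistortion step, with placeholder intrinsics and distortion coefficients standing in for your own calibration values:

```python
import cv2
import numpy as np

# camera_matrix and dist_coeffs come from your calibration (or from a
# database such as Pix4D's); the values below are placeholders.
camera_matrix = np.array([[1000.0,    0.0, 640.0],
                          [   0.0, 1000.0, 360.0],
                          [   0.0,    0.0,   1.0]])
dist_coeffs = np.array([-0.1, 0.02, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

img = cv2.imread("drone_frame.jpg")                  # placeholder filename
undistorted = cv2.undistort(img, camera_matrix, dist_coeffs)
cv2.imwrite("drone_frame_undistorted.jpg", undistorted)
```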
Regarding 3: OpenCV can probably provide this matrix for you, but writing the function yourself is quite easy and good for understanding what is going on; see Wikipedia: link. It can be a little confusing which of the pitch, roll and yaw angles to use for which matrix. This all depends on how your camera's exterior coordinate system is defined. The typical convention is that pitch is around X, roll is around Y and yaw is around Z.
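A small helper for that rotation matrix, following the pitch-about-X / roll-about-Y / yaw-about-Z convention mentioned above; adapt the order and signs to your own coordinate system:

```python
import numpy as np

def rotation_matrix(pitch, roll, yaw):
    """Build R = Rx(pitch) @ Ry(roll) @ Rz(yaw), angles in radians."""
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(roll),  np.sin(roll)
    cz, sz = np.cos(yaw),   np.sin(yaw)

    Rx = np.array([[1,  0,   0],
                   [0, cx, -sx],
                   [0, sx,  cx]])
    Ry = np.array([[ cy, 0, sy],
                   [  0, 1,  0],
                   [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0],
                   [sz,  cz, 0],
                   [ 0,   0, 1]])
    return Rx @ Ry @ Rz
```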
Regarding 4: the collinearity equations depend on the rotation matrices you created, see: link. The term Z - Z0 just means the negative relative height above the object you are trying to find coordinates for. So if you do not have the relative height of your drone, you need to know the height of your object and subtract the drone's height from the object's height (you'll get a negative number).
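And a sketch of the inverse collinearity step, assuming R rotates image-space directions into object space; sign conventions differ between references, so check it against the one you use:

```python
import numpy as np

def image_to_ground(x, y, f, R, X0, Y0, Z0, Z_ground):
    """Inverse collinearity: project an (undistorted) image point to the
    ground plane at elevation Z_ground.

    x, y       : image coordinates relative to the principal point,
                 in the same units as f (e.g. pixels)
    f          : focal length
    R          : 3x3 rotation from image space to object space
    X0, Y0, Z0 : camera position in the object coordinate system
    Note (Z_ground - Z0) is the negative relative height mentioned above.
    """
    v = R @ np.array([x, y, -f])     # ray direction in object space
    dZ = Z_ground - Z0
    X = X0 + dZ * v[0] / v[2]
    Y = Y0 + dZ * v[1] / v[2]
    return X, Y
```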
I hope this helps and points you in the right direction.

Compute camera angles and locations given matching points with OpenCV

I have stereo images (non-coplanar cameras) with matching points on a plane (a wall) labelled.
I need to compute the camera locations in world space, and the angles at which they are pointed.
I can work out the math if I need to (with effort), but I wonder whether there is a shortcut for these computations in OpenCV that I might not be familiar with?
If they are both looking at a plane, all you have to do is estimate homographies (with findHomography) independently between the plane and each camera, then decompose them (decomposeHomographyMat) to get rotation and translation up to scale. To resolve scale you need to know the distance between at least two of the points on the plane.
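A minimal sketch of that pipeline; plane_pts, img_pts and K are placeholders for your labelled plane points, their pixel coordinates in one camera, and that camera's intrinsics:

```python
import cv2
import numpy as np

# plane_pts : Nx2 coordinates of the labelled points in the plane's own
#             frame (e.g. metres along the wall)
# img_pts   : Nx2 pixel coordinates of the same points in one camera
# K         : that camera's 3x3 intrinsic matrix
H, mask = cv2.findHomography(plane_pts, img_pts, cv2.RANSAC)
retval, rotations, translations, normals = cv2.decomposeHomographyMat(H, K)

# decomposeHomographyMat returns up to four candidate (R, t, n) solutions;
# keep the physically plausible one (points in front of the camera, plane
# normal facing it). The translation is only up to scale unless the plane
# coordinates are metric.
for R, t, n in zip(rotations, translations, normals):
    print(R, t.ravel(), n.ravel())
```

Repeat for the second camera to get its pose relative to the same plane frame.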

In opencv's solvePnP, what should I pass for objectPoints?

OpenCV docs for solvePnp
In an augmented reality app, I detect the image in the scene so I know imagePoints, but the object I'm looking for (objectPoints) is a virtual marker just stored in memory to search for in the scene, so I don't know where it is in space. The book I'm reading (Mastering OpenCV with Practical Computer Vision Projects) passes it as if the marker were a 1x1 matrix and it works fine. How? Doesn't solvePnP need to know the size of the object and its projection so we know how much scale is applied?
Assuming you're looking for a physical object, you should pass the 3D coordinates of the points on the model which are mapped (by projection) to the 2D points in the image. You can use any reference frame, and the result of solvePnP will give you the position and orientation of the camera in that reference frame.
If you want to get the object position/orientation in camera space, you can then transform both by the inverse of the transform you got from solvePnP, so that the camera is moved to the origin.
For example, for a cube object of size 2x2x2, the visible corners may be something like: {-1,-1,-1},{1,-1,-1},{1,1,-1}.....
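A minimal solvePnP sketch along those lines, using the cube's corners as objectPoints; the image points and intrinsics below are placeholders:

```python
import cv2
import numpy as np

# 3D corners of the 2x2x2 cube in its own (model) reference frame,
# matching the example above; image_points are the corresponding 2D pixels.
object_points = np.array([[-1, -1, -1],
                          [ 1, -1, -1],
                          [ 1,  1, -1],
                          [-1,  1, -1]], dtype=np.float64)
image_points = np.array([[322, 245],
                         [410, 240],
                         [415, 330],
                         [320, 335]], dtype=np.float64)   # placeholder pixels

K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])                      # placeholder intrinsics
dist = np.zeros(5)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)

# rvec/tvec map model coordinates into camera coordinates. To get the
# camera pose expressed in the model frame, invert the transform:
R, _ = cv2.Rodrigues(rvec)
cam_pos_in_model = -R.T @ tvec
```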
You have to pass the 3D coordinates of the real-world object that you want to map with the image. The scaling and rotation values will depend on the coordinate system that you use.
This is not as difficult as it sounds. See this blog post on head pose estimation for more details, with code.

Distance to the object using stereo camera

Is there a way to calculate the distance to a specific object using a stereo camera?
Is there an equation or something to get the distance using disparity or angle?
NOTE: Everything described here can be found in the Learning OpenCV book in the chapters on camera calibration and stereo vision. You should read these chapters to get a better understanding of the steps below.
One approach that does not require you to measure all the camera intrinsics and extrinsics yourself is to use OpenCV's calibration functions. Camera intrinsics (lens distortion/skew etc.) can be calculated with cv::calibrateCamera, while the extrinsics (relation between left and right camera) can be calculated with cv::stereoCalibrate. These functions take a number of points in pixel coordinates and try to map them to real-world object coordinates. OpenCV has a neat way to get such points: print out a black-and-white chessboard and use the cv::findChessboardCorners/cv::cornerSubPix functions to extract them. Around 10-15 image pairs of chessboards should do.
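A minimal calibration sketch along those lines; the board size, square size and file pattern are placeholders:

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)            # inner corners of the printed chessboard
square = 0.025              # square size in metres (placeholder)

# Template of the board corners in the board's own plane (Z = 0)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_points, img_points = [], []
criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3)

for path in glob.glob("left_*.png"):          # placeholder file pattern
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1), criteria)
    obj_points.append(objp)
    img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)

# Repeat for the right camera, then feed both sets of image points into
# cv2.stereoCalibrate to get the rotation R and translation T between them.
```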
The matrices calculated by the calibration functions can be saved to disk so you don't have to repeat this process every time you start your application. You get some neat matrices here that allow you to create a rectification map (cv::stereoRectify/cv::initUndistortRectifyMap) that can later be applied to your images using cv::remap. You also get a neat matrix called Q, which is a disparity-to-depth matrix.
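A sketch of that rectification step, assuming K1/d1/K2/d2 come from the per-camera calibrations, R/T from cv2.stereoCalibrate, and image_size is (width, height):

```python
import cv2

R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2,
                                                  image_size, R, T)

map1x, map1y = cv2.initUndistortRectifyMap(K1, d1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, d2, R2, P2, image_size, cv2.CV_32FC1)

left_rect  = cv2.remap(left_img,  map1x, map1y, cv2.INTER_LINEAR)
right_rect = cv2.remap(right_img, map2x, map2y, cv2.INTER_LINEAR)

# Save Q (and, if you like, the maps) so calibration only has to run once
fs = cv2.FileStorage("stereo_rectification.yml", cv2.FILE_STORAGE_WRITE)
fs.write("Q", Q)
fs.release()
```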
The reason to rectify your images is that once the process is complete for a pair of images (assuming your calibration is correct), every pixel/object in one image can be found on the same row in the other image.
There are a few ways you can go from here, depending on what kind of features you are looking for in the image. One way is to use OpenCV's stereo correspondence functions, such as Stereo Block Matching or Semi-Global Block Matching. This will give you a disparity map for the entire image which can be transformed to 3D points using the Q matrix (cv::reprojectImageTo3D).
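For example, a minimal dense sketch with Semi-Global Block Matching; the matcher parameters are placeholders to tune for your setup:

```python
import cv2
import numpy as np

# left_rect / right_rect are the rectified grayscale images and Q is the
# disparity-to-depth matrix from the calibration step above.
matcher = cv2.StereoSGBM_create(minDisparity=0,
                                numDisparities=128,   # must be divisible by 16
                                blockSize=5)
# compute() returns fixed-point disparity scaled by 16
disparity = matcher.compute(left_rect, right_rect).astype(np.float32) / 16.0

# Dense disparity map -> 3D points, with the left camera at the origin
points_3d = cv2.reprojectImageTo3D(disparity, Q)
```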
The downside of this is that unless there is a lot of texture information in the image, OpenCV isn't very good at building a dense disparity map (you will get gaps where it couldn't find the correct disparity for a given pixel), so another approach is to find the points you want to match yourself. Say you find the feature/object at x=40, y=110 in the left image and at x=22 in the right image (since the images are rectified, they should have the same y value). The disparity is calculated as d = 40 - 22 = 18.
Construct a cv::Point3f(x,y,d), in our case (40,110,18). Find other interesting points the same way, then send all of the points to cv::perspectiveTransform (with the Q matrix as the transformation matrix; essentially this function is cv::reprojectImageTo3D but for sparse disparity maps) and the output will be points in an XYZ coordinate system with the left camera at the origin.
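A minimal sketch of that sparse route, using the worked example above:

```python
import cv2
import numpy as np

# Matched feature from the example: x=40, y=110 in the left image and x=22
# in the right image, so the disparity d = 40 - 22 = 18.
sparse = np.array([[[40.0, 110.0, 18.0]]], dtype=np.float32)  # shape (1, N, 3)

# Q is the disparity-to-depth matrix from cv2.stereoRectify
points_xyz = cv2.perspectiveTransform(sparse, Q)
print(points_xyz[0, 0])   # XYZ with the left camera at the origin
```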
I am still working on it, so I will not post the entire source code yet. But I will give you a conceptual solution.
You will need the following data as input (for both cameras):
camera position
camera point of interest (point at which camera is looking)
camera resolution (horizontal and vertical)
camera field of view angles (horizontal and vertical)
You can measure the last one yourself by placing the camera on a piece of paper, drawing two lines and measuring the angle between them.
Cameras do not have to be aligned in any way, you only need to be able to see your object in both cameras.
Now calculate a vector from each camera to your object. You have (X,Y) pixel coordinates of the object from each camera, and you need to calculate a vector (X,Y,Z). Note that in the simple case, where the object is seen right in the middle of the camera, the solution would simply be (camera.PointOfInterest - camera.Position).
Once you have both vectors pointing at your target, the lines defined by these vectors should, in an ideal world, cross at a single point. In the real world they will not, because of small measurement errors and the limited resolution of the cameras. So use the link below to calculate the distance vector between the two lines.
Distance between two lines
In that link: P0 is your first camera position, Q0 is your second camera position, and u and v are vectors starting at the camera positions and pointing at your target.
You are not interested in the actual distance they calculate. You need the vector Wc; we can assume that the object is in the middle of Wc. Once you have the position of your object in 3D space, you can also get whatever distance you like.
I will post the entire source code soon.
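In the meantime, here is a minimal NumPy sketch of the closest-approach midpoint step described above (not the promised full source, just the Wc midpoint):

```python
import numpy as np

def triangulate_midpoint(P0, u, Q0, v):
    """Closest-approach midpoint of two rays.

    P0, Q0 : camera positions; u, v : direction vectors from each camera
    towards the target (same notation as the linked 'distance between two
    lines' page). Returns the midpoint of the shortest segment Wc between
    the two rays, taken as the object's 3D position.
    """
    P0, Q0, u, v = (np.asarray(x, dtype=float) for x in (P0, u, Q0, v) and (P0, Q0, u, v))
    w0 = P0 - Q0
    a, b, c = u @ u, u @ v, v @ v
    d, e = u @ w0, v @ w0
    denom = a * c - b * b
    if abs(denom) < 1e-12:
        raise ValueError("rays are (nearly) parallel")
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p_on_ray1 = P0 + s * u       # closest point on the first ray
    p_on_ray2 = Q0 + t * v       # closest point on the second ray
    return 0.5 * (p_on_ray1 + p_on_ray2)
```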
I have source code that detects a human face and returns not only the depth but also the real-world coordinates, with the left camera (or the right camera, I can't remember) as the origin. It is adapted from the source code in "Learning OpenCV" and from some websites I referred to in order to get it working. The result is generally quite accurate.

How to calculate coordinates of center of image from an aerial camera whose FOV, attitude and position are given

I have a problem that involves a UAV flying with a camera mounted below it. Following information is provided:
GPS Location of the UAV in Lat/Long
GPS Height of the UAV in meters
Attitude of the UAV i.e. roll, pitch, and yaw in degrees
Field of View (FOV) of the camera in degrees
Elevation of the camera w.r.t UAV in degrees
Azimuth of camera w.r.t UAV in degrees
I have some images taken from that camera during a flight, and my task is to compute the locations (in Lat/Long) of the 4 corner points and the center point of the image so that the image can be placed on the map at the proper location.
I found a document while searching the internet that can be downloaded at the following link:
http://www.siaa.asn.au/get/2411853249.pdf
My maths background is very weak so I am not able to translate the document into a working solution.
Can somebody provide me a solution to my problem in the form of a simple algorithm or preferable in the form of code of some programming language?
Thanks.
As I see it, this is not really an image-processing problem, because you need to determine the coordinates of the center of the image (you do not even need the FOV). You have to find the intersection of the camera's principal ray with the earth's surface (if I've understood your task correctly). This is nothing more than basic matrix math.
See wiki:Transformation.
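As an illustration, here is a flat-terrain sketch of that intersection for the image center. The principal-ray direction in a local East-North-Up frame (built from the UAV attitude and the camera azimuth/elevation) and the flat-ground assumption are simplifications you may need to refine:

```python
import numpy as np

EARTH_R = 6371000.0   # mean earth radius in metres (local flat-earth approximation)

def centre_latlon(lat, lon, height, direction_enu):
    """Intersect the camera's principal ray with flat ground at height 0.

    lat, lon      : UAV position in degrees; height in metres above ground
    direction_enu : unit principal-axis direction in the local East-North-Up
                    frame (build it from yaw/pitch/roll and the camera's
                    azimuth/elevation relative to the UAV)
    Returns the lat/lon of the image center, assuming locally flat terrain.
    """
    e, n, u = direction_enu
    if u >= 0:
        raise ValueError("principal ray does not point towards the ground")
    t = height / -u                      # ray parameter where the up-component reaches 0
    east_off, north_off = t * e, t * n   # ground offset in metres

    dlat = np.degrees(north_off / EARTH_R)
    dlon = np.degrees(east_off / (EARTH_R * np.cos(np.radians(lat))))
    return lat + dlat, lon + dlon
```

The same construction, applied to the rays through the four image corners instead of the principal axis, gives the corner coordinates for placing the image on the map.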
