How do you counter a rotated camera? - image-processing

We are currently using OpenCV to track a planar rectangular target. When the camera faces the target straight on (no pitch), this works perfectly using findContours with solvePnP and returns a very accurate location of the target.
The problem is that, obviously, we get different results once we increase the pitch. We know the pitch of the camera at all times.
How would I "cancel out" the pitch of the camera, and obtain coordinates as if the camera was facing straight ahead?

In the general case you can use a perspective (projective) transform to map the quadrilateral seen by the camera back to the original rectangle; an affine transform is an approximation that works when the quadrilateral is close to a parallelogram. In your case only one angle (the pitch) is changing, but in real-world applications you can generally assume that the camera can have non-zero values for each of the three rotations (pitch, yaw, and roll).
http://opencv.itseez.com/doc/tutorials/imgproc/imgtrans/warp_affine/warp_affine.html
The transform allows you to calculate the matching coordinates (x,y) within the rectangle's plane given coordinates (x', y') in the image of the rectangle.
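A minimal sketch of that rectification in OpenCV's Python bindings, assuming you already have the four target corners from findContours (the corner values, their ordering, and the output size below are illustrative):

```python
import cv2
import numpy as np

# Four detected corners of the target in the image, ordered
# top-left, top-right, bottom-right, bottom-left (illustrative values).
quad = np.float32([[120, 80], [430, 95], [440, 310], [110, 290]])

# Where those corners should land in the rectified, "straight-on" view.
w, h = 400, 250
rect = np.float32([[0, 0], [w, 0], [w, h], [0, h]])

# Four point correspondences define a perspective (projective) transform.
M = cv2.getPerspectiveTransform(quad, rect)

# Map any image point (x', y') into the rectified target plane (x, y) ...
pt = cv2.perspectiveTransform(np.float32([[[250, 200]]]), M)
print(pt)

# ... or warp the whole frame to view the target as if facing it head-on.
# rectified = cv2.warpPerspective(frame, M, (w, h))
```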

Related

Real Distance of object from camera using camera matrix

How can I calculate the distance of an object of known size (e.g. an ArUco marker of 0.14 m printed on paper) from the camera? I know the camera matrix (camMatx) and fx, fy ≈ 600 px, assuming no distortion. From this data I am able to calculate the pose of the ArUco marker and have obtained [R|t]. Now the task is to get the distance of the ArUco marker from the camera. I also know the height of the camera above the ground plane (15 m).
How should I go about solving this problem? Any help would be appreciated. Please note that I have also seen the similar-triangles approach, but that relies on knowing the distance of the object, which doesn't apply in my case since the distance is exactly what I have to calculate.
N.B.: I don't know the camera sensor height, but I do know how high the camera is located above the ground.
I know the dimensions of the area in which my object is moving (70 m x 45 m). In the end I would like to plot the coordinates of the moving object on a 2D map drawn to scale.
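For reference, once solvePnP has produced [R|t] for the marker, the camera-to-marker distance is simply the Euclidean norm of the translation vector. A minimal sketch under the question's assumptions (the detected corner coordinates and the exact camera matrix values here are illustrative):

```python
import cv2
import numpy as np

marker_size = 0.14  # metres, known printed size of the ArUco marker

# 3D corners of the marker in its own frame (z = 0 plane), matching
# the order in which the detector returns the image corners.
obj_pts = np.float32([
    [-marker_size / 2,  marker_size / 2, 0],
    [ marker_size / 2,  marker_size / 2, 0],
    [ marker_size / 2, -marker_size / 2, 0],
    [-marker_size / 2, -marker_size / 2, 0],
])

# Detected marker corners in the image (illustrative values).
img_pts = np.float32([[310, 220], [390, 225], [385, 300], [305, 295]])

camMatx = np.array([[600, 0, 320],
                    [0, 600, 240],
                    [0,   0,   1]], dtype=np.float64)
dist_coeffs = np.zeros(5)  # "assuming no distortion", as in the question

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, camMatx, dist_coeffs)

# tvec is the marker origin expressed in camera coordinates,
# so its norm is the camera-to-marker distance in metres.
distance = np.linalg.norm(tvec)
print(distance)
```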

How to calculate translation matrix?

I have 2D image data with the respective camera location in latitude and longitude. I want to translate pixel coordinates to 3D world coordinates. I have access to the intrinsic calibration parameters and to yaw, pitch, and roll. Using yaw, pitch, and roll I can derive the rotation matrix, but I do not see how to calculate the translation matrix. As I am working on a dataset, I don't have physical access to the camera. Please help me derive the translation matrix.
This cannot be done at all if you don't have the elevation of the camera with respect to the ground (AGL or ASL), or another way to resolve the scale from the image (e.g. by identifying in the image an object of known size, such as a soccer stadium in an aerial image).
Assuming you can resolve the scale, the next question is how precisely you can (or want to) model the terrain. For a first approximation you can use a standard geodetic ellipsoid (e.g. WGS-84). For higher precision, especially for images shot from lower altitudes, you will need to use a DTM and register it to the images. Either way, it is a standard back-projection problem: you compute the ray from the camera centre through the pixel, transform it into world coordinates, then intersect it with the ellipsoid or DTM.
There are plenty of open-source libraries in various languages to help you do that (e.g. GeographicLib).
Edited to add suggestions:
Express your camera location in ECEF.
Transform the ray from the camera into ECEF as well, taking into account the camera rotation. You can do both transformations using a library, e.g. nVector.
Then proceed to intersect the ray with the ellipsoid, as explained in this answer.
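To illustrate just the back-projection step, here is a simplified sketch that intersects the pixel ray with a flat ground plane in a local frame instead of the WGS-84 ellipsoid. The function name, pose values, and frame conventions are assumptions; for proper geodetic work you would build the ray in ECEF with a library such as GeographicLib or nVector, as described above.

```python
import numpy as np

def pixel_to_ground(u, v, K, R_wc, cam_pos):
    """Back-project pixel (u, v) and intersect the ray with the plane z = 0.

    K       -- 3x3 intrinsic matrix
    R_wc    -- camera-to-world rotation (build it from your yaw/pitch/roll;
               the exact convention depends on how your dataset defines them)
    cam_pos -- camera position in the local frame, e.g. [x, y, height]
    """
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
    ray_world = R_wc @ ray_cam                           # rotate into world frame
    s = -cam_pos[2] / ray_world[2]                       # scale to reach z = 0
    return cam_pos + s * ray_world                       # ground intersection

K = np.array([[1000.0, 0, 640], [0, 1000.0, 360], [0, 0, 1]])

# Illustrative pose: a nadir-looking camera 15 m above the ground
# (camera x = east, camera y = -north, camera z = straight down).
R_wc = np.array([[1.0,  0.0,  0.0],
                 [0.0, -1.0,  0.0],
                 [0.0,  0.0, -1.0]])
cam_pos = np.array([0.0, 0.0, 15.0])

print(pixel_to_ground(700, 400, K, R_wc, cam_pos))
```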

how to obtain the world coordinates of an image

After calibrating a camera using Jean-Yves Bouguet's Camera Calibration Toolbox and checkerboard patterns printed on cardboard, I've obtained the extrinsic and intrinsic parameters. I can use this information to find camera coordinates:
Pc = R * Pw + T
After that, how do I obtain the world coordinates of an image point using Pc and the calibration parameters?
Thanks in advance.
EDIT
The goal is to use the calibrated camera parameters to measure planar objects with a calibrated camera. To perform this task I don't know how to use the camera parameters. In other words, I have to convert the pixel coordinates of the image to world coordinates using the calibrated parameters. I already have the parameters and the new image. How can I do this conversion?
Thanks in advance.
I was thinking about the problem and came to this result:
You can't find the object's size. The problem is that from a single shot, when you have no idea how far the object is from your camera, you can't say anything about its size. The calibration just tells you how far the image plane is from the camera (the focal length) and the opening angles of the lens. When the focal length changes, the calibration changes too.
But there are some possibilities:
How to get the real life size of an object from an image, when not knowing the distance between object and the camera?
So, as I understand it, you can approximate the size of the objects.
Your problem can be solved if (and only if) you can express the plane of your object in calibrated camera coordinates.
The calibration procedure outputs, along with the camera intrinsic parameters K, a coordinate transform matrix Qwc_i = [Rwc_i | Twc_i] for every calibration image, which expresses the location and pose of a particular scene coordinate frame in the camera coordinates of that calibration image. IIRC, in Jean-Yves' toolbox this is the frame attached to the top-left corner of the calibration checkerboard.
So, if your planar object is on the same plane as the checkerboard in one of the calibration images, all you have to do in order to find its location in space is intersect the checkerboard plane with camera rays cast from the camera center (0,0,0) to the pixels into which the object is imaged.
If your object is NOT in one of those planes, all you can do is infer the object's own plane from additional information, if available, e.g. from a feature of known size and shape.
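A minimal sketch of that intersection, assuming you have K and the checkerboard extrinsics [R|T] for the relevant calibration image (using the convention Pc = R * Pw + T from above); the numeric values are illustrative:

```python
import numpy as np

def pixel_to_plane_point(u, v, K, R, T):
    """Intersect the camera ray through pixel (u, v) with the z = 0 plane
    of the checkerboard frame whose pose in camera coordinates is Pc = R*Pw + T.

    Returns the point in checkerboard (world) coordinates, so its x, y
    components are metric coordinates in the object's plane."""
    d = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction, camera frame
    n = R[:, 2]                                   # plane normal (board z axis) in camera frame
    s = (n @ T) / (n @ d)                         # ray parameter at the plane
    Pc = s * d                                    # intersection in camera coordinates
    return R.T @ (Pc - T)                         # back to checkerboard coordinates

# Illustrative values only; use K, R, T from your calibration output.
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
R = np.eye(3)                   # board parallel to the image plane
T = np.array([0.0, 0.0, 0.5])   # board 0.5 m in front of the camera

print(pixel_to_plane_point(400, 300, K, R, T))
```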

Find Distance between barcode and camera?

Is it possible to find the distance between the detected QR barcode (square) and the camera, if the size of the actual barcode and the (x, y) coordinates of all the corners of the barcode detected by the camera are known?
I want the method to work even if the camera is at an angle from the barcode.
I tried using a simple equation like f = d*z/D, where f is the focal length of the camera, D is the size of the object, d is the width of the detected object in pixels, and z is the distance between the camera and the barcode. First, I calculate the focal length using data from a known distance and then compute the z values accordingly.
The above method works pretty well, but it has a lot of error if the camera is at an angle.
Is there a better method to do this?
Also, I can use only one camera, using two cameras is not an option.
Use your current formula (which you state works well) against the longest side and its opposite, then average the results.
Alternatively, just average the lengths of the longest side and its opposite. The relationships are all linear so you should end up with the same answer.
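A small sketch of that idea, assuming the four detected corners are given in order around the quad and f was obtained from the known-distance calibration described in the question (values are illustrative):

```python
import numpy as np

def distance_from_corners(corners, f, real_size):
    """corners: 4x2 array of the detected square's corners, in order around
    the quad. f: focal length in pixels (from the known-distance calibration).
    real_size: true side length of the barcode, in the units you want back."""
    sides = [np.linalg.norm(corners[i] - corners[(i + 1) % 4]) for i in range(4)]
    i_long = int(np.argmax(sides))
    # Average the longest side with its opposite, then apply z = f * D / d.
    d = (sides[i_long] + sides[(i_long + 2) % 4]) / 2
    return f * real_size / d

corners = np.array([[100, 100], [220, 105], [215, 230], [95, 220]], dtype=float)
print(distance_from_corners(corners, f=700.0, real_size=0.05))
```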
First you have to know the camera angle.
If you cannot read that parameter from the device, you can estimate it by using other measurements.
For example, you know that a barcode is rectangular. So by detecting it you obtain four corners, and from those you can estimate a homography matrix. Knowing the homography matrix, you can simplify your problem by just multiplying the coordinates with the homography's inverse.
The homography matrix is widely used in camera calibration when a known pattern such as a chessboard is presented.
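A minimal sketch of that homography approach, assuming the four detected corners and the known physical side length of the (square) barcode; names and values are illustrative:

```python
import cv2
import numpy as np

side = 0.05  # known physical side length of the square barcode, in metres

# Four detected corners in the image, in a consistent order (illustrative).
img_corners = np.float32([[100, 100], [220, 105], [215, 230], [95, 220]])

# The same corners in the barcode's own plane, in metres.
plane_corners = np.float32([[0, 0], [side, 0], [side, side], [0, side]])

# Homography from image coordinates to the barcode plane; applying it
# undoes the perspective seen at an angle.
H, _ = cv2.findHomography(img_corners, plane_corners)

# Any image point can now be expressed in metric barcode-plane coordinates.
pt = cv2.perspectiveTransform(np.float32([[[160, 160]]]), H)
print(pt)
```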

Finding distance from camera to object of known size

I am trying to write a program using opencv to calculate the distance from a webcam to a one inch white sphere. I feel like this should be pretty easy, but for whatever reason I'm drawing a blank. Thanks for the help ahead of time.
You can use triangle similarity to calibrate the camera angle and find the distance.
You know your ball's size: D units (e.g. cm). Place it at a known distance Z, say 1 meter = 100cm, in front of the camera and measure its apparent width in pixels. Call this width d.
The focal length of the camera f (which is slightly different from camera to camera) is then f=d*Z/D.
When you see this ball again with this camera, and its apparent width is d' pixels, then by triangle similarity, you know that f/d'=Z'/D and thus: Z'=D*f/d' where Z' is the ball's current distance from the camera.
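A worked example of the two steps with made-up numbers, assuming a 1-inch (2.54 cm) ball:

```python
# Calibration shot: the ball (real diameter D) is placed at a known
# distance Z from the camera and appears d pixels wide (values illustrative).
D = 2.54       # cm, a one-inch sphere
Z = 100.0      # cm, known calibration distance
d = 18.0       # pixels, apparent width at that distance

f = d * Z / D  # effective focal length in pixels, ~709 here

# Later, the same ball appears d_new pixels wide in a frame.
d_new = 9.0
Z_new = D * f / d_new  # current distance: ~200 cm
print(f, Z_new)
```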
To my mind you will need a camera model, i.e. a calibration model, if you want to measure distance or other real-world quantities.
The pinhole camera model is simple and linear and gives good results (but it won't correct distortions, whether they are radial or tangential).
If you don't use one, then you'll only be able to compute a disparity/depth map (for instance if you use stereo vision), but it is relative and doesn't give you an absolute measurement, only what is behind and what is in front of another object.
Therefore, I think the answer is: you will need to calibrate the camera somehow. For example, you could ask the user to bring the sphere toward the camera until the image plane is completely filled by the ball; with prior knowledge of the ball's size, you can then compute the distance.
Julien,
