I would like to get the coordinates of the centers of the Voronoi regions, i.e. the duals of the faces of the Delaunay triangulation.
In hgeometry's PlanarSubdivision I find dual as a Lens on PlanarGraph. However, I cannot find a function to convert a triangulation into a PlanarGraph (only into a PlaneGraph or a PlanarSubdivision). What am I missing? How do I get the dual of a triangulation, and specifically the coordinates of the duals of the faces?
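Independent of the hgeometry API, the vertex dual to a Delaunay face is the circumcenter of that triangle, so once you can enumerate a face's three corner points you can compute the dual coordinates directly. A minimal sketch of that computation (plain C++ rather than Haskell, purely for illustration; the Point type and the example triangle are made up):

    #include <cstdio>

    struct Point { double x, y; };

    // The Voronoi vertex dual to a Delaunay face is the circumcenter of that
    // triangle, computed here from its three corner points.
    Point circumcenter(const Point& a, const Point& b, const Point& c) {
        double d = 2.0 * (a.x * (b.y - c.y) + b.x * (c.y - a.y) + c.x * (a.y - b.y));
        double ux = ((a.x * a.x + a.y * a.y) * (b.y - c.y) +
                     (b.x * b.x + b.y * b.y) * (c.y - a.y) +
                     (c.x * c.x + c.y * c.y) * (a.y - b.y)) / d;
        double uy = ((a.x * a.x + a.y * a.y) * (c.x - b.x) +
                     (b.x * b.x + b.y * b.y) * (a.x - c.x) +
                     (c.x * c.x + c.y * c.y) * (b.x - a.x)) / d;
        return {ux, uy};
    }

    int main() {
        // Example: for a right triangle the circumcenter is the midpoint of the hypotenuse.
        Point p = circumcenter({0, 0}, {4, 0}, {0, 3});
        std::printf("%.2f %.2f\n", p.x, p.y);  // prints 2.00 1.50
    }

For the example right triangle the dual (Voronoi) vertex lands at the midpoint of the hypotenuse, (2.0, 1.5).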
Related
I have 2D image data with the respective camera location in latitude and longitude. I want to translate pixel coordinates to 3D world coordinates. I have access to the intrinsic calibration parameters and to yaw, pitch and roll. From yaw, pitch and roll I can derive the rotation matrix, but I cannot work out how to calculate the translation matrix. As I am working from a data set, I don't have physical access to the camera. Please help me derive the translation matrix.
This cannot be done at all if you don't have the elevation of the camera with respect to the ground (AGL or ASL), or another way to resolve the scale from the image (e.g. by identifying in the image an object of known size, such as a soccer stadium in an aerial image).
Assuming you can resolve the scale, the next question is how precisely you can (or want to) model the terrain. For a first approximation you can use a standard geodetic ellipsoid (e.g. WGS-84). For higher precision - especially for images shot from lower altitudes - you will need to use a DTM and register it to the images. Either way, it is a standard back-projection problem: you compute the ray from the camera centre to the pixel, transform it into world coordinates, then intersect it with the ellipsoid or DTM.
There are plenty of open source libraries to help you do that in various languages (e.g. GeographicLib).
Edited to add suggestions:
Express your camera location in ECEF.
Transform the ray from the camera into ECEF as well, taking into account the camera rotation. You can do both transformations using a library, e.g. nVector.
Then proceed to intersect the ray with the ellipsoid, as explained in this answer and sketched below.
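A minimal sketch of that final intersection step, assuming the camera centre and the (already rotated) viewing ray are both expressed in ECEF metres; the function name and argument layout are only illustrative:

    #include <array>
    #include <cmath>
    #include <optional>

    // Intersect a viewing ray (camera centre + direction, ECEF metres) with the
    // WGS-84 ellipsoid. Returns the first intersection in ECEF, or nothing if the
    // ray misses the ellipsoid.
    std::optional<std::array<double, 3>> intersectWgs84(
            const std::array<double, 3>& origin,
            const std::array<double, 3>& dir) {
        const double a = 6378137.0;        // WGS-84 semi-major axis [m]
        const double b = 6356752.314245;   // WGS-84 semi-minor axis [m]
        const double a2 = a * a, b2 = b * b;

        // Substitute p(t) = origin + t*dir into x^2/a^2 + y^2/a^2 + z^2/b^2 = 1
        double A = (dir[0] * dir[0] + dir[1] * dir[1]) / a2 + dir[2] * dir[2] / b2;
        double B = 2.0 * ((origin[0] * dir[0] + origin[1] * dir[1]) / a2
                          + origin[2] * dir[2] / b2);
        double C = (origin[0] * origin[0] + origin[1] * origin[1]) / a2
                   + origin[2] * origin[2] / b2 - 1.0;

        double disc = B * B - 4.0 * A * C;
        if (disc < 0.0) return std::nullopt;            // ray misses the ellipsoid

        double t = (-B - std::sqrt(disc)) / (2.0 * A);  // nearer intersection
        if (t < 0.0) return std::nullopt;               // ellipsoid is behind the camera

        return std::array<double, 3>{origin[0] + t * dir[0],
                                     origin[1] + t * dir[1],
                                     origin[2] + t * dir[2]};
    }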
I have stereo images (non co-planar cameras), which have matching points on a plane (wall) labeled.
I need to compute the camera locations in world space, and the angles they are focused at.
I can work out the math if I need to (with effort), I wonder if there is a shortcut to doing these computations in OpenCV that I might not be familiar with?
If they are both looking at a plane, all you have to do is estimate homographies (with findHomography) independently between the plane and each camera, then decompose them (decomposeHomographyMat) to get rotation and translation up to scale. To resolve scale you need to know the distance between at least two of the points on the plane.
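A rough sketch of those two OpenCV calls, assuming the wall points are expressed in a metric coordinate system on the plane and that you have the intrinsic matrix K for the camera in question (function and variable names are only illustrative):

    #include <opencv2/calib3d.hpp>
    #include <vector>

    // Estimate the plane-induced homography from known points on the wall to
    // their pixel locations in one camera, then decompose it into candidate
    // rotations and (up-to-scale) translations.
    void poseFromPlane(const std::vector<cv::Point2f>& planePts,
                       const std::vector<cv::Point2f>& imagePts,
                       const cv::Mat& K) {
        cv::Mat H = cv::findHomography(planePts, imagePts, cv::RANSAC);

        std::vector<cv::Mat> rotations, translations, normals;
        cv::decomposeHomographyMat(H, K, rotations, translations, normals);

        // decomposeHomographyMat returns up to four physically possible solutions;
        // keep the one whose plane normal points towards the camera and whose
        // reconstructed points lie in front of it. The translation is only metric
        // if planePts are already given in metric units.
    }

Repeating this for the second camera gives both poses relative to the same plane coordinate system, and therefore their placement relative to each other.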
I am currently working with a RealSense 3D camera to detect a box or boxes and calculate their dimensions.
I am new to computer vision. I first worked on detecting objects, with and without colour, to get a basic understanding. Using C++ and OpenCV, I managed to get the corners (and their x y z pixel coordinates) of the square using smoothing (noise removal), edge detection (Canny function), line detection (Hough transform) and line intersection (mathematical calculation) on a simplified picture (uniform background).
Now my question: do you have any direction/recommendation/advice/literature about calculating the dimensions of a box? https://www.youtube.com/watch?v=l-i2E7aZY6A
I am using C++ and OpenCV with an Intel RealSense 3D camera.
thanks in advance((-_-))
Once you have the colour image pixel coordinates of the box you can obtain the real-world coordinates (also known as vertices or camera coordinates in the documentation) using methods in the projection interface, then use simple Pythagoras to calculate the distance between the points in mm.
If you have no experience with RealSense I'd recommend reading the documentation and looking through the sample apps included with the SDK.
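As a hedged illustration with librealsense2 (the projection interface mentioned above offers similar functionality), deprojecting two box corners into 3D camera coordinates and measuring the edge with Pythagoras could look like this; the intrinsics, pixel coordinates and depth values are assumed to come from your own pipeline:

    #include <librealsense2/rs.hpp>
    #include <librealsense2/rsutil.h>
    #include <cmath>

    // Deproject two pixel corners of the box into 3D camera coordinates and
    // measure the edge length. `intrin` is the stream's intrinsics; depths are
    // expected in metres.
    float edgeLengthMetres(const rs2_intrinsics& intrin,
                           const float pixel1[2], float depth1,
                           const float pixel2[2], float depth2) {
        float p1[3], p2[3];
        rs2_deproject_pixel_to_point(p1, &intrin, pixel1, depth1);
        rs2_deproject_pixel_to_point(p2, &intrin, pixel2, depth2);
        return std::sqrt((p1[0] - p2[0]) * (p1[0] - p2[0]) +
                         (p1[1] - p2[1]) * (p1[1] - p2[1]) +
                         (p1[2] - p2[2]) * (p1[2] - p2[2]));
    }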
With PCL (Point Cloud Library) you can find planes (or spheres and other surfaces), then refine the result with 2D image processing (e.g. edge detection).
http://pointclouds.org/
https://www.youtube.com/watch?v=VD044WAHEe4
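For illustration, a minimal PCL plane-segmentation sketch along those lines (the point type, threshold and variable names are only assumptions):

    #include <pcl/point_types.h>
    #include <pcl/point_cloud.h>
    #include <pcl/ModelCoefficients.h>
    #include <pcl/PointIndices.h>
    #include <pcl/segmentation/sac_segmentation.h>

    // RANSAC plane fitting on a cloud captured from the camera, as a starting
    // point for finding the box faces.
    void findDominantPlane(const pcl::PointCloud<pcl::PointXYZ>::Ptr& cloud) {
        pcl::ModelCoefficients::Ptr coefficients(new pcl::ModelCoefficients);
        pcl::PointIndices::Ptr inliers(new pcl::PointIndices);

        pcl::SACSegmentation<pcl::PointXYZ> seg;
        seg.setModelType(pcl::SACMODEL_PLANE);   // fit a plane model
        seg.setMethodType(pcl::SAC_RANSAC);      // robust to outliers
        seg.setDistanceThreshold(0.01);          // 1 cm inlier threshold
        seg.setInputCloud(cloud);
        seg.segment(*inliers, *coefficients);    // plane: ax + by + cz + d = 0
        // `inliers` now indexes the points on the dominant plane; remove them
        // and repeat to find the remaining faces of the box.
    }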
How do you calibrate stereo cameras so that the output of the triangulation is in a real-world coordinate system that is defined by known points?
OpenCV stereo calibration returns results based on the pose of the left hand camera being the reference coordinate system.
I am currently doing the following:
Intrinsically calibrating both the left and right camera using a chessboard. This gives the camera matrix A and the distortion coefficients for each camera.
Running stereo calibrate, again using the chessboard, for both cameras. This returns the extrinsic parameters, but they are relative to the cameras and not the coordinate system I would like to use.
How do I calibrate the cameras in such a way that known 3D point locations, with their corresponding 2D pixel locations in both images, provide an extrinsic calibration so that the output of triangulation will be in my coordinate system?
Calculate a disparity map from the stereo camera - you may use cvFindStereoCorrespondenceBM.
After finding the disparity map, refer to this: OpenCv depth estimation from Disparity map
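A short sketch with the current C++ API (cv::StereoBM supersedes the old cvFindStereoCorrespondenceBM), assuming you already have rectified grayscale left/right images and the Q matrix returned by stereoRectify; the parameter values are only placeholders:

    #include <opencv2/calib3d.hpp>

    // Block-matching disparity on rectified grayscale images, then reprojection
    // to 3D with the Q matrix from stereoRectify.
    void disparityToDepth(const cv::Mat& rectLeftGray, const cv::Mat& rectRightGray,
                          const cv::Mat& Q, cv::Mat& xyz) {
        cv::Ptr<cv::StereoBM> bm = cv::StereoBM::create(/*numDisparities=*/64,
                                                        /*blockSize=*/15);
        cv::Mat disp16;                              // CV_16S, disparity scaled by 16
        bm->compute(rectLeftGray, rectRightGray, disp16);

        cv::Mat disp;
        disp16.convertTo(disp, CV_32F, 1.0 / 16.0); // back to pixel units

        cv::reprojectImageTo3D(disp, xyz, Q);       // CV_32FC3, left-camera frame
    }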
Localization of an object specified in the image.
I am working on a computer vision project to find the distance to an object using stereo images. I followed these steps using OpenCV to achieve my objective:
1. Calibration of the camera
2. SURF matching to find the fundamental matrix
3. Rotation and translation vectors using SVD, as described in the Hartley and Zisserman book.
4. StereoRectify to get the projection matrices P1, P2 and rotation matrices R1, R2. The rotation matrix can also be found from a homography as R = CameraMatrix.inv() * H * CameraMatrix.
Problems:
I triangulated the point using the least-squares triangulation method to find the real distance to the object. It returns a value of the form [0.79856, 0.354541, 0.258]. How will I map it to real-world coordinates to find the distance to the object?
http://www.morethantechnical.com/2012/01/04/simple-triangulation-with-opencv-from-harley-zisserman-w-code/
Alternative approach:
Find the disparity of the object between the two images and find the depth using the given formula:
depth = (focal length * baseline) / disparity
For the disparity we have to perform rectification first, and the points must be undistorted. My rectified images are black.
Please help me out. It is important.
Here is a detailed explanation of how I implemented the code:
1. Calibration of the camera using a circles grid to get the camera matrix and distortion coefficients. The code is given on GitHub (Android).
2. Take two pictures of a car, the first from the left and the other from the right. Take the sub-image and calculate the fundamental matrix, essential matrix, rotation matrix and translation matrix.
3. I have tried the projection in two ways:
Take the first camera's projection matrix as the identity and build the second 3x4 projection matrix from the rotation and translation matrices, then perform triangulation.
Get the projection matrices P1 and P2 from StereoRectify to perform triangulation.
My object is 65 meters away from the camera and I don't know how to recover this true distance from a triangulation result of the form [0.79856, 0.354541, 0.258].
Question: Do I have to do some extra calibration to get the result? My code does not rely on knowing the geometric size of the object.
So you already computed the triangulation? Well, then you have points in camera coordinates, i.e. in the coordinate frame centered on one of the cameras (the left or right one depending on how your code is written and the order in which you feed your images to it).
What more do you want? The vector length (square root of the sum of the squared coordinates) of those points is their estimated distance from the same camera. If you want their position in some other "world" coordinate system, you need to give the coordinate transform between that system and the camera - presumably through a calibration procedure.
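To make that concrete, here is a hedged sketch (not the poster's code): triangulate with OpenCV, convert the homogeneous result to Euclidean coordinates in the reference camera's frame, and take the vector length. If the translation used to build P2 came from an essential-matrix decomposition it is only defined up to scale (unit length), so the distances become metric only after multiplying by the true baseline between the cameras (baselineMetres below, assumed known):

    #include <opencv2/calib3d.hpp>
    #include <cmath>
    #include <vector>

    // Distances of triangulated points from the reference (left) camera,
    // rescaled by a known baseline.
    std::vector<double> pointDistances(const cv::Mat& P1, const cv::Mat& P2,
                                       const std::vector<cv::Point2f>& pts1,
                                       const std::vector<cv::Point2f>& pts2,
                                       double baselineMetres) {
        cv::Mat points4D;                                   // 4xN homogeneous points
        cv::triangulatePoints(P1, P2, pts1, pts2, points4D);
        points4D.convertTo(points4D, CV_64F);

        std::vector<double> distances;
        for (int i = 0; i < points4D.cols; ++i) {
            double w = points4D.at<double>(3, i);
            double X = points4D.at<double>(0, i) / w;
            double Y = points4D.at<double>(1, i) / w;
            double Z = points4D.at<double>(2, i) / w;
            // With a unit-length translation (from decomposing the essential
            // matrix), multiplying by the true baseline converts the result to metres.
            distances.push_back(baselineMetres * std::sqrt(X * X + Y * Y + Z * Z));
        }
        return distances;
    }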