I need to reconstruct a 3D shape from its 2D projection by recognizing it and finding the 3D coordinates of its vertices. You can see the sample images here. How can this be done most effectively? Is there any library that could do this? Language choice doesn't matter.
I got the 2D coordinates of a rectangle (a QR code) with the camera. Now I need to convert these 2D coordinates into 3D coordinates with z = 0. As I understand it, I have to use camera pose estimation for this, but I could not find any library for it that I can use from Rust. Is there a library or sample code that already does this job?
I have two images of the same object from different views. I want to perform camera calibration, but from what I have read so far I need 3D world points to get the camera matrix.
I am stuck at this step; can anyone explain it to me?
Popular camera calibration methods use 2D-3D point correspondences to determine the projective properties (intrinsic parameters) and the pose of a camera (extrinsic parameters). The simplest approach is the Direct Linear Transformation (DLT).
You might have seen that planar chessboards are often used for camera calibration. The 3D coordinates of the chessboard corners can be chosen by the user. Many people place the chessboard in the x-y plane, i.e. [x, y, 0]'. However, the 3D coordinates need to be consistent.
Coming back to your object: define your own 3D coordinate system over the object and find at least six spots whose 3D positions you can determine easily. Once you have those, find their corresponding 2D (pixel) positions in your two images.
There are complete examples in OpenCV. Maybe you will get a better picture by reading the code.
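As a rough sketch of that workflow in Python with OpenCV (my own illustration, not the asker's data: every coordinate below, the image size, and the focal-length guess are made-up placeholders that only show the shape of the inputs):

```python
import numpy as np
import cv2

# Six reference spots on the object, measured in a 3D coordinate system you
# define yourself (units are up to you). Placeholder values.
object_points = np.array([
    [0, 0, 0], [10, 0, 0], [10, 5, 0],
    [0, 5, 0], [0, 0, 8], [10, 0, 8],
], dtype=np.float32)

# Their pixel positions in each of the two images (placeholder values).
image_points_1 = np.array([[320, 240], [480, 238], [482, 160],
                           [318, 158], [300, 330], [500, 328]], dtype=np.float32)
image_points_2 = np.array([[310, 250], [470, 245], [476, 168],
                           [305, 165], [288, 338], [492, 332]], dtype=np.float32)

image_size = (640, 480)  # (width, height)

# With few, non-coplanar points it helps to give OpenCV a rough intrinsic
# guess and to fix the distortion coefficients at zero.
K_guess = np.array([[800.0, 0.0, image_size[0] / 2],
                    [0.0, 800.0, image_size[1] / 2],
                    [0.0, 0.0, 1.0]])
flags = (cv2.CALIB_USE_INTRINSIC_GUESS | cv2.CALIB_ZERO_TANGENT_DIST |
         cv2.CALIB_FIX_K1 | cv2.CALIB_FIX_K2 | cv2.CALIB_FIX_K3)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    [object_points, object_points],       # the same 3D spots seen in both views
    [image_points_1, image_points_2],
    image_size, K_guess, np.zeros(5), flags=flags)

print("reprojection error:", rms)
print("intrinsic matrix:\n", K)
print("pose of view 1 (Rodrigues rotation, translation):",
      rvecs[0].ravel(), tvecs[0].ravel())
```

The returned intrinsic matrix and the per-view rotation/translation pairs are exactly the intrinsic and extrinsic parameters described above.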
I have been able to convert a 3D mesh from Maya into voxel art (it looks like a bunch of cubes, similar to Lego bricks), all done in Maya. I plan on using the 3D art to wrap around my 2D textures to make it 2.5D. My question is: does the mesh being voxelized allow me to use the pieces as particles that I can put into a particle engine in XNA to get dynamic effects?
No, because you get a set of vertices and indices defining triangles, with no information about the cubes.
But you can create an algorithm that extracts that information from the model. It's a bit hard, but it's feasible.
I'd do it by creating a 3D grid and, for each face of the grid, casting rays from that face towards the opposite face, recording every collision with the mesh. Each ray should hit the surface an even number of times (0, 2, 4, ...), and the space between each consecutive pair of hits is solid volume.
That way the mesh can be converted to voxels. On each collision it would also be useful to store the bones associated with the triangle that was hit; that way you would be able to animate the voxel model.
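A minimal sketch of that ray-casting idea (my own illustration, not the answerer's code; written in Python for brevity, it casts rays along the +z axis only and assumes a closed, watertight mesh given as vertex and triangle-index arrays):

```python
import numpy as np

def ray_z_hit(ox, oy, a, b, c):
    """Return the z value where the vertical ray through (ox, oy) crosses
    triangle (a, b, c), or None if it misses (2D barycentric test in x-y)."""
    d = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    if abs(d) < 1e-12:
        return None  # triangle is edge-on to the ray; ignore it
    w0 = ((b[1] - c[1]) * (ox - c[0]) + (c[0] - b[0]) * (oy - c[1])) / d
    w1 = ((c[1] - a[1]) * (ox - c[0]) + (a[0] - c[0]) * (oy - c[1])) / d
    w2 = 1.0 - w0 - w1
    if w0 < 0 or w1 < 0 or w2 < 0:
        return None
    return w0 * a[2] + w1 * b[2] + w2 * c[2]  # interpolated z of the hit

def voxelize(vertices, triangles, resolution):
    """Parity-count voxelization: one ray per (x, y) grid column along +z;
    cells between consecutive surface crossings are marked solid."""
    lo, hi = vertices.min(axis=0), vertices.max(axis=0)
    cell = (hi - lo) / resolution
    grid = np.zeros((resolution, resolution, resolution), dtype=bool)
    for ix in range(resolution):
        for iy in range(resolution):
            ox = lo[0] + (ix + 0.5) * cell[0]   # ray origin: centre of the column
            oy = lo[1] + (iy + 0.5) * cell[1]
            hits = []
            for tri in triangles:
                z = ray_z_hit(ox, oy, *vertices[tri])
                if z is not None:
                    hits.append(z)
            hits.sort()
            # An even number of crossings is expected; each pair bounds a solid span.
            for z0, z1 in zip(hits[0::2], hits[1::2]):
                k0 = max(int((z0 - lo[2]) / cell[2]), 0)
                k1 = min(int((z1 - lo[2]) / cell[2]), resolution - 1)
                grid[ix, iy, k0:k1 + 1] = True
    return grid
```

Each True cell of the resulting grid can then be turned into one cube (or one particle in XNA), optionally carrying the bone indices mentioned above.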
I am doing a project on 3D rendering of a scene. I am using OpenCV. The steps I am following are:
Taking two images of a scene.
Calculating point correspondences using SURF feature matching.
Calculating the camera fundamental matrix.
Calculating the disparity image.
Now I have two questions:
After calculating the fundamental matrix, how can I calculate the Q matrix? (I can't calibrate the camera.)
How can I render the scene in 3D using OpenCV or any other library?
For the 3D part, you can render your scene with OpenGL or with PCL. You have two options:
For each pixel, you create a point with the right color taken from the camera image. This gives you a point cloud that can be processed with PCL (for 3D feature extraction, for example); see the sketch after this answer.
You apply a triangulation algorithm, but to apply it you must have the extrinsic matrices of your camera.
You can find more information about these techniques here:
Point Cloud technique
Triangulation algorithm
If you want to use OpenGL, you have to open a valid OpenGL context. I recommend the SFML library or Qt. These libraries are very easy to use and have good documentation, and both have tutorials about 3D rendering with OpenGL.
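Here is a small Python/OpenCV sketch of the point-cloud option. It assumes you already have a rectified left image, its disparity map, and the Q matrix from rectification; the file names are placeholders, and the output is a plain ASCII PLY file that PCL or Meshlab can open:

```python
import numpy as np
import cv2

# Placeholder inputs: a rectified left image, its disparity map, and the
# 4x4 disparity-to-depth matrix Q from rectification.
left = cv2.imread("left_rectified.png")                    # hypothetical file
disparity = np.load("disparity.npy").astype(np.float32)    # hypothetical file
Q = np.load("Q.npy")                                       # hypothetical file

# Back-project every pixel to 3D and keep its colour from the image.
points = cv2.reprojectImageTo3D(disparity, Q)              # H x W x 3 (X, Y, Z)
colors = cv2.cvtColor(left, cv2.COLOR_BGR2RGB)

# Discard pixels with no valid disparity.
mask = disparity > disparity.min()
points, colors = points[mask], colors[mask]

# Write an ASCII PLY point cloud.
with open("cloud.ply", "w") as f:
    f.write("ply\nformat ascii 1.0\n")
    f.write(f"element vertex {len(points)}\n")
    f.write("property float x\nproperty float y\nproperty float z\n")
    f.write("property uchar red\nproperty uchar green\nproperty uchar blue\n")
    f.write("end_header\n")
    for (x, y, z), (r, g, b) in zip(points, colors):
        f.write(f"{x} {y} {z} {r} {g} {b}\n")
```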
You can get the Q matrix from stereo rectification via the OpenCV function:
cv::stereoRectify
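In the Python bindings the call looks like this; the two cameras' intrinsics, distortion coefficients, and relative pose below are placeholders standing in for your own calibration data:

```python
import numpy as np
import cv2

# Placeholder calibration for the two cameras -- replace with your own values.
K1 = K2 = np.array([[700.0, 0.0, 320.0],
                    [0.0, 700.0, 240.0],
                    [0.0, 0.0, 1.0]])
dist1 = dist2 = np.zeros(5)
R = np.eye(3)                      # relative rotation between the cameras
T = np.array([0.12, 0.0, 0.0])     # 12 cm horizontal baseline (placeholder)

# stereoRectify returns the rectification transforms and, importantly here,
# the 4x4 disparity-to-depth mapping matrix Q.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, dist1, K2, dist2, (640, 480), R, T)

print(Q)
```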
I think you want the Q matrix in order to reconstruct the 3D scene. However, you can also reconstruct directly from the intrinsic parameters via:
X = (u-cu)*base/d
Y = (v-cv)*base/d
Z = f*base/d
where (u,v) is a 2D point in the image coordinate system, (cu,cv) is the principal point of the camera, f is the focal length, base is the baseline, d is the disparity, and (X,Y,Z) is the corresponding 3D point in the camera coordinate system.
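Those formulas translate directly into code. A minimal Python example, where the focal length, principal point, baseline, and disparity are made-up placeholder numbers:

```python
def back_project(u, v, d, f, cu, cv, base):
    """Back-project one pixel to camera coordinates using the formulas above."""
    Z = f * base / d
    X = (u - cu) * base / d
    Y = (v - cv) * base / d
    return X, Y, Z

# Hypothetical numbers: 700 px focal length, principal point at the centre of
# a 640x480 image, 12 cm baseline, 35 px disparity at pixel (400, 300).
print(back_project(u=400, v=300, d=35.0, f=700.0, cu=320.0, cv=240.0, base=0.12))
```

When you do have Q, cv2.reprojectImageTo3D(disparity, Q) performs the same back-projection for the whole image at once.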
For visualization, you can use PCL or VTK (PCL's visualization is based on VTK, but I find PCL simpler to use).
If you just want to have a look at the output, you can use software such as Meshlab.
Cheers
I am new to OpenCV. I have got the SURF detection sample working. Now I want to place a 3D model on the detected image.
How can I find the 3D projection matrix?
I guess you are talking about augmented reality, since you say you want to place a 3D model on the detected image (in the camera frame?). The key to the problem is to detect at least 4 points that match 4 "keypoints" on our marker. Then, by solving some equations, we get our homography, which allows us to project any point.
In OpenCV there is a function that performs this task: cvFindHomography
You just need the pairs of matches and a method (e.g. RANSAC), and you will get the homography.
Then you can project the points as explained here:
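As an illustration with the current Python bindings (cv2.findHomography is the modern equivalent of cvFindHomography; all point coordinates below are placeholders that would normally come from your SURF matches):

```python
import numpy as np
import cv2

# src_pts: keypoint locations on the reference marker image,
# dst_pts: their matched locations in the camera frame (placeholder values).
src_pts = np.float32([[0, 0], [200, 0], [200, 200], [0, 200], [100, 100]]).reshape(-1, 1, 2)
dst_pts = np.float32([[152, 88], [310, 102], [298, 260], [140, 248], [225, 175]]).reshape(-1, 1, 2)

# Estimate the homography with RANSAC.
H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Project the corners of the marker into the camera image.
marker_corners = np.float32([[0, 0], [200, 0], [200, 200], [0, 200]]).reshape(-1, 1, 2)
projected = cv2.perspectiveTransform(marker_corners, H)
print(projected.reshape(-1, 2))
```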