I have a partial mesh (vertices and normals) of a 3D object in world coordinates, and also the 3D model of the object.
How can I best match the location and place the 3D model in place of the partial mesh?
I know how to match two point clouds using methods like ICP in OpenCV, Open3D, etc.
However, I do not know how to go about this with 3D meshes. Could anyone give me a pointer?
I solved this by using ICP (point-to-point / point-to-plane methods) on two point clouds generated from the 3D model and the partial mesh.
I generated one point cloud by resampling the 3D model and the second by resampling the partial mesh (with libigl). I had to resample so both clouds had a uniform number of points, because ICP gave unstable results otherwise.
Hope this helps someone.
P.S.: This was also suggested by #VB_overflow in the comments.
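For anyone who wants a concrete starting point, here is a minimal sketch of that pipeline using Open3D; the file names, point count, and distance threshold are placeholders, and Open3D's uniform sampling stands in here for the libigl resampling:

```python
import numpy as np
import open3d as o3d

# Placeholder file names: the full 3D model and the partial scan mesh.
model_mesh = o3d.io.read_triangle_mesh("full_model.ply")
partial_mesh = o3d.io.read_triangle_mesh("partial_scan.ply")

# Resample both meshes into point clouds with the same number of points;
# ICP was unstable for me when the counts differed a lot.
n_points = 20000
source = model_mesh.sample_points_uniformly(number_of_points=n_points)
target = partial_mesh.sample_points_uniformly(number_of_points=n_points)

# Point-to-plane ICP needs normals on the target cloud.
target.estimate_normals(
    o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))

# Identity as the initial guess; a global registration step (e.g. FPFH + RANSAC)
# helps when the initial misalignment is large.
result = o3d.pipelines.registration.registration_icp(
    source, target,
    max_correspondence_distance=0.05,
    init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPlane())

print(result.transformation)  # 4x4 transform placing the model onto the partial mesh
```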
I'm working with the SharpDX C# libraries for DirectX 11. I'm following the "Direct3D Rendering Cookbook" to load an external mesh. I need to find intersections between that mesh and a particular ray, but here is the problem: if I load the mesh and then apply operations to the World matrix (translation/rotation), the Triangles list of that mesh, which I use to compute the intersections, is not updated accordingly.
The way to deal with a ray cast intersection against the original model is fairly straightforward. I will assume you have a matrix that transforms the model into the world. What you need to do is compute the inverse of this matrix and multiply the ray's start and end points by it. This gives you the ray in the local space of the model (the untransformed version of the model). You can then call your ray cast function to get an accurate hit location. It's one of those unwritten tricks: don't move the model, move the ray; it's cheaper and faster.
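To make the idea concrete, here is a small language-agnostic sketch of that transform (shown with NumPy rather than SharpDX; the function name and column-vector convention are mine):

```python
import numpy as np

def ray_to_model_space(world_matrix, ray_start, ray_end):
    """Bring a world-space ray into the model's local (untransformed) space.

    world_matrix: the 4x4 matrix that places the model in the world.
    ray_start, ray_end: 3-element world-space points on the ray.
    """
    inv = np.linalg.inv(world_matrix)
    # Points transform with a homogeneous w of 1.
    local_start = (inv @ np.append(ray_start, 1.0))[:3]
    local_end = (inv @ np.append(ray_end, 1.0))[:3]
    return local_start, local_end

# The mesh's original triangle list can now be intersected against the
# (local_start, local_end) ray exactly as before the model was moved.
```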
We are doing a virtual dressing room project and we could map .png image files onto body joints. But we need to map 3D clothes onto the body, and the final output should be a person (a real person, not an avatar) wearing a 3D cloth in a live video output. But we don't know how to do this. Any help is much appreciated.
Thanks in advance.
Answer for user3588017: it's too lengthy to cover fully, but I'll try to explain how to get your project done. First, play with XNA and its basic tutorials if you are totally new to 3D gaming. In my project I only focused on bending the hands down. For this you need to create a 3D model in Blender and export it to .xna. I used this link as a starting point. This is actually the hardest part; after it you will know how to animate your model using maths. Then you need to map the Kinect data (x, y, z) to the parts of your model.
Ex: map the model's rotation to the rotation of the Kinect skeleton. For this I used simple calculations, like measuring the depth of the two shoulders, calculating the rotation angle, and applying it to my cloth model.
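As an illustration of that shoulder-depth calculation, here is a rough sketch (the function name and joint layout are hypothetical; plug in your own Kinect skeleton data):

```python
import numpy as np

def torso_yaw_from_shoulders(left_shoulder, right_shoulder):
    """Estimate how far the torso is rotated about the vertical axis from
    the depth (z) difference between the two shoulder joints.

    left_shoulder, right_shoulder: (x, y, z) Kinect joint positions in meters.
    """
    dx = right_shoulder[0] - left_shoulder[0]
    dz = right_shoulder[2] - left_shoulder[2]
    return np.degrees(np.arctan2(dz, dx))

# Apply the returned angle as the rotation of the cloth model each frame.
```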
I have been able to convert a 3D mesh from Maya into voxel art (it looks like a bunch of cubes, similar to Legos), all done in Maya. I plan on using the 3D art to wrap around my 2D textures to make it 2.5D. My question is: does the mesh being voxelized allow me to use the pieces as particles that I can put into a particle engine in XNA to have awesome dynamic effects?
No, because you get a set of vertices and indices defining triangles, with no information about the cubes.
But you can create an algorithm that extracts that information from the model. It's a bit hard, but it's feasible.
I'd do it by creating a 3D grid, and for each face of the grid I'd launch rays from that face towards the opposite face, recording every collision with the mesh. For each ray the number of collisions should be even (0, 2, 4, ...), and the space between each pair of collision points should be solid volume.
That way the mesh can be converted to voxels. At each collision it would also be useful to store the bones related to the triangle that was hit; this way you would be able to animate the voxel model.
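A rough sketch of that parity-count idea (the `intersect_fn` ray/mesh helper is hypothetical; any ray-triangle intersector, e.g. the one in trimesh, could fill that role):

```python
import numpy as np

def voxelize_by_parity(intersect_fn, bounds_min, bounds_max, resolution):
    """Mark voxels as solid by counting ray/mesh crossings along one axis.

    intersect_fn(origin, direction) -> sorted hit distances along the ray
    (hypothetical helper). bounds_min/bounds_max: 3-vectors of the mesh's
    bounding box. resolution: number of voxels per axis.
    """
    grid = np.zeros((resolution, resolution, resolution), dtype=bool)
    voxel = (bounds_max - bounds_min) / resolution
    for ix in range(resolution):
        for iy in range(resolution):
            # One ray per (x, y) column, shot from the front face to the back.
            origin = bounds_min + voxel * np.array([ix + 0.5, iy + 0.5, 0.0])
            hits = intersect_fn(origin, np.array([0.0, 0.0, 1.0]))
            # Hits come in entry/exit pairs; everything between a pair is solid.
            for t_in, t_out in zip(hits[0::2], hits[1::2]):
                z0, z1 = int(t_in / voxel[2]), int(t_out / voxel[2])
                grid[ix, iy, z0:z1 + 1] = True
    return grid
```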
I have a set of 3-d points and some images with the projections of these points. I also have the focal length of the camera and the principal point of the images with the projections (resulting from previously done camera calibration).
Is there any way, given these parameters, to find the correspondence between the 3-d points and the image projections automatically? I've looked through some OpenCV documentation but I haven't found anything suitable so far. I'm looking for a method that automatically labels the projections and thus establishes the correspondence between them and the 3-d points.
The question is not very clear, but I think you mean to say that you have the intrinsic calibration of the camera, but not its location and attitude with respect to the scene (the "extrinsic" part of the calibration).
This problem does not have a unique solution for a general 3d point cloud if all you have is one image: just notice that the image does not change if you move the 3d points anywhere along the rays projecting them into the camera.
If you have one or more images, you know everything about the 3D cloud of points (e.g. the points belong to an object of known shape and size and are at known locations on it), and you have matched them to their images, then it is a standard "camera resectioning" problem: you just solve for the camera extrinsic parameters that make the 3D points project onto their images.
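As a concrete example of the resectioning step, here is a minimal sketch with OpenCV's solvePnP, assuming the 3D-to-2D matches are already known and there is no lens distortion:

```python
import numpy as np
import cv2

def resect_camera(object_points, image_points, focal_length, principal_point):
    """Recover the camera pose (extrinsics) from matched 3D points and their
    image projections, given the known intrinsics.

    object_points: (N, 3) float32 array of 3D points, N >= 4.
    image_points:  (N, 2) float32 array of their pixel projections.
    """
    cx, cy = principal_point
    K = np.array([[focal_length, 0, cx],
                  [0, focal_length, cy],
                  [0, 0, 1]], dtype=np.float64)
    dist = np.zeros(5)  # assuming no lens distortion
    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix; tvec is the translation
    return R, tvec
```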
If you have multiple images and you know that the scene is static while the camera is moving, and you can match "enough" 3d points to their images in each camera position, you can solve for the camera poses up to scale. You may want to start from David Nister's and/or Henrik Stewenius's papers on solvers for calibrated cameras, and then look into "bundle adjustment".
If you really want to learn about this (vast) subject, Zisserman and Hartley's book is as good as any. For code, look into libmv, vxl, and the ceres bundle adjuster.
I am new to OpenCV. I have got the Surf Detection sample working. Now I want to place a 3d model on the detected image.
How can I find the 3d Projection matrix?
I guess you are talking about Augmented Reality, as you say you want to place a 3D model on the detected image (in the camera frame?). The key to the problem is always to detect at least 4 points that match 4 "keypoints" on our marker. Then, solving some equations, we get our homography, which allows us to project any point.
In OpenCV there is a function that performs this task: cvFindHomography
You just need the pairs of matches and a method (e.g. RANSAC), and you will get the homography.
Then you can project the points as explained here:
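For example, with the modern Python API (cv2.findHomography is the counterpart of the cvFindHomography call mentioned above; the point arrays are assumed to come from your SURF matches):

```python
import numpy as np
import cv2

def marker_homography_and_projection(marker_pts, frame_pts, marker_corners):
    """Estimate the marker-to-frame homography from matched keypoints and
    project points from the marker into the camera frame.

    marker_pts, frame_pts: (N, 2) float32 arrays of matched points, N >= 4.
    marker_corners: (M, 2) float32 array of points to project, e.g. the
    marker's four corners.
    """
    H, mask = cv2.findHomography(marker_pts, frame_pts, cv2.RANSAC, 5.0)
    # cv2.perspectiveTransform expects shape (M, 1, 2).
    projected = cv2.perspectiveTransform(marker_corners.reshape(-1, 1, 2), H)
    return H, projected.reshape(-1, 2)
```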