How to map 3D models (clothes) to Kinect joints using C# (virtual dressing room)

We are doing a virtual dressing room project and we have been able to map .png images onto body joints. But we need to map 3D clothes to the body, and the final output should be a real person (not an avatar) wearing a 3D garment in a live video output. We don't know how to do this. Any help is much appreciated.
Thanks in advance.

Answer for user3588017: it's too lengthy to cover fully, but I'll try to explain how to get your project done. First, play with XNA's basic tutorials if you are totally new to 3D gaming. In my project I only focused on bending the hands down. For this you need to create a 3D model in Blender and export it to XNA. I used this link as a start, from the very beginning. This is actually the hardest part; after it you will know how to animate your model using maths. Then you need to map the Kinect data (x, y, z) to the parts of your model.
Example: map the model's rotation to the Kinect skeleton's rotation. For this I used simple calculations, like measuring the depth of each shoulder, calculating the rotation angle from the difference, and applying it to my cloth model.
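A minimal sketch of that shoulder calculation in C#, assuming the Kinect for Windows SDK v1 skeleton types and XNA's math library (the class and joint choice here are illustrative, not the original project's code):

    // Rough sketch: estimate how far the player's torso has turned by
    // comparing the depth (Z) of the two shoulders, then build an XNA
    // rotation matrix to apply to the cloth model.
    using System;
    using Microsoft.Kinect;          // Kinect for Windows SDK v1: Skeleton, JointType
    using Microsoft.Xna.Framework;   // XNA: Matrix

    static class ShoulderYaw
    {
        public static Matrix FromSkeleton(Skeleton skeleton)
        {
            SkeletonPoint left  = skeleton.Joints[JointType.ShoulderLeft].Position;
            SkeletonPoint right = skeleton.Joints[JointType.ShoulderRight].Position;

            // The depth difference between the shoulders over their horizontal
            // separation gives the tangent of the yaw (turn) angle.
            float yaw = (float)Math.Atan2(right.Z - left.Z, right.X - left.X);

            // Apply this to the cloth model's world/bone transform each frame.
            return Matrix.CreateRotationY(yaw);
        }
    }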

Related

3D object transformation to match location

I have a partial mesh (vertices and normals) of a 3D object in world coordinates, and also the 3D model of the object.
How can I best match the location and place the 3D model in place of the mesh?
I know how to match two point clouds using methods like ICP in OpenCV, Open3D, etc.
However, I do not know how to go about it with 3D objects. Could anyone give me a pointer on this?
I solved this by using ICP (point-to-point / point-to-plane) on two point clouds generated from the 3D model and the partial mesh.
I generated one point cloud by re-sampling the 3D model and the second by re-sampling the partial mesh (libigl). I had to resample so that both clouds had a uniform number of points, as ICP gave unstable results otherwise.
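For illustration, here is a minimal C# sketch of the rigid-alignment solve at the core of point-to-point ICP (the Kabsch/SVD step), assuming the MathNet.Numerics package; the poster's actual pipeline used Open3D/libigl. Nearest-neighbour correspondence search and the outer iteration loop are omitted:

    // Given matched point pairs (same index = correspondence), find the
    // rotation R and translation t minimizing sum |R*p_i + t - q_i|^2.
    using MathNet.Numerics.LinearAlgebra;

    static class RigidAlign
    {
        public static (Matrix<double> R, Vector<double> t) Solve(
            double[][] source, double[][] target)
        {
            int n = source.Length;
            var p = Matrix<double>.Build.DenseOfRowArrays(source);
            var q = Matrix<double>.Build.DenseOfRowArrays(target);

            // Centroids, then centered clouds.
            var pMean = p.ColumnSums() / n;
            var qMean = q.ColumnSums() / n;
            var pc = p - Matrix<double>.Build.Dense(n, 3, (i, j) => pMean[j]);
            var qc = q - Matrix<double>.Build.Dense(n, 3, (i, j) => qMean[j]);

            // Kabsch: rotation from the SVD of the 3x3 cross-covariance.
            var svd = pc.TransposeThisAndMultiply(qc).Svd(computeVectors: true);
            var r = (svd.U * svd.VT).Transpose();

            // Guard against a reflection (determinant -1).
            if (r.Determinant() < 0)
            {
                var v = svd.VT.Transpose();
                v.SetColumn(2, v.Column(2) * -1.0);
                r = v * svd.U.Transpose();
            }

            return (r, qMean - r * pMean);
        }
    }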
Hope this helps someone.
P.S.: This was also suggested by @VB_overflow in the comments.

ARKit Perspective Rendering

I am working on a perspective problem with ARKit that I encountered in one of my projects.
Let me explain: in the photo below, I display a 3D model along the lane, with small 3D boxes every 2 meters. Then I place indicators at the level of these boxes; as you can see in the photo, the 3D models shift and do not seem to be in the right place when you move away from the boxes.
I would like to know if this comes from ARKit or simply from the way I display the model. Note that the 3D model generated with Blender is flat (horizontal).
I suppose this comes from the detection of the marker used to anchor the 3D model: the transformation matrix must be slightly tilted, which would distort the perspective on long 3D models.
Thank you
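One quick way to test that hypothesis, sketched in C# against Unity's API purely as an illustration (a native ARKit app would do the equivalent in Swift): strip pitch and roll from the detected marker pose before applying it, so a model that is known to be flat stays horizontal. The class and method names here are hypothetical.

    using UnityEngine;

    public class FlattenAnchor : MonoBehaviour
    {
        public Transform model; // the long, flat 3D model

        public void Place(Vector3 markerPosition, Quaternion markerRotation)
        {
            // Keep only the rotation around the vertical axis (yaw); discard
            // pitch and roll introduced by imperfect marker detection.
            float yaw = markerRotation.eulerAngles.y;
            model.SetPositionAndRotation(markerPosition,
                                         Quaternion.Euler(0f, yaw, 0f));
        }
    }

If the drift disappears with the flattened pose, the tilt in the marker's transformation matrix is the culprit rather than ARKit's rendering.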

ARKit plane with real world object above it

Thanks in advance for reading my question. I am really new to ARKit and have followed several tutorials that showed me how to use plane detection and apply different textures to the planes. The feature is really amazing, but here is my question: would it be possible for the player to place planes over the desired area first and then interact with the new ground? For example, could I use plane detection to put a grass texture over an area and then drive a real RC car over it, just like driving on real grass?
I have tried out plane detection on my iPhone 6s, but what I found is that when I put anything from the real world on top of the plane surface, it simply gets covered by the plane. Could you please give me a clue as to whether it is possible to make the plane stay on the ground without covering the real-world object?
I think this is what you are searching for:
ARKit hide objects behind walls
Another way, I think, is to track the position of the real-world object, for example with Apple's Turi Create or Core ML (or both), and then avoid drawing your content at the affected position.
Tracking moving objects is not supported, and that is actually what would be needed to make a real object interact with a virtual one.
That said, I would recommend using 2D image recognition and "reading" every camera frame to detect the object while it moves in the camera's view. Look for the AVCaptureVideoDataOutputSampleBufferDelegate protocol on Apple's developer site.
Share your code and I could help with some ideas
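If it helps, here is a minimal sketch of that approach via the Xamarin.iOS C# binding of the same API (an assumption; a native app would use the Swift/Objective-C equivalent). DetectObject is a hypothetical stand-in for whatever recognizer you plug in:

    using AVFoundation;
    using CoreMedia;
    using CoreVideo;

    public class FrameReader : AVCaptureVideoDataOutputSampleBufferDelegate
    {
        // Called once per camera frame when registered as the sample buffer
        // delegate of an AVCaptureVideoDataOutput.
        public override void DidOutputSampleBuffer(AVCaptureOutput captureOutput,
            CMSampleBuffer sampleBuffer, AVCaptureConnection connection)
        {
            using (CVImageBuffer frame = sampleBuffer.GetImageBuffer())
            {
                // Run your 2D detector here, then skip rendering virtual
                // content at the detected object's position.
                DetectObject(frame);
            }
            // Xamarin gotcha: dispose the buffer or the capture session stalls.
            sampleBuffer.Dispose();
        }

        void DetectObject(CVImageBuffer frame)
        {
            // Hypothetical: feed the frame to Core ML / OpenCV / your detector.
        }
    }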

Inverse zoom in/out (scaling) of AR 3D model using ARToolkit for Unity5

I have done an AR project using ARToolkit for Unity. It works fine, but the problem I'm trying to solve here is to invert the scaling of the 3D model. Right now, when you take the camera further away from the marker, the 3D object gets smaller (zooms out), and if I bring the camera closer, the 3D model gets bigger. What I want is the opposite of this behaviour.
Any ideas on how to go about it?
I think this is a bad idea because it completely breaks the concept of AR, which is that the 3D objects are related to the real world, but it is definitely possible.
ARToolkit provides a transformation matrix for each marker. That matrix includes position and rotation, and on each iteration the object is updated with those values. What you need to do is find where that matrix is applied to the object, then measure the distance to the camera and update the translation so the object sits at the distance you want.
That code is in the Unity plugin, so it should be easy to find.
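Alternatively, a minimal sketch of one way to get the inverse behaviour without editing the plugin, by scaling the tracked object with camera distance after the tracker runs (referenceDistance is a hypothetical tuning value, not an ARToolkit parameter):

    using UnityEngine;

    public class InverseScale : MonoBehaviour
    {
        public Camera arCamera;               // the ARToolkit-driven camera
        public float referenceDistance = 1f;  // distance at which scale == authored scale
        Vector3 baseScale;

        void Start()
        {
            baseScale = transform.localScale;
        }

        void LateUpdate()
        {
            // Runs after the tracker has updated the marker transform.
            float k = Vector3.Distance(arCamera.transform.position,
                                       transform.position) / referenceDistance;

            // Apparent on-screen size ~ scale / distance, so scale ~ d keeps
            // the on-screen size constant and scale ~ d^2 inverts the zoom.
            transform.localScale = baseScale * k * k;
        }
    }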

Set a transparent layer on a 3D model, draw on the transparent layer, and project it onto the surface

I have done lots of research on projection mapping but did not find any solution for what I want to do: map onto a real 3D object with a projector, driven from an iPad or desktop.
I want to draw on a real 3D object in real time from an iPad application. I have a 3D model of that object, and the iPad is connected to the projector. If I draw a line on the 3D model in real time, it should be displayed exactly the same on the real 3D object by the projector, so it looks like I am drawing on the real object.
I want to do something like this:
http://www.youtube.com/watch?v=RoeDacxjtA4
Thanks,
It obviously needs extensive research, but I have an idea in mind.
Match the real model with a 3D model in Unity.
Your virtual camera needs to be positioned exactly the same way as your real-life projector, and its FOV needs to match the projector's.
The virtual camera should see only the overlay layer, not the 3D object itself, so it gives you only the paint strokes, which you can project directly onto the real object (see the sketch after the link below).
Regarding painting on the 3D model, there are a number of ways to do it; there are probably some useful assets in the Asset Store you can buy.
That looks interesting
http://forum.unity3d.com/threads/digital-paint-free-mesh-painting-in-unity-download.95535/
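A minimal Unity C# sketch of the camera setup described above: a virtual camera matching the real projector's pose and FOV that renders only an "Overlay" layer holding the painted strokes. The layer name and the FOV value are assumptions you would calibrate for your projector:

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class ProjectorCamera : MonoBehaviour
    {
        public float projectorFovDegrees = 30f; // measure/calibrate your projector

        void Start()
        {
            Camera cam = GetComponent<Camera>();
            cam.fieldOfView = projectorFovDegrees;          // match the projector optics
            cam.cullingMask = LayerMask.GetMask("Overlay"); // strokes only, no 3D model
            cam.clearFlags = CameraClearFlags.SolidColor;
            cam.backgroundColor = Color.black;              // unpainted areas project no light
        }
    }

The GameObject carrying this camera would be placed at the measured position and orientation of the physical projector relative to the real object.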
