ARKit Perspective Rendering - iOS

I have run into a perspective problem with ARKit in one of my projects.
Let me explain: in the photo below, I display a 3D model along the lane, with small 3D boxes every 2 meters. I then place indicators at each of these boxes, but as you can see in the photo, the 3D models shift and no longer appear to be in the right place once you move away from the boxes.
I would like to know whether this comes from ARKit itself or simply from the way I display the model, knowing that the 3D model generated in Blender is flat (horizontal).
My guess is that it comes from the detection of the marker used to anchor the 3D model: the transformation matrix must be slightly tilted, which distorts the perspective on long 3D models.
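To make the suspicion concrete: even a 0.5° tilt in the anchor transform puts a box 40 m down the lane roughly 35 cm off the ground (tan 0.5° ≈ 0.0087, times 40 m ≈ 0.35 m). Below is a minimal sketch of how such a tilt could be measured and removed, written as plain C# matrix math with System.Numerics rather than the actual ARKit API (the names are illustrative only):

```csharp
using System;
using System.Numerics;

static class AnchorTiltCheck
{
    // Tilt (in degrees) between the anchor's up axis and world up.
    // Even a fraction of a degree is visible on a model tens of metres long.
    public static float TiltDegrees(Matrix4x4 anchorTransform)
    {
        // In System.Numerics' row-vector convention, the second row is the up axis.
        var up = Vector3.Normalize(new Vector3(anchorTransform.M21,
                                               anchorTransform.M22,
                                               anchorTransform.M23));
        float cos = Vector3.Dot(up, Vector3.UnitY);
        cos = Math.Min(1f, Math.Max(-1f, cos));
        return (float)(Math.Acos(cos) * 180.0 / Math.PI);
    }

    // Rebuilds the transform keeping only the translation and the heading (yaw),
    // so a long horizontal model stays level regardless of marker-detection noise.
    public static Matrix4x4 Levelled(Matrix4x4 anchorTransform)
    {
        var forward = new Vector3(anchorTransform.M31, anchorTransform.M32, anchorTransform.M33);
        float yaw = (float)Math.Atan2(forward.X, forward.Z);   // heading around world Y
        var levelled = Matrix4x4.CreateRotationY(yaw);
        levelled.Translation = anchorTransform.Translation;
        return levelled;
    }
}
```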
Thank you

Related

Inverse zoom in/out (scaling) of AR 3D model using ARToolkit for Unity5

I have done an AR project using ARToolkit for Unity. It works fine, but the problem I'm trying to solve here is how to invert the scaling on the 3D model. Right now, when you move the camera further away from the marker, the 3D object gets smaller (zooms out), and if I bring the camera closer, the 3D model gets bigger. What I want is the opposite of this behaviour.
Any ideas on how I go about it?
I think this is a bad idea because it completely breaks the concept of AR, which is that the 3D objects are tied to the real world, but it is definitely possible.
ARToolkit provides a transformation matrix for each marker. That matrix includes position and rotation, and on each iteration the object is updated with those values. What you need to do is find where that matrix is applied to the object, measure the distance to the camera, and update the translation so the object sits at the distance you want.
That code is in the Unity plugin, so it should be easy to find.
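A minimal sketch of that idea as a Unity script, assuming it is attached to the marker-tracked object and that arCamera is the ARToolkit tracking camera (the field names and the reference distance are placeholders, not ARToolkit API):

```csharp
using UnityEngine;

// Attach to the marker-tracked object. Runs after the tracker has written the
// marker pose and re-positions the object along the camera->object ray so that
// moving the camera away brings the object closer, and vice versa.
public class InverseDistance : MonoBehaviour
{
    public Camera arCamera;                 // the camera driven by ARToolkit
    public float referenceDistance = 0.5f;  // distance (metres) at which nothing changes

    void LateUpdate()
    {
        Vector3 camPos = arCamera.transform.position;
        Vector3 toObject = transform.position - camPos;
        float trackedDistance = toObject.magnitude;
        if (trackedDistance < 1e-4f) return;

        // Invert the distance around the reference: far marker -> near object.
        float invertedDistance = referenceDistance * referenceDistance / trackedDistance;
        transform.position = camPos + toObject.normalized * invertedDistance;
    }
}
```

The object then grows on screen as the real camera moves away from the marker, which is the inverted behaviour asked for.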

Set a transparent layer on the 3D model, draw something on the transparent layer and project it onto the surface

I have done a lot of research on projection mapping but have not found a solution for what I want to do: map onto a real 3D object with a projector, driven from an iPad or a desktop.
I want to draw on the real 3D object in real time from an iPad application. I have a 3D model of that object and the iPad is connected to a projector; when I draw a line on the 3D model, the projector should display it on the physical object in real time, so it looks like I am drawing on the real object.
I want to do something like...
http://www.youtube.com/watch?v=RoeDacxjtA4
Thanks,
It obviously needs extensive research, but I have an idea in mind.
Match the real object with a 3D model in Unity.
Your virtual camera needs to be positioned exactly the same way your real-life projector is positioned, and its FOV needs to match your projector's.
The virtual camera should see only the overlay layer, not the 3D object itself, so it gives you just the image with the paint, which you can project directly onto the real object.
Regarding painting on the 3D model, there are a number of ways to do it. There are probably some useful assets in the Asset Store you can buy.
That looks interesting
http://forum.unity3d.com/threads/digital-paint-free-mesh-painting-in-unity-download.95535/
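As a rough illustration of the camera setup described above, here is a minimal Unity sketch; the layer name "PaintOverlay" and the FOV value are placeholders that have to match your own scene and projector:

```csharp
using UnityEngine;

// Camera that stands in for the physical projector: same pose, same FOV,
// and it renders only the painted overlay, not the 3D stand-in model.
[RequireComponent(typeof(Camera))]
public class ProjectorCamera : MonoBehaviour
{
    void Start()
    {
        var cam = GetComponent<Camera>();

        cam.fieldOfView = 30f;                        // placeholder: measure your projector's vertical FOV
        cam.clearFlags = CameraClearFlags.SolidColor;
        cam.backgroundColor = Color.black;            // black projects as "no light"

        // Only the overlay layer holding the painted strokes is rendered,
        // so the output can be sent straight to the projector.
        cam.cullingMask = LayerMask.GetMask("PaintOverlay");
    }
}
```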

How to deduce any view of a 3D object knowing all its 2D views (top, bottom, front, back...)

Well, the title is quite self-explanatory. I'm asking for a way to calculate any view of a 3D object knowing its rotation and all 6 views (projected on a cube: top, bottom, front, back...). Is it even possible?
(answer to the first comment)
I'm asking about a way to create an arbitrary 2D projection of the 3D object from a number of 2D views, without having to reconstruct the 3D object first and then project it into 2D.
No, it is not possible. Even if you had many more 2D views than in your case, it is not possible in general.
The underlying problem is known in the literature as shape from silhouette or visual hull. This is the problem of finding 3D shape from multiple 2D projections, and knowing the 3D shape is a prerequisite for what you want to know (a new 2D projection).
If you google for the two concepts you will find plenty of interesting algorithms.
The quality of the approximation of 3D shape from 2D projections depends on the geometry of the original 3D shape, on the number of projections available and on the placement of the cameras that generate these projections, so success is highly dependent on your individual problem. Six views, however, are almost certainly insufficient unless you have a very specific type of 3D shape.
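For illustration, here is a minimal voxel-carving sketch of the visual hull idea; the View record with its silhouette bitmap and projection callback is an assumption for this sketch, not an existing API:

```csharp
using System;
using System.Collections.Generic;
using System.Numerics;

// One input view: a binary silhouette plus a projection from world space
// into that silhouette's pixel coordinates (both assumed given).
public record View(bool[,] Silhouette, Func<Vector3, (int X, int Y)> Project);

public static class VisualHull
{
    // Keeps a voxel only if it projects inside the silhouette of every view.
    // The surviving voxels approximate the 3D shape (the visual hull), which
    // is what you would then re-project to synthesise a new 2D view.
    public static List<Vector3> Carve(IEnumerable<Vector3> voxelCenters, IReadOnlyList<View> views)
    {
        var kept = new List<Vector3>();
        foreach (var voxel in voxelCenters)
        {
            bool insideAll = true;
            foreach (var view in views)
            {
                var (x, y) = view.Project(voxel);
                if (x < 0 || y < 0 ||
                    x >= view.Silhouette.GetLength(0) ||
                    y >= view.Silhouette.GetLength(1) ||
                    !view.Silhouette[x, y])
                {
                    insideAll = false;
                    break;
                }
            }
            if (insideAll) kept.Add(voxel);
        }
        return kept;
    }
}
```

With only six axis-aligned silhouettes this carving cannot recover concavities, which is exactly why the reconstruction, and hence any new view derived from it, is generally unreliable.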

How to map 3D models (clothes) to Kinect joints using C#

We are doing a virtual dressing room project and we have been able to map .png image files onto body joints. But we need to map 3D clothes to the body, and the final output should be a person (a real person, not an avatar) wearing a 3D cloth on a live video output. We don't know how to do this. Any help is much appreciated.
Thanks in advance.
Answer for user3588017: it's too lengthy, but I'll try to explain how to get your project done. First play with XNA and its basic tutorials if you are totally new to 3D gaming. In my project I only focused on bending the hands down. For this you need to create a 3D model in Blender and export it for XNA. I used this link as a start, from the very beginning. This is actually the hardest part; after it you know how to animate your model using maths. Then you need to map the Kinect data as (x, y, z) to the parts of your model.
For example, map the model's rotation to the rotation of the Kinect skeleton. For this I used simple calculations, like measuring the depth of the two shoulders, calculating the rotation angle from that and applying it to my cloth model.
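A minimal sketch of that shoulder calculation, assuming the two shoulder joint positions have already been read from the Kinect skeleton in camera space (metres); the class and method names are placeholders, not the Kinect SDK API:

```csharp
using System;
using System.Numerics;

public static class TorsoRotation
{
    // Estimates how far the torso is rotated around the vertical axis by
    // comparing the depth (Z) of the two shoulders, as described above.
    public static float YawDegrees(Vector3 leftShoulder, Vector3 rightShoulder)
    {
        float depthDifference = rightShoulder.Z - leftShoulder.Z; // one shoulder further from the sensor
        float shoulderSpan    = rightShoulder.X - leftShoulder.X; // horizontal distance between shoulders

        // Angle of the shoulder line relative to the sensor plane.
        float yawRadians = (float)Math.Atan2(depthDifference, shoulderSpan);
        return yawRadians * 180f / (float)Math.PI;
    }
}
```

The resulting angle is then applied to the cloth model's rotation every frame.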

Voxel Animation

I have been able to convert a 3D mesh from Maya into voxel art (it looks like a bunch of cubes, similar to Legos), all done in Maya. I plan on using the 3D art to wrap around my 2D textures to make it 2.5D. My question is: does the mesh being voxelized allow me to use the pieces as particles that I can put into a particle engine in XNA to get awesome dynamic effects?
No, because what you get is a set of vertices and indices defining triangles, with no information about the cubes.
But you can create an algorithm that extracts that information from the model. It's a bit hard, but it's feasible.
I'd do it by creating a 3D grid and, for each face of the grid, launching rays from that face towards the opposite face, recording every collision with the mesh. Each ray should get an even number of collisions (0, 2, 4, ...), and the space between each pair of collision points is solid volume.
That way the mesh can be converted to voxels. On each collision it would also be useful to store the bones related to the triangle that was hit; this way you would be able to animate the voxel model.
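A minimal sketch of that ray-casting fill; the intersect callback standing in for the ray/mesh test is hypothetical (XNA has no built-in mesh intersection, so it would have to be written with a ray/triangle routine), and the bone bookkeeping is left out:

```csharp
using System;
using System.Collections.Generic;
using Microsoft.Xna.Framework;

public static class Voxelizer
{
    // Marks the solid cells of an n x n x n grid by shooting one ray per (y, z)
    // column along +X and filling between each pair of hits (rays return an even
    // number of hits, and the mesh is solid between hits 0-1, 2-3, ...).
    public static bool[,,] Voxelize(int n, Vector3 gridMin, float cellSize,
                                    Func<Vector3, Vector3, IList<float>> intersect)
    {
        var solid = new bool[n, n, n];
        for (int y = 0; y < n; y++)
        for (int z = 0; z < n; z++)
        {
            var origin = gridMin + new Vector3(0f, (y + 0.5f) * cellSize, (z + 0.5f) * cellSize);
            IList<float> hits = intersect(origin, Vector3.UnitX); // sorted hit distances along the ray

            // Fill the cells between each entry/exit pair.
            for (int i = 0; i + 1 < hits.Count; i += 2)
            {
                int first = (int)(hits[i] / cellSize);
                int last  = (int)(hits[i + 1] / cellSize);
                for (int x = Math.Max(first, 0); x <= Math.Min(last, n - 1); x++)
                    solid[x, y, z] = true;
            }
        }
        return solid;
    }
}
```

Storing, alongside each hit distance, the bones of the triangle that was hit would give every voxel a bone assignment and make the voxel model animatable, as suggested above.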
