Rendering 3D reconstructed models in ARCore

My objective is to create a 3D model of a real-life object and render it in an ARCore app. I used openMVG to reconstruct the model from a series of images. The model looks almost perfect when viewed in MeshLab, but when I try to render it in an ARCore app, the model comes out completely garbled, with random lines protruding from various places and almost no smoothness in the mesh.
If someone has faced a similar issue, please help point out what could be wrong. I basically reused the OBJ file reader and renderer code from the ARCore sample app.

Related

ARKit and Unity - How can I detect when a real-world object seen by the camera hits the AR object?

Suppose someone in real life waved their hand and hit the 3D object in AR. How would I detect that? I basically want to know when something crosses over the AR object so I can tell that something "hit" it and react.
Another example would be placing a virtual bottle on the table, then waving your hand in the air where the bottle is so that it gets knocked over.
Can this be done? If so, how? I would prefer Unity help, but if this can only be done via Xcode and ARKit natively, I would be open to that as well.
ARKit does solve a ton of issues with AR and makes them a breeze to work with. Your issue just isn't one of them.
As @Draco18s notes (and emphasizes well with the xkcd link 👍), you've perhaps unwittingly stepped into the domain of hairy computer vision problems. You have some building blocks to work with, though: ARKit provides pixel buffers for each video frame, and the projection matrix you need to work out what portion of the 2D image is overlaid by your virtual water bottle.
Deciding when to knock over the water bottle is then a problem of analyzing frame-to-frame differences over time in that region of the image. (And tracking that region's movement relative to the whole camera image, since the user probably isn't holding the device perfectly still.) The amount of analysis required varies depending on the sophistication of the effect you want... a simple pixel diff might work (for some value of "work"), or there might be existing machine learning models that you could put together with Vision and Core ML...
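To make the pixel-diff idea concrete, here is a minimal Unity C# sketch (since you mentioned preferring Unity). It is a sketch under assumptions, not a ready implementation: `arCamera`, `targetRenderer`, `hitThreshold`, and the `GetCameraGray` helper are hypothetical placeholders, and you would fill `GetCameraGray` with an actual camera-image readback (for example via ARFoundation's CPU image API).

```csharp
using UnityEngine;

// Hedged sketch of the "simple pixel diff" approach described above.
// Assumptions (not from the original answer): arCamera, targetRenderer and
// GetCameraGray() are hypothetical placeholders you would wire up yourself.
public class VirtualObjectHitDetector : MonoBehaviour
{
    public Camera arCamera;           // camera tracking the real world
    public Renderer targetRenderer;   // renderer of the virtual bottle
    public float hitThreshold = 0.1f; // mean per-pixel difference treated as a "hit"

    private float[] previousCrop;

    void LateUpdate()
    {
        // Project the virtual object's bounds into screen space to find the
        // rough region of the camera image it covers.
        Bounds b = targetRenderer.bounds;
        Vector3 min = arCamera.WorldToScreenPoint(b.min);
        Vector3 max = arCamera.WorldToScreenPoint(b.max);
        Rect region = Rect.MinMaxRect(
            Mathf.Min(min.x, max.x), Mathf.Min(min.y, max.y),
            Mathf.Max(min.x, max.x), Mathf.Max(min.y, max.y));

        // Hypothetical helper: grayscale pixels of the *camera* image (without
        // the virtual object) inside 'region'.
        float[] crop = GetCameraGray(region);

        if (previousCrop != null && crop.Length > 0 && crop.Length == previousCrop.Length)
        {
            float diff = 0f;
            for (int i = 0; i < crop.Length; i++)
                diff += Mathf.Abs(crop[i] - previousCrop[i]);

            if (diff / crop.Length > hitThreshold)
                Debug.Log("Motion detected over the virtual object - treat as a hit.");
        }
        previousCrop = crop;
    }

    private float[] GetCameraGray(Rect region)
    {
        // Placeholder only: acquire the latest camera frame, crop it to
        // 'region' and convert to grayscale. Left empty in this sketch.
        return new float[0];
    }
}
```

Note that a raw diff like this also fires whenever the device itself moves, which is exactly why the answer mentions tracking the region relative to the whole camera image; compensating for global camera motion (or requiring the device to be roughly still) would be the next refinement.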
You should take a look at ManoMotion: https://www.manomotion.com/
They're working on this problem and are supposed to release a solution in the form of a library soon.

How to make a C program interact with a 3D model?

For a while now I have been really interested in image processing, especially in VFX. After watching several movies such as Rise of the Planet of the Apes, I have been trying to replace an actor's face with a 3D model without special feature points having to be marked on the face.
As you can see in the second picture, I can get the points on my face.
I wish to combine these point positions with a 3D model so that I can control the model with my facial expressions...
What can I do? 3ds Max and Maya do not have such a plug-in (or I was unable to find one); Unity would also be a good solution. I once tried OpenGL, and I could control the model's position; however, controlling the model's facial expression is more difficult...
Could anyone give me some suggestions or some papers to read?
Thanks a lot

Set a transparent layer on the 3D model, draw something on the transparent layer, and project it onto the surface

I have done lots of research on projection mapping but have not found a solution for what I want to do: map onto a real 3D object with a projector, driven from an iPad or desktop.
I want to draw on the real 3D object in real time from an iPad application. I have a 3D model of that object, and the iPad is connected to a projector. In real time, if I draw a line on the 3D model, it should be displayed as-is on the real 3D object by the projector. It should look like I am drawing on the real object.
I want to do something like...
http://www.youtube.com/watch?v=RoeDacxjtA4
Thanks,
It obviously needs extensive research, but I have an idea in mind.
Match the real object and a 3D model in Unity.
Your virtual camera needs to be positioned exactly the same way your real-life projector is positioned. Its FOV needs to be the same as your projector's.
The virtual camera should see only the overlay layer, not the 3D object itself, so it gives you only the picture with the paint, which you can project directly onto the real object (a minimal camera setup sketch is shown below).
Regarding painting on the 3D model, there are a number of ways you can do it. There are probably some useful assets in the Asset Store you can buy.
That looks interesting
http://forum.unity3d.com/threads/digital-paint-free-mesh-painting-in-unity-download.95535/
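To illustrate the camera setup described above, here is a minimal Unity C# sketch. The layer name "PaintOverlay" and the FOV value are assumptions; substitute your projector's measured field of view and whatever layer your overlay/paint geometry lives on.

```csharp
using UnityEngine;

// Sketch of the virtual-camera-as-projector setup described above.
// Assumptions: a layer called "PaintOverlay" holds the drawn strokes, and
// projectorVerticalFov is measured/derived from the real projector.
public class ProjectorCameraSetup : MonoBehaviour
{
    public Camera virtualCamera;
    public float projectorVerticalFov = 30f; // replace with your projector's actual FOV

    void Start()
    {
        // Match the real projector's field of view.
        virtualCamera.fieldOfView = projectorVerticalFov;

        // Render only the paint overlay layer, not the stand-in 3D model itself.
        virtualCamera.cullingMask = LayerMask.GetMask("PaintOverlay");

        // Black background so unpainted areas project no light.
        virtualCamera.clearFlags = CameraClearFlags.SolidColor;
        virtualCamera.backgroundColor = Color.black;
    }
}
```

Positioning and rotating this camera so it coincides with the physical projector (and matching the aspect ratio) is the part that needs careful measurement or calibration, as noted above.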

How to map 3D models (clothes) to kinect joints using C#

We are doing a virtual dressing room project and we were able to map .png image files onto body joints. But we need to map 3D clothes to the body, and the final output should be a person (a real person, not an avatar) wearing a 3D cloth in a live video output. We don't know how to do this. Any help is much appreciated.
Thanks in advance.
Answer for user3588017: it's too lengthy, but I'll try to explain how to get your project done. First play with the basic XNA tutorials if you are totally new to 3D gaming. In my project I only focused on bending the hands down. For this you need to create a 3D model in Blender and export it to .xna. I used this link as a start, from the very beginning. This is actually the hardest part; after it you will know how to animate your model using maths. Then you need to map Kinect data as (x, y, z) to the parts of your model.
Ex: map the model's rotation to the Kinect skeleton's rotation. For this I used simple calculations, like measuring the depth of the shoulders, calculating the rotation angle, and applying it to my cloth model.
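As an illustration of the kind of calculation meant here (a sketch under assumptions, not the original project's code), the rotation of the shoulders about the vertical axis can be estimated from the depth difference between the two shoulder joints. This assumes the Kinect for Windows SDK v1 skeleton types:

```csharp
using System;
using Microsoft.Kinect; // Kinect for Windows SDK v1.x (assumption)

// Estimate how far the shoulders are turned away from the camera by comparing
// the depth (Z) of the two shoulder joints; the angle can then be applied to
// the cloth model.
public static class ShoulderRotation
{
    public static float ShoulderYawDegrees(Skeleton skeleton)
    {
        SkeletonPoint left  = skeleton.Joints[JointType.ShoulderLeft].Position;
        SkeletonPoint right = skeleton.Joints[JointType.ShoulderRight].Position;

        // Depth difference versus horizontal distance between the shoulders.
        float dz = right.Z - left.Z;
        float dx = right.X - left.X;

        // Angle the shoulder line makes with the camera plane, in degrees.
        return (float)(Math.Atan2(dz, dx) * 180.0 / Math.PI);
    }
}
```

Each frame you would read this angle from the tracked skeleton and apply it to the corresponding bone or transform of the cloth model, which is the mapping step described above.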

How to render pictures of objects in 3D on mobile Safari?

I would like to take a picture of a basketball and render the picture in 3D on mobile Safari.
We need to photograph the object from different angles, but what other steps are needed to accomplish this? Are there any APIs to help with piecing together the different pictures to form a 3D image?
If it's not possible in mobile Safari, could I accomplish this with a native iPhone app?
Thanks!
Just to clarify a few things here:
After simply taking pictures of the basketball from different angles, you can't "render it in 3D".
When one says "render", it usually means computing the output image based on a 3D model (which is a mathematical representation of a 3-dimensional object) and/or some algorithm that "renders" the scene.
When you say "3D image", I take it you mean you want a 2D visual representation of the ball which you can manipulate in 3D space, like rotating it (you don't want a 3D image with actual depth that would be visible to the naked eye).
So from what you're saying, I think you want to allow the user to rotate the ball "in 3D". This can be done simply by loading the images that you have created and then changing the frame when the user drags his finger across the screen. This can be done in mobile Safari using touch events, but that's a different topic and you will probably have to ask a separate question about it.
If you want to actually create a 3D model (i.e. a model file, something that can be used by a 3D engine to render the object in 3D space) out of your pictures, you have to use software that does that (like 3DSOM). It is completely unrelated to web APIs, mobile Safari, iOS, etc. It doesn't really matter what you use to create your 3D model, provided it is supported by the 3D engine you want to use. You can then use libraries like three.js to render that object using WebGL or a software renderer (which is more likely to be supported by an iOS device).
