3D model is not rendering in ARCore Augmented Faces on iOS

I am following the ARCore Augmented Faces iOS SDK. The built-in fox_face.scn works fine for me.
Now we have created some 3D models in Blender and exported them in both .dae and .obj formats. In Xcode I converted these models to .scn, but when I try to render my .scn models, they do not render on the face.
The same .scn model works fine with ARKit but does not work with ARCore.

In case your model has any animation, check whether your 3D file follows the requirements listed here: https://developers.google.com/ar/develop/java/sceneform/animation/obtaining-assets

Rendering on iOS is done within an ARKit scene, not ARCore. ARCore Face Augmentation generates the 3D face assets, which are delivered to SceneKit to render with each frame callback.
I'm not sure exactly what you mean when you say the .scn model works fine with ARKit but not ARCore.
I have been successful in exporting from Blender to .dae and then converting to a SceneKit file in Xcode.
Having said that, I have been unsuccessful in cleanly exporting the default fox face and bones (and my geometry) from Blender directly into Xcode to recreate what the demo has by default.
Instead, I have had to copy/paste the 3D geometry from the imported/converted Blender scene into the original fox_face scene that comes with the project, ensuring all axes are correct.
To position the asset correctly relative to the original fox face, I had to write some code to move the model around in the world, along the lines of the sketch below.
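The adjustment amounted to nudging the imported node's transform. A rough sketch, where "faceNode", the node name, and all numeric values are placeholders to tune for your own asset:

    // Placeholder offsets/rotation/scale; tune by eye against the fox face.
    if let modelNode = faceNode.childNode(withName: "my_model", recursively: true) {
        modelNode.eulerAngles.x = -.pi / 2              // Blender is Z-up, SceneKit is Y-up
        modelNode.position = SCNVector3(0, 0.02, 0.05)  // meters, relative to the face
        modelNode.scale = SCNVector3(0.01, 0.01, 0.01)  // compensate for export unit scale
    }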
I hope that helps.
But I would be very interested if you find a way to export cleanly from Blender (including the default face, fox ears, etc.) directly as a whole scene, including your new geometry.

Related

Export of 3D models with animation to .dae format for use in Augmented Faces on iOS with ARCore?

A model created in 3ds Max with a parametric animation (rotation around an axis) was exported to .dae format, but when it is used in Augmented Faces on iOS, the animation does not work at all.
With what settings/animation features do you need to export models to .dae format in order for the animation to work?

How to make a 3D model from AVDepthData?

I'm interested in processing data from the TrueDepth camera. I need to obtain the data of a person's face, build a 3D model of the face, and save this model to an .obj file.
Since the 3D model needs to include the person's eyes and teeth, ARKit/SceneKit is not suitable, because ARKit/SceneKit does not fill these areas with data.
But with the help of the SceneKit.ModelIO library, I managed to export ARSCNView.scene (of type SCNScene) in the .obj format.
I tried to take this project as a basis:
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera
In this project, working with the TrueDepth camera is done using Metal, but if I'm not mistaken, an MTKView rendered using Metal is not a 3D model and cannot be exported as .obj.
Please tell me whether there is a way to export an MTKView to an SCNScene or directly to .obj.
If there is no such method, how can I make a 3D model from AVDepthData?
Thanks.
It's possible to make a 3D model from AVDepthData, but that probably isn't what you want. One depth buffer is just that — a 2D array of pixel distance-from-camera values. So the only "model" you're getting from that isn't very 3D; it's just a height map. That means you can't look at it from the side and see contours that you couldn't have seen from the front. (The "Using Depth Data" sample code attached to the WWDC 2017 talk on depth photography shows an example of this.)
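To make the height-map point concrete, here is a minimal sketch of reading per-pixel distances out of AVDepthData. It assumes Float32 depth data, and (x, y) is a placeholder pixel coordinate:

    import AVFoundation

    // Convert to a known pixel format, then read distances straight out of the
    // buffer: each pixel is just a distance-from-camera value in meters.
    let depthMap = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32).depthDataMap
    CVPixelBufferLockBaseAddress(depthMap, .readOnly)
    let rowBytes = CVPixelBufferGetBytesPerRow(depthMap)
    let base = CVPixelBufferGetBaseAddress(depthMap)!
    let (x, y) = (100, 100)   // placeholder pixel coordinate
    let distanceInMeters = base.advanced(by: y * rowBytes)
        .assumingMemoryBound(to: Float32.self)[x]
    CVPixelBufferUnlockBaseAddress(depthMap, .readOnly)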
If you want more of a truly-3D "model", akin to what ARKit offers, you need to be doing the work that ARKit does — using multiple color and depth frames over time, along with a machine learning system trained to understand human faces (and hardware optimized for running that system quickly). You might not find doing that yourself to be a viable option...
It is possible to get an exportable model out of ARKit using Model I/O. The outline of the code you'd need goes something like this (a full sketch follows the list):
1. Get ARFaceGeometry from a face tracking session.
2. Create MDLMeshBuffers from the face geometry's vertices, textureCoordinates, and triangleIndices arrays. (Apple notes that the texture coordinate and triangle index arrays never change, so you only need to create those once; the vertices you have to update every time you get a new frame.)
3. Create a MDLSubmesh from the index buffer, and a MDLMesh from the submesh plus the vertex and texture coordinate buffers. (Optionally, use MDLMesh functions to generate a vertex normals buffer after creating the mesh.)
4. Create an empty MDLAsset and add the mesh to it.
5. Export the MDLAsset to a URL (providing a URL with the .obj file extension so that it infers the format you want to export).
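Pulled together, a sketch of those five steps in Swift might look like the following. This is an untested outline based on the step list above; the two-buffer vertex layout and 16-bit indices are assumptions based on ARFaceGeometry's documented array types, and exportFaceModel is a hypothetical helper name:

    import ARKit
    import ModelIO

    // "faceGeometry" comes from an ARFaceAnchor in a face tracking session (step 1),
    // and "url" should end in ".obj" so Model I/O infers the export format (step 5).
    func exportFaceModel(_ faceGeometry: ARFaceGeometry, to url: URL) throws {
        let allocator = MDLMeshBufferDataAllocator()

        // Step 2: vertex positions change per frame; UVs and indices do not.
        let vertexData = Data(bytes: faceGeometry.vertices,
                              count: faceGeometry.vertices.count * MemoryLayout<simd_float3>.stride)
        let uvData = Data(bytes: faceGeometry.textureCoordinates,
                          count: faceGeometry.textureCoordinates.count * MemoryLayout<simd_float2>.stride)
        let indexData = Data(bytes: faceGeometry.triangleIndices,
                             count: faceGeometry.triangleIndices.count * MemoryLayout<Int16>.stride)
        let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)
        let uvBuffer = allocator.newBuffer(with: uvData, type: .vertex)
        let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

        // Step 3: submesh from the indices, mesh from submesh + vertex buffers.
        let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                                 indexCount: faceGeometry.triangleIndices.count,
                                 indexType: .uInt16,
                                 geometryType: .triangles,
                                 material: nil)
        let descriptor = MDLVertexDescriptor()
        descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                      format: .float3, offset: 0, bufferIndex: 0)
        descriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                      format: .float2, offset: 0, bufferIndex: 1)
        descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<simd_float3>.stride)
        descriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<simd_float2>.stride)
        let mesh = MDLMesh(vertexBuffers: [vertexBuffer, uvBuffer],
                           vertexCount: faceGeometry.vertices.count,
                           descriptor: descriptor,
                           submeshes: [submesh])
        // Optional: mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal,
        //                           creaseThreshold: 0.5)

        // Steps 4-5: wrap in an asset and export.
        let asset = MDLAsset()
        asset.add(mesh)
        try asset.export(to: url)
    }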
That sequence doesn't require SceneKit (or Metal, or any ability to display the mesh) at all, which might prove useful depending on your needs. If you do want to involve SceneKit and Metal, you can probably skip a few steps (sketched after the list):
1. Create ARSCNFaceGeometry on your Metal device and pass it an ARFaceGeometry from a face tracking session.
2. Use MDLMesh(scnGeometry:) to get a Model I/O representation of that geometry, then follow steps 4-5 above to export it to an .obj file.
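Assuming an ARFaceAnchor from the session and an output URL ending in ".obj" (both placeholders here), the shortcut might look like this sketch:

    import ARKit
    import SceneKit.ModelIO

    // Build SceneKit face geometry on the GPU and update it from the anchor,
    // then hand it to Model I/O for export (steps 4-5 of the outline above).
    guard let device = MTLCreateSystemDefaultDevice(),
          let scnFaceGeometry = ARSCNFaceGeometry(device: device) else { return }
    scnFaceGeometry.update(from: faceAnchor.geometry)

    let asset = MDLAsset()
    asset.add(MDLMesh(scnGeometry: scnFaceGeometry))
    try asset.export(to: outputURL)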
Any way you slice it, though... if it's a strong requirement to model eyes and teeth, none of the Apple-provided options will help you because none of them do that. So, some food for thought:
Consider whether that's a strong requirement?
Replicate all of Apple's work to do your own face-model inference from color + depth image sequences?
Cheat on eye modeling using spheres centered according to the leftEyeTransform/rightEyeTransform reported by ARKit?
Cheat on teeth modeling using a pre-made model of teeth, composed with the ARKit-provided face geometry for display? (Articulate your inner-jaw model with a single open-shut joint and use ARKit's blendShapes[.jawOpen] to animate it alongside the face.) A rough sketch of these last two cheats follows.
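In the sketch below, "faceNode" and "jawNode" are assumed nodes from your own scene setup, and the 12 mm eyeball radius is a guess to tune:

    // Eyes: spheres tracked by the eye transforms ARKit reports.
    let leftEye = SCNNode(geometry: SCNSphere(radius: 0.012))   // ~12 mm, assumed
    let rightEye = SCNNode(geometry: SCNSphere(radius: 0.012))
    faceNode.addChildNode(leftEye)
    faceNode.addChildNode(rightEye)

    // In your ARSCNViewDelegate's renderer(_:didUpdate:for:) callback:
    leftEye.simdTransform = faceAnchor.leftEyeTransform
    rightEye.simdTransform = faceAnchor.rightEyeTransform

    // Teeth: drive a single open-shut jaw joint from the jawOpen coefficient (0...1).
    if let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue {
        jawNode.eulerAngles.x = jawOpen * -.pi / 8   // placeholder joint range
    }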

Workflow between Blender and Xcode (SceneKit)

I'm using Blender to create a landscape for a game being built with SceneKit.
As it is just a landscape, I won't be using any animations, so before I dive too deep into Blender I'm wondering: should I use Blender to create the geometry and then create my own materials in SceneKit?
I could still create the shadow, emission, specular, etc. maps in Blender, but would there be a performance benefit or penalty for doing it this way?
Also, if this is a path I could take, should I then export as .dae, or is there a way to export to a normal map that Xcode would be happy with?
SceneKit supports materials exported in DAE from Blender. It doesn't support every possible shading option that Blender has, but unless you're doing exotic stuff it should cover most of what you're looking for.
At run time there's no difference between materials loaded from DAE and those created programmatically.
What you do want to think about at authoring/export time is stuff like real-time versus static lighting/shadows and high-poly geometry versus baked normal maps. In other words, material performance is more about how the materials are set up (complexity) than where they're set up (imported or at run time). See WWDC 2014 session 610: Building a Game with SceneKit for some tips.
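As an illustration of the run-time side, here is a minimal sketch of loading DAE-exported geometry and assigning Blender-baked maps programmatically; the file, node, and image names are all placeholders:

    // Load the Blender-exported scene and find the landscape node.
    let scene = SCNScene(named: "landscape.dae")!
    let terrain = scene.rootNode.childNode(withName: "terrain", recursively: true)!

    // Assign maps baked in Blender; this is equivalent to materials imported from the DAE.
    let material = SCNMaterial()
    material.diffuse.contents = UIImage(named: "terrain_diffuse")
    material.normal.contents = UIImage(named: "terrain_normal")
    material.specular.contents = UIImage(named: "terrain_specular")
    terrain.geometry?.firstMaterial = material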

How to create a simple 3D sphere for cocos3d using Blender and the PowerVR SDK

I am new to cocos3d. I want to create a simple project: a rotating 3D sphere. I have designed a 3D sphere using Blender, so I want help in creating the Collada file and the POD file. What should be taken care of while creating this simple 3D object using Blender and the PowerVR SDK? Thanks.
How about you make the simple sphere in Blender and then export it using Jeff LaMarche's Blender-to-iOS script? This wouldn't even require Cocos or PowerVR, but it's a good start. Since you can integrate Cocos with non-Cocos classes easily on iOS, it might be helpful. You could go further and leverage Apple's GLKit, which would probably be straightforward as well.
Just suggestions...
After you create the sphere in Blender, you need to export it in .dae format and then use the Collada-to-POD converter from the PowerVR SDK, which is free. It will convert the .dae file to a .pod file, and the .pod file can then easily be parsed by cocos3d.

Blender file to Xcode

We want to make a 3D game for the Apple iPad, and we are thinking about a way to import 3D models from Blender into Xcode.
Is there a way to do that?
We want to use OpenGL ES 2.0.
Xcode isn't a game engine or 3D SDK. You can't 'import' Blender files into Xcode. What you can do is take your 3D assets and use them within your apps, either directly through OpenGL (rather low level) or using a 3D engine such as Unity (easier).
Either way, there are a number of questions already on Stack Overflow that you might find useful:
Opengl ES how to import a 3D model and map textures to it on runtime
Iphone + OpenGL ES + Blender Model: Rotation by Touch
Choosing 3D Engine for iOS in C++
...I highly recommend you take a look at what options are out there, decide on the best way to implement your 3D game (be it raw OpenGL or an engine), and go from there.
