I want to render Voxel/Octree objects (basically cubes of different sizes) in iOS; so far, I have been looking at SceneKit to perform the visualization. The goal is to render 10,000 to 15,000 voxels in a scene, like rendering an OctoMap for robotics applications (such as: https://www.youtube.com/watch?v=yKNzTg25RM8).
In the SceneKit framework we render these cube objects as an SCNNode with an SCNBox geometry in it. The performance degrades significantly when we use a lot of geometry objects, so we are unsure whether this is the best approach. We also discovered MDLVoxelArray for this purpose but have not used it yet, and I have not been able to find proper documentation or references for including it in SceneKit. Can MDLVoxelArray be a solution for visualizing voxels properly?
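My rough, untested guess at how MDLVoxelArray might be bridged into SceneKit (via SceneKit.ModelIO) is something like the following, but I have not been able to confirm whether this is the intended usage; the divisions/patchRadius values and the modelURL/scene names are placeholders, and I have not measured how this performs at 10,000+ voxels:

import ModelIO
import SceneKit
import SceneKit.ModelIO

// Voxelize an existing Model I/O asset, turn the voxel grid back into a
// single mesh, and bridge that mesh into SceneKit as one node (one draw call).
let asset = MDLAsset(url: modelURL)                               // any source model
let voxels = MDLVoxelArray(asset: asset, divisions: 32, patchRadius: 0)

if let voxelMesh = voxels.mesh(using: MDLMeshBufferDataAllocator()) {
    let node = SCNNode(mdlObject: voxelMesh)                      // SceneKit.ModelIO bridge
    scene.rootNode.addChildNode(node)
}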
Is it possible to import a virtual lamp object into the AR scene that projects a light cone which illuminates the surrounding space in the room and the real objects in it, e.g. a table, the floor, the walls?
For ARKit, I found this SO post.
For ARCore, there is an example of relighting technique. And this source code.
I have also been suggested that post-processing can be used to brighten the whole scene.
However, these examples are from a while ago, and perhaps there is a newer or more straightforward solution to this problem?
At the low level, RealityKit is only responsible for rendering virtual objects and overlaying them on top of the camera frame.
If you want to illuminate the real scene, you need to post-process the camera frame.
Here are some tutorials on how to do post-processing:
Tutorial 1
Tutorial 2
If all you need is an effect like this, then all you need to do is add a CGImage-based post-processing effect to the virtual objects (the lights).
More specifically, add a bloom filter to the rendered image (you can also simulate a bloom filter with a Gaussian blur).
This way, the code revolves entirely around UIImage and CGImage, so it's pretty simple 😎
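Here is a rough sketch of that idea using Core Image directly on RealityKit's Metal textures (rather than round-tripping through UIImage/CGImage as the tutorials do). It assumes RealityKit 2's postProcess render callback (iOS 15+), an existing arView: ARView, and placeholder intensity/radius values:

import RealityKit
import CoreImage
import CoreImage.CIFilterBuiltins

let ciContext = CIContext()   // reuse one context; creating it per frame is expensive

arView.renderCallbacks.postProcess = { context in
    // Wrap the rendered frame (camera + virtual content) in a CIImage.
    guard let input = CIImage(mtlTexture: context.sourceColorTexture, options: nil) else { return }

    // Bloom makes the bright virtual lights bleed into their surroundings.
    let bloom = CIFilter.bloom()
    bloom.inputImage = input
    bloom.intensity = 0.8     // placeholder values
    bloom.radius = 10

    guard let output = bloom.outputImage?.cropped(to: input.extent) else { return }

    // Write the filtered image into the texture RealityKit will display.
    let destination = CIRenderDestination(mtlTexture: context.targetColorTexture,
                                          commandBuffer: context.commandBuffer)
    _ = try? ciContext.startTask(toRender: output, to: destination)
}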
If you want to be more realistic, consider using the depth map provided by LiDAR to calculate which areas can be illuminated, for more detailed brightness.
Or, if you're a true explorer, you can use Metal to create a real-world digital-twin point cloud in real time and simulate the occlusion of light.
As of 2021, there's nothing new in relighting techniques based on 3D compositing principles. At the moment, when you're working with RealityKit or SceneKit, you have to implement the relighting functionality yourself with the help of two additional render passes on top of the RGB pass (which is always needed): a Normals pass and a Point Position pass. Both AOVs must be 32-bit.
However, in the near future, when Apple engineers finally implement texture capturing in Scene Reconstruction, even an inexperienced AR developer will be able to apply a relighting procedure.
Watch this Vimeo Video to find out how relighting can be achieved in The Foundry NUKE.
A crucial point when implementing the relighting effect is the presence of a LiDAR scanner (or an iToF sensor if you're using ARCore). In other words, today's relighting solution for iOS is Metal + RealityKit.
I'm interested in processing data from the TrueDepth camera. I need to obtain a person's face data, build a 3D model of the face, and save this model to an .obj file.
Since the 3D model needs to include the person's eyes and teeth, ARKit / SceneKit is not suitable, because ARKit / SceneKit do not fill these areas with data.
But with the help of the SceneKit.ModelIO library, I managed to export ARSCNView.scene (of type SCNScene) in .obj format.
I tried to take this project as a basis:
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera
In this project, the TrueDepth camera is handled using Metal, but if I'm not mistaken, an MTKView rendered with Metal is not a 3D model and cannot be exported as an .obj file.
Please tell me: is there a way to export an MTKView to an SCNScene, or directly to .obj?
If there is no such method, how can I make a 3D model from AVDepthData?
Thanks.
It's possible to make a 3D model from AVDepthData, but that probably isn't what you want. One depth buffer is just that — a 2D array of pixel distance-from-camera values. So the only "model" you're getting from that isn't very 3D; it's just a height map. That means you can't look at it from the side and see contours that you couldn't have seen from the front. (The "Using Depth Data" sample code attached to the WWDC 2017 talk on depth photography shows an example of this.)
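To make that "height map" point concrete, here is a rough sketch (not taken from the sample code mentioned above) of reading those per-pixel distances out of an AVDepthData buffer; turning them into 3D points would additionally require the camera intrinsics:

import AVFoundation

func depthValues(from depthData: AVDepthData) -> [[Float]] {
    // Make sure we're looking at 32-bit float depth (not disparity).
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let buffer = converted.depthDataMap

    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
    guard let base = CVPixelBufferGetBaseAddress(buffer) else { return [] }

    // Each row is just distances from the camera, one Float per pixel.
    var rows: [[Float]] = []
    for y in 0..<height {
        let rowPtr = (base + y * rowBytes).assumingMemoryBound(to: Float32.self)
        rows.append(Array(UnsafeBufferPointer(start: rowPtr, count: width)))
    }
    return rows
}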
If you want more of a truly-3D "model", akin to what ARKit offers, you need to be doing the work that ARKit does — using multiple color and depth frames over time, along with a machine learning system trained to understand human faces (and hardware optimized for running that system quickly). You might not find doing that yourself to be a viable option...
It is possible to get an exportable model out of ARKit using Model I/O. The outline of the code you'd need goes something like this:
1. Get ARFaceGeometry from a face tracking session.
2. Create MDLMeshBuffers from the face geometry's vertices, textureCoordinates, and triangleIndices arrays. (Apple notes the texture coordinate and triangle index arrays never change, so you only need to create those once — vertices you have to update every time you get a new frame.)
3. Create a MDLSubmesh from the index buffer, and a MDLMesh from the submesh plus vertex and texture coordinate buffers. (Optionally, use MDLMesh functions to generate a vertex normals buffer after creating the mesh.)
4. Create an empty MDLAsset and add the mesh to it.
5. Export the MDLAsset to a URL (providing a URL with the .obj file extension so that it infers the format you want to export).
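A minimal sketch of steps 1-5, assuming faceGeometry came from an ARFaceAnchor in a running face-tracking session (treat the buffer-layout details as illustrative rather than definitive):

import ARKit
import ModelIO

func exportOBJ(from faceGeometry: ARFaceGeometry, to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()

    // Step 2: pack the face geometry's arrays into Model I/O buffers.
    let vertexData = Data(bytes: faceGeometry.vertices,
                          count: faceGeometry.vertices.count * MemoryLayout<vector_float3>.stride)
    let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)

    let uvData = Data(bytes: faceGeometry.textureCoordinates,
                      count: faceGeometry.textureCoordinates.count * MemoryLayout<vector_float2>.stride)
    let uvBuffer = allocator.newBuffer(with: uvData, type: .vertex)

    let indexData = Data(bytes: faceGeometry.triangleIndices,
                         count: faceGeometry.triangleIndices.count * MemoryLayout<Int16>.stride)
    let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

    // Describe the vertex layout: positions in buffer 0, texture coordinates in buffer 1.
    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3, offset: 0, bufferIndex: 0)
    descriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                  format: .float2, offset: 0, bufferIndex: 1)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<vector_float3>.stride)
    descriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<vector_float2>.stride)

    // Step 3: submesh from the indices, mesh from the submesh plus vertex buffers.
    let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                             indexCount: faceGeometry.triangleCount * 3,
                             indexType: .uInt16,
                             geometryType: .triangles,
                             material: nil)
    let mesh = MDLMesh(vertexBuffers: [vertexBuffer, uvBuffer],
                       vertexCount: faceGeometry.vertices.count,
                       descriptor: descriptor,
                       submeshes: [submesh])

    // Steps 4-5: wrap in an asset and export; the .obj extension picks the format.
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}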
That sequence doesn't require SceneKit (or Metal, or any ability to display the mesh) at all, which might prove useful depending on your need. If you do want to involve SceneKit and Metal you can probably skip a few steps:
Create ARSCNFaceGeometry on your Metal device and pass it an ARFaceGeometry from a face tracking session.
Use MDLMesh(scnGeometry:) to get a Model I/O representation of that geometry, then follow steps 4-5 above to export it to an .obj file.
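A rough sketch of that shortcut, assuming a Metal-capable device and a faceAnchor from the same face-tracking session (again, the names here are illustrative):

import ARKit
import Metal
import ModelIO
import SceneKit.ModelIO

func exportOBJViaSceneKit(from faceAnchor: ARFaceAnchor, device: MTLDevice, to url: URL) throws {
    // Build a SceneKit face geometry and update it from the current anchor.
    guard let scnGeometry = ARSCNFaceGeometry(device: device) else { return }
    scnGeometry.update(from: faceAnchor.geometry)

    // Bridge to Model I/O, then export as in steps 4-5 above.
    let mesh = MDLMesh(scnGeometry: scnGeometry)
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}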
Any way you slice it, though... if it's a strong requirement to model eyes and teeth, none of the Apple-provided options will help you because none of them do that. So, some food for thought:
Consider whether that's a strong requirement?
Replicate all of Apple's work to do your own face-model inference from color + depth image sequences?
Cheat on eye modeling using spheres centered according to the leftEyeTransform/rightEyeTransform reported by ARKit? (See the sketch after this list.)
Cheat on teeth modeling using a pre-made model of teeth, composed with the ARKit-provided face geometry for display? (Articulate your inner-jaw model with a single open-shut joint and use ARKit's blendShapes[.jawOpen] to animate it alongside the face.)
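For the eye cheat, a rough illustration, assuming an ARSCNViewDelegate in a face-tracking session (the sphere radius is a guess):

import ARKit
import SceneKit

class FaceEyesDelegate: NSObject, ARSCNViewDelegate {
    // Two plain spheres standing in for eyeballs.
    private let leftEyeNode = SCNNode(geometry: SCNSphere(radius: 0.012))
    private let rightEyeNode = SCNNode(geometry: SCNSphere(radius: 0.012))

    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let faceAnchor = anchor as? ARFaceAnchor else { return }
        if leftEyeNode.parent == nil {
            // `node` is the node ARKit manages for the face anchor.
            node.addChildNode(leftEyeNode)
            node.addChildNode(rightEyeNode)
        }
        // Position the spheres with the eye transforms ARKit reports
        // (expressed relative to the face anchor).
        leftEyeNode.simdTransform = faceAnchor.leftEyeTransform
        rightEyeNode.simdTransform = faceAnchor.rightEyeTransform
    }
}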
I'm currently testing the feasibility of using SceneKit for a game I would like to make, but I'm having trouble figuring out how best to create a grid of many similar geometrical objects in SceneKit while maintaining acceptable performance. Here is what I would like to do:
Place hundreds of a geometric primitive (just an ordinary cube for now) in a grid.
Apply various vertex and/or surface and/or fragment shaders to each cube. Some cubes may have the same shaders applied to them, but in practice I would like to have at least tens of different shaders that I can apply to the cubes.
I would like to be able to zoom the camera out and view all of the cubes simultaneously while maintaining a smooth framerate.
I'm beginning with a grid of 25 cubes by 25 cubes. I have achieved good performance by building the grid in a loop using a single SCNBox object for the geometry, and setting the shaderModifiers property of the firstMaterial property. I add all 625 cubes to a node, then add the flattenedClone() of this node to my scene's root node. This means the entire scene can be rendered with just 1 draw call.
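In code, my working single-shader version looks roughly like this (a sketch of the approach described above; the grid spacing, shader snippet, and gridSize value are just for illustration):

import SceneKit

let scene = SCNScene()   // or your SCNView's existing scene
let gridSize = 25

// One shared SCNBox: every cube reuses this geometry and its material.
let box = SCNBox(width: 1, height: 1, length: 1, chamferRadius: 0)
box.firstMaterial?.shaderModifiers = [
    .fragment: "_output.color.rgb *= float3(0.9, 0.5, 0.2);"   // placeholder shader modifier
]

let gridNode = SCNNode()
for x in 0..<gridSize {
    for z in 0..<gridSize {
        let cube = SCNNode(geometry: box)   // shared geometry, shared shaders
        cube.position = SCNVector3(x: Float(x) * 1.2, y: 0, z: Float(z) * 1.2)
        gridNode.addChildNode(cube)
    }
}

// Flatten all 625 cubes so the whole grid renders in a single draw call.
scene.rootNode.addChildNode(gridNode.flattenedClone())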
The problem with reusing the geometry, however, is that all cubes must use the same shaders. But if I create a new SCNBox for each cube (so that I can set the shaders individually for each cube), then I end up with a draw call being made for each cube, which is inefficient, and performance suffers quickly as more and more cubes are added to the scene. And if the geometry is anything more complex than a cube, performance degrades severely.
Of course I could optimize and have any cubes that DO use the same shaders share the same geometry, add them to their own node and add the flattenedClone() of that node to the scene. But is there more that I can do to optimize in this case? Or am I better off looking for an alternative to SceneKit entirely?
I am looking for a tool to get started with simple 3D objects for SceneKit. I know there are a lot of professional tools out there, but blindly buying a program just to try out how things work would be wasted money.
How do I create 3D content for a SceneKit scene, beyond using the built-in geometry and geometry-creation tools within SceneKit?
Short answer: Blender.
Long answer: Blender, because it's free and as good as any other 3D app (they are unfortunately all quite bad), but take this to the Apple Developer forums if you'd like more info, as this question is not appropriate for Stack Overflow. The way you're thinking about this is also not appropriate; no app is a "COLLADA editor". That would be like saying Photoshop is a JPEG editor.
The Unity forum is also a great place to hear a bunch of opinions; unlike any of Apple's tools other than Logic or perhaps Final Cut, Unity has a wide-ranging user base of technically minded artists from whom to gather them.
Use Polygon Modelling.
COLLADA content is created in 3D modelling programs. There are three main ways of creating 3D content: polygon modelling, clay/brush style push/pull modelling of highly complex meshes, and what's most commonly known as NURBS modelling.
Polygon modelling involves creating shapes by articulating and adding primitive pieces of geometry to one another, progressively building up complex shapes in this manner. It has several massive advantages for game-engine content creation that have seen it become (far and away) the most popular way of making content for game engines.
Namely:
Strict and absolute control of the amount and type of geometry (performance)
Strict and absolute seam and inter-object connectivity control (animation)
Absolute texture control and material/shader control via geometry and bitmaps with 1:1 UVW mapping (ideal for expressing textures onto geometry with great performance)
Well designed and commonly used smoothing and tessellation algorithms nondestructive to, and considerate of, the above points
Given these massive benefits for game content creation, it's imperative that you learn polygon modelling first, for any and all game content creation. Nearly all polygon modellers output a format compatible with COLLADA.
Understanding Polygon modelling will also give you understanding of some of how Geometry Shaders work and what they act on. The addition of Geometry Shaders is a relatively new feature of Scene Kit that provides vastly more interesting ways to manipulate geometry than the basic geometry creation tools provided within Scene Kit.
I have just delved into the world of Metal, and I thought that I'd got the hang of it! But then it occurred to me that if I wanted to make a game, then static objects moving around a screen wouldn't suffice. So my question is, 'Is it possible to create animations for models with Metal?'
I have looked at using other APIs, such as SpriteKit and SceneKit, but I found that they do not support shaders and are not as powerful as Metal.
The only way I can think of going about this is by creating 60 different models and then loading them one after the other to give a 'stop-motion' kind of effect, but I think this would probably be incredibly inefficient, and I was hoping there was an easier answer?
Thanks a lot!
Yes, there are other, more efficient ways to do animation. But before getting into that, a warning: it really looks like you're barking up the wrong tree here.
Metal is a (conceptually) very low-level interface. You use Metal to talk (almost) directly to the GPU, so to work with it you need to think (sort of) like a GPU: in terms of data buffers, vertex transformations, etc. You seem to be working at a much higher conceptual level, so you're probably better served by one of the high-level game engines: SpriteKit for 2D or SceneKit for 3D. (Or a third party engine like Cocos or Unity.) Metal, on the other hand, is better suited for building those game engines.
SpriteKit and SceneKit do support shaders. Look at SKShader and SCNShadable in the docs (and be sure to click the "More" links to read the full overviews). SceneKit also supports character animations (aka skeletal animation aka skinning): typically one designs and rigs a model for animation in an external authoring tool (Maya, Blender, etc), then uses SceneKit to work with the animations at run time.
It is possible to do things like GPU-based skeletal animation in Metal. But I haven't seen any tutorials or similar written about it yet, probably because Metal is such a new technology. Fundamentally, though, it'd be based on the same sorts of techniques you'd use for skeletal animation in OpenGL or Direct3D — and much has been written about animation for those technologies. If you're willing to invest the time and energy to work at a low level, adapting the subject matter from GL/D3D tutorials is relatively easy.
You can do skeletal animation in Metal; SceneKit would be using the GPU to deform the mesh as well. But to do it in Metal you would need to pass the skin weights, along with bone matrices for the bind pose and the transformations of the bones as they animate, then calculate the new vertex positions based on these. In fact, I think you need the inverse of the bind pose matrices. Each mesh vertex is then transformed by a weighted sum of transformations dictated by the skin weights.
I tried it but screwed it up somehow; it didn't deform properly. I don't know whether I'd obtained the wrong matrices from my custom script for grabbing animation data from Blender, or whether there was a bug in my shader maths or in the weights.
It was probably close, but with all the possible things that I may have got wrong in the process it was difficult to fix, so I abandoned it in the end.
It's probably easier to stick with SceneKit and let Apple take care of the rest, or use an existing game engine such as Unity.
Then again, if you want a challenge, I'm sure it's possible, just a little tricky. You could try it on the CPU first to make sure the maths is OK, then port it to the GPU to make it faster.
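For the CPU version, a rough sketch of the maths (linear blend skinning, with made-up types and assuming up to four bone influences per vertex) would be something like:

import simd

struct SkinnedVertex {
    var position: SIMD3<Float>
    var boneIndices: SIMD4<Int32>   // which bones influence this vertex
    var boneWeights: SIMD4<Float>   // how strongly (should sum to 1)
}

func skin(vertex: SkinnedVertex,
          bonePoses: [simd_float4x4],            // bone transforms for the current frame
          inverseBindTransforms: [simd_float4x4]) -> SIMD3<Float> {
    let p = SIMD4<Float>(vertex.position, 1)
    var result = SIMD4<Float>(repeating: 0)
    for i in 0..<4 {
        let bone = Int(vertex.boneIndices[i])
        // Move the vertex out of bind pose and into the bone's current pose,
        // weighted by how much this bone influences the vertex.
        let skinMatrix = bonePoses[bone] * inverseBindTransforms[bone]
        result += vertex.boneWeights[i] * (skinMatrix * p)
    }
    return SIMD3<Float>(result.x, result.y, result.z)
}

Once that produces the deformed positions you expect, the same per-vertex loop ports fairly directly to a Metal vertex shader.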
SceneKit does support shaders. And the object that manages the relationship between skeletal animations and the nodes and geometries they animate is SCNSkinner from SceneKit.
Typically, you create a skinned model in, for example, Autodesk Maya and save it, along with the animations that use its skeleton, in a scene file. You then load the model from the scene file and pose or animate it in your app, either by using animation objects also loaded from the scene file or by directly manipulating the nodes in the skeleton. That's it.
Watch this 7-part video series about Blender's skeletal system and how to use it in SceneKit.
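A minimal sketch of that workflow, with made-up names (assume "character.scn" contains a skinned model whose root node is named "character" and which carries a baked animation under the key "walk"):

import SceneKit

let scene = SCNScene(named: "character.scn")!
let characterNode = scene.rootNode.childNode(withName: "character", recursively: true)!

// The importer built the SCNSkinner from the file's skinning data.
// You normally don't touch it directly; just animate the bone nodes.
print(characterNode.skinner as Any)

// Play an animation that was loaded with the node.
if let player = characterNode.animationPlayer(forKey: "walk") {
    player.play()
}

For reference, here is SCNSkinner's convenience initializer (only needed if you build a skinner manually):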
convenience init(baseGeometry: SCNGeometry?,            // the geometry to deform (the character mesh)
                 bones: [SCNNode],                      // the bone nodes that form the skeleton
                 boneInverseBindTransforms: [NSValue]?, // inverse bind-pose transforms, one SCNMatrix4 per bone
                 boneWeights: SCNGeometrySource,        // per-vertex bone weights (influence on the geometry)
                 boneIndices: SCNGeometrySource         // per-vertex indices into the bones array
)