SceneKit Textures - iOS

I'm trying to learn SceneKit for iOS and get beyond basic shapes. I'm a little confused about how textures work. In the example project, the plane is a mesh and a flat .png texture is applied to it. How do you "tell" the texture how to wrap to the object? In 3D graphics you would UV unwrap, but I don't know how I would do that in SceneKit.

SceneKit doesn't have tools for modeling a mesh (other than programmatically creating vertex positions, normals, UVs, etc.). What you'd need to do is create your mesh and texture in another bit of software (I use Blender). Then export the mesh as a Collada .dae file, and export the textures your model uses as .png files too. Your exported model will have UV coordinates imported with it that will correctly wrap your imported textures onto your model.
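As a minimal, hypothetical sketch of the SceneKit side of that workflow (the file names model.dae and texture.png and the node name MyModel are placeholders, not from the answer above): because the .dae already carries the UV coordinates exported from Blender, assigning the image to the material's diffuse slot is enough for it to wrap correctly.

```swift
import SceneKit
import UIKit

// Load the exported Collada scene from the app bundle.
// "model.dae", "MyModel", and "texture.png" are placeholder names.
if let scene = SCNScene(named: "model.dae"),
   let node = scene.rootNode.childNode(withName: "MyModel", recursively: true) {
    // The .dae already carries the UV coordinates exported from Blender,
    // so SceneKit knows how the image wraps around the geometry.
    node.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "texture.png")
}
```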

Related

Metal dynamically generate 2D texture at run-time?

I'm working on a university project in which I want to texture, at run-time, meshes that were generated using the LiDAR sensor on recent Apple devices.
I've read some research papers about texturing 3D reconstructions, and I want to try a 'simple' approach where mesh faces (triangles) are transformed into the image space of key-frames taken with the device's camera. Then I want to perform a pixel lookup at the position of the triangle in the image and add that part of the image to a texture map. If I keep track of all the triangles in the mesh and determine each one's color in a relevant key-frame, I should be able to infer the texture / color of every triangle in the mesh.
Here is a demonstration of the aforementioned principle:
I need to do this natively in Swift and therefore I will be using Metal. However, I have not worked with Metal before, and my question is whether it is possible to "stitch together" a texture from different image sources dynamically at run-time?
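For what it's worth, Metal does let you copy regions between textures on the GPU, which is the basic building block for stitching an atlas together at run-time; for per-triangle lookups you would more likely render or compute into the atlas texture instead, but a blit copy is the simplest illustration that run-time composition is possible. The sketch below is hypothetical; all names and dimensions are placeholder assumptions, not from the question.

```swift
import Metal

// Copy a rectangular patch from a key-frame texture into an atlas texture.
// This is the low-level building block for "stitching" a texture map together
// from several image sources at run-time.
func copyPatch(from keyFrame: MTLTexture, rect: MTLRegion,
               into atlas: MTLTexture, at destination: MTLOrigin,
               using queue: MTLCommandQueue) {
    guard let commandBuffer = queue.makeCommandBuffer(),
          let blit = commandBuffer.makeBlitCommandEncoder() else { return }

    blit.copy(from: keyFrame, sourceSlice: 0, sourceLevel: 0,
              sourceOrigin: rect.origin, sourceSize: rect.size,
              to: atlas, destinationSlice: 0, destinationLevel: 0,
              destinationOrigin: destination)
    blit.endEncoding()
    commandBuffer.commit()
}
```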

How to make a 3D model from AVDepthData?

I'm interested in processing data from the TrueDepth camera. I need to obtain the data for a person's face, build a 3D model of the face, and save this model to an .obj file.
Since the 3D model needs to include the person's eyes and teeth, ARKit / SceneKit is not suitable, because ARKit / SceneKit do not fill these areas with data.
But with the help of the SceneKit.ModelIO library, I managed to export ARSCNView.scene (of type SCNScene) in .obj format.
I tried to take this project as a basis:
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera
In that project, the TrueDepth camera data is handled using Metal, but if I'm not mistaken, an MTKView rendered using Metal is not a 3D model and cannot be exported as .obj.
Please tell me: is there a way to export an MTKView to an SCNScene, or directly to .obj?
If there is no such method, then how can I make a 3D model from AVDepthData?
Thanks.
It's possible to make a 3D model from AVDepthData, but that probably isn't what you want. One depth buffer is just that — a 2D array of pixel distance-from-camera values. So the only "model" you're getting from that isn't very 3D; it's just a height map. That means you can't look at it from the side and see contours that you couldn't have seen from the front. (The "Using Depth Data" sample code attached to the WWDC 2017 talk on depth photography shows an example of this.)
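To make that "height map" point concrete: a single AVDepthData can be read back into nothing more than a 2D array of distances. This is a rough, hypothetical sketch (it assumes the data can be converted to 32-bit float depth), not code from the sample project.

```swift
import AVFoundation

// Read a depth buffer into a plain 2D array of metres-from-camera values.
func heightMap(from depthData: AVDepthData) -> [[Float32]] {
    let depth = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let buffer = depth.depthDataMap

    CVPixelBufferLockBaseAddress(buffer, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(buffer, .readOnly) }

    let width = CVPixelBufferGetWidth(buffer)
    let height = CVPixelBufferGetHeight(buffer)
    let rowBytes = CVPixelBufferGetBytesPerRow(buffer)
    let base = CVPixelBufferGetBaseAddress(buffer)!

    var rows: [[Float32]] = []
    for y in 0..<height {
        let row = (base + y * rowBytes).assumingMemoryBound(to: Float32.self)
        rows.append(Array(UnsafeBufferPointer(start: row, count: width)))
    }
    return rows
}
```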
If you want more of a truly-3D "model", akin to what ARKit offers, you need to be doing the work that ARKit does — using multiple color and depth frames over time, along with a machine learning system trained to understand human faces (and hardware optimized for running that system quickly). You might not find doing that yourself to be a viable option...
It is possible to get an exportable model out of ARKit using Model I/O. The outline of the code you'd need goes something like this (a rough Swift sketch follows the list):
1. Get ARFaceGeometry from a face tracking session.
2. Create MDLMeshBuffers from the face geometry's vertices, textureCoordinates, and triangleIndices arrays. (Apple notes the texture coordinate and triangle index arrays never change, so you only need to create those once; the vertices you have to update every time you get a new frame.)
3. Create an MDLSubmesh from the index buffer, and an MDLMesh from the submesh plus the vertex and texture coordinate buffers. (Optionally, use MDLMesh functions to generate a vertex normals buffer after creating the mesh.)
4. Create an empty MDLAsset and add the mesh to it.
5. Export the MDLAsset to a URL (providing a URL with the .obj file extension so that it infers the format you want to export).
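A rough Swift sketch of those five steps might look like the following. The helper name is made up, error handling is minimal, and it assumes you already have a current ARFaceGeometry from the face-tracking session:

```swift
import ARKit
import ModelIO

// Hypothetical helper: builds an MDLAsset from an ARFaceGeometry and writes it
// to the given URL (which should end in ".obj" so Model I/O picks the format).
func exportFaceGeometry(_ face: ARFaceGeometry, to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()

    // Steps 1-2: pack the ARFaceGeometry arrays into mesh buffers.
    let vertexData = Data(bytes: face.vertices,
                          count: face.vertices.count * MemoryLayout<SIMD3<Float>>.stride)
    let uvData = Data(bytes: face.textureCoordinates,
                      count: face.textureCoordinates.count * MemoryLayout<SIMD2<Float>>.stride)
    let indexData = Data(bytes: face.triangleIndices,
                         count: face.triangleIndices.count * MemoryLayout<Int16>.stride)

    let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)
    let uvBuffer = allocator.newBuffer(with: uvData, type: .vertex)
    let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

    // Step 3: describe the vertex layout, then build the submesh and mesh.
    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3, offset: 0, bufferIndex: 0)
    descriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                  format: .float2, offset: 0, bufferIndex: 1)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD3<Float>>.stride)
    descriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD2<Float>>.stride)

    let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                             indexCount: face.triangleIndices.count,
                             indexType: .uInt16,
                             geometryType: .triangles,
                             material: nil)
    let mesh = MDLMesh(vertexBuffers: [vertexBuffer, uvBuffer],
                       vertexCount: face.vertices.count,
                       descriptor: descriptor,
                       submeshes: [submesh])
    // Optional: generate vertex normals after creating the mesh.
    mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)

    // Steps 4-5: wrap the mesh in an asset and export it.
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}
```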
That sequence doesn't require SceneKit (or Metal, or any ability to display the mesh) at all, which might prove useful depending on your needs. If you do want to involve SceneKit and Metal, you can probably skip a few steps (a short sketch follows this list, too):
Create ARSCNFaceGeometry on your Metal device and pass it an ARFaceGeometry from a face tracking session.
Use MDLMesh(scnGeometry:) to get a Model I/O representation of that geometry, then follow steps 4-5 above to export it to an .obj file.
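And a sketch of that shorter path, again with illustrative names only:

```swift
import ARKit
import SceneKit.ModelIO

// Let ARSCNFaceGeometry build the SCNGeometry, then bridge it to Model I/O
// for export. Hypothetical helper; the URL should end in ".obj".
func exportFaceGeometryViaSceneKit(_ face: ARFaceGeometry,
                                   device: MTLDevice, to url: URL) throws {
    guard let scnFace = ARSCNFaceGeometry(device: device) else { return }
    scnFace.update(from: face)               // copy the latest vertex data

    let mesh = MDLMesh(scnGeometry: scnFace) // Model I/O representation
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}
```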
Any way you slice it, though... if it's a strong requirement to model eyes and teeth, none of the Apple-provided options will help you because none of them do that. So, some food for thought:
Consider whether that's a strong requirement?
Replicate all of Apple's work to do your own face-model inference from color + depth image sequences?
Cheat on eye modeling using spheres centered according to the leftEyeTransform/rightEyeTransform reported by ARKit?
Cheat on teeth modeling using a pre-made model of teeth, composed with the ARKit-provided face geometry for display? (Articulate your inner-jaw model with a single open-shut joint and use ARKit's blendShapes[.jawOpen] to animate it alongside the face.) A rough sketch of these last two cheats follows the list.
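A rough sketch of those two cheats; the sphere radius, node names, and jaw-angle mapping are placeholder assumptions, not values from ARKit or from the answer above:

```swift
import ARKit
import SceneKit

// Placeholder nodes; attach them as children of the node ARKit gives you
// for the face anchor, since the eye transforms are in face-anchor space.
let leftEyeNode  = SCNNode(geometry: SCNSphere(radius: 0.012))
let rightEyeNode = SCNNode(geometry: SCNSphere(radius: 0.012))
let jawNode      = SCNNode()   // would hold a pre-made inner-jaw/teeth model

func attachCheats(to faceNode: SCNNode) {
    faceNode.addChildNode(leftEyeNode)
    faceNode.addChildNode(rightEyeNode)
    faceNode.addChildNode(jawNode)
}

// Call from renderer(_:didUpdate:for:) whenever the ARFaceAnchor updates.
func updateCheats(for faceAnchor: ARFaceAnchor) {
    // Eyes: place spheres at the transforms ARKit reports.
    leftEyeNode.simdTransform  = faceAnchor.leftEyeTransform
    rightEyeNode.simdTransform = faceAnchor.rightEyeTransform

    // Teeth: drive a single open-shut joint from the jawOpen blend shape (0...1).
    let jawOpen = faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0
    jawNode.eulerAngles.x = -jawOpen * Float.pi / 8   // arbitrary mapping
}
```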

How to bake lightmaps using custom UV/texture coordinates in Xcode's SceneKit editor?

I'm trying to bake a light map texture for my models in Xcode's SceneKit editor, but it's not generating the texture based on my UV coordinates. Is there a way to specify the texture coordinates for Xcode to use?
I tried to do it with a simple rounded box. This is the result:
Import Collada model
I import the model with a single UV/texture coordinate source. The preview of it looks correct, as you can see below.
Bake the lightmap
When I then try to bake the lightmap to a texture, it creates new UV/texture coordinates and a corresponding texture. But this is a different texture layout with different texture coordinates (as you can see in the preview below). The quality is not that great either; you can see a light gray square on the side. On the real models I'm testing with it looks MUCH worse, and everything looks like it's placed incorrectly.
This is what the texture looks like.
So is there any way to have it generate the texture using the provided UV/texture coordinates? Or how am I supposed to use this?

Using a tileset for textures for a mesh

I'm trying to create an isometric game in Love2D. I need to use meshes because I want to be able to rotate the camera view, but I could not figure out how to use a tileset to provide textures for the mesh.
The mesh class accepts a texture only, unlike the sprite batch class, which accepts quads to direct it on what part of the texture to use. Is there a way to give the mesh class this information or even slice up the tileset into individual images to be used with meshes?

DirectX 9: Transform texture of one instance of a loaded mesh, but not others

In DirectX 9 I have a .X file which I've loaded, and I draw several copies of it. I need to be able to alter the texture coordinates for each copy (e.g. give each one a different scale). Unfortunately, because they're all the same mesh and use the same materials, transforming the texture for one transforms it for all of them. Is there a way to transform the texture of each instance of a loaded mesh individually?
You could use a texture coordinate transform.
You could clone the mesh.
You could use a shader and scale the UVs in the shader.
You'll need to clone the mesh in question, then adjust its information. This will prevent it from affecting the other mesh instances.