How to add image texture to 3D asset in Xcode using SceneKit - ios

I have a 3D model of a coffee mug in .dae format. Now what I need is to place a logo (a PNG image) on it. How can I achieve this?

This isn’t really a SceneKit or iOS question. To apply a texture to a 3D model, the model needs UV coordinates per vertex. The process of mapping a 3D model to a 2D texture is known as UV mapping ( https://en.m.wikipedia.org/wiki/UV_mapping ) and is done in 3D software like Blender, 3D Studio Max and similar packages, before the assets (model and textures) are used in SceneKit.
That said, because a mug is largely a cylinder, you could perhaps get away with using an SCNCylinder (which automatically comes with UV coordinates) and using the logo image, with a transparent background, as a texture for the cylinder. Then scale and position the cylinder over the mug and add it as a child node of the mug, as sketched below.
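A minimal sketch of that approach, assuming the mug's .dae file is in the app bundle and its node is named "mug" (the file name, node name, and all sizes below are placeholders you would adjust to your model):
import SceneKit
import UIKit

// Load the mug model and find its node (both names are assumptions).
let scene = SCNScene(named: "mug.dae")!
let mugNode = scene.rootNode.childNode(withName: "mug", recursively: true)!

// A cylinder slightly wider than the mug body, textured with the logo PNG.
// SceneKit wraps the image around the cylinder's side and blends using the
// PNG's alpha channel; you may want separate clear materials for the caps.
let logoCylinder = SCNCylinder(radius: 0.051, height: 0.04)   // placeholder dimensions
let logoMaterial = SCNMaterial()
logoMaterial.diffuse.contents = UIImage(named: "logo.png")    // your logo image
logoCylinder.firstMaterial = logoMaterial

// Position the cylinder over the mug and attach it as a child node.
let logoNode = SCNNode(geometry: logoCylinder)
logoNode.position = SCNVector3(0, 0.02, 0)                    // placeholder offset
mugNode.addChildNode(logoNode)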

If you have your model in a node, you can access the material through the geometry like this:
node.geometry?.firstMaterial?.diffuse.contents = UIImage(named: "logo.png") // or any image/color
This replaces the texture of the entire geometry, which may or may not be what you want.

Related

OpenGL Image warping using lookup table

I am working on an Android application that slims or fattens faces by detecting them. Currently, I have achieved that by using the thin-plate spline algorithm:
http://ipwithopencv.blogspot.com.tr/2010/01/thin-plate-spline-example.html
The problem is that the algorithm is not fast enough for me, so I decided to switch to OpenGL. After some research, I see that a lookup table texture is the best option for this. I have a set of control points for the source image and their new positions for the warp effect.
How should I create the lookup table texture to get the warp effect?
Are you really sure you need a lookup texture?
It seems it'd be better to have a textured rectangular mesh (or a non-rectangular mesh, of course, since the face-detection algorithm you use most likely returns a face-like mesh) and warp it according to the algorithm.
Not only would you be able to do that in a vertex shader, processing each mesh node in parallel, but it is also far fewer values to process compared to generating a lookup texture dynamically.
The most compatible way to achieve this is to give each mesh point a Y coordinate of 0 and an X coordinate that stores the mesh index, and then pass a texture (maybe even a buffer texture, if the target devices support it) to the vertex shader, where at the corresponding index the R and G channels contain the desired X and Y coordinates.
Inside the vertex shader, the coordinates are then loaded from that texture.
This approach allows for dynamic warping without reloading geometry, as long as the data texture is updated appropriately, for example inside a pixel shader.

iOS11 ARKit: Can ARKit also capture the Texture of the user's face?

I read the whole documentation on all ARKit classes up and down. I don't see any place that describes the ability to actually get the user's face texture.
ARFaceAnchor contains the ARFaceGeometry (topology and geometry composed of vertices) and the BlendShapeLocation values (coefficients that allow manipulating individual facial traits by adjusting the geometry of the user's face vertices).
But where can I get the actual texture of the user's face? For example: the actual skin tone / color / texture, facial hair, other unique traits such as scars or birthmarks? Or is this not possible at all?
You want a texture-map-style image for the face? There’s no API that gets you exactly that, but all the information you need is there:
ARFrame.capturedImage gets you the camera image.
ARFaceGeometry gets you a 3D mesh of the face.
ARAnchor and ARCamera together tell you where the face is in relation to the camera, and how the camera relates to the image pixels.
So it’s entirely possible to texture the face model using the current video frame image. For each vertex in the mesh...
Convert the vertex position from model space to world space (use the anchor’s transform)
Project that point using the camera (for example, ARCamera’s projectPoint) to get pixel coordinates in the captured image
Divide by the image width/height to get normalized texture coordinates
This gets you texture coordinates for each vertex, which you can then use to texture the mesh using the camera image. You could do this math either all at once to replace the texture coordinate buffer ARFaceGeometry provides, or do it in shader code on the GPU during rendering. (If you’re rendering using SceneKit / ARSCNView you can probably do this in a shader modifier for the geometry entry point.)
If instead you want to know for each pixel in the camera image what part of the face geometry it corresponds to, it’s a bit harder. You can’t just reverse the above math because you’re missing a depth value for each pixel... but if you don’t need to map every pixel, SceneKit hit testing is an easy way to get geometry for individual pixels.
If what you’re actually asking for is landmark recognition — e.g. where in the camera image are the eyes, nose, beard, etc — there’s no API in ARKit for that. The Vision framework might help.
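If you go the Vision route, a rough sketch of landmark detection on the captured frame (assuming `frame` is the current ARFrame; the `.right` orientation is the usual mapping for a portrait device, but treat it as an assumption):
import ARKit
import Vision

// Detect facial landmarks (eyes, nose, lips, ...) in the captured camera image.
let request = VNDetectFaceLandmarksRequest { request, _ in
    guard let faces = request.results as? [VNFaceObservation] else { return }
    for face in faces {
        // Normalized image coordinates of the left-eye outline, for example.
        if let leftEye = face.landmarks?.leftEye {
            print(leftEye.normalizedPoints)
        }
    }
}
let handler = VNImageRequestHandler(cvPixelBuffer: frame.capturedImage,
                                    orientation: .right,
                                    options: [:])
try? handler.perform([request])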
I've put together a demo iOS app that shows how to accomplish this. The demo captures a face texture map in realtime, applying it back to an ARSCNFaceGeometry to create a textured 3D model of the user's face.
In the demo you can see the realtime textured 3D face model in the top left, overlaid on top of the AR front-facing camera view.
The demo works by rendering an ARSCNFaceGeometry, but instead of rendering it normally, it renders in texture space while still using the original vertex positions to determine where to sample from in the captured pixel data.
Here are links to the relevant parts of the implementation:
FaceTextureGenerator.swift — The main class for generating face textures. This sets up a Metal render pipeline to generate the texture.
faceTexture.metal — The vertex and fragment shaders used to generate the face texture. These operate in texture space.
Almost all the work is done in a Metal render pass, so it easily runs in realtime.
I've also put together some notes covering the limitations of the demo.
If you instead want a 2D image of the user's face, you can try doing the following:
Render the transformed ARSCNFaceGeometry to a 1-bit buffer to create an image mask. Basically you just want places where the face model appears to be white, while everything else should be black.
Apply the mask to the captured frame image.
This should give you an image with just the face (although you will likely need to crop the result).
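A rough sketch of the masking step with Core Image, assuming you have already rendered the face geometry as a white-on-black mask that is aligned with the captured frame (the rendering/alignment part is omitted; `frameImage` and `maskImage` are assumed inputs):
import CoreImage
import UIKit

// Keep the camera pixels where the mask is white; everything else goes black.
func maskedFace(frameImage: CIImage, maskImage: CIImage) -> UIImage? {
    let black = CIImage(color: .black).cropped(to: frameImage.extent)
    let filter = CIFilter(name: "CIBlendWithMask", parameters: [
        kCIInputImageKey: frameImage,
        kCIInputBackgroundImageKey: black,
        kCIInputMaskImageKey: maskImage
    ])
    guard let output = filter?.outputImage else { return nil }
    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage)
}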
You can calculate the texture coordinates as follows:
let geometry = faceAnchor.geometry
let vertices = geometry.vertices
let size = arFrame.camera.imageResolution
let camera = arFrame.camera
let modelMatrix = faceAnchor.transform

let textureCoordinates = vertices.map { vertex -> vector_float2 in
    // Model space -> world space using the face anchor's transform.
    let vertex4 = vector_float4(vertex.x, vertex.y, vertex.z, 1)
    let world_vertex4 = simd_mul(modelMatrix, vertex4)
    let world_vector3 = simd_float3(x: world_vertex4.x, y: world_vertex4.y, z: world_vertex4.z)
    // Project into the captured image (pixel coordinates, portrait orientation).
    let pt = camera.projectPoint(world_vector3,
                                 orientation: .portrait,
                                 viewportSize: CGSize(width: CGFloat(size.height),
                                                      height: CGFloat(size.width)))
    // Normalize to 0...1 texture coordinates.
    let v = 1.0 - Float(pt.x) / Float(size.height)
    let u = Float(pt.y) / Float(size.width)
    return vector_float2(u, v)
}
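If you then want to display the result in SceneKit, one option (my sketch, not part of the original answer) is to rebuild an SCNGeometry from the face geometry's buffers together with the new texture-coordinate source:
// Rebuild a SceneKit geometry that uses the projected texture coordinates.
let vertexSource = SCNGeometrySource(vertices: vertices.map { SCNVector3($0.x, $0.y, $0.z) })
let uvSource = SCNGeometrySource(textureCoordinates: textureCoordinates.map {
    CGPoint(x: CGFloat($0.x), y: CGFloat($0.y))
})
let element = SCNGeometryElement(indices: geometry.triangleIndices, primitiveType: .triangles)
let texturedGeometry = SCNGeometry(sources: [vertexSource, uvSource], elements: [element])

// Use the captured frame (converted to a CGImage or UIImage first) as the diffuse texture.
let material = SCNMaterial()
// material.diffuse.contents = <image created from arFrame.capturedImage>
texturedGeometry.materials = [material]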

Turn an entire SceneKit scene into an image suitable for a texture

I've written a little app using CoreMotion, AV and SceneKit to make a simple panorama. When you take a picture, it maps that onto a SceneKit rectangle and places it in front of whatever CoreMotion direction the camera is facing. This is working fine, but...
I would like the user to be able to click a "done" button and turn the entire scene into a single image. I could then map that onto a sphere for future viewing rather than re-creating the entire set of objects. I don't need to stitch or anything like that, I want the individual images to remain separate rectangles, like photos glued to the inside of a ball.
I know about snapshot and tried using that with a really wide FOV, but that results in a fisheye view that does not map back properly (unless I'm doing it wrong). I assume there is some sort of transform I need to apply? Or perhaps there is an easier way to do this?
The key is "photos glued to the inside of a ball". You have a bunch of rectangles, suspended in space. Turning that into one image suitable for projection onto a sphere is a bit of work. You'll have to project each rectangle onto the sphere, and warp the image accordingly.
If you just want to reconstruct the scene for future viewing in SceneKit, use SCNScene's built-in serialization: write(to:options:delegate:progressHandler:) and SCNScene(named:).
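A minimal round trip of that serialization (the file name is arbitrary; loading from a file URL uses SCNScene(url:options:), while SCNScene(named:) looks in the app bundle):
import SceneKit

// `scene` is your assembled panorama SCNScene.
// Save it to the documents directory...
let documents = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let sceneURL = documents.appendingPathComponent("panorama.scn")
_ = scene.write(to: sceneURL, options: nil, delegate: nil, progressHandler: nil)

// ...and later load it back instead of rebuilding all the nodes.
let restoredScene = try? SCNScene(url: sceneURL, options: nil)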
To compute the mapping of images onto a sphere, you'll need some coordinate conversion. For each image, convert the coordinates of the corners into spherical coordinates, with the origin at your point of view. Change the radius of each corner's coordinate to the radius of your sphere, and you now have the projected corners' locations on the sphere.
It's tempting to repeat this process for each pixel in the input rectangular image. But that will leave empty pixels in the spherical output image. So you'll work in reverse. For each pixel in the spherical output image (within the 4 corner points), compute the ray (trivially done, in spherical coordinates) from POV to that point. Convert that ray back to Cartesian coordinates, compute its intersection with the rectangular image's plane, and sample at that point in your input image. You'll want to do some pixel weighting, since your output image and input image will have different pixel dimensions.
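A small Swift sketch of the coordinate conversions involved (the function names and the choice of +Y as "up" are my assumptions):
import Foundation
import simd

// Cartesian -> spherical (radius, inclination from +Y, azimuth), with the POV at the origin.
func spherical(from p: SIMD3<Float>) -> (r: Float, theta: Float, phi: Float) {
    let r = simd_length(p)
    return (r, acos(p.y / r), atan2(p.z, p.x))
}

// Spherical -> Cartesian, used when casting a ray from the POV through an output pixel
// back toward the plane of the original rectangle.
func cartesian(r: Float, theta: Float, phi: Float) -> SIMD3<Float> {
    return SIMD3<Float>(r * sin(theta) * cos(phi),
                        r * cos(theta),
                        r * sin(theta) * sin(phi))
}

// Project a rectangle corner onto a sphere of radius sphereRadius:
// keep its direction from the POV, replace its distance.
func projectOntoSphere(_ corner: SIMD3<Float>, sphereRadius: Float) -> SIMD3<Float> {
    let s = spherical(from: corner)
    return cartesian(r: sphereRadius, theta: s.theta, phi: s.phi)
}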

How to draw each vertex of a mesh as a circle

How can I draw each vertex of a mesh as a circle?
You can do it using geometry shaders to create billboard geometry from each vertex on the GPU. You can then either create the circles as geometry, or create quads and use a circle texture to draw them (I recommend the latter). But geometry shaders are not widely supported yet, even less so on iOS. If you know for sure that the hardware you'll run this on supports them, go for it.
If geometry shading isn't an option, your two best options are:
Use a particle system, which already handles mesh creation and billboarding. To create a particle at each vertex position, use ParticleSystem.Emit. Your system's simulation space should be Local. If the vertices move, use SetParticles to update them.
Create a procedural mesh that already contains the geometry you need. If the camera and points don't move, you can get away with creating the mesh in a fixed shape. Otherwise you will need to animate the billboarding, either on the procedural mesh or in a shader.
Update: 5,000,000 points is a lot. Although particle systems can work with big numbers by creating lots of internal meshes, the sheer amount of processing really eats up the CPU. And even if the points are static in space, a procedural mesh with no special shaders must be updated each frame for the billboarding effect.
My advice is to create many meshes (a single mesh cannot handle that amount of geometry). Each mesh will contain a quad per point (or a triangle if you dare, to make it faster), but all four vertices will be located at the same point. You then use the texture coordinates in the vertex program to expand each one into a billboarded quad.
Assuming you are talking about a 2D mesh:
Create a circle game object (or a game object with a circle-shaped texture), save it as a prefab, and instantiate one at each vertex:
var meshFilter = GetComponent(typeof(MeshFilter)) as MeshFilter;
var mesh = meshFilter.mesh;
foreach (var v in mesh.vertices)
{
    var obj = Instantiate(circlePrefab, v, Quaternion.identity);
}

Texture getting stretched across faces of a cuboid in Open Inventor

I am trying to write a little script to apply texture to rectangular cuboids. To accomplish this, I run through the scenegraph, and wherever I find the SoIndexedFaceSet Nodes, I insert a SoTexture2 Node before that. I put my image file in the SoTexture2 Node. The problem I am facing is that the texture is applied correctly to 2 of the faces(say face1 and face2), in the Y-Z plane, but for the other 4 planes, it just stretches the texture at the boundaries of the two faces(1 and 2).
The texture looks correct on the front face, but the other faces just extrapolate the edge values of the front face. Any ideas why this is happening, and any way to avoid it?
Yep, assuming that you did not specify texture coordinates for your SoIndexedFaceSet, that is exactly the expected behavior.
If Open Inventor sees that you have applied a texture image to a geometry and did not specify texture coordinates, it will automatically compute some texture coordinates. Of course it's not possible to guess how you wanted the texture to be applied. So it computes the bounding box then computes texture coordinates that stretch the texture across the largest extent of the geometry (XY, YZ or XZ). If the geometry is a cuboid you can see the effect clearly as in your image. This behavior can be useful, especially as a quick approximation.
What you need to make this work the way you want, is to explicitly assign texture coordinates to the geometry such that the texture is mapped separately to each face. In Open Inventor you can actually still share the vertices between faces because you are allowed to specify different vertex indices and texture coordinate indices (of course this is only more convenient for the application because OpenGL doesn't support this and Open Inventor has to re-shuffle the data internally). If you applied the same texture to an SoCube node you would see that the texture is mapped separately to each face as expected. That's because SoCube defines texture coordinates for each face.
