OpenGL ES: display a human face on iPhone (iOS)

I need to turn a 2D image of a human face into a 3D face.
I used this link to load an ".obj" file and map textures. That example only covers a cube and a pyramid, but I loaded a human face ".obj" file instead.
It loads the .obj file and renders the human face correctly, as shown below.
My problem is that I need to display different human faces without changing the ".obj" file, just by swapping the texture.
However, the texture is not mapped properly, because the .obj file belongs to a different model. I tried simply changing the ".png" file used as the texture, and the result below shows the texture applied, but not the way I expected.
Here are my questions:
1) I need to apply different images as textures to the same model (the same .obj file). Is that possible in OpenGL ES?
2) If the solution to the above problem is "shape matching", how can I do that with OpenGL ES?
3) Finally, a basic question: I need to display the model in a larger area. How do I make the display area bigger?

mtl2opengl is actually my project, so thanks for using it!
1) The only way you can achieve perfect texture swapping without distortion is if both textures are mapped onto the UV vertices in exactly the same way. Have a look at the images below:
Model A: Blonde Girl
Model B: Ashley Head
As you can see, textures are made to fit the model. So any swapping to a different geometry target will result in distortion. Simplified, human heads/faces have two components: Interior (Bone/Geometry) and Exterior (Skin/Texture). The interior aspect obviously defines the exterior, so perfect texture swapping on the same .obj file will not work unless you change the geometry of the model with the swap.
2) This is possible with a technique called displacement mapping that can be implemented in OpenGL ES, although with anticipated difficulty for multiple heads/faces. This would require your target .obj geometry to start with a pretty generic model, like a mannequin, and then use each texture to shift the position of the model vertices. I think you need to be very comfortable with Modeling, Graphics, Shaders, and Math to pull this one off!
(Displacement mapping illustration, via Wikipedia.)
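To make the idea concrete, here is a rough CPU-side sketch in Swift (not from mtl2opengl; in a real renderer you would do this in the vertex shader, and the type and function names below are only illustrative):

import simd

// Hypothetical sketch: push each vertex of a generic head mesh along its
// normal by a height value sampled from a per-face displacement map.
struct Vertex {
    var position: SIMD3<Float>
    var normal: SIMD3<Float>
    var uv: SIMD2<Float>
}

// `heightAt` stands in for a lookup into a grayscale displacement map,
// returning a value in 0...1 for a given UV coordinate.
func displace(_ vertices: [Vertex],
              amount: Float,
              heightAt: (SIMD2<Float>) -> Float) -> [Vertex] {
    vertices.map { v in
        var out = v
        let h = heightAt(v.uv)                            // sample the displacement map
        out.position += normalize(v.normal) * h * amount  // move along the vertex normal
        return out
    }
}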
3) I will add more transform options (scale & translate) in the next update. The Xcode project was actually made to show off the Perl script, not as a primer for OpenGL ES on iOS. For now, find the modelViewMatrix and fiddle with this little bit:
GLKMatrix4Scale(_modelViewMatrix, 0.30, 0.33, 0.30);
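If it helps, the kind of tweak I mean looks roughly like this (shown in Swift with GLKit math, although the sample project is Objective-C; the helper name and numbers are only illustrative):

import GLKit

// Hypothetical helper, not part of mtl2opengl: enlarge the model and shift it
// vertically so the face fills more of the viewport. Tweak the values to taste.
func enlargedModelViewMatrix(from base: GLKMatrix4,
                             scale: Float = 0.6,
                             yOffset: Float = -0.1) -> GLKMatrix4 {
    let translated = GLKMatrix4Translate(base, 0.0, yOffset, 0.0)  // re-center vertically
    return GLKMatrix4Scale(translated, scale, scale, scale)        // uniform enlargement
}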
Hope that answers all your questions!

Related

Metal dynamically generate 2D texture at run-time?

I'm working on a university project in which I want to texture, at run-time, meshes that were generated using the new LiDAR sensor on recent Apple devices.
I've read some research papers about texturing 3D reconstructions, and I want to try a 'simple' approach where mesh faces (triangles) are transformed into the image space of key-frames taken with the device's camera. Then I want to perform a pixel lookup at the position of the triangle in the image and add that part of the image to a texture map. If I keep track of all triangles in the mesh and determine their corresponding colors in a relevant key-frame, I should be able to infer the texture / color of every triangle in the mesh.
Here is a demonstration of the aforementioned principle:
I need to do this natively in Swift, and therefore I will be using Metal. However, I have not worked with Metal before, and my question is whether it is possible to "stitch together" a texture from different image sources dynamically at run-time.
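The per-triangle lookup described above is mostly standard pinhole-camera math rather than anything Metal-specific. A rough sketch, using the usual computer-vision convention of a camera looking down +z (all names here are illustrative, and ARKit's camera space uses different sign conventions):

import simd

// Hypothetical sketch: project one world-space vertex into the pixel space of a
// key-frame, given that frame's intrinsics and its world-to-camera transform.
// Sampling the key-frame image at the returned coordinates (for all three
// vertices of a triangle) gives the patch to copy into the texture map.
func pixelCoordinate(of worldPoint: SIMD3<Float>,
                     worldToCamera: simd_float4x4,
                     intrinsics: simd_float3x3) -> SIMD2<Float>? {
    let p = worldToCamera * SIMD4<Float>(worldPoint.x, worldPoint.y, worldPoint.z, 1)
    guard p.z > 0 else { return nil }                 // the point must be in front of the camera
    let projected = intrinsics * SIMD3<Float>(p.x, p.y, p.z)
    return SIMD2<Float>(projected.x / projected.z,    // divide by depth to get pixel coordinates
                        projected.y / projected.z)
}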

How to make a 3D model from AVDepthData?

I'm interested in processing data from the TrueDepth camera. I need to obtain a person's face data, build a 3D model of the face, and save that model to an .obj file.
Since the 3D model needs to include the person's eyes and teeth, ARKit / SceneKit is not suitable, because ARKit / SceneKit do not fill these areas with data.
However, with the help of the SceneKit.ModelIO library, I managed to export ARSCNView.scene (of type SCNScene) in the .obj format.
I tried to take this project as a basis:
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera
In that project, the TrueDepth camera data is handled using Metal, but if I'm not mistaken, an MTKView rendered with Metal is not a 3D model and cannot be exported as .obj.
Please tell me: is there a way to export an MTKView to an SCNScene, or directly to .obj?
If there is no such method, how can I make a 3D model from AVDepthData?
Thanks.
It's possible to make a 3D model from AVDepthData, but that probably isn't what you want. One depth buffer is just that — a 2D array of pixel distance-from-camera values. So the only "model" you're getting from that isn't very 3D; it's just a height map. That means you can't look at it from the side and see contours that you couldn't have seen from the front. (The "Using Depth Data" sample code attached to the WWDC 2017 talk on depth photography shows an example of this.)
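To make that concrete, here's a rough sketch of reading one depth frame as such a height map: unproject each depth pixel through the camera intrinsics into a camera-space point (this assumes 32-bit float depth and available calibration data; a real app would also rescale the intrinsics from intrinsicMatrixReferenceDimensions to the depth map's resolution):

import AVFoundation
import simd

// Sketch only: turn one AVDepthData frame into a grid of camera-space points.
// The result is a relief surface seen from the camera, not a full 3D head.
func pointCloud(from depthData: AVDepthData) -> [SIMD3<Float>] {
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    guard let calibration = converted.cameraCalibrationData else { return [] }
    let K = calibration.intrinsicMatrix              // fx, fy on the diagonal; cx, cy in column 2
    let map = converted.depthDataMap

    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    guard let base = CVPixelBufferGetBaseAddress(map) else { return [] }

    var points: [SIMD3<Float>] = []
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            let depth = row[x]                       // distance from the camera, in meters
            guard depth.isFinite, depth > 0 else { continue }
            // Pinhole unprojection: pixel coordinate + depth -> camera-space point.
            let px = (Float(x) - K.columns.2.x) * depth / K.columns.0.x
            let py = (Float(y) - K.columns.2.y) * depth / K.columns.1.y
            points.append(SIMD3<Float>(px, py, depth))
        }
    }
    return points
}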
If you want more of a truly-3D "model", akin to what ARKit offers, you need to be doing the work that ARKit does — using multiple color and depth frames over time, along with a machine learning system trained to understand human faces (and hardware optimized for running that system quickly). You might not find doing that yourself to be a viable option...
It is possible to get an exportable model out of ARKit using Model I/O. The outline of the code you'd need goes something like this (a rough Swift sketch follows the list):
Get ARFaceGeometry from a face tracking session.
Create MDLMeshBuffers from the face geometry's vertices, textureCoordinates, and triangleIndices arrays. (Apple notes the texture coordinate and triangle index arrays never change, so you only need to create those once — vertices you have to update every time you get a new frame.)
Create an MDLSubmesh from the index buffer, and an MDLMesh from the submesh plus the vertex and texture coordinate buffers. (Optionally, use MDLMesh functions to generate a vertex normals buffer after creating the mesh.)
Create an empty MDLAsset and add the mesh to it.
Export the MDLAsset to a URL (providing a URL with the .obj file extension so that it infers the format you want to export).
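A rough Swift sketch of those steps, assuming you already have an ARFaceGeometry from a running face tracking session (treat it as an outline rather than drop-in code):

import ARKit
import ModelIO

// Sketch: wrap an ARFaceGeometry's arrays in Model I/O buffers, build a mesh,
// and export it. Buffer layouts follow ARFaceGeometry's documented types
// (vector_float3 positions, vector_float2 UVs, Int16 triangle indices).
func exportOBJ(from faceGeometry: ARFaceGeometry, to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()

    // Step 2: mesh buffers for vertices, texture coordinates, and indices.
    let vertexData = Data(bytes: faceGeometry.vertices,
                          count: faceGeometry.vertices.count * MemoryLayout<vector_float3>.stride)
    let uvData = Data(bytes: faceGeometry.textureCoordinates,
                      count: faceGeometry.textureCoordinates.count * MemoryLayout<vector_float2>.stride)
    let indexData = Data(bytes: faceGeometry.triangleIndices,
                         count: faceGeometry.triangleIndices.count * MemoryLayout<Int16>.stride)
    let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)
    let uvBuffer = allocator.newBuffer(with: uvData, type: .vertex)
    let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

    // Step 3: describe the vertex layout, then build the submesh and mesh.
    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3, offset: 0, bufferIndex: 0)
    descriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                  format: .float2, offset: 0, bufferIndex: 1)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<vector_float3>.stride)
    descriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<vector_float2>.stride)

    let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                             indexCount: faceGeometry.triangleCount * 3,
                             indexType: .uInt16,
                             geometryType: .triangles,
                             material: nil)
    let mesh = MDLMesh(vertexBuffers: [vertexBuffer, uvBuffer],
                       vertexCount: faceGeometry.vertices.count,
                       descriptor: descriptor,
                       submeshes: [submesh])
    // Optional: generate normals so the exported model shades nicely.
    mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0.5)

    // Steps 4-5: wrap the mesh in an asset and export; the .obj extension picks the format.
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)
}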
That sequence doesn't require SceneKit (or Metal, or any ability to display the mesh) at all, which might prove useful depending on your needs. If you do want to involve SceneKit and Metal, you can probably skip a few steps (a shorter sketch follows this list):
Create ARSCNFaceGeometry on your Metal device and pass it an ARFaceGeometry from a face tracking session.
Use MDLMesh(scnGeometry:) to get a Model I/O representation of that geometry, then follow steps 4-5 above to export it to an .obj file.
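A shorter sketch of that route (again assuming an ARFaceGeometry from a face tracking session and a Metal device; names are illustrative):

import ARKit
import Metal
import ModelIO
import SceneKit
import SceneKit.ModelIO

// Sketch: let ARSCNFaceGeometry build the SceneKit geometry, bridge it to
// Model I/O, and export it.
func exportOBJViaSceneKit(faceGeometry: ARFaceGeometry,
                          device: MTLDevice, to url: URL) throws {
    guard let scnFaceGeometry = ARSCNFaceGeometry(device: device) else { return }
    scnFaceGeometry.update(from: faceGeometry)        // push the current vertex data

    let asset = MDLAsset()
    asset.add(MDLMesh(scnGeometry: scnFaceGeometry))  // Model I/O view of the geometry
    try asset.export(to: url)                         // the .obj extension selects the format
}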
Any way you slice it, though... if it's a strong requirement to model eyes and teeth, none of the Apple-provided options will help you because none of them do that. So, some food for thought:
Consider whether that's a strong requirement?
Replicate all of Apple's work to do your own face-model inference from color + depth image sequences?
Cheat on eye modeling using spheres centered according to the leftEyeTransform/rightEyeTransform reported by ARKit?
Cheat on teeth modeling using a pre-made model of teeth, composed with the ARKit-provided face geometry for display? (Articulate your inner-jaw model with a single open-shut joint and use ARKit's blendShapes[.jawOpen] to animate it alongside the face; a rough sketch of these last two cheats follows.)
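A rough sketch of those last two cheats (node names, sphere radius, and the jaw-angle mapping are placeholder choices, and the eye transforms require iOS 12 or later):

import ARKit
import SceneKit

// Sketch: attach small spheres at the eye transforms ARKit reports, and drive a
// pre-made jaw/teeth node from the jawOpen blend shape coefficient.
func updateEyeAndJawCheats(on faceNode: SCNNode, from anchor: ARFaceAnchor) {
    // Lazily create two eye spheres the first time we see the anchor.
    if faceNode.childNode(withName: "leftEye", recursively: false) == nil {
        for name in ["leftEye", "rightEye"] {
            let eye = SCNNode(geometry: SCNSphere(radius: 0.012))
            eye.name = name
            faceNode.addChildNode(eye)
        }
    }
    faceNode.childNode(withName: "leftEye", recursively: false)?
        .simdTransform = anchor.leftEyeTransform      // eye poses are in face-anchor space
    faceNode.childNode(withName: "rightEye", recursively: false)?
        .simdTransform = anchor.rightEyeTransform

    // Drive a hypothetical "lowerTeeth" child node from the jawOpen coefficient.
    if let jawOpen = anchor.blendShapes[.jawOpen]?.floatValue,
       let jaw = faceNode.childNode(withName: "lowerTeeth", recursively: false) {
        jaw.eulerAngles.x = -jawOpen * 0.4            // arbitrary mapping to a hinge angle
    }
}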

Extract face features from ARSCNFaceGeometry

I've been trying without success to extract face features, for instance the mouth, from ARSCNFaceGeometry in order to change their color or add a different material.
I understand that I need to create an SCNGeometry, and I have the SCNGeometrySource, but I haven't been able to create the SCNGeometryElement.
I have tried creating it from ARFaceAnchor in update(from faceGeometry: ARFaceGeometry), but so far without success.
I would really appreciate someone's help.
ARSCNFaceGeometry is a single mesh. If you want different areas of it to be different colors, your best bet is to apply a texture map (which you do in SceneKit by providing images for material property contents).
There’s no semantic information associated with the vertices in the mesh — that is, there’s nothing that says “this point is the tip of the nose, these points are the edge of the upper lip, etc”. But the mesh is topologically stable, so if you create a texture image that adds a bit of color around the lips or a lightning bolt over the eye or whatever, it’ll stay there as the face moves around.
If you need help getting started on painting a texture, there are a couple of things you could try:
Create a dummy texture first
Make a square image and fill it with a double gradient, such that the red and blue component for each pixel is based on the x and y coordinate of that pixel. Or some other distinctive pattern. Apply that texture to the model, and see how it looks — the landmarks in the texture will guide you where to paint.
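For example, a minimal sketch of such a dummy texture (the size and pattern are arbitrary, and the faceNode in the final comment is whatever node carries your ARSCNFaceGeometry):

import UIKit

// Sketch: a 256x256 "double gradient" where red follows x and blue follows y,
// so each painted landmark tells you roughly which UV region it lives in.
func makeDebugGradientTexture(size: Int = 256) -> UIImage? {
    var pixels = [UInt8](repeating: 0, count: size * size * 4)
    for y in 0..<size {
        for x in 0..<size {
            let i = (y * size + x) * 4
            pixels[i]     = UInt8(255 * x / (size - 1))   // red encodes the horizontal position
            pixels[i + 2] = UInt8(255 * y / (size - 1))   // blue encodes the vertical position
            pixels[i + 3] = 255                           // opaque alpha
        }
    }
    let cgImage: CGImage? = pixels.withUnsafeMutableBytes { buffer in
        guard let context = CGContext(data: buffer.baseAddress,
                                      width: size, height: size,
                                      bitsPerComponent: 8, bytesPerRow: size * 4,
                                      space: CGColorSpaceCreateDeviceRGB(),
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
        return context.makeImage()
    }
    return cgImage.map { UIImage(cgImage: $0) }
}

// Applying it to the face mesh (faceNode is whatever SCNNode holds the geometry):
// faceNode.geometry?.firstMaterial?.diffuse.contents = makeDebugGradientTexture()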
Export the model
Create a dummy ARFaceGeometry using the init(blendShapes:) initializer and an empty blendShapes dictionary, then build an ARSCNFaceGeometry from it (you don't need an active face tracking session for this, but you do need an iPhone X). Use SceneKit's scene export APIs (or Model I/O) to write that model out to a 3D file of some sort (.scn, which you can process further on the Mac, or something like .obj).
Import that file into your favorite 3D modeling tool (Blender, Maya, etc) and use that tool to paint a texture. Then use that texture in your app with real faces.
Actually, the above is sort of an oversimplification, even though it’s the simple answer for common cases. ARSCNFaceGeometry can actually contain up to four submeshes if you create it with the init(device:fillMesh:) initializer. But even then, those parts aren’t semantically labeled areas of the face — they’re the holes in the regular face model, flat fill-ins for the places where eyes and mouth show through.

How to make custom camera lens effects in iOS

I am not an iOS developer, but my client wants me to make an iPhone app like
https://itunes.apple.com/us/app/trippy-booth-amazing-filterswarps/id448037560?mt=8
I have seen a custom library like
https://github.com/BradLarson/GPUImage
but I can't find any camera lens customization examples.
Any kind of suggestion would be helpful.
Thanks in advance
You can do it with a custom shader written in OpenGL (or Metal, which is iOS-only), and then apply that shader to produce interesting effects like the ones in the link above.
I suggest you take a look at how to use the OpenGL framework on iOS.
Basically the flow would look like this:
Use whatever framework you like to capture an image (even in real time).
Use some framework to modify the image. (The magic occurs here.)
Use something else to present the image.
You should learn how to obtain an OpenGL context, draw an image onto it, write a custom shader, apply the shader, and read back the output to "distort the image". Honestly, the hardest part is turning the "effect" in your mind into a formula.
This is quite similar to the Photoshop mesh warp (Edit -> Transform -> Warp). Basically you treat your image as a texture and render it onto a mesh (a Bézier patch): a grid that has been distorted into Bézier curves, while you leave the texture coordinates as if it were still a regular grid. This has the effect of "pulling" the image towards the nodes of the patch. You can use OpenGL (GL_PATCHES) for this; I imagine Metal or SceneKit might work as well.
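Here's a rough, renderer-agnostic sketch of that idea in Swift (the falloff curve and strength are arbitrary choices; a real implementation would hand these vertices to OpenGL ES, Metal, or SceneKit as two triangles per grid cell):

import simd

// Sketch of a mesh warp: a regular grid whose texture coordinates stay uniform
// while the vertex positions are pulled toward a control point, so the image
// appears to stretch toward that point when rendered.
struct WarpVertex {
    var position: SIMD2<Float>   // distorted position, in [0, 1] x [0, 1]
    var texCoord: SIMD2<Float>   // undistorted UV, in [0, 1] x [0, 1]
}

func makeWarpedGrid(rows: Int, cols: Int,
                    pullCenter: SIMD2<Float>, strength: Float) -> [WarpVertex] {
    var vertices: [WarpVertex] = []
    for r in 0...rows {
        for c in 0...cols {
            let uv = SIMD2(Float(c) / Float(cols), Float(r) / Float(rows))
            // Pull grid points toward the control point, fading out with distance.
            let offset = pullCenter - uv
            let falloff = max(0, 1 - length(offset) * 2)
            let position = uv + offset * strength * falloff
            vertices.append(WarpVertex(position: position, texCoord: uv))
        }
    }
    return vertices
}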
I can't tell from the screenshots, but it's possible that the examples you reference are actually placing their mesh based on facial recognition. Core Image has basic facial recognition that gives you mouth and eye positions, which you could use to control some of the nodes in your mesh.

How to texture a cylinder in XNA4 with multiple textures?

Basically, I'm trying to cover a slot machine reel (a white cylinder model) with multiple evenly spaced textures around the exterior. The program will be Windows-only, and the textures will be loaded dynamically at run-time instead of through the content pipeline. (It's a Windows-based multi-screen setup with XNA, following the Microsoft example.)
Most of the examples I can find online are for XNA3 and are still seemingly gibberish to me at this point.
So I'm looking for any help someone can provide on the subject of in-game texturing of objects like cylinders with multiple textures.
Maybe there is a good book out there that can properly describe how texturing works in XNA (4.0 specifically)?
Thanks
You have a few options. It depends on two things: whether the model is loaded or generated at runtime, and whether your multiple textures get combined into one or kept individual.
If you have art skills or know an artist, probably the easiest approach is to get them to texture map the cylinder with as many textures as you want (multiple materials). You'd want your Model to have one mesh (ModelMesh) and one material (ModelMeshPart) per texture required. This is assuming the cylinders always have a fixed number of textures! Then, to swap the textures at runtime, you'd iterate through the ModelMesh.Effects collection, cast each to a BasicEffect and set its Texture property.
If you can't modify the model, you'll have to generate it. There's an example of this on the AppHub site: http://create.msdn.com/en-US/education/catalog/sample/primitives_3d. It probably does not generate texture coordinates so you'd need to add them. If you wanted 5 images per cylinder, you should make sure the number of segments is a multiple of 5 and the V coordinate should go from 0 to 1, 5 times as it wraps around the cylinder. To keep your textures individual with this technique, you'd need to draw the cylinder in 5 chunks, each time setting the GraphicsDevice.Textures[0] to your current texture.
With both techniques it would be possible to draw the cylinder in a single draw call, but you'd need to merge your textures into a single one using Texture2D.GetData and Texture2D.SetData. This is going to be more efficient, but really isn't worth the trouble. Well, not unless you're making some kind of crazy slot machine particle system, anyway.
