Merge SkinnedEffect with BasicEffect - XNA

I'm developing a game with XNA 4.0. I have a very big model that I want to import into XNA with animations and later add LookAt constraints to. To achieve this I import the model with SkinnedModelProcessor and add the LookAt constraints with the DigitalRune libraries.
My problem is the limit of 72 MaxBones that SkinnedEffect can handle. To import the model correctly I have to add a bone for each mesh, but the model has 152 distinct meshes and I can't split it into submodels. If I don't add a bone for each mesh, the processor throws the error "contains geometry without any vertex weights", forcing me to add a bone even to the meshes I don't want to animate.
Is there a way to import more than 72 bones? Or, alternatively, is there a way to merge SkinnedEffect and BasicEffect in a single processor, so that I can import skinned meshes (the ones with an associated bone) together with meshes that have no bone?
Thank you.

If you are using 3ds Max, you can skin each mesh that doesn't move with a single bone.
A link to the forum where this is answered:
http://forums.create.msdn.com/forums/t/94840.aspx


How to make a 3D model from AVDepthData?

I'm interested in processing data from the TrueDepth camera. I need to obtain the data of a person's face, build a 3D model of the face, and save this model to an .obj file.
Since the 3D model needs to include the person's eyes and teeth, ARKit / SceneKit is not suitable, because ARKit / SceneKit do not fill these areas with data.
However, with the help of the SceneKit.ModelIO library, I managed to export ARSCNView.scene (of type SCNScene) in the .obj format.
I tried to take this project as a basis:
https://developer.apple.com/documentation/avfoundation/cameras_and_media_capture/streaming_depth_data_from_the_truedepth_camera
In this project, working with the TrueDepth camera is done using Metal, but if I'm not mistaken, an MTKView rendered using Metal is not a 3D model and cannot be exported as .obj.
Please tell me: is there a way to export an MTKView to an SCNScene, or directly to .obj?
If there is no such method, how can I make a 3D model from AVDepthData?
Thanks.
It's possible to make a 3D model from AVDepthData, but that probably isn't what you want. One depth buffer is just that — a 2D array of pixel distance-from-camera values. So the only "model" you're getting from that isn't very 3D; it's just a height map. That means you can't look at it from the side and see contours that you couldn't have seen from the front. (The "Using Depth Data" sample code attached to the WWDC 2017 talk on depth photography shows an example of this.)
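To make the height-map idea concrete, here's a minimal sketch (my own illustration, not code from Apple's sample) that reads one AVDepthData frame into a grid of 3D points, one point per pixel:
import AVFoundation
import simd

// Sketch: convert a single AVDepthData frame into a grid of points (x, y, depth).
// Assumes depthData comes from your existing capture pipeline.
func heightMapPoints(from depthData: AVDepthData) -> [SIMD3<Float>] {
    // Ensure 32-bit float depth values before reading the buffer.
    let converted = depthData.converting(toDepthDataType: kCVPixelFormatType_DepthFloat32)
    let map = converted.depthDataMap

    CVPixelBufferLockBaseAddress(map, .readOnly)
    defer { CVPixelBufferUnlockBaseAddress(map, .readOnly) }

    let width = CVPixelBufferGetWidth(map)
    let height = CVPixelBufferGetHeight(map)
    let rowBytes = CVPixelBufferGetBytesPerRow(map)
    guard let base = CVPixelBufferGetBaseAddress(map) else { return [] }

    var points: [SIMD3<Float>] = []
    points.reserveCapacity(width * height)
    for y in 0..<height {
        let row = base.advanced(by: y * rowBytes).assumingMemoryBound(to: Float32.self)
        for x in 0..<width {
            // One depth value per pixel: x and y come from the image grid, z from the buffer.
            points.append(SIMD3<Float>(Float(x), Float(y), row[x]))
        }
    }
    return points
}
Notice that every point's x and y are just pixel coordinates; all the depth buffer adds is the z value, which is exactly why the result is a height map rather than a full 3D model.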
If you want more of a truly-3D "model", akin to what ARKit offers, you need to be doing the work that ARKit does — using multiple color and depth frames over time, along with a machine learning system trained to understand human faces (and hardware optimized for running that system quickly). You might not find doing that yourself to be a viable option...
It is possible to get an exportable model out of ARKit using Model I/O. The outline of the code you'd need goes something like this:
Get ARFaceGeometry from a face tracking session.
Create MDLMeshBuffers from the face geometry's vertices, textureCoordinates, and triangleIndices arrays. (Apple notes the texture coordinate and triangle index arrays never change, so you only need to create those once — vertices you have to update every time you get a new frame.)
Create a MDLSubmesh from the index buffer, and a MDLMesh from the submesh plus vertex and texture coordinate buffers. (Optionally, use MDLMesh functions to generate a vertex normals buffer after creating the mesh.)
Create an empty MDLAsset and add the mesh to it.
Export the MDLAsset to a URL (providing a URL with the .obj file extension so that it infers the format you want to export).
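Put together in Swift, that outline might look roughly like this (a sketch under my own assumptions about buffer layout; the helper name exportFaceOBJ is made up):
import ARKit
import ModelIO

// Sketch of steps 1-5: pack ARFaceGeometry arrays into Model I/O buffers and export as .obj.
func exportFaceOBJ(from faceGeometry: ARFaceGeometry, to url: URL) throws {
    let allocator = MDLMeshBufferDataAllocator()

    // Step 2: raw Data for vertices, texture coordinates, and triangle indices.
    let vertexData = Data(bytes: faceGeometry.vertices,
                          count: faceGeometry.vertices.count * MemoryLayout<SIMD3<Float>>.stride)
    let texCoordData = Data(bytes: faceGeometry.textureCoordinates,
                            count: faceGeometry.textureCoordinates.count * MemoryLayout<SIMD2<Float>>.stride)
    let indexData = Data(bytes: faceGeometry.triangleIndices,
                         count: faceGeometry.triangleIndices.count * MemoryLayout<Int16>.stride)

    let vertexBuffer = allocator.newBuffer(with: vertexData, type: .vertex)
    let texCoordBuffer = allocator.newBuffer(with: texCoordData, type: .vertex)
    let indexBuffer = allocator.newBuffer(with: indexData, type: .index)

    // Step 3: one submesh holding all the triangles...
    let submesh = MDLSubmesh(indexBuffer: indexBuffer,
                             indexCount: faceGeometry.triangleIndices.count,
                             indexType: .uInt16,
                             geometryType: .triangles,
                             material: nil)

    // ...plus a vertex descriptor saying how the two vertex buffers are laid out.
    let descriptor = MDLVertexDescriptor()
    descriptor.attributes[0] = MDLVertexAttribute(name: MDLVertexAttributePosition,
                                                  format: .float3, offset: 0, bufferIndex: 0)
    descriptor.attributes[1] = MDLVertexAttribute(name: MDLVertexAttributeTextureCoordinate,
                                                  format: .float2, offset: 0, bufferIndex: 1)
    descriptor.layouts[0] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD3<Float>>.stride)
    descriptor.layouts[1] = MDLVertexBufferLayout(stride: MemoryLayout<SIMD2<Float>>.stride)

    let mesh = MDLMesh(vertexBuffers: [vertexBuffer, texCoordBuffer],
                       vertexCount: faceGeometry.vertices.count,
                       descriptor: descriptor,
                       submeshes: [submesh])
    // Optional: mesh.addNormals(withAttributeNamed: MDLVertexAttributeNormal, creaseThreshold: 0)

    // Steps 4-5: wrap the mesh in an asset and export it.
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)   // give the URL a .obj extension so OBJ is the chosen format
}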
That sequence doesn't require SceneKit (or Metal, or any ability to display the mesh) at all, which might prove useful depending on your need. If you do want to involve SceneKit and Metal you can probably skip a few steps:
Create ARSCNFaceGeometry on your Metal device and pass it an ARFaceGeometry from a face tracking session.
Use MDLMesh(scnGeometry:) to get a Model I/O representation of that geometry, then follow steps 4-5 above to export it to an .obj file.
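Here's that shorter SceneKit route as a sketch (assuming you already have a Metal device and the current ARFaceGeometry):
import ARKit
import ModelIO
import SceneKit.ModelIO

// Sketch: let ARSCNFaceGeometry and Model I/O do most of the work.
func exportFaceOBJ(device: MTLDevice, faceGeometry: ARFaceGeometry, to url: URL) throws {
    guard let scnFace = ARSCNFaceGeometry(device: device) else { return }
    scnFace.update(from: faceGeometry)          // copy in the latest vertex data

    let mesh = MDLMesh(scnGeometry: scnFace)    // Model I/O view of the SceneKit geometry
    let asset = MDLAsset()
    asset.add(mesh)
    try asset.export(to: url)                   // again, use a .obj file extension
}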
Any way you slice it, though... if it's a strong requirement to model eyes and teeth, none of the Apple-provided options will help you because none of them do that. So, some food for thought:
Consider whether that's a strong requirement?
Replicate all of Apple's work to do your own face-model inference from color + depth image sequences?
Cheat on eye modeling using spheres centered according to the leftEyeTransform/rightEyeTransform reported by ARKit? (See the sketch after this list.)
Cheat on teeth modeling using a pre-made model of teeth, composed with the ARKit-provided face geometry for display? (Articulate your inner-jaw model with a single open-shut joint and use ARKit's blendShapes[.jawOpen] to animate it alongside the face.)
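For the eye cheat specifically, a sketch might look like this (the sphere radius and node setup are my own guesses; leftEyeTransform/rightEyeTransform require iOS 12):
import ARKit
import SceneKit

// Sketch: spheres parented to the face node, positioned with ARKit's per-eye transforms.
let leftEye = SCNNode(geometry: SCNSphere(radius: 0.012))
let rightEye = SCNNode(geometry: SCNSphere(radius: 0.012))

func attachEyes(to faceNode: SCNNode) {
    // The eye transforms are expressed relative to the face anchor,
    // so the spheres must be children of the face node.
    faceNode.addChildNode(leftEye)
    faceNode.addChildNode(rightEye)
}

func updateEyes(for faceAnchor: ARFaceAnchor) {
    leftEye.simdTransform = faceAnchor.leftEyeTransform
    rightEye.simdTransform = faceAnchor.rightEyeTransform
}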

How do I design an Animoji 3D model used by ARKit?

I want to create an Animoji in my app. But when I contacted some designers, they didn't know how to design an Animoji-style 3D model. Where can I find a solution for reference?
The solution I can think of is to create many bones on the face of the 3D model. Then, when I get the blendShapes of an ARFaceAnchor, which contain the detailed information about the facial expression, I use them to update the bone animations for parts of the face.
Thank you for reading. Any advice is appreciated.
First, to clear the air a bit: Animoji is a product built on top of ARKit, not in any way a feature of ARKit itself. There's no simple path to "build a model in this format and it 'just works' in (or like) Animoji".
That said, there are multiple ways to use the face expression data vended by ARKit to perform 3D animation, so how you do it depends more on what you and your artist are comfortable with. And remember, for any of these you can use as many or as few of the blend shapes as you like, depending on how realistic you want the animation to be.
Skeletal animation
As you suggested, create bones corresponding to each of the blend shapes you're interested in, along with a mapping of blend shape values to bone positions. For example, you'll want to define two positions for the bone for the browOuterUpLeft parameter such that one of them corresponds to a value of 0.0 and another to a value of 1.0 and you can modulate its transform anywhere between those states. (And set up the bone influences in the mesh such that moving it between those two positions creates an effect similar to the reference design when applied to your model.)
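As a sketch of that mapping (the BonePose type and its pose values are made up; in practice they come from your rig), each blend shape weight simply interpolates its bone between the two authored positions:
import ARKit
import SceneKit
import simd

// Sketch: one driven joint per blend shape, interpolated between two authored poses.
struct BonePose {
    let bone: SCNNode                 // skeleton joint driven by this blend shape
    let restPosition: simd_float3     // authored pose for a weight of 0.0
    let fullPosition: simd_float3     // authored pose for a weight of 1.0
}

func apply(_ blendShapes: [ARFaceAnchor.BlendShapeLocation: NSNumber],
           to poses: [ARFaceAnchor.BlendShapeLocation: BonePose]) {
    for (location, pose) in poses {
        let weight = blendShapes[location]?.floatValue ?? 0
        // Linear interpolation; the mesh's bone influences turn this joint
        // motion into the visible skin deformation.
        pose.bone.simdPosition = simd_mix(pose.restPosition,
                                          pose.fullPosition,
                                          simd_float3(repeating: weight))
    }
}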
Morph target animation
Define multiple, topologically equivalent meshes, one for each blend shape parameter you're interested in. Each one should represent the target state of your character for when that blend shape's weight is 1.0 and all other blend shapes are at 0.0.
Then, at render time, set each vertex position to the weighted average of the same vertex's position in all blend shape targets. Pseudocode:
for vertex in 0..<vertexCount {
    outPosition = float4(0)
    for shape in 0..<blendShapeCount {
        outPosition += targetMeshes[shape][vertex] * blendShapeWeights[shape]
    }
}
An actual implementation of the above algorithm is more likely to be done in a vertex shader on the GPU, so the for vertex part would be implicit there — you'd just need to feed all your blend shape targets in as vertex attributes. (Or use a compute shader?)
If you're using SceneKit, you can let Apple implement the algorithm for you by feeding your blend shape target meshes to SCNMorpher.
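A sketch of what that looks like (the shapeOrder array pairing each morph target with an ARKit blend shape key is just one way you might organize your assets):
import ARKit
import SceneKit

// Sketch: one SCNMorpher target per blend shape, with weights driven by ARFaceAnchor.
let shapeOrder: [ARFaceAnchor.BlendShapeLocation] = [.jawOpen, .browOuterUpLeft /* ... */]

func makeMorpher(for characterNode: SCNNode, targets: [SCNGeometry]) -> SCNMorpher {
    let morpher = SCNMorpher()
    morpher.targets = targets                // topologically identical meshes, one per shape
    morpher.calculationMode = .normalized    // targets here are full poses (as described above), not offsets
    characterNode.morpher = morpher
    return morpher
}

func update(_ morpher: SCNMorpher, from faceAnchor: ARFaceAnchor) {
    for (index, location) in shapeOrder.enumerated() {
        let weight = faceAnchor.blendShapes[location]?.doubleValue ?? 0
        morpher.setWeight(CGFloat(weight), forTargetAt: index)
    }
}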
This is where the name "blend shape" comes from, by the way. And rumor has it the built-in ARFaceGeometry is built this way, too.
Simpler and Hybrid approaches
As you can see in Apple's sample code, you can go even simpler — breaking a face into separate pieces (nodes in SceneKit) and setting their positions or transforms based on the blend shape parameters.
You can also combine some of these approaches. For example, a cartoon character could use morph targets for skin deformation around the mouth, but have floating 2D eyebrows that animate simply through setting node positions.
Check out the 'weboji' JavaScript library on GitHub. The CG artists we hired to create the 3D models got used to the workflow in minutes. It could also be an interesting approach for avoiding proprietary formats and closed-ecosystem issues.
Screenshots are available of a 3D fox (a Three.js-based demo) and a 2D Cartman (an SVG-based demo).
There is also a demo on YouTube featuring the 2D 'Cartman'.

OpenGL ES: display a human face on iPhone

I need to turn a 2D human face into a 3D face.
I used this link to load an ".obj" file and map the textures. This example only covers a cube and a pyramid. I loaded a human face ".obj" file.
This loads the .obj file and renders the human face properly, as shown below.
But my problem is that I need to display different human faces without changing the ".obj" file, just by texture mapping.
But the texture is not getting mapped properly, because the .obj file is for a different model. I just tried changing the ".png" file used as the texture, and the result is that the texture is mapped but not exactly as I expected, as shown below.
Below are my questions:
1) I need to load textures on the same model (with the same .obj file) using different images. Is this possible in OpenGL ES?
2) If the solution to the above problem is "shape matching", how can I do it with OpenGL ES?
3) And finally, a basic question: I need to display the image in a larger area. How do I make the display area bigger?
mtl2opengl is actually my project, so thanks for using it!
1) The only way you can achieve perfect texture swapping without distortion is if both textures are mapped onto the UV vertices in exactly the same way. Have a look at the images below:
Model A: Blonde Girl
Model B: Ashley Head
As you can see, textures are made to fit the model. So any swapping to a different geometry target will result in distortion. Simplified, human heads/faces have two components: Interior (Bone/Geometry) and Exterior (Skin/Texture). The interior aspect obviously defines the exterior, so perfect texture swapping on the same .obj file will not work unless you change the geometry of the model with the swap.
2) This is possible with a technique called displacement mapping that can be implemented in OpenGL ES, although with anticipated difficulty for multiple heads/faces. This would require your target .obj geometry to start with a pretty generic model, like a mannequin, and then use each texture to shift the position of the model vertices. I think you need to be very comfortable with Modeling, Graphics, Shaders, and Math to pull this one off!
(Displacement mapping illustration via Wikipedia.)
3) I will add more transform options (scale & translate) in the next update. The Xcode project was actually made to show off the Perl script, not as a primer for OpenGL ES on iOS. For now, find the modelViewMatrix and fiddle with this little bit:
_modelViewMatrix = GLKMatrix4Scale(_modelViewMatrix, 0.30, 0.33, 0.30);
Hope that answers all your questions!

How to texture a cylinder in XNA4 with multiple textures?

Basically, I'm trying to cover a slot machine reel (white cylinder model) with multiple evenly spaced textures around the exterior. The program will be Windows only and the textures will be dynamically loaded at run-time instead of using the content pipeline. (Windows based multi-screen setup with XNA from the Microsoft example)
Most of the examples I can find online are for XNA3 and are still seemingly gibberish to me at this point.
So I'm looking for any help someone can provide on the subject of in-game texturing of objects like cylinders with multiple textures.
Maybe there is a good book out there that can properly describe how texturing works in XNA (4.0 specifically)?
Thanks
You have a few options. It depends on two things: whether the model is loaded or generated at runtime, and whether your multiple textures get combined into one or kept individual.
If you have art skills or know an artist, probably the easiest approach is to get them to texture map the cylinder with as many textures as you want (multiple materials). You'd want your Model to have one mesh (ModelMesh) and one material (ModelMeshPart) per texture required. This is assuming the cylinders always have a fixed number of textures! Then, to swap the textures at runtime, you'd iterate through the ModelMesh.Effects collection, cast each one to a BasicEffect and set its Texture property.
If you can't modify the model, you'll have to generate it. There's an example of this on the AppHub site: http://create.msdn.com/en-US/education/catalog/sample/primitives_3d. It probably does not generate texture coordinates so you'd need to add them. If you wanted 5 images per cylinder, you should make sure the number of segments is a multiple of 5 and the V coordinate should go from 0 to 1, 5 times as it wraps around the cylinder. To keep your textures individual with this technique, you'd need to draw the cylinder in 5 chunks, each time setting the GraphicsDevice.Textures[0] to your current texture.
With both techniques it would be possible to draw the cylinder in a single draw call, but you'd need to merge your textures into a single one using Texture2D.GetData and Texture2D.SetData. This is going to be more efficient, but really isn't worth the trouble. Well, not unless you're making some kind of crazy slot-machine particle system anyway.

How can I animate an FBX model by changing frames in XNA?

I am simulating a tennis game, and I have a tennis player model that I made with 3ds Max. It has frames for each movement, and I exported it as an FBX model, but I don't know how to use these frames to animate the player inside the XNA code.
Thanks.
The Skinned Model Sample and the related educational material show how to do character animation in XNA.
The main trick is "rigging" the model for animation using a set of bones and then specifying animations in terms of those bones. This way the animations can be bound to any other model that has the same bone structure, and animations can be blended (say, swing the racket and run at the same time).
Be warned, this is a fairly advanced technique, as it requires quite a lot of work from the programmer and the artist to get right.
