I have been able to convert a 3D mesh from Maya into voxel art (it looks like a bunch of cubes, similar to Legos), all done in Maya. I plan on using the 3D art to wrap around my 2D textures to make it 2.5D. My question is: does the mesh being voxelized allow me to use the pieces as particles that I can put into a particle engine in XNA to create awesome dynamic effects?
No, because what you get is a set of vertices and indices defining triangles, with no information about the cubes.
But you can write an algorithm that extracts that information from the model. It's a bit of work, but it's feasible.
I'd do it by creating a 3D grid and, for each face of the grid, casting rays toward the opposite face, recording every collision with the mesh. Each ray should produce an even number of collisions (0, 2, 4, ...), and the space between each pair of collision points is solid volume.
That way the mesh can be converted to voxels. At each collision it would also be useful to store the bones related to the triangle that was hit, so that you can animate the voxel model later.
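To illustrate the idea (a Swift-style sketch only; voxelize and hitDistances are made-up names, and the hit distances are assumed to be sorted and measured in grid-cell units):

func voxelize(gridSize: Int,
              hitDistances: (_ column: SIMD2<Int>) -> [Float]) -> [[[Bool]]] {
    // One ray per (x, y) column of the grid, cast along z through the mesh.
    var solid = Array(repeating: Array(repeating: Array(repeating: false, count: gridSize),
                                       count: gridSize),
                      count: gridSize)
    for x in 0..<gridSize {
        for y in 0..<gridSize {
            let hits = hitDistances(SIMD2(x, y))   // even number of sorted entry/exit distances
            for pair in stride(from: 0, to: hits.count - 1, by: 2) {
                // Everything between an entry hit and the following exit hit is inside the mesh.
                for z in Int(hits[pair])...Int(hits[pair + 1]) where z >= 0 && z < gridSize {
                    solid[x][y][z] = true
                }
            }
        }
    }
    return solid
}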
I'm using Python Kivy to render meshes with OpenGL onto a canvas. I want to return vertex data from the fragment shader so I can build a collider (to use with my CPU-side event listeners, after doing the projection and model-view transforms). I can replicate the matrix multiplications on the CPU (I guess that's the easy way out), but then I would have to do the same calculations twice (not good).
The only way I can think of doing this (after some browsing) is to imprint an object ID into the alpha channel of my rendered mesh (it wouldn't affect the output much if I kept the encoded values near 1.0 for alpha), and then create some kind of 'color picker' on the CPU side to decode it (I'm guessing that's not hard to do with Kivy).
Does anyone have a better idea for dealing with this, or a better approach?
The first question here is: do you need collision for picking, or for physics simulation?
If it is for physics: you almost never want the same mesh for rendering and for physics collisions. Typically, you use a very rough approximation for the physics shape, nearly always a convex shape, or a union of convex shapes. (Colliding arbitrary concave meshes is something that no physics engine can do well, and if they attempt it at all, performance will be poor.)
If it is for the purpose of picking an object with a mouse-click: you can go two different ways for this:
You replicate the geometry on the CPU and use the mouse location plus the camera view to create a ray, then intersect that ray with the geometry to see what is hit first.
After rendering your scene, you read back a single pixel from the depth buffer. (The pixel that your mouse is over.) With the depth value you get back, plus camera info, you can reconstruct a corresponding 3D position in your world. Once you have a 3D location, you can query your world to see which object is the closest to this point, and you will have your hit.
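A sketch of that depth-readback math (function and parameter names are my own; it assumes a depth buffer in [0, 1] mapped to OpenGL-style [-1, 1] clip space, so adjust for your API):

import simd

// Reconstruct a world-space position from the pixel under the mouse and its depth sample.
func worldPosition(pixelX: Float, pixelY: Float,
                   depth: Float,                       // value read back from the depth buffer
                   viewportSize: SIMD2<Float>,
                   inverseViewProjection: simd_float4x4) -> SIMD3<Float> {
    // Pixel coordinates and depth -> normalized device coordinates.
    let ndc = SIMD4<Float>((pixelX / viewportSize.x) * 2 - 1,
                           1 - (pixelY / viewportSize.y) * 2,   // flip y: screen y grows downward
                           depth * 2 - 1,
                           1)
    // Unproject and divide by w to undo the perspective projection.
    let world = inverseViewProjection * ndc
    return SIMD3<Float>(world.x, world.y, world.z) / world.w
}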
I want to create an Animoji in my app. But when I contacted some designers, they didn't know how to design an Animoji 3D model. Where can I find a solution for reference?
The solution I can think of is to create many bones on the face of the 3D model; when I get the blendShapes of the ARFaceAnchor, which contain detailed information about the facial expression, I use them to update the bone animations of parts of the face.
Thank you for reading. Any advice is appreciated.
First, to clear the air a bit: Animoji is a product built on top of ARKit, not in any way a feature of ARKit itself. There's no simple path to "build a model in this format and it 'just works' in (or like) Animoji".
That said, there are multiple ways to use the face expression data vended by ARKit to perform 3D animation, so how you do it depends more on what you and your artist are comfortable with. And remember, for any of these you can use as many or as few of the blend shapes as you like, depending on how realistic you want the animation to be.
Skeletal animation
As you suggested, create bones corresponding to each of the blend shapes you're interested in, along with a mapping of blend shape values to bone positions. For example, you'll want to define two positions for the bone for the browOuterUpLeft parameter such that one of them corresponds to a value of 0.0 and another to a value of 1.0 and you can modulate its transform anywhere between those states. (And set up the bone influences in the mesh such that moving it between those two positions creates an effect similar to the reference design when applied to your model.)
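To make that concrete, here is a minimal Swift sketch (assuming iOS SceneKit; the bone node and the two reference positions are placeholders you would define for your own rig):

import ARKit
import SceneKit

// Blend one bone between its weight-0 and weight-1 positions using the browOuterUpLeft
// coefficient from the current face anchor. The same pattern applies to any other blend shape.
func updateBrowBone(_ browBone: SCNNode,
                    neutralPosition: SCNVector3,   // pose for a blend shape value of 0.0
                    raisedPosition: SCNVector3,    // pose for a blend shape value of 1.0
                    faceAnchor: ARFaceAnchor) {
    let w = faceAnchor.blendShapes[.browOuterUpLeft]?.floatValue ?? 0
    browBone.position = SCNVector3(
        neutralPosition.x + (raisedPosition.x - neutralPosition.x) * w,
        neutralPosition.y + (raisedPosition.y - neutralPosition.y) * w,
        neutralPosition.z + (raisedPosition.z - neutralPosition.z) * w
    )
}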
Morph target animation
Define multiple, topologically equivalent meshes, one for each blend shape parameter you're interested in. Each one should represent the target state of your character for when that blend shape's weight is 1.0 and all other blend shapes are at 0.0.
Then, at render time, set each vertex position to the weighted average of the same vertex's position in all blend shape targets. Pseudocode:
for vertex in 0..<vertexCount {
    var outPosition = simd_float4(repeating: 0)   // accumulate weighted contributions from every target
    for shape in 0..<blendShapeCount {
        outPosition += targetMeshes[shape][vertex] * blendShapeWeights[shape]
    }
    outputPositions[vertex] = outPosition          // write the blended position back out
}
An actual implementation of the above algorithm is more likely to be done in a vertex shader on the GPU, so the for vertex part would be implicit there — you'd just need to feed all your blend shape targets in as vertex attributes. (Or use a compute shader?)
If you're using SceneKit, you can let Apple implement the algorithm for you by feeding your blend shape target meshes to SCNMorpher.
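For example, a minimal sketch of that route (the target meshes, the character node, and the face anchor are placeholders for your own assets and delegate callback):

import SceneKit
import ARKit

// One SCNGeometry per blend shape target; SceneKit interpolates the vertex positions for you.
let morpher = SCNMorpher()
morpher.targets = [browOuterUpLeftMesh, jawOpenMesh]
characterNode.morpher = morpher

// Then, whenever ARKit delivers a new face anchor (e.g. in renderer(_:didUpdate:for:)):
morpher.setWeight(CGFloat(faceAnchor.blendShapes[.browOuterUpLeft]?.floatValue ?? 0), forTargetAt: 0)
morpher.setWeight(CGFloat(faceAnchor.blendShapes[.jawOpen]?.floatValue ?? 0), forTargetAt: 1)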
This is where the name "blend shape" comes from, by the way. And rumor has it the built-in ARFaceGeometry is built this way, too.
Simpler and hybrid approaches
As you can see in Apple's sample code, you can go even simpler — breaking a face into separate pieces (nodes in SceneKit) and setting their positions or transforms based on the blend shape parameters.
You can also combine some of these approaches. For example, a cartoon character could use morph targets for skin deformation around the mouth, but have floating 2D eyebrows that animate simply through setting node positions.
Check out the 'weboji' JavaScript library on GitHub. The CG artists we hired to create the 3D models got used to the workflow in minutes. It could also be an interesting approach for avoiding proprietary formats and closed-ecosystem issues.
Screenshots show a 3D fox (Three.js-based demo) and a 2D Cartman (SVG-based demo).
There is also a demo on YouTube featuring the 2D 'Cartman'.
I've been trying without success to extract face features, for instance the mouth, from ARSCNFaceGeometry in order to change their color or add a different material.
I understand I need to create an SCNGeometry, for which I have the SCNGeometrySource, but I haven't been able to create the SCNGeometryElement.
I have tried creating it from the ARFaceAnchor in update(from faceGeometry: ARFaceGeometry), but so far I have been unable to.
I would really appreciate someone's help.
ARSCNFaceGeometry is a single mesh. If you want different areas of it to be different colors, your best bet is to apply a texture map (which you do in SceneKit by providing images for material property contents).
There’s no semantic information associated with the vertices in the mesh — that is, there’s nothing that says “this point is the tip of the nose, these points are the edge of the upper lip, etc”. But the mesh is topologically stable, so if you create a texture image that adds a bit of color around the lips or a lightning bolt over the eye or whatever, it’ll stay there as the face moves around.
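A minimal sketch of that texture-map setup (assuming an ARSCNView named sceneView and an image asset named "faceTexture"; both names are placeholders):

import ARKit
import SceneKit
import UIKit

// Build the face mesh geometry, paint the whole thing with your texture via the material's
// diffuse contents, and keep the mesh in sync as the face moves.
func makeTexturedFaceNode(in sceneView: ARSCNView) -> SCNNode? {
    guard let device = sceneView.device,
          let faceGeometry = ARSCNFaceGeometry(device: device) else { return nil }
    faceGeometry.firstMaterial?.diffuse.contents = UIImage(named: "faceTexture")
    return SCNNode(geometry: faceGeometry)
}

// In ARSCNViewDelegate's renderer(_:didUpdate:for:), keep the geometry tracking the face:
// if let faceAnchor = anchor as? ARFaceAnchor { faceGeometry.update(from: faceAnchor.geometry) }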
If you need help getting started on painting a texture, there are a couple of things you could try:
Create a dummy texture first
Make a square image and fill it with a double gradient, such that the red and blue component for each pixel is based on the x and y coordinate of that pixel. Or some other distinctive pattern. Apply that texture to the model, and see how it looks — the landmarks in the texture will guide you where to paint.
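A quick UIKit sketch of such a double-gradient image (the size and the per-pixel fill are arbitrary; it's only a debugging aid):

import UIKit

// Red encodes the x coordinate and blue encodes the y coordinate, so any landmark you see on
// the rendered face tells you where to paint in the texture.
func makeUVDebugTexture(size: Int = 256) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: CGSize(width: size, height: size))
    return renderer.image { context in
        for y in 0..<size {
            for x in 0..<size {
                let r = CGFloat(x) / CGFloat(size - 1)
                let b = CGFloat(y) / CGFloat(size - 1)
                context.cgContext.setFillColor(UIColor(red: r, green: 0, blue: b, alpha: 1).cgColor)
                context.cgContext.fill(CGRect(x: x, y: y, width: 1, height: 1))
            }
        }
    }
}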
Export the model
Create a dummy ARSCNFaceGeometry using the init(blendShapes:) initializer and an empty blendShapes dictionary (you don’t need an active ARFaceTracking session for this, but you do need an iPhone X). Use SceneKit’s scene export APIs (or Model I/O) to write that model out to a 3D file of some sort (.scn, which you can process further on the Mac, or something like .obj).
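A rough sketch of that export step (here using the Metal-device initializer rather than init(blendShapes:), and an arbitrary output path; as noted above, it has to run on a face-tracking-capable device):

import ARKit
import SceneKit
import Metal

// Wrap a neutral face geometry in a scene and write it to disk so it can be textured in an
// external tool. Other extensions (.obj, .dae, ...) may also work via Model I/O.
guard let device = MTLCreateSystemDefaultDevice(),
      let faceGeometry = ARSCNFaceGeometry(device: device) else { fatalError("face geometry unavailable") }
let scene = SCNScene()
scene.rootNode.addChildNode(SCNNode(geometry: faceGeometry))

let exportURL = FileManager.default.temporaryDirectory.appendingPathComponent("face.scn")
let exported = scene.write(to: exportURL, options: nil, delegate: nil, progressHandler: nil)
print("Export succeeded:", exported)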
Import that file into your favorite 3D modeling tool (Blender, Maya, etc) and use that tool to paint a texture. Then use that texture in your app with real faces.
Actually, the above is sort of an oversimplification, even though it’s the simple answer for common cases. ARSCNFaceGeometry can actually contain up to four submeshes if you create it with the init(device:fillMesh:) initializer. But even then, those parts aren’t semantically labeled areas of the face — they’re the holes in the regular face model, flat fill-ins for the places where eyes and mouth show through.
Well, the title is quite self-explanatory. I'm asking for a way to calculate any view of a 3D object, knowing its rotation and all six views (projected onto a cube: top, bottom, front, back, ...). Is it even possible?
(answer to the first comment)
I'm asking about a way to create an arbitrary 2D projection of the 3D object from a number of 2D views, without having to create the 3D object first and then project it into 2D.
No, it is not possible. Even if you had many more views than in your case, it would not generally be possible.
The underlying problem is known in the literature as shape from silhouette or visual hull. This is the problem of finding 3D shape from multiple 2D projections, and knowing the 3D shape is a prerequisite for what you want to know (a new 2D projection).
If you google for the two concepts you will find plenty of interesting algorithms.
The quality of the approximation of 3D shape from 2D projections depends on the geometry of the original 3D shape, on the number of projections available and on the placement of the cameras that generate these projections, so success is highly dependent on your individual problem. Six views, however, are almost certainly insufficient unless you have a very specific type of 3D shape.
I'm making a game with my friend that involves randomly generating planets based on certain properties. Originally this game was all 2D, but now we've decided to enhance the purpose of planets in the game and make it 2.5D, with planets being rendered as 3D spheres in an otherwise 2D world. Now, up to this point we had a pretty good thing going with the way planets looked. We used layered textures, one for each property (water, land, atmosphere) depending on how our algorithms created the planet. This looked pretty, but the planet surfaces were largely lame and didn't vary as they were all made from the same few textures.
Now that we are going 3D, I want to create a nice planetary map which will determine the topography of the planet based on its properties to make each planet have different bodies of water, land masses, etc. I also want to draw different textures on the surface of the planet based on that map, with them blending together at the edges.
I've considered two possibilities: rendering the textures based on the map to a RenderTarget and then wrapping that RenderTarget around my sphere model, or converting the map to vertex data and writing a shader to draw the textures with the proper weight.
The problem is, I'm a novice at both RenderTargets and HLSL (as a matter of fact, I don't even know if the RenderTarget method is possible), so I feel the need for some guidance here. What would be recommended for rendering multiple textures to a sphere model based on a generated terrain map? Also, are there any suggestions for what format to create the terrain map in (it would be some sort of data structure which would represent the type of terrain at any coordinate on the planet's surface)?
I have looked at other multi-texture tutorials, but they all seem based on a pre-determined texture or set of values. I need to be able to randomly generate the terrain in-game.