Is there any point in using multiple render targets? Couldn't one just draw to one render target and store that in a texture before clearing the target and using it again?
The key thing to understand about MRT is that you're not drawing the exact same data out to all the render targets.
A pixel shader can only output four floating-point values per render target. Typically you'd use those four values to produce a colour for that pixel, but you may want to output depth data or normal data instead, so you use the same four floating-point values to represent whatever other information you need.
The advantage of MRT is that you only need to draw the scene once while outputting to several render targets: in a single render pass, one render target receives the diffuse colour data, another receives the normal data, and a third receives the depth data.
It's really a case of the RGBA values becoming other things; for example, the RGB becomes the X, Y, Z of the normal of the polygon being drawn.
There are some catches with MRT, such as all your render targets needing the same bit depth, and you do start pushing your GPU's fill rate, but overall the advantages outweigh the pitfalls.
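To make that concrete, here is a minimal sketch in WebGL 2 / GLSL ES 3.00 terms (not necessarily the API this answer was written against) of a fragment shader writing diffuse colour, normal, and depth to three render targets in one pass; the attachment layout and names are illustrative.

    // Minimal G-buffer style fragment shader writing to three render targets at once.
    const gbufferFragmentShader = `#version 300 es
    precision highp float;

    in vec3 vNormal;       // interpolated surface normal
    in vec3 vDiffuse;      // interpolated diffuse colour
    in float vViewDepth;   // linear view-space depth

    layout(location = 0) out vec4 outDiffuse; // colour attachment 0
    layout(location = 1) out vec4 outNormal;  // colour attachment 1
    layout(location = 2) out vec4 outDepth;   // colour attachment 2

    void main() {
      outDiffuse = vec4(vDiffuse, 1.0);
      // The RGB here is really the X, Y, Z of the normal, remapped to [0, 1].
      outNormal = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
      outDepth = vec4(vViewDepth, 0.0, 0.0, 1.0);
    }`;

    // Tell WebGL which framebuffer attachments the shader outputs map to.
    function enableMrtOutputs(gl: WebGL2RenderingContext): void {
      gl.drawBuffers([
        gl.COLOR_ATTACHMENT0, // diffuse
        gl.COLOR_ATTACHMENT1, // normal
        gl.COLOR_ATTACHMENT2, // depth
      ]);
    }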
Multiple render targets are used when you want to render one piece of geometry and collect different outputs into separate render targets.
Today this technology is primarily used to implement deferred shading. In deferred shading the multiple render targets store lighting information, such as surface normal, specular color and specular exponent, as well as depth and diffuse color information. The set of combined render targets is referred to as a G-Buffer.
See 6800 Leagues Under the Sea (Hargreaves & Harris, 2004) for a primer on deferred shading and G-buffers.
Related
I have a glTF face-material question: my source model contains a mesh body with two different face materials. How can I export the different face materials of one source mesh body into a single glTF mesh?
Example:
The source model is a cube whose faces are colored red and blue by the material.
The glTF file: a cube with two primitives under one glTF mesh. One primitive has its material color set to red; the other has its material color set to blue. Is this a correct and good solution? Is there a better way?
A better way would be to use a (very small) texture map to indicate the different face colors.
Each glTF material corresponds to the shader program, texture stack, and material settings used in one draw call. If you ask for different settings, for example different baseColorFactor values on some of the polygons, then a typical rendering engine will need to make more than one draw call to get the whole mesh rendered.
In your cube example, this means a typical rendering engine would likely set some uniform value to red, then load just the vertices of the red faces of the cube into the graphics pipeline, and then run the whole pipeline. Wait for the vertex shader to do its thing, the rasterizer cuts up triangles into fragments, the fragment shader colors all the fragments red, and pixels come out the other side. Then, once that whole pipeline is flushed clean of data and reset, only then will the uniform value be modified from red to blue. Now, load the vertices for the blue faces, and restart the whole pipeline from the beginning. Ouch. Too slow.
Instead, use a small texture map with a red area and a blue area. The cube faces don't share normals and some don't share UVs, so there will be 24 verts on the cube. The rendering engine loads the shader program, loads the texture map, and then dumps all 24 verts into the graphics pipeline and lets it rip. The whole mesh can be done with a single material in this manner, using a single glTF primitive and resulting in a single draw call.
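For concreteness, here is a trimmed sketch of what that single-primitive mesh could look like, written as a TypeScript object rather than raw glTF JSON. Accessors, buffer views, and the image itself are omitted, and the indices and file name are purely illustrative.

    // Sketch only: one mesh, one primitive, one material, one tiny texture.
    const gltfSketch = {
      meshes: [{
        name: "Cube",
        primitives: [{
          // The red/blue split comes from UVs pointing at different areas
          // of the small texture, not from multiple materials.
          attributes: { POSITION: 0, NORMAL: 1, TEXCOORD_0: 2 },
          indices: 3,
          material: 0,
        }],
      }],
      materials: [{
        name: "FaceColors",
        pbrMetallicRoughness: {
          baseColorTexture: { index: 0 }, // the small red/blue texture map
        },
      }],
      textures: [{ source: 0 }],
      images: [{ uri: "face-colors.png" }], // e.g. a 2x1 texel red/blue image
    };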
This much effort to optimize away a single draw call isn't worth it in a vacuum, but if this is indicative of a pattern that is repeated often in the model, or the mesh or model is instanced a lot of times, the optimization becomes worthwhile. If each animation frame climbs into the thousands of draw calls, some users on some systems may start to notice the lag.
I want to create an Animoji in my app, but when I contacted some designers they didn't know how to design an Animoji-style 3D model. Where can I find a solution for reference?
The solution I can think of is to create many bones on the face of the 3D model. Then, when I get the blendShapes of the ARFaceAnchor, which contain the details of the facial expression, I use them to update the bone animations for the corresponding parts of the face.
Thank you for reading. Any advice is appreciated.
First, to clear the air a bit: Animoji is a product built on top of ARKit, not in any way a feature of ARKit itself. There's no simple path to "build a model in this format and it 'just works' in (or like) Animoji".
That said, there are multiple ways to use the face expression data vended by ARKit to perform 3D animation, so how you do it depends more on what you and your artist are comfortable with. And remember, for any of these you can use as many or as few of the blend shapes as you like, depending on how realistic you want the animation to be.
Skeletal animation
As you suggested, create bones corresponding to each of the blend shapes you're interested in, along with a mapping of blend shape values to bone positions. For example, for the browOuterUpLeft parameter you'll want to define two positions for the bone, one corresponding to a value of 0.0 and the other to a value of 1.0, so you can modulate its transform anywhere between those states. (And set up the bone influences in the mesh such that moving it between those two positions creates an effect similar to the reference design when applied to your model.)
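As a rough illustration of that mapping (framework-agnostic TypeScript rather than the Swift/SceneKit you'd actually use; the pose values and names are made up):

    type Vec3 = { x: number; y: number; z: number };

    // Linear interpolation between two authored bone positions.
    function lerp(a: Vec3, b: Vec3, t: number): Vec3 {
      return {
        x: a.x + (b.x - a.x) * t,
        y: a.y + (b.y - a.y) * t,
        z: a.z + (b.z - a.z) * t,
      };
    }

    // Artist-authored positions of the brow bone at weight 0.0 and weight 1.0.
    const browBoneAtZero: Vec3 = { x: 0.03, y: 0.11, z: 0.05 };
    const browBoneAtOne: Vec3 = { x: 0.03, y: 0.13, z: 0.05 };

    // Called every frame with the browOuterUpLeft value (0.0 ... 1.0)
    // read from the face anchor's blend shapes.
    function browBonePosition(browOuterUpLeft: number): Vec3 {
      return lerp(browBoneAtZero, browBoneAtOne, browOuterUpLeft);
    }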
Morph target animation
Define multiple, topologically equivalent meshes, one for each blend shape parameter you're interested in. Each one should represent the target state of your character for when that blend shape's weight is 1.0 and all other blend shapes are at 0.0.
Then, at render time, set each vertex position to the weighted average of the same vertex's position in all blend shape targets. Pseudocode:
    for vertex in 0..<vertexCount {
        var outPosition = float4(0)
        for shape in 0..<blendShapeCount {
            // weight this target's copy of the vertex by its blend shape weight
            outPosition += targetMeshes[shape][vertex] * blendShapeWeights[shape]
        }
        outputPositions[vertex] = outPosition
    }
An actual implementation of the above algorithm is more likely to be done in a vertex shader on the GPU, so the for vertex part would be implicit there — you'd just need to feed all your blend shape targets in as vertex attributes. (Or use a compute shader?)
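As a rough sketch of that idea (shown here in GLSL for brevity rather than the Metal or SceneKit shader modifiers you'd more likely use with ARKit), with just two targets and illustrative names:

    // Vertex shader version of the pseudocode above: the per-vertex loop is
    // implicit, the targets arrive as attributes and the weights as uniforms.
    const morphVertexShader = `
    attribute vec3 targetPosition0;  // this vertex's position in blend shape target 0
    attribute vec3 targetPosition1;  // this vertex's position in blend shape target 1

    uniform float weight0;           // blend shape weight for target 0
    uniform float weight1;           // blend shape weight for target 1
    uniform mat4 modelViewProjection;

    void main() {
      // same weighted sum as the pseudocode, evaluated per vertex on the GPU
      vec3 blended = weight0 * targetPosition0 + weight1 * targetPosition1;
      gl_Position = modelViewProjection * vec4(blended, 1.0);
    }`;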
If you're using SceneKit, you can let Apple implement the algorithm for you by feeding your blend shape target meshes to SCNMorpher.
This is where the name "blend shape" comes from, by the way. And rumor has it the built-in ARFaceGeometry is built this way, too.
Simpler and hybrid approaches
As you can see in Apple's sample code, you can go even simpler — breaking a face into separate pieces (nodes in SceneKit) and setting their positions or transforms based on the blend shape parameters.
You can also combine some of these approaches. For example, a cartoon character could use morph targets for skin deformation around the mouth, but have floating 2D eyebrows that animate simply through setting node positions.
Check out the 'weboji' JavaScript library on GitHub. The CG artists we hired to create the 3D models got used to the workflow in minutes. It could also be an interesting approach for avoiding proprietary formats and closed-ecosystem issues.
There are screenshots of a 3D fox (a Three.js-based demo) and a 2D Cartman (an SVG-based demo), and a demo on YouTube featuring the 2D 'Cartman'.
I'm trying to recover some vertex data from the vertex shader, but I haven't found any relevant information about this on the internet.
I'm using the vertex shader to calculate my vertex positions on the GPU, but I need the results for my application logic in JavaScript. Is there a way to do this without also calculating them in JavaScript?
In WebGL 2 you can use transform feedback (as Pauli suggests) and read the data back with getBufferSubData. Ideally, though, if you're just going to use the data in another draw call you should not read it back at all, as readbacks are slow.
Transform feedback simply means your vertex shader can write its output to a buffer.
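A minimal sketch of that path (input attribute setup and shader compilation are omitted; the varying name, sizes, and the assumption of one vec4 output per vertex are illustrative):

    // Capture the vertex shader's output into a buffer via transform feedback,
    // then (optionally, and slowly) read it back into JavaScript.
    function captureVertexOutput(
      gl: WebGL2RenderingContext,
      program: WebGLProgram,   // linked after calling, e.g.:
                               // gl.transformFeedbackVaryings(program, ["outPosition"], gl.SEPARATE_ATTRIBS)
      vertexCount: number
    ): Float32Array {
      const resultBuffer = gl.createBuffer()!;
      gl.bindBuffer(gl.TRANSFORM_FEEDBACK_BUFFER, resultBuffer);
      gl.bufferData(gl.TRANSFORM_FEEDBACK_BUFFER, vertexCount * 4 * 4, gl.DYNAMIC_READ); // one vec4 per vertex
      gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, resultBuffer);

      gl.useProgram(program);
      gl.enable(gl.RASTERIZER_DISCARD);      // only the vertex stage is needed
      gl.beginTransformFeedback(gl.POINTS);
      gl.drawArrays(gl.POINTS, 0, vertexCount);
      gl.endTransformFeedback();
      gl.disable(gl.RASTERIZER_DISCARD);

      // Slow readback; skip this if the result is only needed by another draw call.
      const out = new Float32Array(vertexCount * 4);
      gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, out);
      return out;
    }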
In WebGL 1 you could do it by rendering your vertices to a floating-point texture attached to a framebuffer. You'd include a vertex-id attribute with each vertex, use that attribute to set gl_Position, and draw with gl.POINTS. That lets you render to each individual pixel in the output texture, effectively giving you transform feedback; the difference is that your result ends up in a texture instead of a buffer. You can kind of see a related example of that here.
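The core of that trick looks roughly like this (the float-texture and framebuffer setup is omitted; names are illustrative):

    // Each vertex positions itself over "its own" pixel of a 1-pixel-high
    // output texture, so the fragment shader writes this vertex's result there.
    const feedbackVertexShader = `
    attribute float vertexId;     // 0, 1, 2, ... one per vertex
    attribute vec3 position;      // the data to transform
    uniform mat4 transform;
    uniform float textureWidth;   // width of the output texture in pixels

    varying vec4 vResult;

    void main() {
      vResult = transform * vec4(position, 1.0);             // the value we want back
      float x = (vertexId + 0.5) / textureWidth * 2.0 - 1.0; // centre over pixel vertexId
      gl_Position = vec4(x, 0.0, 0.0, 1.0);
      gl_PointSize = 1.0;
    }`;

    const feedbackFragmentShader = `
    precision highp float;
    varying vec4 vResult;

    void main() {
      gl_FragColor = vResult;   // lands in the float texture at this vertex's pixel
    }`;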
If you don't need the values back in JavaScript then you can just use the texture you wrote to as input to future draw calls. If you do need the values back in JavaScript you'll have to first convert the values from floating point into a readable format (using a shader) and then read them out with gl.readPixels.
Transform feedback is the OpenGL way to return vertex-processing results to application code, but it is only available in WebGL 2. Transform feedback also outputs primitives rather than raw vertices, so it may not be a perfect match.
A newer alternative is image load/store and shader storage buffer objects, but I think those are missing from WebGL 2 as well.
In short, you either need to calculate the same data in JavaScript or move your application logic into shaders. If you need transformed vertex data for collision detection, you could use bounding-box testing and do vertex-level transformation only when the bounding boxes hit.
You could use multi-level bounding boxes: one big box around the whole object, then a second level that splits the object into smaller parts, such as a separate box for each disjoint body part (for instance, splitting a leg at the knee and ankle). That way JavaScript mostly transforms just a single bounding box or sphere per object each frame, only transforms the second-level boxes when objects are near each other, and does per-vertex transformation only when objects are very close to touching, as sketched below.
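A rough sketch of that two-level test (the types, fields, and the expensive per-vertex check are illustrative):

    interface Sphere { center: [number, number, number]; radius: number; }

    interface Collidable {
      boundingSphere: Sphere;                    // level 1: whole object
      partSpheres: Sphere[];                     // level 2: knee, ankle, ...
      preciseTest(other: Collidable): boolean;   // per-vertex, expensive
    }

    function spheresOverlap(a: Sphere, b: Sphere): boolean {
      const dx = a.center[0] - b.center[0];
      const dy = a.center[1] - b.center[1];
      const dz = a.center[2] - b.center[2];
      const r = a.radius + b.radius;
      return dx * dx + dy * dy + dz * dz <= r * r;
    }

    function collide(a: Collidable, b: Collidable): boolean {
      // Cheap whole-object check, done every frame.
      if (!spheresOverlap(a.boundingSphere, b.boundingSphere)) return false;
      // Second-level check, only when the objects are near each other.
      const partsTouch = a.partSpheres.some(pa =>
        b.partSpheres.some(pb => spheresOverlap(pa, pb)));
      // Per-vertex test only when parts are very close to touching.
      return partsTouch && a.preciseTest(b);
    }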
I have seen demos on WebGL that:
color rectangular surfaces
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
For a case with some rectangles with colors, and others with textures, it requires two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How to draw wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously triangles would be needed for some things, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding the glSubBuffer will allow replacing data currently in the buffer with new data.
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The question you ask is not just about WebGL, but also about OpenGL and 3D.
The most common way to interact is to set attributes at startup and to set uniforms both at startup and at run time.
In general, the answer to all of your questions is "use an engine".
Think of it like this: you have JavaScript, a CPU-based language; then you have WebGL, which is essentially a JS library that allows low-level communication with the GPU (remember, low level); and then you have the shader, a GPU program you must provide, which only works with specific data.
Doing anything more than "simple" requires a tool that lets you avoid using WebGL directly (and very often avoid writing shaders directly too). That tool is what we call an engine. An engine usually bundles together some set of abilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you never have to touch the WebGL API yourself. An engine also provides fairly complicated logic to build only a single pair, or a few pairs, of shaders from a few simple engine API calls. The reason is that swapping shader programs during the run of a program is costly.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without? If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
Let's have a buffer, which we call a vertex buffer. We put various data into the vertex buffer. The data doesn't go in as individual values but as sets, and each unique piece of data in a set is called an attribute. An attribute can have whatever meaning for its vertex the vertex shader or fragment shader code decides.
If we have a buffer full of triangle data, it is possible, for example, to add an attribute that says whether a specific vertex should be textured or not, and do the texturing logic in the shader. Note that the attribute layout must be the same for every vertex, so the textured triangles take up the same space as the untextured ones. A sketch of this single-shader approach follows.
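Here is a minimal sketch of that flag-attribute idea, with one shader pair handling both coloured and textured rectangles (names are illustrative):

    // The per-vertex useTexture flag (0.0 or 1.0) selects between the flat
    // colour and the texture sample in the fragment shader.
    const vertexShaderSource = `
    attribute vec3 position;
    attribute vec4 color;
    attribute vec2 uv;
    attribute float useTexture;

    uniform mat4 modelViewProjection;

    varying vec4 vColor;
    varying vec2 vUv;
    varying float vUseTexture;

    void main() {
      vColor = color;
      vUv = uv;
      vUseTexture = useTexture;
      gl_Position = modelViewProjection * vec4(position, 1.0);
    }`;

    const fragmentShaderSource = `
    precision mediump float;

    uniform sampler2D diffuseMap;

    varying vec4 vColor;
    varying vec2 vUv;
    varying float vUseTexture;

    void main() {
      vec4 texel = texture2D(diffuseMap, vUv);
      gl_FragColor = mix(vColor, texel, vUseTexture);  // pick colour or texture
    }`;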
For a case with some rectangles with colors, and others with textures, it requires two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
Not true; even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change the program at run time:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
WebGL API function useProgram
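For example, a sketch of switching programs between draw calls (attribute, uniform, and texture setup per program is omitted; names are illustrative):

    // Every program switch has a cost, which is why engines batch draws by program.
    function drawScene(
      gl: WebGLRenderingContext,
      colorOnlyProgram: WebGLProgram, coloredVertexCount: number,
      texturedProgram: WebGLProgram, texturedVertexCount: number
    ): void {
      gl.useProgram(colorOnlyProgram);
      gl.drawArrays(gl.TRIANGLES, 0, coloredVertexCount);

      gl.useProgram(texturedProgram);
      gl.drawArrays(gl.TRIANGLES, 0, texturedVertexCount);
    }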
How to draw wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
WebGL has no fill-mode switch like desktop OpenGL's glPolygonMode, so there is no built-in per-draw-call wireframe toggle. The usual approach is to keep a second index buffer containing the edges and draw the same vertices with gl.LINES when you want wireframe; it is also possible to write a shader that produces a wireframe effect itself (for example using barycentric-coordinate attributes) and control it with a flag (uniform or attribute based), which combines fine with textures.
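A sketch of the second-index-buffer approach (buffer creation and counts are illustrative):

    // Draw the same vertex buffer either filled (triangle indices) or as a
    // wireframe (edge indices with gl.LINES), toggled by a flag.
    function drawWithOptionalWireframe(
      gl: WebGLRenderingContext,
      triangleIndexBuffer: WebGLBuffer, triangleIndexCount: number,
      edgeIndexBuffer: WebGLBuffer, edgeIndexCount: number,
      wireframe: boolean
    ): void {
      if (wireframe) {
        gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, edgeIndexBuffer);
        gl.drawElements(gl.LINES, edgeIndexCount, gl.UNSIGNED_SHORT, 0);
      } else {
        gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);
        gl.drawElements(gl.TRIANGLES, triangleIndexCount, gl.UNSIGNED_SHORT, 0);
      }
    }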
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously triangles would be needed for some things, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
WebGL supports only points, lines, and triangles; there is no quad primitive (OpenGL ES, on which WebGL is based, dropped quads, since GPUs rasterize triangles anyway). If your data naturally comes as rectangles, you can generate the triangle index buffer once with a simple loop, as sketched below.
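A sketch of building triangle indices from quads stored four vertices at a time (sizes are illustrative):

    // Two triangles per quad; only the index buffer knows about triangles,
    // the vertex data can stay as 4 vertices per rectangle.
    function quadIndicesToTriangles(quadCount: number): Uint16Array {
      const indices = new Uint16Array(quadCount * 6);
      for (let q = 0; q < quadCount; q++) {
        const v = q * 4; // first vertex of this quad
        indices.set([v, v + 1, v + 2,    // first triangle
                     v, v + 2, v + 3],   // second triangle
                    q * 6);
      }
      return indices;
    }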
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding the glSubBuffer will allow replacing data currently in the buffer with new data.
The WebGL name for this is gl.bufferSubData, and it replaces a range of data in an existing buffer. Updating buffer data at run time has a cost and can slow a program down if done carelessly, so create frequently updated buffers with the gl.DYNAMIC_DRAW usage hint and update them as sparingly as you can.
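For example, a sketch of updating just the colour data of an existing buffer (the buffer and offset are illustrative and assumed to have been created with gl.DYNAMIC_DRAW):

    // Overwrite part of an existing buffer without reallocating it.
    function updateRectangleColors(
      gl: WebGLRenderingContext,
      colorBuffer: WebGLBuffer,
      byteOffset: number,
      newColors: Float32Array
    ): void {
      gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
      gl.bufferSubData(gl.ARRAY_BUFFER, byteOffset, newColors);
    }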
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
Yes; this is called a scene graph. It is widely used and can also be combined with other techniques such as display lists.
I'm trying to use XNA 4.0 to render a dense point cloud from a Kinect. The only way I know is to render each point as a triangle primitive. It works fine for a small set of points; however, the maximum number of primitives I can draw in one call is 65535, and I want to draw a dense 640x480 depth image. Any suggestions on how to do this? Thanks!
You are targeting the Reach profile; change your project settings to HiDef instead, and you will be able to draw 1,048,575 primitives per call.
Is there a reason you want to draw the entire point cloud in one call? Populate a dynamic buffer with as many points as you can fit, render it, then populate it with the next batch and render again, etc. It's not quite as efficient as a single draw call, but 640x480 points is still only 5 batches of 65535, which is by no means excessive.
You might also want to look into hardware instancing, which will still run into the same problem, but which is more efficient for rendering large numbers of identical objects.