Is there any way to ask Metal to give us vertices per instance?

Is there any way to ask Metal to give us a varying number of vertices per instance?
I am drawing Bézier lines. For that I want to change the number of vertices for each Bézier line.
Is there any way to do that?

There isn't a way to change the number of vertices per instance in Metal (or in any other API, AFAIK).
The main benefit of instancing is that it allows you to draw many instances of the same mesh with a single draw call. This lowers the CPU overhead and the size of the command buffer. However, it's not for drawing many different meshes with a single draw call.
Instead, you can use a new feature in Metal, available in iOS 12 and macOS 10.14, to add many draw calls (each with a different number of vertices) to an indirect command buffer. Executing this buffer requires only a single call, so it has the same performance benefits as instancing but is more flexible.
If you're targeting earlier OSes, you can build a series of MTLDrawPrimitivesIndirectArguments structures into a Metal buffer and call -[MTLRenderCommandEncoder drawPrimitives:indirectBuffer:indirectBufferOffset:] for each. This adds a draw call per object, so it's not as fast as instancing or indirect command buffers, but it allows you to do some interesting things (like building a list of draw calls on the GPU with a compute kernel).

Related

For batch rendering multiple similar objects, which is more performant: drawArrays(TRIANGLE_STRIP) with "degenerate triangles", or drawArraysInstanced?

MDN states that:
Fewer, larger draw operations will generally improve performance. If
you have 1000 sprites to paint, try to do it as a single drawArrays()
or drawElements() call.
It's common to use "degenerate triangles" if you need to draw
discontinuous objects as a single drawArrays(TRIANGLE_STRIP) call.
Degenerate triangles are triangles with no area, therefore any
triangle where more than one point is in the same exact location.
These triangles are effectively skipped, which lets you start a new
triangle strip unattached to your previous one, without having to
split into multiple draw calls.
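The quoted technique can be sketched in code; the quadStrip helper and the coordinates below are made up for illustration:

```typescript
// Join two separate quads into one TRIANGLE_STRIP vertex list by
// duplicating the last vertex of the first quad and the first vertex of
// the next. The bridging triangles have zero area and are skipped.
type Vec2 = [number, number];

// 4 strip-ordered vertices for an axis-aligned quad (hypothetical helper)
function quadStrip(x: number, y: number, w: number, h: number): Vec2[] {
  return [[x, y], [x, y + h], [x + w, y], [x + w, y + h]];
}

function joinWithDegenerates(quads: Vec2[][]): Vec2[] {
  const out: Vec2[] = [];
  for (const q of quads) {
    if (out.length > 0) {
      // repeat previous last vertex, then next first vertex
      out.push(out[out.length - 1], q[0]);
    }
    out.push(...q);
  }
  return out;
}

const strip = joinWithDegenerates([quadStrip(0, 0, 1, 1), quadStrip(2, 0, 1, 1)]);
console.log(strip.length); // 4 + 2 + 4 = 10 vertices, one drawArrays call
```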
However, it is also commonly recommended that for multiple similar objects one should use instanced rendering: for WebGL2, something like drawArraysInstanced(), or for WebGL1, drawArrays with the ANGLE_instanced_arrays extension enabled.
For my purposes I need to render a large number of rectangles of the same width in a 2D plane, but with varying heights (a WebGL-powered charting application). So any recommendation particular to my use case is valuable.
Degenerate triangles are generally faster than drawArraysInstanced, but there's arguably no reason to use degenerate triangles when you can just make quads with no degenerate triangles.
While it's probably true that degenerate triangles are slightly faster than quads, you're unlikely to notice the difference. In fact, I suspect it would be difficult to create an example in WebGL that shows the difference.
To be clear, I'm suggesting manually instanced quads: if you want to draw 1000 quads, put 1000 quads in a single vertex buffer and draw them all with one draw call using either drawElements or drawArrays.
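A minimal sketch of this "manually instanced" approach, assuming 2 floats (x, y) per vertex; the grid layout of the quads is made up for illustration:

```typescript
// Pack 1000 quads (two triangles each) into one Float32Array so a single
// gl.drawArrays(gl.TRIANGLES, 0, 6000) call could draw them all.
const QUADS = 1000;
const FLOATS_PER_VERT = 2;           // x, y
const VERTS_PER_QUAD = 6;            // two triangles per quad
const data = new Float32Array(QUADS * VERTS_PER_QUAD * FLOATS_PER_VERT);

for (let i = 0; i < QUADS; i++) {
  // lay quads out on a 50-wide grid (illustrative positions only)
  const x = i % 50, y = Math.floor(i / 50), w = 1, h = 1;
  const verts = [
    x, y,      x + w, y,  x, y + h,          // first triangle
    x, y + h,  x + w, y,  x + w, y + h,      // second triangle
  ];
  data.set(verts, i * VERTS_PER_QUAD * FLOATS_PER_VERT);
}

// One upload (gl.bufferData) and one draw call would cover all 1000 quads.
console.log(data.length, data.byteLength); // 12000 48000
```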
On the other hand, instanced quads using drawArraysInstanced might be the most convenient way, depending on what you are trying to do.
If it were me, though, I'd first test without optimization, drawing 1 quad per draw call, unless I already knew I was going to draw > 1000 quads. Then I'd find some low-end hardware and see if it's too slow. Most GPU apps get fillrate bound (drawing pixels) before they get vertex bound, so even on a slow machine drawing lots of quads might be slow in a way that optimizing vertex calculation won't fix.
You might find this and/or this useful
You can take it as a given that rendering performance has been optimized by the compiler and the OpenGL core.
Static buffers
If you have buffers that are static, then there is generally an insignificant performance difference between the techniques mentioned. Different hardware (GPUs) will favor one technique over another, but there is no way to know what type of GPU you are running on.
Dynamic buffers
If, however, the buffers are dynamic, then you need to consider the transfer of data from CPU RAM to GPU RAM. This transfer is a slow point, and on most GPUs it will stall rendering while the data is moved (negating the advantages of concurrent rendering).
On average anything that can be done to reduce the size of the buffers moved will improve the performance.
2D Sprites Triangle V Triangle_Strip
At the most basic, with 2 floats per vertex (x, y for 2D sprites), you need to modify and transfer a total of 6 vertices per quad for gl.TRIANGLES (6 * 2 * b = 48 bytes per quad, where b is bytes per float (4)). If you use gl.TRIANGLE_STRIP you need to move only 4 vertices for a single quad, but for more than one you need to create degenerate triangles, each of which requires an additional 2 vertices in front and 2 vertices behind. So the size per quad is 8 * 2 * 4 = 64 bytes per quad (in practice you can drop the 2-vertex lead-in and lead-out at the start and end of the buffer).
Thus for 1000 sprites there are 12,000 doubles (64-bit JS numbers) that are converted to 32-bit floats, so the transfer is 48,000 bytes for gl.TRIANGLES. For gl.TRIANGLE_STRIP there are 16,000 doubles, for a total of 64,000 bytes transferred.
There is a clear advantage to using triangles over triangle strips in this case. This is compounded if you include additional per-vertex data (e.g. texture coords, color data, etc.).
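The arithmetic above can be sanity-checked with a short sketch (using the same assumptions: 2 floats per vertex, 4 bytes per float, 1000 sprites):

```typescript
// Per-frame bytes for 1000 two-float (x, y) sprite quads.
const SPRITES = 1000;
const BYTES_PER_FLOAT = 4;
const FLOATS_PER_VERT = 2;

// gl.TRIANGLES: 6 vertices per quad
const triangleBytes = SPRITES * 6 * FLOATS_PER_VERT * BYTES_PER_FLOAT;

// gl.TRIANGLE_STRIP: 4 vertices per quad plus 4 degenerate vertices
// (2 lead-in + 2 lead-out) bridging to the neighbouring quads
const stripBytes = SPRITES * (4 + 4) * FLOATS_PER_VERT * BYTES_PER_FLOAT;

console.log(triangleBytes, stripBytes); // 48000 64000
```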
Draw Array V Element
The situation changes when you use drawElements rather than drawArrays, as the vertices used when drawing elements are located via the index buffer (a static buffer). In this case you need only modify 4 vertices per quad (for 1000 quads, modify 8,000 doubles and transfer 32,000 bytes).
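A sketch of the static index buffer this relies on; the quadIndices helper is hypothetical, and the (0, 1, 2), (2, 1, 3) winding is just one common choice:

```typescript
// Build a static index buffer for N quads: each quad has 4 unique
// vertices and 6 indices (two triangles). Uploaded once, reused forever.
function quadIndices(quadCount: number): Uint16Array {
  const idx = new Uint16Array(quadCount * 6);
  for (let i = 0; i < quadCount; i++) {
    const v = i * 4; // first of this quad's 4 vertices
    idx.set([v, v + 1, v + 2, v + 2, v + 1, v + 3], i * 6);
  }
  return idx;
}

const indices = quadIndices(1000);
// Only the 4000 vertices (32,000 bytes) change per frame; these 6000
// indices stay put on the GPU.
console.log(indices.length); // 6000
```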
Instanced V modify verts
Now using elements we have 4 vertices per quad (modify 8 doubles, transfer 32 bytes per quad).
If each quad has a uniform scale, a rotation, and a position (x, y), then with instanced rendering each quad needs only 4 doubles in total: the position, scale, and rotation. The expansion to vertices is done by the vertex shader.
In this case we have reduced the workload (for 1000 quads) to modifying 4,000 doubles and transferring 16,000 bytes.
Thus instanced quads are the clear winner in terms of alleviating the transfer and JavaScript bottlenecks.
Instanced elements can go further: where only a position is needed, and that position fits on screen, you can place a quad using only 2 shorts (16-bit ints), reducing the workload to modifying 2,000 ints (32-bit JS numbers converted to shorts, which is much quicker than converting doubles to floats) and transferring only 4,000 bytes.
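The instanced per-frame sizes claimed above can be sketched with typed arrays, which make the byte counts explicit:

```typescript
const QUADS = 1000;

// Full per-instance transform: x, y, scale, rotation as 32-bit floats
const perInstanceFloats = new Float32Array(QUADS * 4);

// Position-only variant: x, y as 16-bit ints (screen-space coordinates)
const perInstanceShorts = new Int16Array(QUADS * 2);

console.log(perInstanceFloats.byteLength, perInstanceShorts.byteLength); // 16000 4000
```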
Conclusion
It is clear that in the best case instanced elements offer up to 16 times less work setting up and transferring quads to the GPU.
This advantage does not always hold true. It is a balance between the minimal data required per quad compared to the minimum data set per vert per quad (4 verts per quad).
Adding additional capabilities per quad will alter the balance, as will how often you modify the buffers. For example, with texture coords you may only need to set the coords once when not using instancing, but with instancing you need to transfer all the data for a quad each time anything about that quad changes (note that clever interleaving of instance data can help).
There is also the hardware to consider. Modern GPUs are much better at state changes (transfer speeds); in those cases it's the JavaScript code where you can gain any significant performance increase. Low-end GPUs are notoriously bad at state changes, and though optimal JS code is always important, reducing the data per quad is where the significant performance gains are when dealing with low-end devices.

Should I use duplicate Shader programs with different uniforms?

Let's say I wanted to draw the same mesh hundreds of times with the same shader program, but different uniforms.
I have 2 choices:
Create a single program and update all uniforms before each draw call
Create a program per draw operation and set uniforms at init time and only call gl.useProgram before each draw call
It sounds like the latter would perform better, but I haven't noticed a notable performance difference; maybe I haven't run into an edge case, or this could only be a problem on mobile devices.

How to draw different geometries in DirectX?

I want to draw many cubes and many lines.
I am dealing with many cubes, and that is OK. But what do I do if I also want to draw other shapes (not triangles)?
Do I need to create 2 vertex and index buffers, one for cubes and one for lines? If yes, would the line vertex buffer look like the one below?
Vertex vList[] =
{
    { 0.0f, 0.0f, 0.0f },
    { 1.0f, 0.0f, 0.0f }
};
And also, if yes, then in UpdatePipeline() should I check whether I want to draw a triangle or a line, and reset the Input Assembler's vertex buffer, index buffer, and primitive topology?
What I generally want is to draw particles connected by lines (but not all connected to each other). So I am going to draw cubes (I don't know how to draw spheres), and then draw lines.
There are numerous ways to draw geometry in DirectX because the right solution depends on what you are trying to do. The main limitation is that everything you draw in a single call to Draw must use the same state/shaders--commonly called 'material'. For performance reasons, you want to be drawing thousands or tens of thousands of vertices in each Draw call.
You can use some tricks to combine different materials into a single Draw, but it's easier to think of each call as a single 'material'.
Given that, there are three basic ways to draw geometry in DirectX:
Static submission. In this case you copy your vertex/index data into a vertex/index buffer and reuse it many times. This is the most efficient way to render because the data can be placed in GPU-only memory. You can use transformations, merged buffers, and other tricks to reuse the same vertex/index data. This is typically how objects are drawn in most scenes.
For an example, see the GeometricPrimitive and Model classes in DirectX Tool Kit.
Because of the data upload model of DirectX 12, you have to explicitly convert these from D3D12_HEAP_TYPE_UPLOAD to D3D12_HEAP_TYPE_DEFAULT via a LoadStaticBuffers method, but it achieves the same thing as DirectX 11's D3D11_USAGE_DEFAULT copying from a D3D11_USAGE_STAGING resource.
Dynamic submission builds up the vertex/index buffer every time it is used. This is not as efficient because the buffer itself has to reside in memory shared between the CPU and GPU, but it is a very useful way to handle cases where you are creating the geometry on the CPU every render frame. For DirectX 12 this is a D3D12_HEAP_TYPE_UPLOAD resource. See How to: Use dynamic resources for using D3D11_USAGE_DYNAMIC in DirectX 11.
For examples of this, see the SpriteBatch and PrimitiveBatch classes in DirectX Tool Kit.
Generally the most efficient way to draw a bunch of the same shape (assuming you are using the same state/shader) is to use instancing.
See the SimpleInstancing sample for DX11 and DX12.
If you are new to DirectX, you should strongly consider using DirectX 11 first instead of DirectX 12. In any case, see the DirectX Tool Kit for DX11 / DX12.

WebGL: How to interact between javascript and shaders, and how to use multiple shaders

I have seen demos on WebGL that
color rectangular surface
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
For a case with some rectangles with colors, and others with textures, it requires two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How to draw wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding the glSubBuffer will allow replacing data currently in the buffer with new data.
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The question you ask is not just about WebGL, but about OpenGL and 3D in general.
The most common way to interact is to set attributes at the start and uniforms both at the start and at run time.
In general, the answer to all of your questions is "use an engine".
Imagine it like this: you have JavaScript, a CPU-based language; then you have WebGL, which is like a library for JS that allows low-level communication with the GPU (remember, low level); and then you have the shader, which is a GPU program you must provide, but which works only with specific data.
Doing anything that is more than "simple" requires a tool that lets you skip using WebGL directly (and very often also skip writing shaders directly). That tool we call an engine. An engine usually bundles together some set of abilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you need never touch the WebGL API directly. An engine also provides the fairly complicated logic to build only a single pair, or a few pairs, of shaders from a few simple engine API calls. The reason is that swapping shader programs at run time is costly.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I
have some rectangles with texture and some without? If not, I have to
create one vertex buffer for all objects with colors, and another with
textures. Can I attach a different texture to each rectangle in a
vector?
Let's take a buffer, which we call a vertex buffer. We put various data in the vertex buffer. Data doesn't go in as individual values, but as sets. Each unique datum in a set we call an attribute. An attribute can have any meaning for its vertex that the vertex shader or fragment shader code decides.
If we have a buffer full of data for triangles, it is possible to set, for example, an attribute that says whether a specific vertex should texture the triangle or not, and do the texturing logic in the shader. Note that the data size of the attributes for each vertex must be equal (so textured and non-textured triangles consume the same space).
For a case with some rectangles with colors, and others with textures,
it requires two different shader programs. All the demos I see have
only one, but clearly more complicated programs have multiple. How do
you switch between shaders?
Not true; even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change the program on the fly:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
WebGL API function useProgram
How to draw wireframe on and off? Can it be combined with textures? In
other words, is it possible to write a shader that can turn features
like wireframe on and off with a flag, or does it take two different
calls to two different shaders?
WebGL has no wireframe fill mode (there is no equivalent of glPolygonMode); to draw wireframes you draw with line primitives (gl.LINES or gl.LINE_STRIP) using a suitable index buffer. That is independent of the shader program and can be switched with each draw call. It is also possible to write a shader that draws as wireframe and to control it with a flag (the flag can be either a uniform or an attribute).
All the demos I have seen use an index buffer with triangles. Is Quads
no longer supported in WebGL? Obviously for some things triangles
would be needed, but if I have a bunch of rectangles it would be nice
not to have to create an index of triangles.
WebGL supports only points, lines, and triangles; quads were dropped, as in OpenGL ES. Without quads, the implementation and shaders are simpler.
For all three of the above scenarios, if I want to change the points,
the color, the texture, or the transparency, am I correct in
understanding the glSubBuffer will allow replacing data currently in
the buffer with new data.
I would say it is rare to update buffer data on the fly, and doing it frequently can slow a program down a lot. glSubBuffer is not in WebGL; the equivalent is gl.bufferSubData, which replaces a range of data in an existing buffer. Use it sparingly ;)
Is it reasonable to have a single object maintaining these kinds of
objects and updating color and textures, or is this not a good design?
Yes, it is called a scene graph and is widely used; it can also be combined with other techniques such as display lists.

ios games - Is there any drawbacks on GPU side calculations?

The topic is pretty much the question. I'm trying to understand how CPU and GPU cooperation works.
I'm developing my game with cocos2d. It is a game engine, so it redraws the whole screen 60 times per second. Every node in cocos2d draws its own set of triangles. Usually you compute the vertices for the triangles after performing node transforms (from node space to world space) on the CPU side. I've found a way to do it on the GPU side with vertex shaders by passing the model-view-projection matrix as a uniform.
I see CPU time decrease by ~1 ms and GPU time increase by ~0.5 ms.
Can I consider this as a performance gain?
In other words: if something can be done on GPU side is there any reasons you shouldn't do it?
The only time you shouldn't do something on the GPU side is if you need the result (in easily accessible form) on the CPU side to further the simulation.
Take your example. Assume you have four 250 KB meshes which represent a hierarchy of body parts (a skeleton), and that you use a 4x4 matrix of floats (64 bytes) for each mesh's transformation. You could either:
Each frame, perform the mesh transformation calculations on the application side (CPU) and then upload the four meshes to the GPU. This results in about ~1,000 KB of data being sent to the GPU per frame.
When the application starts, upload the data for the 4 meshes to the GPU (this will be in a rest / identity pose). Then each frame when you make the render call, you calculate only the new matrices for each mesh (position/rotation/scale) and upload those matrices to the GPU and perform the transformation there. This results in ~256bytes being sent to the GPU per frame.
As you can see, even if the data in the example is fabricated, the main advantage is that you are minimizing the amount of data being transferred between CPU and GPU on a per frame basis.
The only time you would prefer the first option is if your application needs the results of the transformation to do some other work. The GPU is very efficient (especially at processing vertices in parallel), but it isn't easy to get information back from the GPU (and then it's usually in the form of a texture, i.e. a RenderTarget). One concrete example of this 'further work' might be performing collision checks on transformed mesh positions.
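A back-of-the-envelope sketch of the two options' per-frame upload sizes, using the figures assumed in the example above:

```typescript
// Option 1: re-upload four 250 KB meshes every frame.
// Option 2: upload the meshes once, then send only four 4x4 float matrices.
const MESHES = 4;
const MESH_BYTES = 250 * 1024;   // 250 KB per mesh
const MATRIX_BYTES = 4 * 4 * 4;  // 4x4 matrix of 4-byte floats = 64 bytes

const uploadMeshesPerFrame = MESHES * MESH_BYTES;     // ~1,000 KB per frame
const uploadMatricesPerFrame = MESHES * MATRIX_BYTES; // 256 bytes per frame

console.log(uploadMeshesPerFrame, uploadMatricesPerFrame); // 1024000 256
```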
edit
You can tell, based on how you are calling the OpenGL API, where the data is stored, to some extent*. Here is a quick run-down:
Vertex Arrays
glVertexPointer(...)
glDrawArrays(...)
This method passes an array of vertices from the CPU to the GPU each frame. The vertices are processed sequentially as they appear in the array. There is a variation of this method (glDrawElements) which lets you specify indices.
VBOs
glBindBuffer(...)
glBufferData(...)
glDrawElements(...)
VBOs allow you to store the mesh data on the GPU (see below for note). In this way, you don't need to send the mesh data to the GPU each frame, only the transformation data.
*Although we can indicate where our data is to be stored, the OpenGL specification does not actually dictate how vendors implement this. It means we can hint that our vertex data should be stored in VRAM, but ultimately it is down to the driver!
Good reference links for this stuff are:
OpenGL ref page: https://www.opengl.org/sdk/docs/man/html/start.html
OpenGL explanations: http://www.songho.ca/opengl
Java OpenGL concepts for rendering: http://www.java-gaming.org/topics/introduction-to-vertex-arrays-and-vertex-buffer-objects-opengl/24272/view.html
