Drawing multiple primitives in D3D11 with and without Index Buffers - directx

For example, if I want to draw a triangle and a square in the same scene, what is the proper technique for doing so? I know that using index buffers is faster, but for educational purposes I would like to know the technique that does not use index buffers.
Assume rudimentary knowledge of directx in your answers.
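As a minimal, hypothetical D3D11 sketch of the non-indexed approach (the Vertex layout, vertex positions and helper name below are made up; a device, a vertex buffer created from the array, shaders and an input layout are assumed to exist already): both shapes go into one vertex buffer, the square is spelled out as two triangles, and each shape gets its own Draw call.

#include <d3d11.h>
#include <DirectXMath.h>

// Hypothetical vertex layout; the input layout you create must match it.
struct Vertex { DirectX::XMFLOAT3 position; };

// Triangle (3 vertices) followed by a square built from two triangles
// (6 vertices). Note the square repeats two corners -- exactly the
// duplication an index buffer would remove.
const Vertex vertices[] =
{
    { {  0.0f,  0.5f, 0.0f } }, { {  0.5f, -0.5f, 0.0f } }, { { -0.5f, -0.5f, 0.0f } },

    { { -0.9f,  0.9f, 0.0f } }, { { -0.5f,  0.9f, 0.0f } }, { { -0.9f,  0.5f, 0.0f } },
    { { -0.5f,  0.9f, 0.0f } }, { { -0.5f,  0.5f, 0.0f } }, { { -0.9f,  0.5f, 0.0f } },
};

// Assumes 'vertexBuffer' was created from the array above and that shaders
// and an input layout are already bound on 'context'.
void DrawShapes(ID3D11DeviceContext* context, ID3D11Buffer* vertexBuffer)
{
    UINT stride = sizeof(Vertex), offset = 0;
    context->IASetVertexBuffers(0, 1, &vertexBuffer, &stride, &offset);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    context->Draw(3, 0);  // triangle: 3 vertices starting at vertex 0
    context->Draw(6, 3);  // square:   6 vertices starting at vertex 3
}

With an index buffer the square would need only four unique vertices plus six indices; the non-indexed version simply duplicates the shared corners.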

Related

How to draw different geometries in DirectX?

I want to draw many cubes and many lines.
I am dealing with many cubes, and that is OK. But what do I do if I also want to draw other shapes (not triangles)?
Do I need to create two sets of vertex and index buffers, one for cubes and one for lines? If so, is the line vertex buffer just like the one below?
Vertex vList[] =
{
    { 0.0f, 0.0f, 0.0f },  // line start
    { 1.0f, 0.0f, 0.0f }   // line end
};
Also, if so, should I then check in UpdatePipeline() whether I want to draw a triangle or a line, and reset the Input Assembler's vertex buffer, index buffer and primitive topology accordingly?
What I generally want is to draw particles connected by lines (but not all connected with each other). So I am going to draw cubes (I don't know how to draw a sphere), and then draw the lines.
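For illustration, the switching described above can look roughly like the sketch below. The question's UpdatePipeline() comes from a DirectX 12 tutorial, but the idea is the same in both APIs; this hypothetical helper uses D3D11 calls, and all names and counts are placeholders.

#include <d3d11.h>

// Hypothetical helper: draw the cube batch, then the line batch, rebinding
// buffers and primitive topology in between. Assumes matching shaders and
// an input layout are already bound.
void DrawCubesThenLines(ID3D11DeviceContext* context,
                        ID3D11Buffer* cubeVB, ID3D11Buffer* cubeIB, UINT cubeIndexCount,
                        ID3D11Buffer* lineVB, UINT lineVertexCount, UINT vertexStride)
{
    UINT offset = 0;

    // Cubes: indexed triangle list from their own vertex/index buffers.
    context->IASetVertexBuffers(0, 1, &cubeVB, &vertexStride, &offset);
    context->IASetIndexBuffer(cubeIB, DXGI_FORMAT_R32_UINT, 0);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
    context->DrawIndexed(cubeIndexCount, 0, 0);

    // Lines: a separate, non-indexed vertex buffer and a different topology.
    context->IASetVertexBuffers(0, 1, &lineVB, &vertexStride, &offset);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_LINELIST);
    context->Draw(lineVertexCount, 0);
}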
There are numerous ways to draw geometry in DirectX because the right solution depends on what you are trying to do. The main limitation is that everything you draw in a single call to Draw must use the same state/shaders--commonly called 'material'. For performance reasons, you want to be drawing thousands or tens of thousands of vertices in each Draw call.
You can use some tricks to combine different materials into a single Draw, but it's easier to think of each call as a single 'material'.
Given that, there are three basic ways to draw geometry in DirectX:
Static submission: In this case you copy your vertex/index data into a vertex/index buffer and reuse it many times. This is the most efficient way to render because the data can be placed in GPU-only memory. You can use transformations, merged buffers, and other tricks to reuse the same vertex/index data. This is typically how objects are drawn in most scenes.
For an example, see the GeometricPrimitive and Model classes in DirectX Tool Kit.
Because of the data upload model of DirectX 12, you have to explicitly convert these from D3D12_HEAP_TYPE_UPLOAD to D3D12_HEAP_TYPE_DEFAULT via a LoadStaticBuffers method, but it achieves the same thing as DirectX 11's D3D11_USAGE_DEFAULT copying from a D3D11_USAGE_STAGING resource.
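In D3D11 terms, the static case is just a buffer created once with its data supplied up front; a rough sketch (the helper name and parameters are my own):

#include <d3d11.h>

// Hypothetical helper: create an immutable vertex buffer whose contents are
// uploaded once at creation time and never touched by the CPU again.
HRESULT CreateStaticVertexBuffer(ID3D11Device* device,
                                 const void* vertexData, UINT byteWidth,
                                 ID3D11Buffer** outBuffer)
{
    D3D11_BUFFER_DESC desc = {};
    desc.Usage     = D3D11_USAGE_IMMUTABLE;    // GPU-only; contents never change
    desc.ByteWidth = byteWidth;
    desc.BindFlags = D3D11_BIND_VERTEX_BUFFER;

    D3D11_SUBRESOURCE_DATA initData = {};
    initData.pSysMem = vertexData;             // CPU data is copied once, at creation

    return device->CreateBuffer(&desc, &initData, outBuffer);
}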
Dynamic submission builds up the vertex/index buffer every time it is used. This is not as efficient because the buffer itself has to reside in memory shared between the CPU & GPU, but it is a very useful way to handle cases where you are creating the geometry on the CPU every render frame. For DirectX 12 this is a D3D12_HEAP_TYPE_UPLOAD resource. See "How to: Use dynamic resources" for using D3D11_USAGE_DYNAMIC in DirectX 11.
For examples of this, see the SpriteBatch and PrimitiveBatch classes in DirectX Tool Kit.
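A rough D3D11 sketch of the dynamic case (helper names are my own): the buffer is created once with CPU write access and then refilled each frame with Map/Unmap before drawing.

#include <d3d11.h>
#include <cstring>

// Hypothetical helpers for a dynamic (CPU-writable) vertex buffer.
HRESULT CreateDynamicVertexBuffer(ID3D11Device* device, UINT byteWidth,
                                  ID3D11Buffer** outBuffer)
{
    D3D11_BUFFER_DESC desc = {};
    desc.Usage          = D3D11_USAGE_DYNAMIC;      // lives in CPU/GPU-shared memory
    desc.ByteWidth      = byteWidth;
    desc.BindFlags      = D3D11_BIND_VERTEX_BUFFER;
    desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
    return device->CreateBuffer(&desc, nullptr, outBuffer);
}

// Refill the buffer with this frame's geometry; WRITE_DISCARD hands back a
// fresh region so the GPU does not stall on the old contents.
void FillDynamicVertexBuffer(ID3D11DeviceContext* context, ID3D11Buffer* buffer,
                             const void* data, UINT byteWidth)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(buffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, data, byteWidth);
        context->Unmap(buffer, 0);
    }
}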
Generally the most efficient way to draw a bunch of the same shape (assuming you are using the same state/shader) is to use instancing.
See the SimpleInstancing sample for DX11 and DX12.
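Condensed to the draw call, D3D11 instancing looks roughly like this (the two-slot layout and per-instance data are assumptions; the input layout must declare the slot-1 elements as D3D11_INPUT_PER_INSTANCE_DATA):

#include <d3d11.h>

// Hypothetical helper: one shared cube mesh in slot 0, per-instance data
// (e.g. a world transform per cube) in slot 1, one call for every cube.
void DrawCubesInstanced(ID3D11DeviceContext* context,
                        ID3D11Buffer* cubeVB, UINT cubeStride,
                        ID3D11Buffer* cubeIB, UINT cubeIndexCount,
                        ID3D11Buffer* instanceVB, UINT instanceStride,
                        UINT instanceCount)
{
    ID3D11Buffer* buffers[2] = { cubeVB, instanceVB };
    UINT strides[2] = { cubeStride, instanceStride };
    UINT offsets[2] = { 0, 0 };

    context->IASetVertexBuffers(0, 2, buffers, strides, offsets);
    context->IASetIndexBuffer(cubeIB, DXGI_FORMAT_R32_UINT, 0);
    context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    // One call draws every instance; the vertex shader reads the per-instance data.
    context->DrawIndexedInstanced(cubeIndexCount, instanceCount, 0, 0, 0);
}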
If you are new to DirectX, you should strongly consider using DirectX 11 first instead of DirectX 12. In any case, see the DirectX Tool Kit for DX11 / DX12.

Monogame - how to have draw layers while using SpriteSortMode.Texture

In my game I have to use SpriteSortMode.Texture because I have a lot of objects with few textures, so I cannot afford to use SpriteSortMode.BackToFront.
The thing is, this means I cannot draw by layers unless I call SpriteBatch.Begin again with the exact same settings for each layer, which is what I'm currently doing.
I only have 3 draw layers I need - a Tileset surface, Objects like rocks or characters on the surface, and UI.
Other solutions I've found are using textured quads (which supposedly also improves tileset drawing performance) and going 3D with an orthographic view, which I haven't researched yet.
I'm hoping there's a better way to make this work.
Why would having a lot of objects with few textures mean you have to use SpriteSortMode.Texture?
"This can improve performance when drawing non-overlapping sprites of uniform depth." says the MSDN page, and this is clearly not what you are doing.
Just use the default SpriteSortMode.Deferred and draw things back to front in order.

WebGL: How to interact between javascript and shaders, and how to use multiple shaders

I have seen WebGL demos that:
color a rectangular surface
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
For a case with some rectangles with colors, and others with textures, it requires two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How to draw wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Is Quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing the data currently in the buffer with new data?
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The question you ask is not just about WebGL, but also about OpenGL and 3D.
The most common way to interact is to set attributes at the start, and uniforms both at the start and at run time.
In general, the answer to all of your questions is "use an engine".
Think of it like this: you have JavaScript, a CPU-based language; then you have WebGL, which is a library for JS that allows low-level communication with the GPU (remember, low level); and then you have the shader, which is a GPU program you must provide, but which works only with specific data.
Doing anything more than "simple" requires a tool that lets you skip using WebGL directly (and very often writing shaders directly as well). That tool is called an engine. An engine usually bundles together some set of abilities and omits others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you never have to touch the WebGL API again. An engine also provides fairly complicated logic to build only a single pair, or a few pairs, of shaders from a few simple engine API calls, because switching shader programs while the program runs is expensive.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I
have some rectangles with texture and some without? If not, I have to
create one vertex buffer for all objects with colors, and another with
textures. Can I attach a different texture to each rectangle in a
vector?
Let's have a buffer we call a vertex buffer. We put various data in the vertex buffer. The data doesn't go in as individual values, but as sets. Each unique piece of data in a set we call an attribute. An attribute can have any meaning for its vertex that the vertex shader or fragment shader code decides.
If we have a buffer full of data for triangles, it is possible, for example, to set an attribute that says whether a specific vertex should texture the triangle or not, and to do the texturing logic in the shader. Note, though, that the data size of the attributes must be the same for every vertex (so textured triangles take up the same space as non-textured ones).
For a case with some rectangles with colors, and others with textures,
it requires two different shader programs. All the demos I see have
only one, but clearly more complicated programs have multiple. How do
you switch between shaders?
Not true; even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change programs at run time:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
WebGL API function useProgram
How to draw wireframe on and off? Can it be combined with textures? In
other words, is it possible to write a shader that can turn features
like wireframe on and off with a flag, or does it take two different
calls to two different shaders?
WebGL has no separate wireframe mode the way desktop OpenGL's glPolygonMode provides; instead you draw the same geometry with a line primitive (gl.LINES or gl.LINE_LOOP). That choice is independent of the shader program, and you can switch it with each draw call. It is also possible to write a shader that produces a wireframe look and control it with a flag (the flag can be either uniform or attribute based).
All the demos I have seen use an index buffer with triangles. Is Quads
no longer supported in WebGL? Obviously for some things triangles
would be needed, but if I have a bunch of rectangles it would be nice
not to have to create an index of triangles.
WebGL supports only points, lines and triangles; there is no quad primitive. If you have a bunch of rectangles, you still have to express them as triangles, either with an index buffer or with a triangle strip/fan.
For all three of the above scenarios, if I want to change the points,
the color, the texture, or the transparency, am I correct in
understanding the glSubBuffer will allow replacing data currently in
the buffer with new data.
I would say it is rare to update buffer data on the run, as it slows a program down a lot. The WebGL name for it is bufferSubData. Anyway, don't use it ;)
Is it reasonable to have a single object maintaining these kinds of
objects and updating color and textures, or is this not a good design?
Yes, it is called a scene graph; it is widely used and can also be combined with other techniques like display lists.

How should I optimize drawing a large, dynamic number of collections of vertices?

...or am I insane to even try?
As a novice to using bare vertices for 3d graphics, I haven't ever worked with vertex buffers and the like before. I am guessing that I should use a dynamic buffer because my game deals with manipulating, adding and deleting primitives. But how would I go about doing that?
So far I have stored my indices in a Triangle.cs class. Triangles are stored in Quads (which contain the vertices that correspond to their indices), quads are stored in blocks. In my draw method, I iterate through each block, each quad in each block, and finally each triangle, apply the appropriate texture to my effect, then call DrawUserIndexedPrimitives to draw the vertices stored in the triangle.
I'd like to use a vertex buffer because this method cannot support the scale I am going for. I am assuming it to be dynamic. Since my vertices and indices are stored in a collection of separate classes, though, can I still effectively use a buffer? Is using separate buffers for each quad silly (I'm guessing it is)? Is it feasible and effective for me to dump vertices into the buffer the first time a quad is drawn and then store where those vertices were so that I can apply that offset to that triangle's indices for successive draws? Is there a feasible way to handle removing vertices from the buffer in this scenario (perhaps event-based shifting of index offsets in triangles)?
I apologize that these questions may be either far too basic or too confusing/vague. I'd be happy to provide clarification. But as I've said, I'm new to this and I may not even know what I'm talking about...
I can't exactly tell what you're trying to do, but using a separate buffer for every quad is very silly.
The golden rule in graphics programming is batch, batch, batch. This means packing as much stuff into a single DrawUserIndexedPrimitives call as possible; your graphics card will love you for it.
In your case, put all of your vertices and indices into one vertex buffer and index buffer (you might need to use more; I have no idea how many vertices we're talking about). Whenever the user changes one of the primitives, regenerate the entire buffer. If you really have a lot of primitives, split them up into multiple buffers and only regenerate the ones you need to when the user changes something.
The most important thing is to minimize the number of DrawUserIndexedPrimitives calls; those have a lot of overhead, and you could easily make your game on the order of 20x faster.
Graphics cards are pipelines; they like being given a big chunk of data to eat away at. Giving the card one triangle at a time is like forcing a large-scale car factory to make only one car at a time, where it can't start building the next car before the last one is finished.
Anyway good luck, and feel free to ask any questions.

Are depth buffers mandatory?

I am just trying to better understand the DirectX pipeline. I am curious whether depth buffers are mandatory in order to get things working, or whether the depth buffer is just something you need if you want objects to appear behind one another.
The depth buffer is not mandatory. In a 2D game, for example, there is usually no need for it.
You need a depth buffer if you want objects to appear behind each other, but still want to be able to draw them in arbitrary order.
If you draw all triangles from back to front, and none of them intersect, then you could do without the depth buffer. However, it's generally easier to do away with depth sorting and just use the depth buffer anyway.
Depth buffers are not mandatory. They simply solve the following problem: suppose you have an object near the camera which is drawn first. Then, after that is already drawn, you want to draw an object which is far away, but at the same position on-screen as the nearby object. Without a depth buffer, it gets drawn on top, which looks wrong. With a depth buffer, it is obscured, because the GPU figures out it's behind something else that has already been drawn.
You can turn the depth buffer off and deal with that, e.g. by drawing back-to-front (which is easy in 2D games, but has other problems that depth buffering solves). Alternatively, you might actually want that overdraw as some kind of effect. But a depth buffer is by no means necessary for basic rendering.
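To make the "optional" part concrete, here is a small, hypothetical D3D11 sketch (it assumes an existing device, context and render-target view): rendering without a depth buffer just means binding no depth-stencil view and disabling the depth test.

#include <d3d11.h>

// Hypothetical helper: set up rendering with no depth buffer at all.
void BindWithoutDepth(ID3D11Device* device, ID3D11DeviceContext* context,
                      ID3D11RenderTargetView* renderTargetView)
{
    D3D11_DEPTH_STENCIL_DESC dsDesc = {};
    dsDesc.DepthEnable    = FALSE;                        // no depth test
    dsDesc.DepthWriteMask = D3D11_DEPTH_WRITE_MASK_ZERO;  // no depth writes

    ID3D11DepthStencilState* noDepth = nullptr;
    device->CreateDepthStencilState(&dsDesc, &noDepth);

    context->OMSetDepthStencilState(noDepth, 0);
    context->OMSetRenderTargets(1, &renderTargetView, nullptr); // no DSV bound

    noDepth->Release();  // the context keeps its own reference
}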

Resources