OpenGL ES - what is the best way to draw multiple objects - iOS

I am using OpenGL ES 2.0 in iOS (using GLKit) and wonder what would be the best way to draw multiple objects, say polylines:
Use a separate vertex buffer for each polyline, without an index buffer, and simply draw GL_LINE_LOOP. In this case there would be a draw call for each object.
Gather all the vertices into one buffer and prepare an index buffer. In this case there would be one draw call which will draw GL_LINES.
Any other way to do this.
Also, how would the answer change if each line has a different color (but the color does not change per vertex)?

Since we're defining “best” in terms of performance:
It should always be faster to use a single vertex buffer, as long as all your polylines have the same vertex attributes and it's straightforward to gather them into a single vertex buffer (i.e. you don't frequently need to gather and repack them). This requires fewer OpenGL ES API calls to rebind the data, since you can make a single set of glVertexAttribPointer calls per attribute and rely on the first argument of glDrawArrays, or on offset indices (as you described in your question).
Assuming each polyline has distinct vertices and vertex counts, instancing doesn't apply. I think at this point there are two good options, along a space-for-CPU-time tradeoff curve:
Pass your polyline color as a vertex attribute, but set its current value via glVertexAttrib4f rather than using actual per-vertex array data. Loop through your polylines, updating the color with glVertexAttrib4f and then calling glDrawArrays(GL_LINE_LOOP, first, count), where first and count are defined by how you packed your polylines into the single vertex buffer.
Pass your polyline color as a vertex attribute, and expand the per-vertex data to include the color, repeated for every vertex in a polyline. (This is where instancing would have helped, had you been repeating the same polyline.) Prepare the index buffer, much as you described for option 2) in your question.
Note that this proposed option 1) is similar to option 1) in your question, except that all vertex data lives in a single vertex buffer. Since both solutions now use a single vertex buffer, the choice is primarily about whether you are willing to make additional calls to glVertexAttrib4f and glDrawArrays in order to avoid allocating extra memory for repeated colors and index data. Note that changing a vertex attribute's current value and the start offset used for each draw should still be significantly cheaper than binding new vertex buffers and resetting vertex array state. A minimal sketch of option 1) follows below.
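Here is that sketch, written against WebGL 1 (which shares OpenGL ES 2.0's model). The buffer, attribute locations, and the polylines array describing how the lines were packed are all assumptions for illustration:

// Sketch of option 1): one shared vertex buffer, color set per polyline via
// the attribute's current value rather than per-vertex array data.
// sharedVertexBuffer, positionLoc, colorLoc, and polylines (an array of
// { first, count, color } records) are assumed to exist.
gl.bindBuffer(gl.ARRAY_BUFFER, sharedVertexBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0); // assuming 2D positions

// Leave the color attribute disabled as an array so draws use its
// "current value", which we update once per polyline.
gl.disableVertexAttribArray(colorLoc);

for (const line of polylines) {
  const [r, g, b, a] = line.color;
  gl.vertexAttrib4f(colorLoc, r, g, b, a);              // constant color for this line
  gl.drawArrays(gl.LINE_LOOP, line.first, line.count);  // offsets into the shared buffer
}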

Related

Multiple primitives with different color for one Mesh object (glTF 2)

I am using a mesh with multiple primitives to support a cube with different face colors. I found two possible implementations:
Duplicate the shared vertices.
From the example https://github.com/KhronosGroup/glTF-Asset-Generator/tree/master/Output/Positive/Mesh_Primitives, shared vertices are duplicated in order to create separate buffer views and vertex accessors. My understanding is that this approach increases the file size, since duplicate data is written out, but may give better drawing performance because each gl draw only loads a subset of the data.
Share the buffer view and vertex accessor across all primitives.
Alternatively, we can avoid the vertex duplication by writing all vertices into one buffer and creating a single buffer view (and hence a single vertex accessor) for all faces; as a result, each primitive is linked to the whole set of vertices in the buffer. The impact, I guess, is that each primitive has to load every vertex in the buffer, which may hurt drawing performance?
I would like to hear which approach is recommended. Thank you!
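For illustration, here is a hedged sketch of the second option in glTF terms, written as a JavaScript object literal so it can carry comments (all accessor and material indices are made-up placeholders): every face primitive references the same POSITION accessor and differs only in its indices accessor and material.

const mesh = {
  primitives: [
    // All faces share POSITION accessor 0; only indices/material differ.
    { attributes: { POSITION: 0 }, indices: 1, material: 0 }, // face 1
    { attributes: { POSITION: 0 }, indices: 2, material: 1 }, // face 2
    // ... one primitive per face, all pointing at the same vertex accessor
  ],
};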

How do I use indexed normals as an attribute? (WebGL) [duplicate]

I have some vertex data. Positions, normals, texture coordinates. I probably loaded it from a .obj file or some other format. Maybe I'm drawing a cube. But each piece of vertex data has its own index. Can I render this mesh data using OpenGL/Direct3D?
In the most general sense, no. OpenGL and Direct3D only allow one index per vertex; the index fetches from each stream of vertex data. Therefore, every unique combination of components must have its own separate index.
So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals, even though the cube will only have 8 unique positions and 6 unique normals.
Your best bet is to simply accept that your data will be larger. A great many model formats use multiple indices; you will need to fix up this vertex data before you can render with it. Many mesh-loading tools, such as Open Asset Importer, will perform this fixup for you.
It should also be noted that most meshes are not cubes. Most meshes are smooth across the vast majority of vertices, only occasionally having different normals/texture coordinates/etc. So while this often comes up for simple geometric shapes, real models rarely have substantial amounts of vertex duplication.
GL 3.x and D3D10
For D3D10/OpenGL 3.x-class hardware, it is possible to avoid performing fixup and use multiple indexed attributes directly. However, be advised that this will likely decrease rendering performance.
The following discussion will use the OpenGL terminology, but Direct3D v10 and above has equivalent functionality.
The idea is to manually access the different vertex attributes from the vertex shader. Instead of sending the vertex attributes directly, the attributes that are passed are actually the indices for that particular vertex. The vertex shader then uses the indices to access the actual attribute through one or more buffer textures.
Attributes can be stored in multiple buffer textures or all within one. If the latter is used, then the shader will need an offset to add to each index in order to find the corresponding attribute's start index in the buffer.
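As a hedged illustration of the idea (not code from the answer itself), here is what such a vertex shader might look like in desktop GL 3.x GLSL, shown as a JavaScript string for consistency with the other examples; all identifier names are assumptions:

const multiIndexVS = `#version 150
in int positionIndex;            // per-"vertex" data is really an index
in int normalIndex;              // (fed via glVertexAttribIPointer on the API side)

uniform samplerBuffer positions; // one RGB32F texel per unique position
uniform samplerBuffer normals;   // one RGB32F texel per unique normal
uniform mat4 mvp;

out vec3 vNormal;

void main() {
  // Fetch the actual attributes through the buffer textures.
  vec3 position = texelFetch(positions, positionIndex).xyz;
  vNormal       = texelFetch(normals, normalIndex).xyz;
  gl_Position   = mvp * vec4(position, 1.0);
}`;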
Regular vertex attributes can be compressed in many ways. Buffer textures have fewer means of compression, allowing only a relatively limited number of vertex formats (via the image formats they support).
Please note again that any of these techniques may decrease overall vertex processing performance. They should therefore only be used in the most memory-limited of circumstances, after all other options for compression or optimization have been exhausted.
OpenGL ES provides buffer textures as well (core in ES 3.2; ES 3.1 exposes them via the EXT_texture_buffer extension). Higher OpenGL versions allow you to read buffer objects more directly via SSBOs rather than buffer textures, which might have better performance characteristics.
I found a way to reduce this sort of repetition that runs a bit contrary to some of the statements made in the other answer (but doesn't specifically fit the question asked here). It does, however, address my question, which was marked as a duplicate of this one.
I just learned about interpolation qualifiers, specifically flat. It's my understanding that putting the flat qualifier on a vertex shader output causes only the provoking vertex to pass its value to the fragment shader.
This means for the situation described in this quote:
So if you have a cube, where each face has its own normal, you will need to replicate the position and normal data a lot. You will need 24 positions and 24 normals, even though the cube will only have 8 unique positions and 6 unique normals.
You can have 8 vertices, 6 of which carry the unique normals (the normals of the other 2 are disregarded), so long as you carefully order your primitives' indices so that the provoking vertex of each face carries the normal you want applied to the entire face.
EDIT: My understanding of how it works:
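A hedged GLSL ES 3.00 (WebGL 2) sketch of the idea, shown as JavaScript strings; identifier names are assumptions. In WebGL the provoking vertex is the last vertex of each triangle, so that is the one whose normal must be real:

const vs = `#version 300 es
in vec3 position;
in vec3 normal;        // only meaningful on the provoking (last) vertex of a face
uniform mat4 mvp;
flat out vec3 vNormal; // "flat": no interpolation; the provoking vertex's value wins

void main() {
  vNormal = normal;
  gl_Position = mvp * vec4(position, 1.0);
}`;

const fs = `#version 300 es
precision mediump float;
flat in vec3 vNormal;  // every fragment of the triangle sees the same normal
out vec4 fragColor;

void main() {
  fragColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0);
}`;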

OpenGL ES: Should I use DrawElements for a TRIANGLE_STRIP array?

I'm trying to draw a simple array of triangles. They are all connected, so I'm currently using DrawArrays and GL_TRIANGLE_STRIP. However, the Xcode profiler suggests using DrawElements and an indexed array instead.
Should I actually be doing this? I noticed that DrawElements also has an option for TRIANGLE_STRIP, but I don't see an advantage since there aren't any repeated vertices when I use glDrawArrays.
Here's a diagram of the triangles I'm drawing:
As you can see, there are no repeats since I'm using TRIANGLE_STRIP, so is there any advantage to indexing this?
Usually glDrawElements is faster, but in your case (with only two rows of vertices) it won't affect performance, and glDrawElements could even be slower because you also have to handle the index buffer.
In other cases, where you have three or more rows of vertices, you will start to have vertex repetitions, and you should use glDrawElements with an indexed vertex buffer (see the sketch after this list). The advantages of indexing are:
Not only is your 3D model smaller and consuming less memory, it is also faster to load into graphics card memory: less memory also means fewer memory transfers.
If your shaders are complex and perform many operations, indexing can improve performance: with indexed vertices there is no need to recompute the result for the same vertex multiple times. The result is computed once, cached, and reused when another index points to the same vertex.
When you have a deformable object (i.e. vertex positions change due to physical collisions), indexing helps. Without indexing, the same position is repeated for every triangle that uses it, so to move a vertex to simulate a collision you would have to update its position in every one of those triangles. With an indexed vertex buffer, you only change the position once and keep the index buffer the same.
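A hedged WebGL sketch of the contrast (buffer names and counts are assumptions): with three rows of vertices, the middle row is stored once but referenced by both the strip above and the strip below it.

// Row 0: vertices 0..2, row 1: vertices 3..5, row 2: vertices 6..8.
const indices = new Uint16Array([
  0, 3, 1, 4, 2, 5,  // strip between rows 0 and 1
  3, 6, 4, 7, 5, 8,  // strip between rows 1 and 2: reuses vertices 3, 4, 5
]);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

// Non-indexed: fine for a single strip with no repeated vertices.
gl.drawArrays(gl.TRIANGLE_STRIP, 0, vertexCount);

// Indexed: the middle row's vertices live in the vertex buffer only once.
// (Drawn here as two strips; they could also be joined with degenerate triangles.)
gl.drawElements(gl.TRIANGLE_STRIP, 6, gl.UNSIGNED_SHORT, 0);
gl.drawElements(gl.TRIANGLE_STRIP, 6, gl.UNSIGNED_SHORT, 6 * 2); // byte offset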

Is it possible to get the actual value of a vertex?

I was trying to recover some vertex data from the vertex shader, but I haven't found any relevant information about this on the internet.
I'm using the vertex shader to calculate my vertex positions on the GPU, but I need the results for the logic of my application in JavaScript. Is there a way to do this without also calculating it in JavaScript?
In WebGL2 you can use transform feedback (as Pauli suggests) and read the data back with getBufferSubData, although ideally, if you're just going to use the data in another draw call, you should not read it back, since readbacks are slow.
Transform feedback simply means your vertex shader can write its output to a buffer.
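A hedged WebGL 2 sketch of that flow (the program is assumed to declare an out varying named outPosition with vec4 output; buffer sizes and counts are assumptions):

// Declare which varyings to capture; must happen before linking.
gl.transformFeedbackVaryings(program, ['outPosition'], gl.SEPARATE_ATTRIBS);
gl.linkProgram(program);

// Buffer that will receive the vertex shader's output.
const tfBuffer = gl.createBuffer();
gl.bindBuffer(gl.TRANSFORM_FEEDBACK_BUFFER, tfBuffer);
gl.bufferData(gl.TRANSFORM_FEEDBACK_BUFFER, vertexCount * 4 * 4, gl.DYNAMIC_READ);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

gl.useProgram(program);
gl.enable(gl.RASTERIZER_DISCARD);        // we only want the vertices, no pixels
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, vertexCount);
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);

// The slow part: reading back into JavaScript. Skip it if the data is
// only consumed by later draw calls.
const results = new Float32Array(vertexCount * 4);
gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, results);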
In WebGL1 you could do it by rendering your vertices to a floating point texture attached to a framebuffer. You'd include a vertex id attribute with each vertex and use that attribute to set gl_Position. You'd draw with gl.POINTS. This lets you render to each individual pixel in the output texture, effectively giving you transform feedback, the difference being that your result ends up in a texture instead of a buffer. You can kind of see a related example of that here
If you don't need the values back in JavaScript then you can just use the texture you wrote as input to future draw calls. If you do need the values back in JavaScript you'll first have to convert the values from floating point into a readable format (using a shader) and then read them out using gl.readPixels
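A hedged fragment of that WebGL 1 readback step (framebuffer setup and the float-to-RGBA8 encoding shader are omitted; names are assumptions):

// After drawing the points into the framebuffer-attached texture and
// running the encoding pass:
gl.bindFramebuffer(gl.FRAMEBUFFER, framebuffer);
const pixels = new Uint8Array(textureWidth * textureHeight * 4);
gl.readPixels(0, 0, textureWidth, textureHeight, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
// pixels now holds the encoded vertex results, one RGBA texel per vertex.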
Transform feedback is the OpenGL way to return vertex processing results to application code, but it is only available in WebGL 2. Transform feedback also outputs primitives rather than vertices, making it unlikely to be a perfect match.
A newer alternative is image load/store and shader storage buffer objects, but I think those are missing from WebGL 2 as well.
In short, you either need to calculate the same data in JavaScript or move your application logic to shaders. If you need transformed vertex data for collision detection, you could use bounding-box testing and do vertex-level transformation only when the bounding boxes hit.
You could use multi-level bounding boxes: one big box around the whole object, then a second level that splits the object into smaller parts, like a separate box for each disjoint part of the body (for instance, splitting the legs at the knee and the ankle). That way JavaScript mostly transforms only a single bounding box/sphere per object per frame, transforms the second-level boxes only when objects are near, and does per-vertex transformation only when objects are very close to touching.
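A hedged sketch of that two-level test (spheresOverlap, boxesOverlap, and the object layout are all assumed helpers, not real APIs):

function mightCollide(a, b) {
  // Level 1: one cheap test per object pair.
  if (!spheresOverlap(a.boundingSphere, b.boundingSphere)) return false;
  // Level 2: per-part boxes, only reached when the objects are near.
  for (const partA of a.parts) {
    for (const partB of b.parts) {
      if (boxesOverlap(partA.box, partB.box)) return true;
    }
  }
  return false;
}
// Only when mightCollide() returns true do we pay for per-vertex transforms.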

WebGL: How to interact between javascript and shaders, and how to use multiple shaders

I have seen demos on WebGL that
color rectangular surfaces
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
A case with some rectangles colored and others textured would seem to require two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How do I turn wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing the data currently in the buffer with new data?
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The question you ask is not just about WebGL, but about OpenGL and 3D in general.
The most common way to interact is to set attributes at the start and uniforms both at the start and on the run.
In general, the answer to all of your questions is "use an engine".
Think of it like this: you have JavaScript, a CPU-based language; then you have WebGL, which is like a JS library that allows low-level communication with the GPU (remember, low level); and then you have the shader, a GPU program you must provide, which works only with specific data.
Doing anything more than "simple" requires a tool that lets you skip using WebGL directly (and very often also skip writing shaders directly). That tool we call an engine. An engine usually bundles together some set of abilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you never have to touch the WebGL API yourself. An engine also provides the rather complicated logic needed to build only a single pair, or a few pairs, of shaders from a few simple engine API calls. The reason is that swapping shader programs at runtime is costly.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without? If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
Let's have a buffer we call a vertex buffer. We put various data in the vertex buffer. Data doesn't go in as individual values but as sets, and each unique datum in a set we call an attribute. An attribute can have whatever meaning for its vertex the vertex or fragment shader code decides.
If we have a buffer full of triangle data, it is possible, for example, to add an attribute that says whether a specific vertex should texture the triangle or not, and to do the texturing logic in the shader (see the sketch below). Note that the attribute layout must be the same size for every vertex, so the textured triangles will take up the same space as the untextured ones.
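A hedged sketch of that flag-attribute idea: a WebGL 1 fragment shader, embedded as a JavaScript string, that blends between a flat color and a texture based on a per-vertex flag passed through from the vertex shader (all names are assumptions):

const flagFS = `
precision mediump float;
varying float vUseTexture; // 0.0 = plain color, 1.0 = textured
varying vec2 vTexCoord;
varying vec4 vColor;
uniform sampler2D uSampler;

void main() {
  vec4 texColor = texture2D(uSampler, vTexCoord);
  gl_FragColor = mix(vColor, texColor, vUseTexture);
}`;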
A case with some rectangles colored and others textured would seem to require two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
Not true; even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change the program on the run:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
WebGL API function useProgram
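A hedged sketch of switching on the run (the program names and draw helpers are assumptions):

gl.useProgram(colorProgram);    // plain-color pipeline
drawColoredRectangles();        // hypothetical helper issuing the draw calls

gl.useProgram(textureProgram);  // textured pipeline
drawTexturedRectangles();       // hypothetical helper; keep switches rare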
How do I turn wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
The WebGL API lets you achieve wireframe-style drawing by choosing line primitives: the primitive mode is a per-draw-call option, independent of the shader program, so you can switch it with each draw call. Note that WebGL has no glPolygonMode, so triangles are not automatically outlined; you draw the edges as lines yourself (see the sketch below). It is also possible to write a shader that draws as wireframe and control it with a flag (the flag can be either uniform- or attribute-based).
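A hedged sketch of a filled pass followed by a wireframe-style pass over the same vertex buffer (the two index buffers and counts are assumptions; the edge index buffer must be built separately):

gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);
gl.drawElements(gl.TRIANGLES, triangleIndexCount, gl.UNSIGNED_SHORT, 0); // filled

gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, edgeIndexBuffer);
gl.drawElements(gl.LINES, edgeIndexCount, gl.UNSIGNED_SHORT, 0);         // wireframe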
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
WebGL supports only points, lines, and triangles; there is no quad primitive, so each rectangle has to be built from two triangles, typically via an index buffer (see the sketch below). I guess keeping the primitive set small keeps implementations simpler.
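A hedged sketch of the usual workaround, one rectangle as two indexed triangles (the vertex order 0..3 around the rectangle is an assumption):

const quadIndices = new Uint16Array([
  0, 1, 2,  // first triangle
  0, 2, 3,  // second triangle, sharing the 0-2 diagonal
]);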
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing the data currently in the buffer with new data?
I would say it is rare to update buffer data on the run, since it can slow a program down a lot. In WebGL the function is called bufferSubData (there is no glSubBuffer). Anyway, avoid it if you can ;)
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
Yes, this is called a scene graph; it is widely used and can be combined with other techniques like display lists. A minimal sketch follows below.
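A hedged, minimal sketch of a scene-graph node (mat4Identity, mat4Multiply, and the mesh object's draw method are assumed helpers, not real APIs):

class Node {
  constructor(mesh = null) {
    this.children = [];
    this.localMatrix = mat4Identity(); // assumed helper returning a 4x4 identity
    this.mesh = mesh;                  // owns buffers, texture, color, etc.
  }
  draw(parentWorldMatrix) {
    const world = mat4Multiply(parentWorldMatrix, this.localMatrix); // assumed helper
    if (this.mesh) this.mesh.draw(world);
    for (const child of this.children) child.draw(world);
  }
}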
