Is it possible to get the actual value of a vertex? - webgl

I'm trying to recover some vertex data from the vertex shader, but I haven't found any relevant information about this online.
I'm using the vertex shader to calculate my vertex positions on the GPU, but I need the results for the logic of my application in JavaScript. Is there a way to do this without also calculating them in JavaScript?

In WebGL2 you can use transform feedback (as Pauli suggests) and read the data back with getBufferSubData, although ideally, if you're just going to use the data in another draw call, you should not read it back, as readbacks are slow.
Transform feedback simply means your vertex shader can write its output to a buffer.
In WebGL1 you could do it by rendering your vertices to a floating point texture attached to a framebuffer. You'd include a vertex id attribute with each vertex, use that attribute to set gl_Position, and draw with gl.POINTS. That lets you render to each individual pixel in the output texture, effectively giving you transform feedback; the difference is that your result ends up in a texture instead of a buffer. You can kind of see a related example of that here
If you don't need the values back in JavaScript, you can just use the texture you wrote to as input to future draw calls. If you do need the values back in JavaScript, you'll first have to convert them from floating point into a readable format (using a shader) and then read them out with gl.readPixels
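As a hedged illustration (not taken from the answer above), here is a minimal WebGL2 transform feedback sketch. It assumes `gl` is a WebGL2 context, `vs`/`fs` are already-compiled shaders whose vertex shader declares `out vec4 v_position;`, and `numVerts` is the vertex count; input attribute setup is omitted.

```js
// Capture the vertex shader output `v_position` into a buffer via transform feedback.
const prog = gl.createProgram();
gl.attachShader(prog, vs);
gl.attachShader(prog, fs);
// Tell the linker which varyings to capture; must be called before linkProgram.
gl.transformFeedbackVaryings(prog, ['v_position'], gl.SEPARATE_ATTRIBS);
gl.linkProgram(prog);

// Buffer that will receive the captured varyings (vec4 per vertex = 16 bytes).
const tfBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, tfBuffer);
gl.bufferData(gl.ARRAY_BUFFER, numVerts * 4 * 4, gl.DYNAMIC_READ);
gl.bindBuffer(gl.ARRAY_BUFFER, null);

const tf = gl.createTransformFeedback();
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

gl.useProgram(prog);
gl.enable(gl.RASTERIZER_DISCARD);       // skip the fragment stage, we only want vertices
gl.beginTransformFeedback(gl.POINTS);
gl.drawArrays(gl.POINTS, 0, numVerts);  // attribute setup for the inputs omitted here
gl.endTransformFeedback();
gl.disable(gl.RASTERIZER_DISCARD);
gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);

// Read back only if you really need the data in JavaScript (readbacks are slow).
const result = new Float32Array(numVerts * 4);
gl.bindBuffer(gl.ARRAY_BUFFER, tfBuffer);
gl.getBufferSubData(gl.ARRAY_BUFFER, 0, result);
```

If the transformed positions are only consumed by later draw calls, skip the `getBufferSubData` step and just bind `tfBuffer` as an attribute source for the next pass.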

Transform feedback is the OpenGL way to return vertex processing results to application code, but it is only available in WebGL 2. Transform feedback also outputs primitives rather than raw vertices, which makes it unlikely to be a perfect match.
A newer alternative is image load/store and shader storage buffer objects, but I think those are missing from WebGL 2 as well.
In short, you either need to calculate the same data in JavaScript or move your application logic into the shaders. If you need transformed vertex data for collision detection, you could use bounding box testing and do vertex-level transformation only when the bounding boxes hit.
You could use multi-level bounding boxes, where one big box surrounds the whole object and the next level splits the object into smaller parts, like a separate box for each disjoint part of the body (for instance, knee and ankle in the legs). That way JavaScript mostly transforms only a single bounding box/sphere per object per frame, transforms the second-level boxes only when objects are near each other, and does per-vertex transformation only when objects are very close to touching.
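A minimal JavaScript sketch of such a two-level test (all names are illustrative, and the bounding spheres are assumed to already be transformed into world space):

```js
// Sphere-vs-sphere overlap test used at both levels.
function spheresOverlap(a, b) {
  const dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
  const r = a.radius + b.radius;
  return dx * dx + dy * dy + dz * dz <= r * r;
}

function mightCollide(objA, objB) {
  // Level 1: one sphere per object (cheap, done for every pair, every frame).
  if (!spheresOverlap(objA.boundingSphere, objB.boundingSphere)) return false;
  // Level 2: per-part spheres (knee, ankle, ...), only checked when level 1 hits.
  for (const pa of objA.partSpheres) {
    for (const pb of objB.partSpheres) {
      if (spheresOverlap(pa, pb)) return true; // candidate for an exact per-vertex test
    }
  }
  return false;
}
```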

Related

WebGL: How to interact between javascript and shaders, and how to use multiple shaders

I have seen demos on WebGL that
color rectangular surfaces
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
For a case with some rectangles with colors, and others with textures, it requires two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How do I turn wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing data currently in the buffer with new data?
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The question you ask is not just about WebGL, but about OpenGL and 3D in general.
The most common way to interact is to set attributes at the start and uniforms at the start and at run time.
In general, the answer to all of your questions is "use an engine".
Think of it like this: you have JavaScript, a CPU-based language; then you have WebGL, which is like a library for JS that allows low-level communication with the GPU (remember, low level); and then you have the shader, which is a GPU program you must provide, but which works only with specific data.
Doing anything more than "simple" requires a tool that lets you skip using WebGL directly (and very often also skip writing shaders directly). That tool we call an engine. An engine usually bundles a certain set of abilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you never have to touch the WebGL API yourself. An engine also provides fairly complicated logic to build just a single pair, or a few pairs, of shaders from a few simple engine API calls. The reason is that swapping the shader program at run time has a heavy cost.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I
have some rectangles with texture and some without? If not, I have to
create one vertex buffer for all objects with colors, and another with
textures. Can I attach a different texture to each rectangle in a
vector?
Let's have a buffer; we call it a vertex buffer. We put various data into the vertex buffer. The data doesn't go in as individual values, but as sets. Each unique piece of data in a set we call an attribute. An attribute can have any meaning for its vertex that the vertex shader or fragment shader code decides.
If we have a buffer full of data for triangles, it is possible, for example, to set an attribute that says whether a specific vertex should texture the triangle or not and do the texturing logic in the shader (see the sketch below). Note, though, that the data size of the attributes must be equal for each vertex, so the textured triangles take up the same space as the non-textured ones.
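For illustration, a fragment shader using such a per-vertex flag might look like this sketch (GLSL ES 1.00 embedded in a JavaScript string; all names are made up for this example, and the flag is assumed to be passed through from a vertex attribute as a varying):

```js
const fragmentShaderSource = `
  precision mediump float;
  varying float v_useTexture;   // 0.0 = use flat color, 1.0 = use texture
  varying vec2 v_texCoord;
  varying vec4 v_color;
  uniform sampler2D u_texture;
  void main() {
    vec4 texColor = texture2D(u_texture, v_texCoord);
    gl_FragColor = mix(v_color, texColor, v_useTexture); // pick per vertex
  }
`;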
For a case with some rectangles with colors, and others with textures,
it requires two different shader programs. All the demos I see have
only one, but clearly more complicated programs have multiple. How do
you switch between shaders?
Not true: even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change the program at run time:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
using the WebGL API function gl.useProgram.
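A minimal sketch of switching programs between draw calls (the two programs, their buffers and uniforms are assumed to be set up already; names are illustrative):

```js
// Draw the flat-colored rectangles with one program...
gl.useProgram(colorProgram);
// ...bind buffers/attributes/uniforms for the colored geometry here...
gl.drawArrays(gl.TRIANGLES, 0, coloredVertexCount);

// ...then switch programs and draw the textured rectangles.
gl.useProgram(textureProgram);
gl.bindTexture(gl.TEXTURE_2D, texture);
// ...bind buffers/attributes/uniforms for the textured geometry here...
gl.drawArrays(gl.TRIANGLES, 0, texturedVertexCount);
```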
How do I turn wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
Strictly speaking, WebGL has no wireframe fill mode (there is no equivalent of desktop OpenGL's glPolygonMode). What you can do is draw the same geometry with line primitives (gl.LINES or gl.LINE_LOOP); that is shader-program independent and can be switched with each draw call. It is also possible to write a shader that draws a wireframe and to control it with a flag (the flag can be either a uniform or an attribute).
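For example, toggling wireframe per draw call might look like this sketch; `wireframeIndexBuffer` is assumed to hold line indices (each triangle edge as a pair of indices) built once on the CPU, and the other names are illustrative:

```js
if (wireframe) {
  // Draw the edges of the same geometry as lines.
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, wireframeIndexBuffer);
  gl.drawElements(gl.LINES, lineIndexCount, gl.UNSIGNED_SHORT, 0);
} else {
  // Draw the filled (and possibly textured) triangles.
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);
  gl.drawElements(gl.TRIANGLES, triangleIndexCount, gl.UNSIGNED_SHORT, 0);
}
```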
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
WebGL supports only points, lines, and triangles; quads are not available, so a rectangle is drawn as two triangles, typically with an index buffer (as sketched below). I guess quads were dropped because the pipeline is simpler without them.
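A small sketch of a single rectangle as two indexed triangles (buffer creation and attribute setup are assumed to be done elsewhere; names are illustrative):

```js
// Four corners of the rectangle (2D positions)...
const positions = new Float32Array([
  -1, -1,   1, -1,   1, 1,   -1, 1,
]);
// ...and two triangles referencing those corners.
const indices = new Uint16Array([0, 1, 2,  0, 2, 3]);

gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.STATIC_DRAW);
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);

gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);
```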
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing data currently in the buffer with new data?
I would say it is rare to update buffer data on the run, and doing it every frame can slow a program down a lot. glSubBuffer as such is not in WebGL; the equivalent call is gl.bufferSubData. Anyway, I'd avoid it where possible ;)
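For completeness, a small sketch of what a partial buffer update looks like in WebGL (the buffer, offset, and data are illustrative):

```js
// Overwrite the color data of one rectangle inside an existing buffer.
const newColors = new Float32Array([
  1, 0, 0, 1,   1, 0, 0, 1,   1, 0, 0, 1,   1, 0, 0, 1,  // RGBA for 4 vertices
]);
gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
gl.bufferSubData(gl.ARRAY_BUFFER, byteOffsetOfRectangle, newColors);
```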
Is it reasonable to have a single object maintaining these kinds of
objects and updating color and textures, or is this not a good design?
Yes, it is called a scene graph; it is widely used and can also be combined with other techniques such as display lists.

Efficient frustum culling while using shaders

I'd like to know the most efficient way of doing frustum culling with the programmable pipeline. If I understand correctly, following the method described here: Geometric Approach (by the way, the only method described there that worked for me some time ago), functions like glGetFloatv(GL_MODELVIEW_MATRIX, ...) are no longer valid, as the final vertex position is computed in the shader stage. Do I have to compute the frustum planes on the client side for every bounding box transformation I have to check before rendering?
Thanks.
The idea of frustum culling is to avoid sending polygons to the GPU in the first place when you already know they will be culled after the vertex shader; in other words, to prevent the vertex shader from transforming those polygons at all. Shaders or not, the best way is to keep track of the frustum planes on the client side, traverse the scene graph (a hierarchical tree or just a list), and cull objects that lie outside the frustum. Don't use glGetFloatv or its equivalent: it is not efficient, as it copies data back from the GPU. You can use feedback buffers instead.
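As a hedged sketch of the client-side part (not from the answer above): extracting the six frustum planes from a combined projection * view matrix with the Gribb/Hartmann method and testing bounding spheres before issuing draw calls, assuming a column-major 4x4 matrix as WebGL and gl-matrix use (`m[c*4 + r]` is column c, row r):

```js
function extractFrustumPlanes(m) {
  const row = r => [m[0 * 4 + r], m[1 * 4 + r], m[2 * 4 + r], m[3 * 4 + r]];
  const [r0, r1, r2, r3] = [row(0), row(1), row(2), row(3)];
  const add = (a, b) => a.map((v, i) => v + b[i]);
  const sub = (a, b) => a.map((v, i) => v - b[i]);
  // Planes: left, right, bottom, top, near, far; each is [a, b, c, d].
  return [add(r3, r0), sub(r3, r0), add(r3, r1), sub(r3, r1), add(r3, r2), sub(r3, r2)]
    .map(p => {
      const len = Math.hypot(p[0], p[1], p[2]); // normalize so plane distances are metric
      return p.map(v => v / len);
    });
}

function sphereInFrustum(planes, center, radius) {
  // Inside or intersecting every plane => potentially visible, so draw it.
  return planes.every(([a, b, c, d]) =>
    a * center[0] + b * center[1] + c * center[2] + d >= -radius);
}
```

The planes only need to be re-extracted when the view or projection matrix changes, which is once per frame at most.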

Open GL ES - what is the best way to draw multiple objects

I am using OpenGL ES 2.0 on iOS (with GLKit) and wonder what would be the best way to draw multiple objects, say polylines:
Use a separate vertex buffer for each polyline, without an index buffer, and simply draw GL_LINE_LOOP. In this case there would be a draw call for each object.
Gather all the vertices into one buffer and prepare an index buffer. In this case there would be one draw call which will draw GL_LINES.
Or is there any other way to do this?
Also, how would the answer change if each line has a different color (but the color does not change for each vertex).
Since we're defining “best” in terms of performance:
It should always be faster to use a single vertex buffer, as long as all your polylines have the same vertex attributes and it’s straightforward to gather them into a single vertex buffer (i.e. you don’t frequently need to gather and repack them). This requires fewer OpenGL ES API calls to rebind the data, since you can use a single call to glVertexAttribPointer* per attribute and rely on the first argument to glDrawArrays or offset indices (as you described in your question).
Assuming each polyline has distinct vertices and vertex counts, instancing doesn’t apply. I think at this point there are probably two good options here, along a space-for-CPU-time tradeoff curve:
Pass your polyline color as a vertex attribute, but by setting the current value via glVertexAttrib4f, rather than using actual per-vertex array data. Loop through your polylines, updating the color with glVertexAttrib4f, then using glDrawArrays(GL_LINE_LOOP, first, count), where first and count are defined by how you packed your polylines into the single vertex buffer.
Pass your polyline color as a vertex attribute, and expand the set of data you store per vertex to include the color, which is repeated for every vertex in a polyline. (This is where instancing would have helped you if you were repeating the same polyline.) Prepare the index buffer, much like you were doing for 2) in your question.
Note that this proposed 1) is similar to the 1) in your question, but all vertex data lives in a single vertex buffer anyway. Since both solutions now use a single vertex buffer, the choice is primarily between whether you are willing to make additional calls to glVertexAttrib4f and glDrawArrays in order to avoid allocating extra memory for repeated colors and index data. Note that changing a vertex attribute’s current value and the start offset used for each draw should still be significantly cheaper than binding new vertex buffers and resetting vertex array state.
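A WebGL-flavored sketch of proposed option 1); the OpenGL ES 2.0 calls (glDisableVertexAttribArray, glVertexAttrib4f, glDrawArrays) are analogous. `polylines` is assumed to be an array of `{ first, count, color }` records describing how each polyline was packed into the shared vertex buffer, and the other names are illustrative:

```js
gl.bindBuffer(gl.ARRAY_BUFFER, sharedVertexBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 2, gl.FLOAT, false, 0, 0);

// The color attribute uses its constant "current value" instead of array data.
gl.disableVertexAttribArray(colorLoc);

for (const line of polylines) {
  gl.vertexAttrib4f(colorLoc, ...line.color);            // one color per polyline
  gl.drawArrays(gl.LINE_LOOP, line.first, line.count);   // offsets from the packing
}
```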

Make camera in webgl with vertex shader?

I have an idea, but I'm not sure about it, because I'm no guru in WebGL.
In WebGL there is no camera. If one wants to simulate it, one has to do operations on a lot of objects... to be precise, one must change the positions of hundreds of thousands of vertices. I haven't studied Three.js or Babylon.js deeply enough, so I have no clue how they handle cameras.
Since the vertex shader can transform vertex positions, and since we can pass a camera matrix to the vertex shader, does it make sense to let it do those calculations, so the GPU does the hard work instead of the CPU?
Do you mean: create the view matrix in the vertex shader from some data like position and orientation, instead of creating it in the application and sending the resulting matrix to the shader?
Actually the latter is what solutions like THREE.js and a lot of others do:
An object representing a camera can be manipulated from JavaScript. Then, at draw time, the view matrix is built from its position and orientation parameters and sent to the active shader program.
Creating a view matrix is done in a few steps (a small sketch follows the list):
Create an identity matrix (no transformation);
Combine it with each transformation to apply to the camera: translations and rotations;
Invert the matrix (this computation can be quite heavy).
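A small sketch of those three steps on the CPU, here using the gl-matrix library; the camera position, yaw, and uniform location are illustrative:

```js
import { mat4 } from 'gl-matrix';

const cameraWorld = mat4.create();                      // 1. identity
mat4.translate(cameraWorld, cameraWorld, [0, 2, 5]);    // 2. apply camera translation...
mat4.rotateY(cameraWorld, cameraWorld, yaw);            //    ...and rotation
const viewMatrix = mat4.invert(mat4.create(), cameraWorld); // 3. invert (the heavy part)

// Done once per frame in JavaScript, then sent to the shader program:
gl.uniformMatrix4fv(viewMatrixLoc, false, viewMatrix);
```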
Deciding where (or when) the matrix should be computed rests on the fact that the view matrix does not change while drawing a single image; the view matrix and projection matrix are only meant to change between two images.
Don't forget that the vertex shader is executed once for each vertex you send down the pipeline.
If you computed the view matrix in the vertex shader, it would be recomputed thousands of times per image.
So the computation of the view matrix is better kept in the application, with the result sent to the shader program.
If you want to benefit from the floating-point performance of shaders, you can think about compute shaders (unfortunately not present in WebGL), or about some mechanism to read data back from shader programs in order to compute the matrix once and send it with the next draw calls.

OpenGL batching and instance uniqueness

I've been working on improving my OpenGL ES 2.0 render performance by introducing batching; specifically, one creates a RenderBatch, specifying a texture and a shader (for now) upon creation. This sets the state in a VAO to allow for inexpensive state switching. I started the implementation with something like this:
batch = RenderBatch.new "SpriteSheet" "FlatShader"
batch.begin GL_TRIANGLE_STRIP
batch.addGeometry Geometry.newFromFile "Billboard"
batch.end
batch.render renderEngine
But then it hit me: my Billboard file has vertices that are meant to be scaled and translated for specific instance usage. So I added a transform argument to the addGeometry call.
batch.addGeometry(Geometry.newFromFile("Billboard"), myObject.transform)
This solves the problem of scaling, translating, and rotating the vertices, but it does so by first looking up the vertex information, transforming it by the transform matrix, and then inserting it into the batch data. While this works, it seems inefficient; it is CPU-intensive and doesn't take advantage of the GPU's transformation power. However, it works, so it's not that big of a deal. (It would be nice to have a better way to do this, though.)
However, I've run into a roadblock: texture coordinates may need to be different for each instance as well, and that means I would have to pass in a texture transformation matrix, and now this is feeling hacky.
Is there an easier way to handle this kind of transformation to existing data using shaders that does not limit the geometry/models given and is easily extensible to use normal maps, UV maps, and other fancy tricks? Thanks!
It seems to me that what you are talking about are shader uniforms. Normally you would set up the vertex data and attributes for each batch in a VBO and a VAO. Then, in your render method, you switch to the correct VAO and set up the shader uniforms. These normally include a model-view-projection matrix to transform vertices into clip space, which necessarily would change nearly every frame, the correct texture to use, etc.
This is efficient because the unchanging vertex data is held in GPU memory, the VAO takes care of cheap state switching, and only the uniforms, which generally change often, are sent to the GPU each render call.
If you are batching multiple objects that require separate model-view-projection matrices, then you have a few options:
perform a separate draw call for each batch that requires a separate model-view-projection matrix
use an array of model-view-projection matrices as a uniform and add an attribute per object that provides the index of the matrix to use
transform the vertices on the CPU and refill the VBO with the updated data
The first option is the preferred solution: it is efficient and simple. The slow part of issuing lots of draw calls is generally getting data from the CPU to the GPU; if the vertex data is already in VBOs, the overhead of one draw call per object is not a big deal. This also solves the problem of how to provide different uniforms per object based on its properties: in each object's render method, the relevant properties are set up as uniforms before the draw call is made (see the sketch below). If each object requires different data to be sent to the GPU, how else could it work?
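A sketch of that first option, written WebGL2-style with `bindVertexArray` (on ES 2.0 the equivalent comes from the OES_vertex_array_object extension). The `objects` entries are assumed to carry a precomputed `mvpMatrix`, a `vao`, and an index count; names are illustrative:

```js
gl.useProgram(program);
for (const obj of objects) {
  gl.bindVertexArray(obj.vao);                        // cheap state switch per object
  gl.uniformMatrix4fv(mvpLoc, false, obj.mvpMatrix);  // only the uniform changes
  gl.drawElements(gl.TRIANGLES, obj.indexCount, gl.UNSIGNED_SHORT, 0);
}
```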
This is a trade-off situation: the cost of state changes due to insufficient batching versus the cost of transformation on the CPU. There is no single best solution; it depends on how much of your scene is static, how much is dynamic, and how it is laid out.
A common solution is to put static objects, whose transformations relative to each other never change, into a single VBO or a few VBOs (if they use different textures, vertex formats, etc.), already fully transformed. This is done once before rendering, not every frame. Dynamic objects (players, monsters, whatever) are then rendered individually, with the transformation done in the vertex shader.
You can still optimize for state changes by roughly ordering the drawing of the individual objects by texture and program.
