Efficient frustum culling while using shaders - opengl-3

I'd like to know the most efficient way of doing frustum culling with the programmable pipeline. If I understand correctly, following the method described here: Geometric Approach (incidentally the only method described there that worked for me some time ago), calls like glGetFloatv(GL_MODELVIEW_MATRIX, ...) are no longer valid, since the final vertex position is computed in the shader stage. Do I have to compute the frustum planes on the client side for every bounding-box transformation I need to check before rendering?
Thanks.

The idea of frustum culling is to prevent polygons from being sent to the GPU in the first place: polygons you already know will be discarded after the vertex shader. So the goal is to keep the vertex shader from ever transforming them. Whether you use shaders or not, the best approach is to keep track of the frustum planes on the client side, traverse your scene graph (a hierarchical tree or just a list), and cull objects that lie outside the frustum. Don't use glGetFloatv or its equivalent; it is inefficient because it copies data back from the GPU. If you really need GPU-side results, you can use transform feedback buffers instead.
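Here is a minimal sketch of the client-side test, written in JavaScript since the math is API-agnostic. It assumes a column-major projection * view matrix (OpenGL / gl-matrix layout) and per-object bounding spheres; the function names are just illustrative. The plane extraction combines rows of the combined matrix (the Gribb/Hartmann approach).

    // Extract the six frustum planes from a column-major projection * view matrix.
    function extractFrustumPlanes(m) {            // m: Float32Array(16), column-major
      const row = i => [m[i], m[4 + i], m[8 + i], m[12 + i]];
      const [r0, r1, r2, r3] = [row(0), row(1), row(2), row(3)];
      const add = (a, b, s) => a.map((v, k) => v + s * b[k]);
      const planes = [
        add(r3, r0, +1),  // left
        add(r3, r0, -1),  // right
        add(r3, r1, +1),  // bottom
        add(r3, r1, -1),  // top
        add(r3, r2, +1),  // near
        add(r3, r2, -1),  // far
      ];
      // Normalize each plane so distances are in world units.
      return planes.map(p => {
        const len = Math.hypot(p[0], p[1], p[2]);
        return p.map(v => v / len);
      });
    }

    // True if a bounding sphere is at least partially inside the frustum.
    function sphereInFrustum(planes, center, radius) {
      return planes.every(([a, b, c, d]) =>
        a * center[0] + b * center[1] + c * center[2] + d >= -radius);
    }

Objects whose bounding spheres fail this test are simply never submitted for drawing, so the vertex shader never sees them.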

Related

Get GLSL vertex shader positions back to use on cpu event collider functions

I'm using Python Kivy to render meshes with OpenGL onto a canvas. I want to return vertex data from the fragment shader so I can build a collider (to use in my CPU event listeners after doing the projection and model-view transforms). I can replicate the matrix multiplications on the CPU (I guess that's the easy way out), but then I would have to do the same calculations twice (not good).
The only way I can think of doing this (after some browsing) is to imprint an object ID onto my rendered mesh's alpha channel (it wouldn't affect much if I kept the encoded data near an alpha value of 1), and then create some kind of 'color picker' on the CPU side to decode it (I'm guessing that's not hard to do with Kivy).
Does anyone have a better idea for dealing with this, or a better approach?
The first criterion here is: do you need collision for picking or for physics simulation?
If it is for physics: you almost never want the same mesh for rendering and for physics collisions. Typically, you use a very rough approximation for the physics shape, nearly always a convex shape, or a union of convex shapes. (Colliding arbitrary concave meshes is something that no physics engine can do well, and if they attempt it at all, performance will be poor.)
If it is for the purpose of picking an object with a mouse-click: you can go two different ways for this:
You replicate the geometry on the CPU, and use the mouse location plus the camera view to create a ray that intersects this geometry, to see what is hit first.
After rendering your scene, you read back a single pixel from the depth buffer. (The pixel that your mouse is over.) With the depth value you get back, plus camera info, you can reconstruct a corresponding 3D position in your world. Once you have a 3D location, you can query your world to see which object is the closest to this point, and you will have your hit.
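As a rough illustration of that second option, here is how the depth value can be turned back into a world-space position. This is just the un-projection math, sketched in JavaScript under the assumption of a depth value in [0, 1], window coordinates with the origin at the top-left, and a column-major inverse(projection * view) matrix; adapt it to whatever math library you actually use.

    // Reconstruct a world-space point from a depth-buffer read at the mouse position.
    function unproject(mouseX, mouseY, depth, viewportW, viewportH, invViewProj) {
      // Window coordinates -> normalized device coordinates in [-1, 1].
      const ndc = [
        (mouseX / viewportW) * 2 - 1,
        1 - (mouseY / viewportH) * 2,   // flip Y: window origin is top-left
        depth * 2 - 1,                  // assumes the default [0, 1] depth range
        1,
      ];
      // Multiply by the inverse view-projection matrix (column-major).
      const out = [0, 0, 0, 0];
      for (let r = 0; r < 4; r++) {
        for (let c = 0; c < 4; c++) {
          out[r] += invViewProj[c * 4 + r] * ndc[c];
        }
      }
      // Perspective divide gives the world-space position.
      return [out[0] / out[3], out[1] / out[3], out[2] / out[3]];
    }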

Is it possible to get the actual value of a vertex?

I was trying to recover some vertex data from vertex shader, but I haven't found any relevant information about this on the internet.
I'm using the vertex shader to calculate my vertex positions using the GPU, but I need to get the results for the logic of my application in Javascript. Is there a possible way to do this without calculating it in Javascript too?
In WebGL2 you can use transform feedback (as Pauli suggests) and read the data back with getBufferSubData, although ideally, if you're just going to use the data in another draw call, you should not read it back, because read-backs are slow.
Transform feedback simply means your vertex shader can write its output to a buffer.
In WebGL1 you could do it by rendering your vertices to a floating-point texture attached to a framebuffer. You'd include a vertex ID attribute with each vertex, use that attribute to set gl_Position, and draw with gl.POINTS. That lets you render to each individual pixel in the output texture, effectively giving you transform feedback; the difference is that your result ends up in a texture instead of a buffer. You can see a related example of that here.
If you don't need the values back in JavaScript, you can just use the texture you wrote to as input to future draw calls. If you do need the values back in JavaScript, you'll first have to convert them from floating point into a readable format (using a shader) and then read them out with gl.readPixels.
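For reference, a minimal WebGL2 transform-feedback sketch might look like the following. The varying name v_position, the program object, and vertexCount are placeholders; the vertex and fragment shaders are assumed to be attached already, and the input vertex attributes are assumed to be set up. Note that transformFeedbackVaryings must be called before the program is linked.

    // Tell WebGL which vertex shader output to capture, then link.
    gl.transformFeedbackVaryings(program, ['v_position'], gl.SEPARATE_ATTRIBS);
    gl.linkProgram(program);

    // Buffer that will receive the vertex shader output (one vec4 per vertex).
    const tfBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, tfBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, vertexCount * 4 * 4, gl.STREAM_READ);
    gl.bindBuffer(gl.ARRAY_BUFFER, null);

    const tf = gl.createTransformFeedback();
    gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, tf);
    gl.bindBufferBase(gl.TRANSFORM_FEEDBACK_BUFFER, 0, tfBuffer);

    // Capture the vertex shader output without rasterizing anything.
    gl.enable(gl.RASTERIZER_DISCARD);
    gl.useProgram(program);
    gl.beginTransformFeedback(gl.POINTS);
    gl.drawArrays(gl.POINTS, 0, vertexCount);
    gl.endTransformFeedback();
    gl.disable(gl.RASTERIZER_DISCARD);
    gl.bindTransformFeedback(gl.TRANSFORM_FEEDBACK, null);

    // Slow path: copy the captured data back to JavaScript.
    const results = new Float32Array(vertexCount * 4);
    gl.getBufferSubData(gl.TRANSFORM_FEEDBACK_BUFFER, 0, results);

Skip the final getBufferSubData call if the results are only consumed by later draw calls; the read-back is the slow part.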
Transform feedback is the OpenGL way to return vertex-processing results to application code, but it is only available in WebGL 2. Transform feedback also outputs primitives rather than vertices, so it is unlikely to be a perfect match.
A newer alternative is image load/store and shader storage buffer objects, but I think those are missing from WebGL 2 as well.
In short, you either need to calculate the same data in JavaScript or move your application logic into shaders. If you need transformed vertex data for collision detection, you could use bounding-box testing and do vertex-level transformation only when a bounding box hits.
You could use multi-level bounding volumes: one big box around the whole object, then a second level that splits the object into smaller parts, such as a separate box for each disjoint part of the body (for instance, the knee and ankle within a leg). That way JavaScript mostly transforms a single bounding box or sphere per object each frame, only transforms the second-level boxes when objects are near, and does per-vertex transformation only when objects are very close to touching.
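As a rough sketch of the coarse level of that scheme (the names and data layout are assumptions), transforming one bounding sphere per object and testing spheres for overlap looks like this in JavaScript:

    // Transform an object's local bounding sphere by its column-major model matrix.
    function worldBoundingSphere(modelMatrix, localCenter, radius) {
      const m = modelMatrix;
      const c = [
        m[0] * localCenter[0] + m[4] * localCenter[1] + m[8]  * localCenter[2] + m[12],
        m[1] * localCenter[0] + m[5] * localCenter[1] + m[9]  * localCenter[2] + m[13],
        m[2] * localCenter[0] + m[6] * localCenter[1] + m[10] * localCenter[2] + m[14],
      ];
      // Scale the radius by the largest axis scale so the sphere stays conservative.
      const scale = Math.max(
        Math.hypot(m[0], m[1], m[2]),
        Math.hypot(m[4], m[5], m[6]),
        Math.hypot(m[8], m[9], m[10]));
      return { center: c, radius: radius * scale };
    }

    // Coarse test: only do finer (per-part or per-vertex) work when this passes.
    function spheresOverlap(a, b) {
      const dx = a.center[0] - b.center[0];
      const dy = a.center[1] - b.center[1];
      const dz = a.center[2] - b.center[2];
      return dx * dx + dy * dy + dz * dz <= (a.radius + b.radius) ** 2;
    }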

Make camera in webgl with vertex shader?

I have some idea, but I'm not sure about it, because I'm no guru in webgl.
In WebGL, there is no camera. If you want to simulate one, you have to operate on a lot of objects; to be precise, you must change the positions of hundreds of thousands of vertices. I haven't studied Three.js or Babylon.js in depth, so I have no clue how they handle cameras.
Since the vertex shader can transform vertex positions, and since we can pass a camera matrix to the vertex shader, does it make sense to let the shader do these calculations, so the GPU does the hard work instead of the CPU?
Do you mean: create the view matrix in the vertex shader from data such as position and orientation, instead of creating it in the application and sending the resulting matrix to the shader?
The latter is actually what solutions like THREE.js and many others do:
An object representing a camera can be manipulated from JavaScript. Then, at draw time, the view matrix is built from its position and orientation parameters and sent to the active shader program.
Creating a view matrix takes a few steps (see the sketch after this list):
Create an identity matrix (no transformation);
Combine it with each transformation to apply to the camera: translations and rotations;
Invert the matrix (this computation may be quite heavy).
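As an illustration, here is roughly how those steps look in JavaScript with the gl-matrix library; the camera position, yaw angle, and uniform location are placeholders.

    import { mat4 } from 'gl-matrix';

    // Build the camera's world transform on the CPU...
    const camera = mat4.create();                    // identity (no transformation)
    mat4.translate(camera, camera, [0, 2, 10]);      // example camera position
    mat4.rotateY(camera, camera, Math.PI / 8);       // example camera orientation

    // ...then invert it to get the view matrix.
    const viewMatrix = mat4.create();
    mat4.invert(viewMatrix, camera);

    // Upload once per frame, not per vertex.
    gl.uniformMatrix4fv(viewMatrixLocation, false, viewMatrix);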
Deciding where (or when) the matrix should be computed comes down to the fact that the view matrix does not change while a single image is being drawn; the view and projection matrices are only meant to change between frames.
Don't forget that the vertex shader is executed once for every vertex you send down the pipeline.
If you compute the view matrix in the vertex shader, it will be recomputed thousands of times per image.
So the view matrix is better computed in the application and then sent to the shader program.
If you want to benefit from the floating-point performance of shaders, you can consider compute shaders (unfortunately not available in WebGL), or some mechanism for reading data back from shader programs so that you compute the matrix once and send it with the next draw calls.

Simple flat shading using Stage3D/AGAL

I'm relatively new to 3D development and am currently using Actionscript, Stage3D and AGAL to learn. I'm trying to create a scene with a simple procedural mesh that is flat shaded. However, I'm stuck on exactly how I should be passing surface normals to the shader for the lighting. I would really like to just use a single surface normal for each triangle and do flat, even shading for each. I know it's easy to achieve better looking lighting with normals for each vertex, but this is the look I'm after.
Since the shader normally processes every vertex, not every triangle, is it possible for me to just pass a single normal per triangle rather than one per vertex? Is my thinking completely off here? If anyone has a working example of simple flat shading, I'd greatly appreciate it.
I'm digging up an old question here since I stumbled on it via Google and can see there is no accepted answer.
Stage3D does not have an equivalent of the "GL_FLAT" option for its shader engine. This means the fragment shader program always receives a "varying", or interpolated, value from the outputs of the three respective vertices (via the vertex program). If you want flat shading, you have basically only one option:
Create three unique vertices for each triangle and set the normal for each vertex to the face normal of the triangle. This way, each vertex will calculate the same lighting and result in the same vertex color. When the fragment shader interpolates, it will be interpolating three identical values, resulting in flat shading.
This is pretty lame. The requirement of unique vertices per triangle means you can't share vertices between triangles. This will definitely increase your vertex count, causing increased delays during your VertexBuffer3D uploads as well as overall lower frame rates. However, I have not seen a better solution anywhere.
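To make that concrete, here is a sketch of the mesh expansion. It is written in JavaScript for consistency with the other examples on this page (the same idea applies when filling a Stage3D VertexBuffer3D), and it assumes a flat xyz position array plus a triangle index list:

    // Expand an indexed mesh so every triangle gets three unique vertices,
    // all carrying that triangle's face normal.
    function buildFlatShadedMesh(positions, indices) {
      const outPositions = [];
      const outNormals = [];
      for (let i = 0; i < indices.length; i += 3) {
        const [a, b, c] = [indices[i], indices[i + 1], indices[i + 2]]
          .map(idx => positions.slice(idx * 3, idx * 3 + 3));
        // Face normal = normalize(cross(b - a, c - a)).
        const u = [b[0] - a[0], b[1] - a[1], b[2] - a[2]];
        const v = [c[0] - a[0], c[1] - a[1], c[2] - a[2]];
        let n = [
          u[1] * v[2] - u[2] * v[1],
          u[2] * v[0] - u[0] * v[2],
          u[0] * v[1] - u[1] * v[0],
        ];
        const len = Math.hypot(n[0], n[1], n[2]) || 1;
        n = n.map(x => x / len);
        for (const p of [a, b, c]) {
          outPositions.push(...p);
          outNormals.push(...n);
        }
      }
      return { positions: outPositions, normals: outNormals };
    }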

Calculation of vertex normals in DirectX

As a learning experience, I'm writing an Immediate mode managed DirectX 9 application.
I'm manually calculating vertex normals across all triangles in a scene to allow smooth Gouraud shading.
This works as expected, but I'm guessing this is not the most efficient approach. Is it possible to get the GPU to do this for me?
You could in theory generate the vertex normals inside the vertex shader. That involves computation every single time you render a mesh using that shader, though, so why not generate them in advance?
If you mean you want to generate them in advance of rendering, but use the GPU instead of the CPU, I would say that it's not worth the bother of speeding up something you are only going to do once. Besides, I'm not sure if DX9 has a way to get computed vertex information back from a shader (DX10 does).
All in all, the best thing to do in most cases is the traditional approach: compute vertex normals in the program that saves the mesh data files, as a pre-computation step. You usually already have them if the mesh came from a 3D package like Max or Maya, because there is artistic information in the normals; unless you know the whole mesh is supposed to be perfectly smooth (or faceted), they aren't computable in the general case.
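For completeness, the usual pre-computation is just this accumulation, sketched here in JavaScript since the math is identical in any language (the flat position/index layout is an assumption):

    // Smooth vertex normals: accumulate area-weighted face normals onto each
    // shared vertex, then normalize.
    function computeSmoothNormals(positions, indices) {
      const normals = new Array(positions.length).fill(0);
      for (let i = 0; i < indices.length; i += 3) {
        const [a, b, c] = [indices[i], indices[i + 1], indices[i + 2]];
        const p = idx => positions.slice(idx * 3, idx * 3 + 3);
        const [pa, pb, pc] = [p(a), p(b), p(c)];
        const u = [pb[0] - pa[0], pb[1] - pa[1], pb[2] - pa[2]];
        const v = [pc[0] - pa[0], pc[1] - pa[1], pc[2] - pa[2]];
        const n = [                      // unnormalized face normal (area-weighted)
          u[1] * v[2] - u[2] * v[1],
          u[2] * v[0] - u[0] * v[2],
          u[0] * v[1] - u[1] * v[0],
        ];
        for (const idx of [a, b, c]) {
          normals[idx * 3]     += n[0];
          normals[idx * 3 + 1] += n[1];
          normals[idx * 3 + 2] += n[2];
        }
      }
      for (let i = 0; i < normals.length; i += 3) {
        const len = Math.hypot(normals[i], normals[i + 1], normals[i + 2]) || 1;
        normals[i] /= len; normals[i + 1] /= len; normals[i + 2] /= len;
      }
      return normals;
    }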
