The topic title is pretty much the question: I'm trying to understand how CPU and GPU cooperation works.
I'm developing my game with cocos2d. It's a game engine, so it redraws the whole screen 60 times per second, and every node in cocos2d draws its own set of triangles. Usually you set the triangle vertices after performing the node transforms (from node to world space) on the CPU side. I've found a way to do it on the GPU side instead, with vertex shaders, by passing the model-view-projection matrix as a uniform.
I see CPU time decrease by ~1 ms and GPU time increase by ~0.5 ms.
Can I consider this as a performance gain?
In other words: if something can be done on the GPU side, are there any reasons you shouldn't do it?
The only time you shouldn't do something on the GPU side is if you need the result (in easily accessible form) on the CPU side to further the simulation.
Taking your example: assume you have four 250 KB meshes which represent a hierarchy of body parts (a skeleton), and that you are using a 4x4 matrix of floats (64 bytes) for each mesh's transformation. You could either:
Each frame, perform the mesh transformation calculations on the application side (CPU) and then upload the four meshes to the GPU. This would result in roughly 1000 KB of data being sent to the GPU per frame.
When the application starts, upload the data for the four meshes to the GPU (this will be in a rest/identity pose). Then each frame, when you make the render call, you calculate only the new matrices for each mesh (position/rotation/scale), upload those matrices to the GPU and perform the transformation there. This results in ~256 bytes being sent to the GPU per frame.
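A rough sketch of what that second option can look like in plain OpenGL ES 2.0 (this is not cocos2d's actual API; meshVbo, u_mvpLocation, a_positionLocation and friends are placeholders, and the four meshes are assumed to have been uploaded to VBOs at startup):

#include <GLES2/gl2.h>

/* Per-frame draw for option 2: the mesh data already lives on the GPU, so
   the only per-frame upload is one 4x4 float matrix (64 bytes) per mesh. */
static void drawSkeleton(const GLuint meshVbo[4], const GLsizei vertexCount[4],
                         const GLfloat mvp[4][16],
                         GLint u_mvpLocation, GLint a_positionLocation)
{
    for (int i = 0; i < 4; ++i) {
        glBindBuffer(GL_ARRAY_BUFFER, meshVbo[i]);              /* already resident   */
        glEnableVertexAttribArray(a_positionLocation);
        glVertexAttribPointer(a_positionLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);
        glUniformMatrix4fv(u_mvpLocation, 1, GL_FALSE, mvp[i]); /* the 64-byte upload */
        glDrawArrays(GL_TRIANGLES, 0, vertexCount[i]);
    }
}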
As you can see, even though the numbers in this example are made up, the main advantage is that you are minimizing the amount of data transferred between the CPU and the GPU on a per-frame basis.
The only time you would prefer the first option is if your application needs the results of the transformation to do some other work. The GPU is very efficient (especially at processing vertices in parallel), but it isn't easy to get information back from the GPU (and then it's usually in the form of a texture, i.e. a render target). One concrete example of this 'further work' might be performing collision checks on the transformed mesh positions.
Edit:
To some extent*, you can tell where the data is stored based on how you are calling the OpenGL API. Here is a quick run-down:
Vertex Arrays
glVertexPointer(...)
glDrawArrays(...)
With this method you pass an array of vertices from the CPU to the GPU each frame. The vertices are processed sequentially, in the order they appear in the array. There is a variation of this method (glDrawElements) which lets you specify indices instead.
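A minimal sketch of that path (legacy client-side arrays; the triangle data is just an example):

/* Client-side vertex arrays: 'vertices' lives in application (CPU) memory
   and is transferred to the GPU on every draw call.                      */
static const GLfloat vertices[] = {
     0.0f,  0.5f, 0.0f,
    -0.5f, -0.5f, 0.0f,
     0.5f, -0.5f, 0.0f,
};

glEnableClientState(GL_VERTEX_ARRAY);
glVertexPointer(3, GL_FLOAT, 0, vertices);  /* pointer into CPU memory         */
glDrawArrays(GL_TRIANGLES, 0, 3);           /* the vertex data is re-sent here */
glDisableClientState(GL_VERTEX_ARRAY);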
VBOs
glBindBuffer(...)
glBufferData(...)
glDrawElements(...)
VBOs allow you to store the mesh data on the GPU (see below for note). In this way, you don't need to send the mesh data to the GPU each frame, only the transformation data.
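A minimal sketch of the VBO path, assuming a static mesh ('vertices' and 'indices' are placeholder arrays, and the attribute/uniform locations are assumed to have been queried already):

/* One-time upload at load time: the mesh data stays resident on the GPU. */
GLuint vbo, ibo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(vertices), vertices, GL_STATIC_DRAW);
glGenBuffers(1, &ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, sizeof(indices), indices, GL_STATIC_DRAW);

/* Each frame: bind the buffers, upload only the transform, and draw.
   No vertex data crosses the bus here.                                   */
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
glEnableVertexAttribArray(a_positionLocation);
glVertexAttribPointer(a_positionLocation, 3, GL_FLOAT, GL_FALSE, 0, 0);
glUniformMatrix4fv(u_mvpLocation, 1, GL_FALSE, mvpMatrix);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);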
*Although we can indicate where our data is to be stored, the OpenGL specification does not actually say how vendors have to implement this. We can give hints that our vertex data should be stored in VRAM, but ultimately it is down to the driver!
Good reference links for this stuff are:
OpenGL ref page: https://www.opengl.org/sdk/docs/man/html/start.html
OpenGL explanations: http://www.songho.ca/opengl
Java OpenGL concepts for rendering: http://www.java-gaming.org/topics/introduction-to-vertex-arrays-and-vertex-buffer-objects-opengl/24272/view.html
Related
I'm trying to implement fluid dynamics using compute shaders. The article I'm following describes a series of passes done on a texture, since it was written before compute shaders existed.
Would it be faster to do each pass on a texture or on a buffer? The final pass would have to be applied to a texture anyway.
I would recommend using whichever dimensionality of resource fits the simulation. If it's a 1D simulation, use a RWBuffer, if it's a 2D simulation use a RWTexture2D and if it's a 3D simulation use a RWTexture3D.
There appear to be stages in the algorithm that you linked that make use of bilinear filtering. If you restrict yourself to using a Buffer you'll have to issue 4 or 8 memory fetches (depending on 2D or 3D) and then more instructions to calculate the weighted average. Take advantage of the hardware's ability to do this for you where possible.
Another thing to be aware of is that data in textures is not laid out row by row (linearly) as you might expect; instead it's laid out so that neighbouring texels are as close to one another in memory as possible. This can be called tiling or swizzling, depending on whose documentation you read. For that reason, unless your simulation is one-dimensional, you may well get far better cache coherency on reads/writes from a resource whose layout most closely matches the dimensions of the simulation.
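To give a feel for what 'swizzled' can mean, one common scheme is a Morton/Z-order layout, where the bits of the x and y texel coordinates are interleaved so that 2D neighbours land near each other in memory. The sketch below is purely illustrative; real hardware layouts vary and are usually undocumented:

/* Illustrative only: interleave the bits of (x, y) to form a Morton /
   Z-order index, one common form of tiled/"swizzled" texture layout.  */
static unsigned int mortonIndex(unsigned int x, unsigned int y)
{
    unsigned int index = 0;
    for (unsigned int bit = 0; bit < 16; ++bit) {
        index |= ((x >> bit) & 1u) << (2 * bit);      /* x bits -> even positions */
        index |= ((y >> bit) & 1u) << (2 * bit + 1);  /* y bits -> odd positions  */
    }
    return index;
}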
I'm attempting to render a large number of textured quads on the iPhone. To improve render speeds I've created a VBO that I leverage to render my objects in a single draw call. This seems to work well, but I'm new to OpenGL and have run into issues when it comes to providing a unique transform for each of my quads (ultimately I'm looking for each quad to have a custom scale, position and rotation).
After a decent amount of Googling, it appears that the standard means of handling this situation is to pass a uniform matrix to the vertex shader and to have each quad take care of rendering itself. But this approach seems to negate the purpose of the VBO, by ultimately requiring a draw call per object.
In my mind, it makes sense that each object should keep its own model-view matrix, using it to transform, scale and rotate the object as necessary. But applying separate matrices to objects in a VBO has me lost. I've considered two approaches:
Send the model view matrix to the vertex shader as a non-uniform attribute and apply it within the shader.
Or transform the vertex data before it's stored in the VBO and sent to the GPU
But the fact that I'm finding it difficult to find information on how best to handle this leads me to believe I'm confusing the issue. What's the best way of handling this?
This is the "evergreen" question (a good one) on how to optimize the rendering of many simple geometries (a quad is in fact 2 triangles, 6 vertices most of the time unless we use a strip).
Anyway, the use of a VBO vs. a VAO in this case should not make a significant difference, since the amount of data to be transferred to the memory buffer is rather low (32 bytes per vertex, 96 bytes per triangle, 192 per quad), which is not a big effort for today's memory bandwidth. It still depends on how many quads you mean, though: if you have 20,000 quads per frame then it would be a problem anyway.
A possible approach could be to batch the drawing of the quads by building a new VAO each frame with the different quads positioned in your own coordinate system; something like shifting the quads' vertices to the correct position relative to a "virtual" mesh origin. Then you just perform a single draw of the newly created mesh from your VAO.
In this way, you could batch the drawing of multiple objects in fewer calls.
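A rough sketch of that kind of per-frame batching (the Quad record, the field names and the buffer names are hypothetical, and an index buffer with 6 indices per quad is assumed to have been built once up front):

typedef struct { float x, y; } Vec2;
typedef struct { Vec2 position; Vec2 corners[4]; } Quad;  /* hypothetical quad record */

/* CPU side: shift each quad's corners to its world position and pack them
   into one contiguous array (scale/rotation could be applied here too,
   at extra CPU cost).                                                     */
static void fillBatch(const Quad *quads, int count, Vec2 *outPositions)
{
    for (int i = 0; i < count; ++i)
        for (int v = 0; v < 4; ++v) {
            outPositions[i * 4 + v].x = quads[i].position.x + quads[i].corners[v].x;
            outPositions[i * 4 + v].y = quads[i].position.y + quads[i].corners[v].y;
        }
}

/* Each frame: refill one dynamic VBO and draw every quad in a single call. */
fillBatch(quads, quadCount, batchedPositions);
glBindBuffer(GL_ARRAY_BUFFER, batchVbo);
glBufferData(GL_ARRAY_BUFFER, quadCount * 4 * sizeof(Vec2),
             batchedPositions, GL_DYNAMIC_DRAW);
glDrawElements(GL_TRIANGLES, quadCount * 6, GL_UNSIGNED_SHORT, 0);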
The problem arises if your quads need to "scale" and "rotate" and not just translate. You can compute the actual vertex positions on the CPU, but it can be way too costly in terms of computing power.
A simple suggestion, on top of the way you transfer the meshes, is to use a texture atlas for all of your quads' textures; this way you will need far fewer texture bind operations (if any at all), which can be costly during rendering.
iOS devices use a PowerVR graphics architecture. The PowerVR architecture is a tile-based deferred rendering model. The primary benefit of this model is that it does not use a depth buffer.
However, I can access the depth buffer on my iOS device. Specifically, I can use an offscreen Frame Buffer Object to turn the depth buffer into a color texture and render it.
If the PowerVR architecture doesn't use a depth buffer, how is it that I'm able to render a depth buffer?
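For context, the setup being described is roughly the following (OpenGL ES 2.0 with the OES_depth_texture extension; width, height and the second pass that visualises the texture are assumed, and a colour attachment would normally be added as well):

/* Render depth into a texture attached to an offscreen FBO, then sample
   that texture as a regular colour texture in a later pass.             */
GLuint fbo, depthTex;
glGenTextures(1, &depthTex);
glBindTexture(GL_TEXTURE_2D, depthTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, width, height, 0,
             GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                       GL_TEXTURE_2D, depthTex, 0);
/* ...render the scene here, then bind depthTex in a second pass and draw
   it to the screen to visualise the depth values.                        */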
It is true that a tile-based renderer doesn't need a traditional depth buffer in order to work.
A TBR splits the screen into tiles and completely renders the contents of each tile, using fast on-chip memory to store temporary colors and depths. Then, when the tile is finished, the final values are moved to the actual framebuffer. Depth values are traditionally temporary, because they are only used for hidden surface removal, so in this case they can be discarded completely once the tile has been rendered.
That means that effectively tile-based renderers don't really need a full screen depth buffer in slower video memory, saving both bandwidth and memory.
The Metal API exposes this functionality directly, allowing you to set the storeAction of the depth attachment to a 'don't care' value, meaning that the resulting depth values will not be written back to main memory.
The exception is when you need the depth buffer contents after rendering (e.g. for a deferred renderer, or as input to some algorithm that operates on depth values). In that case the hardware must ensure that the depth values are stored in the framebuffer for you to use.
Tile-based deferred rendering, as the name says, works on a tile-by-tile basis: each portion of the screen is loaded into internal caches, processed and written out again. The hardware takes the list of triangles overlapping the current tile and the current depth values, comes up with a new set of colours and depths from those, and then writes them all out again. So whereas a completely dumb GPU might do one read and one write to every relevant depth buffer value per triangle, the PowerVR will do one read and one write per batch of geometry, with the ray-casting-style algorithm doing the rest in between.
It isn't really possible to implement OpenGL on a 'pure' tile-based deferred renderer, because the depth buffer needs to be readable. It also generally isn't efficient to make the depth buffer write-only, because the colour buffer is readable at any time (either explicitly via glReadPixels, or implicitly whenever you present the framebuffer via your OS's mechanisms), meaning that the hardware may have to draw a scene and then draw more onto it.
PowerVR does use a depth buffer, but in a different way than a regular (immediate-mode rendering) GPU.
The deferred part of tile-based deferred rendering means that the triangles for a given scene are first processed (shaded, transformed, clipped, etc.) and saved into an intermediate buffer. Only after the entire scene has been processed are the tiles rendered one by one.
Having all the processed triangles in one buffer allows the hardware to perform hidden surface removal - removing the triangles that will end up being hidden/overdrawn by other triangles. This significantly reduces the number of rendered triangles, resulting in improved performance and reduced power consumption.
Hidden surface removal typically uses something called a tag buffer as well as a depth buffer. (Both are small on-chip memories, since they only store one tile at a time.)
I'm not sure why you're saying that PowerVR doesn't use a depth buffer. My guess is that it is just a "marketing" way of saying that there is no need to perform expensive writes to and reads from system memory in order to do depth testing.
P.S.
Just to add to Tommy's answer: the primary benefits of tile-based deferred rendering are:
Since fragments are processed a tile at a time, all color/depth/stencil buffer reads and writes are performed in fast on-chip memory. While the color buffer still has to be read from/written to system memory once per tile, in many cases the depth and stencil buffers need to be written to system memory only if they are required for later use (like your use case). System memory traffic is a significant source of power consumption, so you can see how this reduces power consumption.
Deferred rendering enables hidden surface removal. Fewer rendered triangles means less fragment processing, which means fewer texture memory accesses.
For simplicity of the problem let's consider spheres. Let's say I have a sphere, and before execution I know the radius, the position and the triangle count. Let's also say the triangle count is sufficiently large (e.g. ~50k triangles).
Would it generally be faster to create this sphere mesh beforehand and stream all 50k triangles to the graphics card, or would it be faster to send a single point (representing the centre of the sphere) and use tessellation and geometry shaders to build the sphere on the GPU?
Would it still be faster if I had 100 of these spheres in different positions? Can I use hull/geometry shaders to create something which I can then combine with instancing?
Tessellation is certainly valuable, especially when combined with displacement from a heightmap. The isolated scenario described in your question, though, isn't really enough on its own to fully answer it.
Before using tessellation you would need to know that you are going to become poly/triangle bound on the CPU side and therefore need to start utilizing the GPU to help you increase the overall triangle count of your game/scene. Calculations are very fast on the GPU, so yes, using multiple subdivision levels with tessellation is advisable if you are going to do it... though sometimes I've been happy with just subdividing 3-4 times from a 200-tri plane.
Mainly, tessellation is used for environmental/static mesh scene objects so that you can spend your tris on characters and other moving/animated models without becoming CPU bound.
Check out engines like Unity3D and CryEngine for tessellation examples to help with the learning curve.
I just so happen to be working with this at the same time.
In terms of FPS, the pre-computed method would be faster in this situation, since you can dump one giant 50K-triangle sphere payload (like any other model) and draw it in multiple places from there.
The tessellation method would be slower, since all the triangles would be generated from a formula, multiple times per frame.
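For the "draw it in multiple places" part, instancing is a natural fit. A hedged sketch (GL 3.3 / ES 3.0 style; sphereVao, sphereIndexCount, sphereCount and the per-instance attribute setup are placeholders):

/* Draw the same pre-built 50k-triangle sphere at many positions in one call. */
glBindVertexArray(sphereVao);                    /* sphere mesh built at load time */
glUniformMatrix4fv(u_viewProjLocation, 1, GL_FALSE, viewProj);
/* per-instance centre/radius live in their own VBO, configured with
   glVertexAttribDivisor(attribIndex, 1) and read in the vertex shader  */
glDrawElementsInstanced(GL_TRIANGLES, sphereIndexCount,
                        GL_UNSIGNED_INT, 0, sphereCount);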
I've been working on improving my OpenGL ES 2.0 render performance by introducing batching; specifically one creates a RenderBatch, specifying a texture and a shader (for now) upon creation. This sets the state into a VAO to allow for inexpensive state switching. I started the implementation looking something like this:
batch = RenderBatch.new "SpriteSheet" "FlatShader"
batch.begin GL_TRIANGLE_STRIP
batch.addGeometry Geometry.newFromFile "Billboard"
batch.end
batch.render renderEngine
But then it hit me: my Billboard file has vertices that are meant to be scaled and translated for specific instance usage. So I added a transform argument to the addGeometry call.
batch.addGeometry(Geometry.newFromFile("Billboard"), myObject.transform)
This solves the problem of scaling, translating, and rotating the vertices, but it does so by first looking up the vertex information, transforming it by the transform matrix, and then inserting it into the batch data. While this works, it seems inefficient; it is CPU intensive and doesn't take advantage of the GPU's transformation power. However, it works, so it's not that big of a deal. (It would be nice to have a better way to do this, though.)
However, I've run into a roadblock: texture coordinates may need to be different for each instance as well, and that means I would have to pass in a texture transformation matrix, and now this is feeling hacky.
Is there an easier way to handle this kind of transformation to existing data using shaders that does not limit the geometry/models given and is easily extensible to use normal maps, UV maps, and other fancy tricks? Thanks!
It seems to me that what you are talking about are shader uniforms. Normally you would set up the vertex data and attributes for each batch in a VBO and a VAO. Then, in your render method, you switch to the correct VAO and set up the shader uniforms. These normally include a model-view-projection matrix to transform vertices into clip space, which necessarily would change nearly every frame, the correct texture to use, etc.
This is efficient because the unchanging vertex data is held in GPU memory, the VAO takes care of cheap state switching, and only the uniforms, which generally change often, are sent to the GPU each render call.
If you are batching multiple objects that require separate model view projection matrices, then you have a few options:
you have to perform a separate draw call for each batch that requires a separate model view projection matrix
use an array of model view projection matrices as a uniform and have a per-vertex attribute that provides the correct matrix index for each object to use (see the sketch below this list)
you have to transform the vertices using the CPU and refill the VBO with the updated data
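A minimal sketch of what the second option can look like (the GLSL and all identifiers here are illustrative assumptions, not any particular engine's API):

/* Vertex shader: a uniform array of matrices plus a per-vertex index
   attribute that says which batched object a vertex belongs to.       */
static const char *vertexShaderSrc =
    "#version 100\n"
    "uniform mat4 u_mvp[16];          /* one matrix per batched object */\n"
    "attribute vec3 a_position;\n"
    "attribute float a_matrixIndex;   /* set per object when batching  */\n"
    "void main() {\n"
    "    gl_Position = u_mvp[int(a_matrixIndex)] * vec4(a_position, 1.0);\n"
    "}\n";

/* C side: upload all the matrices in one call, then issue a single draw. */
glUniformMatrix4fv(u_mvpLocation, objectCount, GL_FALSE, &matrices[0][0]);
glDrawElements(GL_TRIANGLES, totalIndexCount, GL_UNSIGNED_SHORT, 0);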
The first method is the preferred solution; it is efficient and simple. The slow part of issuing lots of draw calls is generally getting the data from the CPU to the GPU; if you already have the vertex data in VBOs, then the overhead of a draw call per object is not going to be a big deal. This also solves the problem of how to provide different uniforms per object based on object properties: in each object's render method, the relevant properties are set up as uniforms before the draw call is made. If each object requires different data to be sent to the GPU, how else could this work?
This is a trade-off situation: the cost of state changes due to insufficient batching versus the cost of transformation on the CPU. There is no single best solution; it depends on how much of your scene is static, how much is dynamic, and how it is laid out.
A common solution is to put static objects, whose transformations relative to each other never change, into a single VBO or a few VBOs (if they use different textures, vertex formats, etc.), completely transformed. This is done once, before rendering, not every frame. Dynamic objects (players, monsters, whatever) are then rendered individually, with the transformation done in the vertex shader.
You can still optimize for state changes by roughly ordering the drawing of the individual objects by textures and programs.