Speed of ComputeShader vs. PixelShader - directx

I've got a question regarding ComputeShader compared to PixelShader.
I want to do some processing on a buffer, and this is possible both with a pixel shader and a compute shader, so now I wonder whether one has any advantage over the other, specifically when it comes to speed. I've had issues with both when it comes to working with just 8-bit values, but I should be able to work around that.
Every data point in the output will be calculated from a total of 8 data points surrounding it (an MxN matrix), so I'd think this would be perfect for a pixel shader, since the different outputs don't influence each other at all.
But I was unable to find any benchmarks comparing the two, so now I wonder which one I should aim for. My only target is speed.

From what I understand, shaders are shaders in the sense that they are just programs run by a lot of threads on data. Therefore, in general there should not be any difference in terms of computing power/speed when doing calculations in a pixel shader as opposed to a compute shader. However...
To do calculations in a pixel shader you have to massage your data so that it looks like image data: you have to draw a quad first of all, and your output must have the 'shape' of a pixel (a float4, basically). This data must then be interpreted by your app into something useful.
If you're using a compute shader you can completely control the number of threads to use, whereas for pixel shaders the workload has to map to a valid render-target resolution. You can also input and output data in any format you like and take advantage of accelerated format conversion using UAVs (I think).
I'd recommend using compute shaders, since they are meant for general-purpose computation and are a lot easier to work with. Your overall application will probably be faster too, even if the actual shader computation time is about the same, just because you can avoid some of the hoops you have to jump through to get pixel shaders to do what you want.
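As a rough illustration of that last point, here is a minimal sketch in C++/Direct3D 11 of dispatching a compute shader over a 2D buffer; the shader, views and the [numthreads(16, 16, 1)] group size are assumptions, not code from the question:

    // Minimal D3D11 compute dispatch sketch. Assumes `context`, `computeShader`,
    // `inputSRV` and `outputUAV` were created elsewhere; the names are illustrative.
    #include <d3d11.h>

    void DispatchFilter(ID3D11DeviceContext* context,
                        ID3D11ComputeShader* computeShader,
                        ID3D11ShaderResourceView* inputSRV,
                        ID3D11UnorderedAccessView* outputUAV,
                        UINT width, UINT height)
    {
        context->CSSetShader(computeShader, nullptr, 0);
        context->CSSetShaderResources(0, 1, &inputSRV);
        context->CSSetUnorderedAccessViews(0, 1, &outputUAV, nullptr);

        // Matches an assumed [numthreads(16, 16, 1)] in the HLSL kernel: round up
        // so every output element is covered. No quad, viewport or vertex shader needed.
        const UINT groupsX = (width  + 15) / 16;
        const UINT groupsY = (height + 15) / 16;
        context->Dispatch(groupsX, groupsY, 1);

        // Unbind the UAV so the resource can be read by later passes.
        ID3D11UnorderedAccessView* nullUAV = nullptr;
        context->CSSetUnorderedAccessViews(0, 1, &nullUAV, nullptr);
    }

The equivalent pixel-shader path would additionally need a render target, a viewport and a full-screen quad just to launch the same work.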

Related

What is the correct way to store per-pixel persistence data in the Metal compute kernel?

I am trying to implement the MoG background subtraction algorithm based on the OpenCV CUDA implementation.
What I need is to maintain a set of Gaussian parameters independently for each pixel location across multiple frames. Currently I am just allocating a single big MTLBuffer to do the job, and on every frame I have to invoke the commandEncoder.setBuffer API. Is there a better way? I read about imageblocks but I am not sure whether they are relevant here.
Also, I would be really happy if you could spot anything that shouldn't be directly translated from CUDA to Metal.
Allocate an 8-bit texture and store intermediate values into the texture in your compute shader. After this texture is rendered, you can rebind it as an input texture to whatever other methods need to read from it in the rest of the renders. You can find a very detailed example of this sort of thing in this GitHub example project of a parallel prefix sum on top of Metal. The example also shows how to write XCTest regression tests for your Metal shaders. Github MetalPrefixSum
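For what it's worth, here is a minimal sketch of that texture setup using Apple's metal-cpp wrapper (the C++ bindings to the same Metal API); the device, sizes and binding indices are assumptions:

    // Sketch using metal-cpp. Allocate an 8-bit-per-pixel texture once, write to it
    // in a compute pass, then rebind the same texture as input to later passes.
    #include <Metal/Metal.hpp>

    MTL::Texture* MakePersistentR8Texture(MTL::Device* device,
                                          NS::UInteger width, NS::UInteger height)
    {
        MTL::TextureDescriptor* desc =
            MTL::TextureDescriptor::texture2DDescriptor(MTL::PixelFormatR8Unorm,
                                                         width, height,
                                                         /*mipmapped=*/false);
        // Readable and writable from shaders so the values persist across frames.
        desc->setUsage(MTL::TextureUsageShaderRead | MTL::TextureUsageShaderWrite);
        desc->setStorageMode(MTL::StorageModePrivate); // GPU-only storage
        return device->newTexture(desc);
    }

    // Per frame: bind as writable output of the compute pass...
    //   computeEncoder->setTexture(stateTexture, 0);
    // ...and later as a plain input texture of whatever pass reads it back:
    //   readerEncoder->setTexture(stateTexture, 1);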

Performance of chained Metal shaders versus a single shader?

Practically speaking, how much overhead does chaining shaders have compared to if a single shader is used to do the same work?
In other words, is it preferable to chain shaders versus developing one monster shader? Or does the overhead from chaining them dictate to use as few shaders as possible?
As an example, consider #warrenm's sample "Image Processing" project. There is an adjust_saturation shader chained to a gaussian_blur_2d shader. Would combining both shaders into a single shader significantly improve performance, or would it practically be the same?
I would expect a significant performance gain in your example of combining adjust_saturation with gaussian_blur_2d (assuming they do what their names suggest).
From the GPU's point of view, both operations are pretty trivial in terms of the maths that needs to be done; performance is going to be totally dominated by texture fetching and writing out results. I'd imagine that the gaussian blur is doing a bit more work, because it presumably does multiple texture samples per output fragment. By combining the two shaders, you can entirely eliminate the texture fetching and writing cost of adjusting saturation.
I think that by combining the two operations you could expect significant performance gains, somewhere around 10%-40% faster than chaining them. Bear in mind you might not see a difference in framerate, because iOS is very active in managing the CPU/GPU clock speeds, so it's really hard to measure things accurately.
It depends on the size of the texture and the size of the cache. If you absolutely have to optimize it, it is probably worth combining them into a single shader. If you want to reuse your code, it makes sense to create a set of simpler shaders and combine them (just like my VideoShader project, https://github.com/snakajima/vs-metal).
By the way, when you combine multiple shaders, it's better to create a single command buffer and encode all your shaders into that command buffer (instead of creating a command buffer for each shader). It allows Metal to do a certain set of optimizations.
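A rough sketch of that pattern using Apple's metal-cpp bindings (the pipeline states, textures and dispatch sizes are assumptions):

    // Sketch: encode both passes into ONE command buffer so Metal can schedule
    // them together, instead of committing a separate command buffer per shader.
    #include <Metal/Metal.hpp>

    void EncodeChainedPasses(MTL::CommandQueue* queue,
                             MTL::ComputePipelineState* saturationPipeline,
                             MTL::ComputePipelineState* blurPipeline,
                             MTL::Texture* input, MTL::Texture* temp,
                             MTL::Texture* output,
                             MTL::Size threadgroups, MTL::Size threadsPerGroup)
    {
        MTL::CommandBuffer* commandBuffer = queue->commandBuffer();

        MTL::ComputeCommandEncoder* first = commandBuffer->computeCommandEncoder();
        first->setComputePipelineState(saturationPipeline);
        first->setTexture(input, 0);
        first->setTexture(temp, 1);
        first->dispatchThreadgroups(threadgroups, threadsPerGroup);
        first->endEncoding();

        MTL::ComputeCommandEncoder* second = commandBuffer->computeCommandEncoder();
        second->setComputePipelineState(blurPipeline);
        second->setTexture(temp, 0);
        second->setTexture(output, 1);
        second->dispatchThreadgroups(threadgroups, threadsPerGroup);
        second->endEncoding();

        commandBuffer->commit();  // one commit for the whole chain
    }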

Better to change uniforms or change program?

Using webgl I need to perform 3 passes to render my scene. Each pass runs the same geometry and shaders but has differing values for some uniforms and textures.
I seem to have two choices: have a single program and set all of the uniforms and textures for each pass, or have 3 programs, each containing the same shaders, set all the necessary uniforms/textures once per program, and then just switch programs for each pass. The second option means I would do one useProgram call per pass instead of many setUniform calls per pass.
Is this second technique likely to be faster because it avoids very many setUniform calls, or is changing the program very expensive? I've done some trials, but with the very simple geometry I have at the moment I don't see any difference in performance, because setup costs overwhelm any differences.
Is there any reason to prefer one technique over the other?
Just send different values via glUniform if the shader programs are the same.
Switching between programs is generally slower than changing the value of a uniform.
Anyway, an uber shader program (with a list of uniforms like useLighting, useAlphaMap) is in most cases not a good idea.
#gman
We are talking about WebGL (GLES 2.0), where we don't have UBOs (uniform buffer objects).
#top
Summing up: try to avoid rebinding shader programs (but it's not the end of the world), and don't create one uber shader!
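For illustration, here is a sketch of the single-program approach in OpenGL ES 2.0 C calls, which WebGL 1 mirrors almost one-to-one; the program, uniform names and draw call are assumptions:

    // Sketch: keep one program bound and only update per-pass uniforms/textures.
    #include <GLES2/gl2.h>

    void RenderThreePasses(GLuint program, GLuint textures[3])
    {
        glUseProgram(program);                       // bound once, not per pass
        GLint passLoc    = glGetUniformLocation(program, "u_passIndex");
        GLint samplerLoc = glGetUniformLocation(program, "u_texture");

        for (int pass = 0; pass < 3; ++pass)
        {
            glActiveTexture(GL_TEXTURE0);
            glBindTexture(GL_TEXTURE_2D, textures[pass]);
            glUniform1i(samplerLoc, 0);              // sampler stays on unit 0
            glUniform1f(passLoc, (float)pass);       // per-pass parameter
            // drawScene();  // same geometry, same shaders, different uniforms
        }
    }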
When you have large amounts of textures to rebind, texture atlasing should be the fastest solution, since then you don't need to rebind textures and don't need to rebind programs. Textures can be switched by modifying the uniforms representing texCoord offsets.
Modifying such uniforms can be optimized even further:
You should consider moving frequently modified uniforms to attributes. Usually their data is provided using attribute pointers, but you can also use constant values when the attribute arrays are disabled. Instead of the uniformXXX() functions, use the vertexAttribXXX() functions to specify their constant values.
I think the best example is a light position. Normally you'd have to specify new uniform values every time the light position changes, for ALL programs that make use of it. In contrast, when using 'attributed' uniforms you can specify the attribute value once, globally, when your light moves (see the sketch after the pros/cons below).
-pros:
This method is best suited when you have many programs which would like to share uniforms; since we can't use uniform buffers in WebGL, it seems to be the only reasonable solution.
-cons:
Of course the available size of such 'attributed' uniforms will be much smaller than that of regular uniforms, but it can still speed things up a lot if you apply it to some part of your uniforms.
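Here is a sketch of the 'attributed uniform' trick in OpenGL ES 2.0 C calls (WebGL mirrors this API); the attribute name and the fixed location are assumptions, and every program must bind that attribute to the same location for the trick to work:

    // Sketch: the constant value of a generic vertex attribute is global GL state,
    // so every program that binds the same attribute location sees the update.
    #include <GLES2/gl2.h>

    const GLuint kLightPosAttrib = 7;   // bind this index in ALL programs with
                                        // glBindAttribLocation(prog, 7, "a_lightPos")
                                        // before linking each program.

    void SetGlobalLightPosition(float x, float y, float z)
    {
        glDisableVertexAttribArray(kLightPosAttrib); // no buffer: use the constant value
        glVertexAttrib3f(kLightPosAttrib, x, y, z);  // one call updates all programs
    }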

Efficient pixel shader when only u-axis varies?

I'm writing a pixel shader that has the property that, for a given quad, the values returned vary only with the u-axis value. I.e. for a fixed u, the color output is constant as v varies.
The computation to calculate the color at a pixel is relatively expensive - i.e. it does multiple samples per pixel, loops, etc.
Is there a way to take advantage of the v-invariance property? If I were doing this on a CPU then you'd obviously just cache the values once calculated, but I guess that doesn't apply because of parallelism. It might be possible for me to move the texture generation to the CPU side and have the shader access a Texture1D, but I'm not sure how fast that will be.
Is there a paradigm that fits this situation on GPU cards?
cheers
Storing your data in a 1D texture and sampling it in your pixel shader looks like a good solution. Your GPU will be able to use its texture caching features, allowing it to make use of the fact that many of your pixels are using the same value from your 1D texture. This should be really fast; texture fetching and caching is one of the main reasons your GPU is so efficient at rendering.
It is common practice to make a trade-off between calculating the value in the pixel shader and using a lookup table texture. You are doing complex calculations by the sound of it, so using a lookup texture will certainly improve performance.
Note that you could still generate this texture on the GPU; there is no need to move it to the CPU. Just render to this 1D texture using your existing shader code as a prepass.
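A minimal Direct3D 11 sketch of such a lookup texture, created so it can be rendered to in the prepass and sampled in the main pass (the format and names are assumptions):

    // Sketch: one texel per u value; render target for the expensive prepass,
    // shader resource for the cheap per-pixel lookup afterwards.
    #include <d3d11.h>

    HRESULT CreateLookupTexture1D(ID3D11Device* device, UINT width,
                                  ID3D11Texture1D** texOut,
                                  ID3D11RenderTargetView** rtvOut,
                                  ID3D11ShaderResourceView** srvOut)
    {
        D3D11_TEXTURE1D_DESC desc = {};
        desc.Width     = width;
        desc.MipLevels = 1;
        desc.ArraySize = 1;
        desc.Format    = DXGI_FORMAT_R16G16B16A16_FLOAT;
        desc.Usage     = D3D11_USAGE_DEFAULT;
        desc.BindFlags = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

        HRESULT hr = device->CreateTexture1D(&desc, nullptr, texOut);
        if (FAILED(hr)) return hr;
        hr = device->CreateRenderTargetView(*texOut, nullptr, rtvOut);
        if (FAILED(hr)) return hr;
        return device->CreateShaderResourceView(*texOut, nullptr, srvOut);
    }
    // Prepass: set the RTV and draw a width x 1 strip with the expensive shader.
    // Main pass: bind the SRV and sample it using uv.x only.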

DirectX world/view matrix multiplication - GPU or CPU, which is the place?

I am new to DirectX, but have been surprised that in most examples I have seen, the world matrix and view matrix are multiplied as part of the vertex shader, rather than being multiplied by the CPU and the result being passed to the shader.
For rigid objects this means you multiply the same two matrices once for every single vertex of the object. I know that the GPU can do this in parallel over a number of vertices (I don't really have an idea how many), but isn't this really inefficient, or am I just missing something? I am still new and clueless.
In general, you want to do it on the CPU. However, DirectX 9 had the concept of "preshaders", which meant that this multiplication was done on the CPU up-front. This has been removed in newer APIs, but it may very well be relevant for the examples you're looking at.
Moreover, modern GPUs are extremely fast when it comes to ALU operations compared to memory access. Having a modestly complex vertex shader (with a texture fetch, maybe) means that the math required to do the matrix multiplication comes for free, so the authors might not even have bothered.
Anyway, the best practice is to pre-multiply everything constant on the CPU. The same applies to moving work from the pixel shader into the vertex shader (if something is constant across a triangle, don't compute it per pixel).
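For example, a small DirectXMath sketch of pre-multiplying the constant matrices once per object on the CPU (the constant-buffer struct and matrix sources are assumptions):

    // Sketch: multiply world * view * projection once on the CPU, then upload
    // the single combined matrix that the vertex shader applies per vertex.
    #include <DirectXMath.h>
    using namespace DirectX;

    struct PerObjectConstants
    {
        XMFLOAT4X4 worldViewProj;   // matches a WORLDVIEWPROJECTION-style constant
    };

    void FillConstants(const XMMATRIX& world, const XMMATRIX& view,
                       const XMMATRIX& proj, PerObjectConstants& out)
    {
        XMMATRIX wvp = XMMatrixMultiply(XMMatrixMultiply(world, view), proj);
        // HLSL packs matrices column-major by default, so transpose before uploading.
        XMStoreFloat4x4(&out.worldViewProj, XMMatrixTranspose(wvp));
    }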
Well, that doesn't sound clueless to me at all, you are absolutely right!
I don't know exactly what examples you have been looking at, but in general you'd pass precalculated matrices as much as possible; that is what semantics like WORLDVIEW (and, even more appropriate for simple shaders, WORLDVIEWPROJECTION) are for.
Exceptions could be cases where the shader code needs access to the separate matrices as well (but even then I'd usually pass the combined matrices too)... or perhaps those examples were all about illustrating matrix multiplication. :-)
