XNA: How to write to a texture using shaders

Hey, I want to make a falling-sand animation (powder-game, pyrosand, wxsand...) with shaders for practice.
To do so, I need an array of bytes (256x256) stored in a texture. Every frame, this array is modified according to a set of rules (a simple for loop with some ifs in it).
Up to now I have locked the texture, applied the rules, and unlocked it every frame, but this seems to overwhelm my CPU. So is there a way to modify (read, then write) a texture with shaders?
Any suggestions or tutorial-links are welcome.

You are looking for RenderTargets ... you can easily use a shader to draw to a texture, and then do whatever you'd like with that texture.
One thing to keep in mind is that you'll have to change your algorithm: writing shaders is an exercise in functional programming, whereas it sounds like you wrote yours imperatively.
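A minimal sketch of the render-target ping-pong pattern, shown here in WebGL/TypeScript since most of the related questions below are about WebGL (in XNA the same idea uses two RenderTarget2D objects and GraphicsDevice.SetRenderTarget); `gl` and `simulationProgram` are assumed to exist, and every other name is illustrative:

declare const gl: WebGLRenderingContext;        // assumed WebGL context
declare const simulationProgram: WebGLProgram;  // shader applying the sand rules

function createStateTexture(): WebGLTexture {
  const tex = gl.createTexture()!;
  gl.bindTexture(gl.TEXTURE_2D, tex);
  // 256x256 byte texture holding the cell states.
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 256, 256, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  return tex;
}

// Two textures: read the previous state from one, render the new state
// into the other, then swap. A shader can never read and write the same
// texture in one pass.
let readTex = createStateTexture();
let writeTex = createStateTexture();
const fbo = gl.createFramebuffer()!;

function step(): void {
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, writeTex, 0);
  gl.useProgram(simulationProgram);
  gl.bindTexture(gl.TEXTURE_2D, readTex);   // previous frame as input
  gl.drawArrays(gl.TRIANGLES, 0, 6);        // assumes a full-screen quad is bound
  [readTex, writeTex] = [writeTex, readTex];
}

The swap is the key, and it is also why the algorithm has to be rephrased functionally: each cell computes its new state from its neighbours' old states, rather than mutating the grid in place.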

Related

WebGL: How to interact between javascript and shaders, and how to use multiple shaders

I have seen WebGL demos that:
- color rectangular surfaces
- attach textures to the rectangles
- draw wireframes
- have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with a corresponding color buffer. Can I have some rectangles with a texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
For a case with some rectangles with colors and others with textures, it seems to require two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How do I turn wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing the data currently in the buffer with new data?
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The question you ask is not just about WebGL, but about OpenGL and 3D in general.
The most common way to interact is to set attributes at setup time and to set uniforms both at setup time and at runtime.
In general, the answer to all of your questions is "use an engine".
Think of it like this: you have JavaScript, a CPU-based language; then you have WebGL, which is like a library for JS that allows low-level communication with the GPU (remember: low level); and then you have the shader, a GPU program you must provide, which only works with specific data.
Doing anything more than "simple" requires a tool that lets you skip using WebGL directly (and very often also skip writing shaders directly). That tool is what we call an engine. An engine usually bundles together some set of abilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you never have to touch the WebGL API directly. An engine can also provide quite complicated logic for building just a single pair, or a few pairs, of shaders from a few simple engine API calls, because swapping shader programs at runtime is expensive.
Your questions
I can create a vertex buffer with a corresponding color buffer. Can I have some rectangles with a texture and some without? If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
Let's have a buffer, which we call a vertex buffer. We put various data in the vertex buffer. Data doesn't go in as individual values but as sets, and each unique piece of data in a set is called an attribute. An attribute can have whatever meaning for its vertex the vertex shader or fragment shader code decides.
If we have a buffer full of triangle data, it is possible to add, for example, an attribute that says whether a specific vertex should be textured or not, and do the texturing logic in the shader (see the sketch below). Note that the attribute layout must be the same for every vertex in the buffer, so textured triangles take up the same amount of space as untextured ones.
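A sketch of that per-vertex flag idea, with made-up attribute and varying names (a TypeScript string holding GLSL ES source):

// Fragment shader mixing a flat color with a texture, controlled by a
// per-vertex flag the vertex shader passes through as a varying.
const fragmentSrc = `
  precision mediump float;
  varying float vUseTexture;  // 0.0 = flat color, 1.0 = textured
  varying vec2  vTexCoord;
  varying vec4  vColor;
  uniform sampler2D uSampler;

  void main() {
    vec4 texel = texture2D(uSampler, vTexCoord);
    gl_FragColor = mix(vColor, texel, vUseTexture);
  }
`;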
For a case with some rectangles with colors and others with textures, it seems to require two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
Not true; even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change programs at runtime with the WebGL API function useProgram:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
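A minimal sketch of switching, assuming `programA` and `programB` are already compiled and linked:

declare const gl: WebGLRenderingContext;
declare const programA: WebGLProgram, programB: WebGLProgram;
declare const coloredVertexCount: number, texturedVertexCount: number;

// Draw the colored rectangles with one program...
gl.useProgram(programA);
// ...(re)bind programA's attributes and uniforms here...
gl.drawArrays(gl.TRIANGLES, 0, coloredVertexCount);

// ...then switch and draw the textured ones. Each switch has a cost, so
// sort your draws by program to switch as rarely as possible.
gl.useProgram(programB);
gl.drawArrays(gl.TRIANGLES, 0, texturedVertexCount);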
How do I turn wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
WebGL has no polygon-mode switch for wireframe, but you can draw the same vertex data with line primitives (gl.LINES or gl.LINE_STRIP) instead of triangles; that choice is made per draw call and is independent of the shader program. It is also possible to write a shader that produces the wireframe look itself and to control it with a flag (the flag can be either a uniform or an attribute).
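A sketch of the line-primitive approach, with `gl` and the buffers assumed to exist already:

// Build a gl.LINES index buffer from the triangle indices: each triangle
// contributes its three edges.
function triangleIndicesToLines(tris: Uint16Array): Uint16Array {
  const lines: number[] = [];
  for (let i = 0; i < tris.length; i += 3) {
    const [a, b, c] = [tris[i], tris[i + 1], tris[i + 2]];
    lines.push(a, b, b, c, c, a);
  }
  return new Uint16Array(lines);
}

declare const gl: WebGLRenderingContext;
declare const triangleIndexBuffer: WebGLBuffer, lineIndexBuffer: WebGLBuffer;
declare const triIndexCount: number, lineIndexCount: number;

// Solid pass:
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);
gl.drawElements(gl.TRIANGLES, triIndexCount, gl.UNSIGNED_SHORT, 0);
// Wireframe pass reuses the same vertex buffer and shader:
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, lineIndexBuffer);
gl.drawElements(gl.LINES, lineIndexCount, gl.UNSIGNED_SHORT, 0);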
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
WebGL supports only points, lines, and triangles; GL_QUADS was dropped from OpenGL ES, so you do have to index your rectangles as pairs of triangles. I guess the reason is that triangles keep the pipeline simpler.
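The index generation is mechanical enough to hide in a helper; this sketch assumes each quad's four vertices are stored consecutively:

// Quad q uses vertices 4q..4q+3 and expands into two triangles.
function quadIndices(quadCount: number): Uint16Array {
  const indices = new Uint16Array(quadCount * 6);
  for (let q = 0; q < quadCount; q++) {
    const v = q * 4;
    indices.set([v, v + 1, v + 2, v, v + 2, v + 3], q * 6);
  }
  return indices;
}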
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing the data currently in the buffer with new data?
Updating buffer data on the run is possible but can slow a program down a lot if you do it every frame. The WebGL name is bufferSubData (there is no glSubBuffer), and it does what you describe: it replaces a range of the data currently in the buffer with new data. Use it sparingly, and prefer uniforms or attributes for values that change often.
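For completeness, a sketch of such an update, with the buffer and byte offset assumed:

declare const gl: WebGLRenderingContext;
declare const colorBuffer: WebGLBuffer;
declare const colorOffsetBytes: number;  // where this rectangle's colors start

// Overwrite one rectangle's four RGBA colors (here: solid red) in place.
const newColors = new Float32Array([1, 0, 0, 1,  1, 0, 0, 1,
                                    1, 0, 0, 1,  1, 0, 0, 1]);
gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
gl.bufferSubData(gl.ARRAY_BUFFER, colorOffsetBytes, newColors);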
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
Yes, it is called a scene graph; it is widely used and can also be combined with other techniques such as display lists.

Better to change uniforms or change program?

Using WebGL, I need to perform 3 passes to render my scene. Each pass runs the same geometry and shaders but has differing values for some uniforms and textures.
I seem to have two choices. Have a single "program" and set all of the uniforms and textures for each pass. Or have 3 "programs", each containing the same shaders, set all the necessary uniforms/textures once per program, and then just switch programs for each pass. That way I would do one useProgram call per pass instead of many setUniform calls for each pass.
Is this second technique likely to be faster, since it avoids very many setUniform calls, or is changing the program very expensive? I've done some trials, but with the very simple geometry I have at the moment I don't see any difference in performance because setup costs overwhelm any differences.
Is there any reason to prefer one technique over the other?
Just send different values via gl.uniform* calls if the shader programs are the same.
Switching between programs is generally slower than changing the value of a uniform.
That said, an über-shader program (with a list of flag uniforms like useLighting and useAlphaMap) is in most cases not a good idea either.
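A sketch of the single-program approach for the three passes; everything after `gl` is an illustrative name:

declare const gl: WebGLRenderingContext;
declare const program: WebGLProgram;
declare const passTextures: WebGLTexture[];
declare const indexCount: number;

// Bind the program once, then update only what differs per pass.
gl.useProgram(program);
const uPassMode = gl.getUniformLocation(program, "uPassMode");

for (let pass = 0; pass < 3; pass++) {
  gl.uniform1i(uPassMode, pass);                     // per-pass parameter
  gl.bindTexture(gl.TEXTURE_2D, passTextures[pass]); // per-pass texture
  gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);
}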
#gman: we are talking about WebGL (GLES 2.0), where we don't have UBOs (uniform buffer objects).
#top: summing up, try to avoid rebinding shader programs (but it's not the end of the world), and don't create one über-shader!
When you have large amounts of textures to rebind, texture atlasing should be the fastest solution: you don't need to rebind textures and you don't need to rebind programs. Textures can then be switched by modifying uniforms representing texCoord offsets into the atlas.
Modifying such uniforms can be optimized even further:
You should consider moving frequently modified uniforms to attributes. Usually their data is provided through attribPointers, but you can also use constant values when the attribute arrays are disabled. Instead of gl.uniform*() calls, use gl.vertexAttrib*() functions to specify their constant values.
I think the best example is a light position. Normally you'd have to specify new uniform values, every time the light position changes, for ALL programs that use it. In contrast, with such 'attributed' uniforms you can specify the attribute value once, globally, when your light moves (a sketch follows after the pros and cons).
-pros:
This method is best suited when you have many programs that would like to share uniforms; since we can't use uniform buffers in WebGL, it seems to be the only reasonable solution.
-cons:
Of course, the available space for such 'attributed' uniforms is much smaller than for regular uniforms, but it can still speed things up a lot if you move even part of your uniforms over.
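The promised sketch of the constant-attribute trick; the attribute name and `light` object are made up, and for the value to be truly global the attribute must be bound to the same location in every program (gl.bindAttribLocation before linking):

declare const gl: WebGLRenderingContext;
declare const program: WebGLProgram;
declare const light: { x: number; y: number; z: number };

// With the array disabled, the attribute takes a single constant value
// that every vertex sees, shared by all programs using that slot.
const loc = gl.getAttribLocation(program, "aLightPosition");
gl.disableVertexAttribArray(loc);
gl.vertexAttrib3f(loc, light.x, light.y, light.z);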

GLKit's GLKBaseEffect and custom shaders

I've been researching this problem I have and I can't seem to understand it well enough to solve it, so I thought I might as well throw it out there and the intelligent bunch might have some ideas. :P
Basically I have been working on an iPhone project for a while where I have the luxury of using all the newest frameworks and targeting 5.1. So I've been using GLKit and GLKBaseEffect, which has been working just fine for me. The reason I started out with GLKBaseEffect rather than writing my own shaders is that I don't know GLSL well. However, the requirements have become more precise and the base effect just doesn't seem to cut it any longer.
Since I am already doing all my transforms using the base effect, I would prefer to keep the base effect intact but add GLSL-type shaders on top, if that makes any sense.
My old approach looks something like this (this is in a loop rendering all objects, where an object contains such things as transforms, a mesh, and some other less important things for this problem, such as textures and materials):
ObjectBase *obj = [ResourceManager.shared getObjectNamed:name inScene:sceneName];
// Save the current modelview so it can be restored after this object.
GLKMatrix4 modelView = effect.transform.modelviewMatrix;
effect.transform.modelviewMatrix = GLKMatrix4Multiply(effect.transform.modelviewMatrix, obj.transform);
[effect prepareToDraw];
[obj render];
effect.transform.modelviewMatrix = modelView;
Here we fetch an object to render, transform it (i.e. translate, rotate, and scale), then render it; the rendering itself fetches the object's mesh, binds the buffers, and draws it.
So far so good.
What I would like, however, is for the object, during the [obj render]; call, to also do something like glUseProgram(someProgram);, adding more specialized shader code.
I guess one could argue that I am trying to use the base effect for my vertex shaders and "normal" shaders for my fragment shaders. At least that's what I think I want to do.
I have been trying some things.
I tried to create just the fragment shader and call glUseProgram on it, but it said that I need one vertex and one fragment shader when setting up and compiling it. I've also tried to create an empty vertex shader, which didn't turn out very well; I don't know exactly what happens in that case, but I'm guessing it overrules the base effect.
I am leaning toward accepting, in the end, that it's probably best to throw out the base effect and just write my own shaders all the way. It just feels like a lot of work out the window, so I wanted to see how much of it I could save.
I do understand that my understanding of shaders is the part that gives me the most problems, so please be patient with that fact.
I just wanted to give my conclusions for anyone interested in them.
What I've done is actually thrown out GLKBaseEffect altogether and implemented my own shader code.
My biggest problem was that I didn't really understand that it's all or nothing, so to speak.
I might be wrong, so any corrections to where I am wrong will be greatly appreciated; I really don't want to mislead anyone reading this.
What I found out during my endeavors are a couple of key points:
GLKBaseEffect is meant to mimic the fixed-function pipeline as seen in earlier versions of OpenGL ES. It wraps the common shader code, so you don't really have to care too much about it. You get basic functionality, but it's not very extensible.
You can still use the neat features of GLKit, such as the texture loader and the math library, if you write your own shader code. So if you want something more complicated or customizable (bump mapping, toon shading, and so on), it is totally worth rewriting the boilerplate code needed to render properly. What I did at first was use GLKBaseEffect to orient myself in the scene, since it's quite comfortable and easy to use. However, when I wanted to do more (tangent-space normal mapping), I got stuck, since I couldn't add to the shader program handled by GLKBaseEffect.
Shaders are really not as scary as I always thought! I just had no idea what they really meant, and I'm surprised that I've read so much about them and still hadn't understood that, basically, shaders are programs REPLACING the fixed-function pipeline. Simple as that.
That's enough rant I guess, just wanted to follow up and add what bits and pieces I've collected this far.
Just as you discovered, you can't use only a fragment shader and leave out the vertex shader, because the two have different tasks. Vertex shaders handle the per-vertex work: transforming positions, passing along texture coordinates (UVs), and so on; the rasterizer then assembles the faces (triangles) from those vertices. Fragment shaders decide what exactly will be drawn at each pixel on the screen (or in the viewport). When you provide only a fragment shader, you are not telling OpenGL what your vertex data is; you are only telling it to do something with pixels whose inputs are undefined, since no vertex shader produced them. A minimal pair is sketched below.
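For illustration, a minimal GLSL ES pair (the dialect OpenGL ES 2.0 and WebGL share), shown as source strings with made-up names; neither stage can be omitted:

const vertexSrc = `
  attribute vec4 aPosition;
  uniform mat4 uModelViewProjection;
  void main() {
    // The vertex stage must at least write gl_Position.
    gl_Position = uModelViewProjection * aPosition;
  }
`;
const fragmentSrc = `
  precision mediump float;
  uniform vec4 uColor;
  void main() {
    gl_FragColor = uColor;  // one flat color for every pixel of the face
  }
`;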
When using GLKBaseEffect, a call to the [yourEffect prepareToDraw] method takes care of the shaders and so on.
If you just wish to use a stock shader pair, why not use the one provided in the Xcode OpenGL Game template? When you run it, it has two cubes: one rendered using GLKit, and the other the normal way. Though I think it will not be enough for most effects. In case you wish to know more about shaders, you can have a look at the NeHe GLSL introduction article, which covers GLSL and how you can write and use shaders in your code. You might also want to look at Diney Bomfim's All About Shaders articles and this page.
Using GLKit is nice in most cases, since it saves you from writing lots of repetitive boilerplate code. For example, you do not have to wade through so many image formats with different color encodings and bits per pixel (per format) when you can just use GLKTextureLoader.

How should I optimize drawing a large, dynamic number of collections of vertices?

...or am I insane to even try?
As a novice to using bare vertices for 3D graphics, I haven't ever worked with vertex buffers and the like before. I am guessing that I should use a dynamic buffer because my game deals with manipulating, adding, and deleting primitives. But how would I go about doing that?
So far I have stored my indices in a Triangle.cs class. Triangles are stored in quads (which contain the vertices that correspond to their indices), and quads are stored in blocks. In my draw method, I iterate through each block, each quad in each block, and finally each triangle, apply the appropriate texture to my effect, then call DrawUserIndexedPrimitives to draw the vertices stored in the triangle.
I'd like to use a vertex buffer because this method cannot support the scale I am going for. I am assuming it needs to be dynamic. Since my vertices and indices are stored in a collection of separate classes, though, can I still effectively use a buffer? Is using separate buffers for each quad silly (I'm guessing it is)? Is it feasible and effective for me to dump vertices into the buffer the first time a quad is drawn and then store where those vertices were, so that I can apply that offset to the triangle's indices on successive draws? Is there a feasible way to handle removing vertices from the buffer in this scenario (perhaps event-based shifting of index offsets in triangles)?
I apologize that these questions may be far too novice-level or too confusing/vague. I'd be happy to provide clarification. But as I've said, I'm new to this and I may not even know what I'm talking about...
I can't exactly tell what you're trying to do, but using a separate buffer for every quad is very silly.
The golden rule in graphics programming is batch, batch, batch. This means packing as much stuff into a single DrawUserIndexedPrimitives call as possible; your graphics card will love you for it.
In your case, put all of your vertices and indices into one vertex buffer and one index buffer (you might need more; I have no idea how many vertices we're talking about). Whenever the user changes one of the primitives, regenerate the entire buffer. If you really have a lot of primitives, split them across multiple buffers and only regenerate the ones you need when the user changes something.
The most important thing is to minimize the number of DrawUserIndexedPrimitives calls; those have a lot of overhead, and you could easily make your game on the order of 20x faster.
Graphics cards are pipelines: they like being given a big chunk of data to eat away at. Giving it one triangle at a time is like forcing a large-scale car factory to make one car at a time, where they can't start building the next car before the last one is finished.
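A rough sketch of the regenerate-one-big-buffer idea (in TypeScript for neutrality; the XNA version fills a VertexBuffer/IndexBuffer the same way, and the Quad layout here is made up):

// Rebuild one position array and one index array from all quads, so a
// single draw call can cover everything.
interface Quad { corners: [number, number, number][]; }  // 4 xyz corners

function rebuildBuffers(quads: Quad[]) {
  const positions = new Float32Array(quads.length * 4 * 3);
  const indices = new Uint16Array(quads.length * 6);
  quads.forEach((quad, q) => {
    quad.corners.forEach((c, i) => positions.set(c, (q * 4 + i) * 3));
    const v = q * 4;  // two triangles per quad
    indices.set([v, v + 1, v + 2, v, v + 2, v + 3], q * 6);
  });
  return { positions, indices };  // upload once, then one draw call
}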
Anyway good luck, and feel free to ask any questions.

Calculation of vertex normals in DirectX

As a learning experience, I'm writing an Immediate mode managed DirectX 9 application.
I'm manually calculating vertex normals across all triangles in a scene to allow smooth Gouraud shading.
This works as expected, but I'm guessing this is not the most efficient approach. Is it possible to get the GPU to do this for me?
You could in theory generate the vertex normals inside the vertex shader. That involves computation every single time you render a mesh using that shader, though, so why not generate them in advance?
If you mean you want to generate them in advance of rendering, but use the GPU instead of the CPU, I would say that it's not worth the bother of speeding up something you are only going to do once. Besides, I'm not sure if DX9 has a way to get computed vertex information back from a shader (DX10 does).
All in all, the best thing to do in most cases is the traditional approach: compute vertex normals in the program that saves out the mesh data files; make it a pre-computation step. Usually you already have them if the mesh came from a 3D package like Max or Maya, because there is artistic information in the normals; unless you know the whole mesh is supposed to be perfectly smooth (or faceted), they are not computable in the general case.
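For that pre-computation step, the usual recipe is to accumulate each face's (area-weighted) normal into its vertices and normalize at the end; a sketch assuming a flat-array mesh layout:

// positions: xyz per vertex; indices: three vertex ids per triangle.
function computeVertexNormals(positions: Float32Array,
                              indices: Uint16Array): Float32Array {
  const normals = new Float32Array(positions.length);
  for (let t = 0; t < indices.length; t += 3) {
    const ia = indices[t] * 3, ib = indices[t + 1] * 3, ic = indices[t + 2] * 3;
    // Two edge vectors of the triangle.
    const e1 = [0, 1, 2].map(k => positions[ib + k] - positions[ia + k]);
    const e2 = [0, 1, 2].map(k => positions[ic + k] - positions[ia + k]);
    // Face normal = e1 x e2; its length weights the average by area.
    const n = [e1[1] * e2[2] - e1[2] * e2[1],
               e1[2] * e2[0] - e1[0] * e2[2],
               e1[0] * e2[1] - e1[1] * e2[0]];
    for (const base of [ia, ib, ic])
      for (let k = 0; k < 3; k++) normals[base + k] += n[k];
  }
  // Normalize the accumulated sums.
  for (let v = 0; v < normals.length; v += 3) {
    const len = Math.hypot(normals[v], normals[v + 1], normals[v + 2]) || 1;
    for (let k = 0; k < 3; k++) normals[v + k] /= len;
  }
  return normals;
}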
