I have searched the net for a good example on this subject, to no avail, so I am asking the Stack Overflow community: are there any examples of rendering to a texture array with Direct3D, preferably selecting the layer to render to with the geometry shader? Rendering both depth and color should be possible.
Related
I have seen WebGL demos that
color rectangular surfaces
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
A case with some rectangles colored and others textured seems to require two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How do I turn wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously triangles would be needed for some things, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing data currently in the buffer with new data?
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The questions you ask are not just about WebGL, but about OpenGL and 3D in general.
The most common way to interact is to set attributes at the start, and uniforms both at the start and at run time.
In general, the answer to all of your questions is "use an engine".
Think of it like this: you have JavaScript, a CPU-based language; then you have WebGL, which is a JS library that allows low-level communication with the GPU (remember, low level); and then you have shaders, which are GPU programs you must provide, but which work only with specific data.
Doing anything that is more than "simple" requires a tool that lets you skip using WebGL directly (and very often also skip writing shaders directly). That tool we call an engine. An engine usually binds together some set of abilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you never have to touch the WebGL API yourself. An engine also provides quite complicated logic to build only a single pair, or a few pairs, of shaders from just a few simple engine API calls. The reason is that swapping the shader program during the run of the program is costly.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I
have some rectangles with texture and some without? If not, I have to
create one vertex buffer for all objects with colors, and another with
textures. Can I attach a different texture to each rectangle in a
vector?
Let's have a buffer; we call it a vertex buffer. We put various data in the vertex buffer. The data does not go in as individual values, but as sets. Each unique piece of data in a set we call an attribute. An attribute can have whatever meaning for its vertex the vertex shader or fragment shader code decides.
If we have a buffer full of triangle data, it is possible, for example, to add an attribute that says whether a specific vertex should be textured or not, and to do the texturing logic in the shader. Note that the attribute layout must be the same size for every vertex in the buffer, so the textured triangles take up the same space as the untextured ones.
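As a rough illustration of that idea, here is a minimal WebGL sketch. It assumes a context gl, a linked program, a filled vertexBuffer, and attribute locations already looked up; names like aUseTexture and uSampler are made up for the example. Each vertex carries a position, a color, texture coordinates and a "use texture" flag, and the fragment shader blends between the flat color and the texel based on that flag (the vertex shader just passes the values through as varyings):

    // Interleaved layout per vertex: x, y, r, g, b, u, v, useTexture  (8 floats).
    const STRIDE = 8 * 4;   // bytes per vertex
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.vertexAttribPointer(aPositionLoc,   2, gl.FLOAT, false, STRIDE, 0);
    gl.vertexAttribPointer(aColorLoc,      3, gl.FLOAT, false, STRIDE, 2 * 4);
    gl.vertexAttribPointer(aTexCoordLoc,   2, gl.FLOAT, false, STRIDE, 5 * 4);
    gl.vertexAttribPointer(aUseTextureLoc, 1, gl.FLOAT, false, STRIDE, 7 * 4);
    // (gl.enableVertexAttribArray calls omitted for brevity.)

    // Fragment shader: blend between flat color and texel using the per-vertex flag.
    const fsSource = `
      precision mediump float;
      uniform sampler2D uSampler;
      varying vec3 vColor;
      varying vec2 vTexCoord;
      varying float vUseTexture;   // 0.0 = flat color, 1.0 = textured
      void main() {
        vec4 texel = texture2D(uSampler, vTexCoord);
        gl_FragColor = mix(vec4(vColor, 1.0), texel, vUseTexture);
      }`;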
A case with some rectangles colored and others textured seems to require two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
Not necessarily: even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change the program at run time:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
WebGL API function useProgram
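A sketch of what a program switch looks like in practice, assuming two already compiled and linked programs (colorProgram and textureProgram), that the matching buffers and attributes are bound before each draw, and that the vertex counts and texture are placeholders:

    // Draw the flat-colored rectangles with one program...
    gl.useProgram(colorProgram);
    gl.drawArrays(gl.TRIANGLES, 0, coloredVertexCount);

    // ...then switch programs and draw the textured rectangles.
    gl.useProgram(textureProgram);
    gl.bindTexture(gl.TEXTURE_2D, rectangleTexture);
    gl.drawArrays(gl.TRIANGLES, 0, texturedVertexCount);

As noted above, program switches are not free, so it is common to sort draws by program and switch as rarely as possible.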
How do I turn wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
WebGL has no fill/wireframe switch like desktop OpenGL's glPolygonMode, but you can draw the same geometry with a line primitive (gl.LINES, gl.LINE_STRIP or gl.LINE_LOOP) instead of gl.TRIANGLES. The primitive mode is a parameter of each draw call and is independent of the shader program, so you can switch it per draw. It is also possible to write a shader that produces a wireframe look itself and control it with a flag (the flag can be either a uniform or an attribute).
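For example, a common trick is to keep two index buffers over the same vertex data and pick one per draw call. This is just a sketch, assuming edgeIndexBuffer (index pairs for every edge) and triangleIndexBuffer were filled elsewhere:

    if (wireframe) {
      // Each pair of indices in edgeIndexBuffer is one edge of the mesh.
      gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, edgeIndexBuffer);
      gl.drawElements(gl.LINES, edgeIndexCount, gl.UNSIGNED_SHORT, 0);
    } else {
      gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);
      gl.drawElements(gl.TRIANGLES, triangleIndexCount, gl.UNSIGNED_SHORT, 0);
    }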
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously triangles would be needed for some things, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
WebGL supports only points, lines and triangles; there is no quad primitive. A rectangle therefore has to be built from two triangles (or drawn as a four-vertex TRIANGLE_STRIP or TRIANGLE_FAN). I guess quads were dropped because triangles keep the pipeline simpler.
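So a rectangle typically ends up as four vertices and six indices. A minimal sketch, assuming positionBuffer and indexBuffer were created with gl.createBuffer earlier and the attribute setup is done elsewhere:

    // One rectangle = 4 corner vertices, 2 triangles, 6 indices.
    const corners = new Float32Array([
      -0.5, -0.5,   // 0: bottom-left
       0.5, -0.5,   // 1: bottom-right
       0.5,  0.5,   // 2: top-right
      -0.5,  0.5,   // 3: top-left
    ]);
    const indices = new Uint16Array([
      0, 1, 2,      // first triangle
      0, 2, 3,      // second triangle
    ]);
    gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, corners, gl.STATIC_DRAW);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, indices, gl.STATIC_DRAW);
    gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);

For a single rectangle you could also skip the index buffer entirely and draw four vertices with gl.TRIANGLE_STRIP or gl.TRIANGLE_FAN.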
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing data currently in the buffer with new data?
I would say it is rare to update buffer data on the run, since doing so a lot can slow a program down. The WebGL equivalent of glBufferSubData is gl.bufferSubData, and it does let you replace part of the data already in a buffer. Anyway, use it sparingly ;)
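For completeness, here is what such an update looks like, assuming colorBuffer was allocated with a large enough gl.bufferData call earlier and colorByteOffset (an illustrative name) points at the rectangle's color data inside it:

    // Overwrite one rectangle's colors (4 vertices, RGB each) in place.
    const newColors = new Float32Array([
      1.0, 0.0, 0.0,
      1.0, 0.0, 0.0,
      1.0, 0.0, 0.0,
      1.0, 0.0, 0.0,
    ]);
    gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
    gl.bufferSubData(gl.ARRAY_BUFFER, colorByteOffset, newColors);

For values that change every frame, such as transparency, a uniform is usually cheaper than rewriting buffer data.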
Is it reasonable to have a single object maintaining these kinds of
objects and updating color and textures, or is this not a good design?
Yes. It is called a scene graph, it is widely used, and it can also be combined with other techniques such as display lists.
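A very rough sketch of what such a node could look like in JavaScript. Everything here, including the class name and the uColor uniform, is made up for illustration; a real scene graph would also track transforms and parent/child matrices:

    class RectNode {
      constructor(gl, vertices) {
        this.color = [1, 1, 1, 1];     // RGBA, changeable at any time
        this.texture = null;           // optional WebGLTexture
        this.children = [];
        this.buffer = gl.createBuffer();
        gl.bindBuffer(gl.ARRAY_BUFFER, this.buffer);
        gl.bufferData(gl.ARRAY_BUFFER, vertices, gl.STATIC_DRAW);
      }
      setColor(rgba)  { this.color = rgba; }
      setTexture(tex) { this.texture = tex; }
      draw(gl, program) {
        gl.useProgram(program);
        // Looked up here for brevity; a real engine would cache the location.
        gl.uniform4fv(gl.getUniformLocation(program, 'uColor'), this.color);
        if (this.texture) gl.bindTexture(gl.TEXTURE_2D, this.texture);
        gl.bindBuffer(gl.ARRAY_BUFFER, this.buffer);
        // ...set up vertexAttribPointer and issue gl.drawArrays here...
        this.children.forEach(child => child.draw(gl, program));
      }
    }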
I'm making an iOS app and I want to be able to render with individual "layers" so that I can do blending between them and use shaders on each individually before blending them all together and rendering to the screen.
I understand that I will be rendering to Textures and then rendering these textures on top of each other in the framebuffer, but I am not understanding clearly what code needs to be written to follow this procedure. In another answer I found what I want to do, but I don't know what code accomplishes this task: How to achieve multi-layered drawing with OpenGL ES on iOS? (For example how do I "Bind texture 1, then draw it"? What does it mean to "Attach texture 1"?)
I've also looked at Apple's documentation regarding this technique but it isn't very clear about the steps or code for the actual rendering part of the process.
How would I go about doing this? (hopefully with code examples of each step because I haven't understood spotty instructions that expect me to just know what is needed for each step)
Here is an example of what I want to do with this. The spheres would be rendered into a "layer" or Texture2D, which I would then pass through the shader and render on top of an already partially rendered scene. I don't know exactly what kind of OpenGL code could do that.
You're looking in the wrong place. To use OpenGL, you need to study OpenGL, not anything else. Apple doesn't provide its own OpenGL documentation because it's an open standard, so the specs are freely published; Apple assumes you're already familiar with it.
OpenGL ES 2.0 spec
manual pages
I think you are having trouble because you don't yet understand the GL-specific terms. The spec describes them very clearly, so please read it; that will save you a LOT of time. Otherwise the trouble will keep coming back.
Also, I'd like to recommend a site that has a very nice conceptual description of the OpenGL pipeline.
http://www.songho.ca/opengl/
The site targets desktop GL, so some of the API may differ a little; focus on the conceptual understanding (its pipeline illustrations are particularly helpful).
For more tutorials, Google a suitable keyword such as "OpenGL ES 2.0 tutorial" (or how-to); there are many of them. If the spec is too boring, it's also fine to have some fun with tutorials first.
Update
One more thing: IMO, OpenGL is all about drawing triangles. Everything is ultimately converted into triangles in 3D space to represent some shape; almost everything else exists only for optimization. In most cases GL chooses batch processing as its main optimization strategy, because the overhead of each draw call is not affordable for most games.
It's hard to start with OpenGL ES because it's an optimized version of desktop GL, so all the convenient or easy drawing features are stripped out. This is the same even in recent versions of desktop GL.
So there's no drawOneTriangle function. Instead, GL has something like:
make a buffer,
put a list of many triangles in it,
select the buffer for the next drawing,
draw all triangles in the current buffer at once,
delete the buffer.
By using a buffer, you don't need to dispatch duplicated data from the CPU to the GPU, and GL uses this approach everywhere. For example, there is no drawOneTriangleWithTexture function for using textures either. Instead, you have to (see the sketch after this list):
make a texture object (another kind of buffer),
put a bitmap (a list of many pixels) in it,
select the texture for the next drawing,
draw all triangles with that texture and the current buffers,
delete the texture.
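Put together, the two lists above map onto GL calls roughly like this. The sketch uses WebGL-style JavaScript; the OpenGL ES 2.0 C calls have the same shape (glGenBuffers/glBufferData, glGenTextures/glTexImage2D, and so on). triangleVertices, image and vertexCount are assumed to exist, and the shader/attribute setup is omitted:

    // 1. Make a buffer and put a list of many triangles in it.
    const buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, triangleVertices, gl.STATIC_DRAW);

    // 2. Make a texture and put a bitmap in it (image is an <img> or canvas).
    const texture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

    // 3. Select the buffer and the texture, then draw everything at once.
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.drawArrays(gl.TRIANGLES, 0, vertexCount);

    // 4. Delete them when they are no longer needed.
    gl.deleteBuffer(buffer);
    gl.deleteTexture(texture);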
All the overly complex stuff in GL exists for optimization. It may look weird at first, but there are usually very good reasons for the design.
Update 2
Now I think you're looking for the render-to-texture feature (well, actually you already mentioned this…).
You can use a rendered image as a texture source. To do this,
you need to attach a texture object to a framebuffer, rather than a renderbuffer object, using a function like glFramebufferTexture2D.
Once you have rendered to the texture, you switch the framebuffer to the final buffer, bind the texture you drew (and any others), and perform the final drawing. You need two framebuffers: one for the render-to-texture pass, and one for the final output.
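Here is a sketch of those two passes written as WebGL calls, which mirror the GL ES 2.0 C functions one-for-one (glGenFramebuffers, glFramebufferTexture2D, glBindFramebuffer, ...). layerTexture, layerFbo, width, height, drawLayer and drawFullScreenQuad are all illustrative names:

    // Pass 1 target: a texture attached to an offscreen framebuffer.
    const layerTexture = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, layerTexture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                  gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

    const layerFbo = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, layerFbo);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                            gl.TEXTURE_2D, layerTexture, 0);

    // Pass 1: render the layer into the texture.
    drawLayer();    // your own drawing code

    // Pass 2: switch to the final framebuffer (on iOS, the renderbuffer-backed
    // FBO created from your EAGL layer rather than the null default used here),
    // bind the texture, and draw a full-screen quad that samples it, blending
    // it over what is already there.
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);
    gl.bindTexture(gl.TEXTURE_2D, layerTexture);
    drawFullScreenQuad();    // your own drawing code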
In iOS it is easy to setup OpenGL ES 2.0 render to textures and then use those textures for post processing passes or as textures for subsequent rendering passes. That seems like a fairly common approach across OpenGL implementations. All good.
According to Apple's OpenGL ES Programming Guide for iOS (see pages 28 and 29), you can also create and draw to multiple offscreen framebuffer objects. They suggest that you would do this in order to perform offscreen image processing. However, I can't find any description of how you would access the buffers for image processing or any other purpose after rendering to them.
Can these offscreen buffers be used with non-OpenGL frameworks for image processing? Can these buffers be read back by the CPU?
Does anyone have any pointers or examples?
Image processing is one possible use for offscreen framebuffer objects (FBOs), but there are other applications for this.
For image processing, you most typically would render to a FBO that is backed by a texture. Once you'd rendered to that texture, you could pull that into a second stage of your image processing pipeline or have it be used as a texture in some part of your 3-D scene. As an example of this, my GPUImage open source framework uses a texture-backed offscreen FBO for each filter stage applied to incoming images or video frames. The next stage then pulls in that texture and renders the filtered result to its own texture-backed FBO.
As I said, there can be other applications for offscreen rendering. These include deferred lighting (of which Apple has a fairly impressive example in the 2011 WWDC OpenGL ES videos) and calculating cached lookup values for later reference. I use the latter as an optimization in my Molecules application to accelerate the mapping of ambient occlusion lighting textures onto the surface of my atom spheres. Rather than performing a set of calculations for each fragment of each atom, I render them once for a generic sphere and then look up the results in the atom fragment shader.
Is there any point in using multiple render targets? Couldn't one just draw to one render target and store that in a texture before clearing the target and using it again?
The key thing to understand about MRT is that you're not drawing the exact same data out to all the render targets.
A pixel shader can only output 4 floating-point values per render target; typically you'll use those 4 values to generate a colour at that given point. However, you may want to output depth data or normal data instead, so you use those 4 floating-point values to represent other information you need.
The advantage of MRT is that you only need to draw the scene once while outputting to the various render targets: in one render pass, one render target receives the diffuse colour data, another receives the normal data, and a third receives the depth data.
It's really a case of the RGBA values becoming other things, for example the RGB becoming the X, Y, Z of the normal of the polygon being drawn.
There are some catches with using MRT, such as all your render targets having to have the same bit depth, and you do start pushing your GPU's texture fill rate, but overall the advantages outweigh the pitfalls.
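To make that concrete, here is roughly what the setup looks like in WebGL2 terms (the question is phrased in Direct3D terms, but the concept maps directly). It assumes a WebGL2 context gl and three pre-created textures of the same size (diffuseTex, normalTex, depthTex are illustrative names):

    // Attach three textures to one framebuffer as separate color targets.
    const gbuffer = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, gbuffer);
    [diffuseTex, normalTex, depthTex].forEach((tex, i) => {
      gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i,
                              gl.TEXTURE_2D, tex, 0);
    });
    gl.drawBuffers([gl.COLOR_ATTACHMENT0, gl.COLOR_ATTACHMENT1, gl.COLOR_ATTACHMENT2]);

    // Fragment shader (GLSL ES 3.00) writing to all three targets in one pass.
    const fsSource = `#version 300 es
    precision mediump float;
    in vec3 vNormal;
    in float vDepth;
    layout(location = 0) out vec4 outDiffuse;
    layout(location = 1) out vec4 outNormal;
    layout(location = 2) out vec4 outDepth;
    void main() {
      outDiffuse = vec4(1.0);                       // diffuse colour (flat white here)
      outNormal  = vec4(normalize(vNormal), 1.0);   // RGB reused as XYZ of the normal
      outDepth   = vec4(vec3(vDepth), 1.0);         // depth packed into a colour
    }`;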
Multiple render targets are used when you want to render one piece of geometry and collect different outputs into separate render targets.
Today this technology is primarily used to implement deferred shading. In deferred shading the multiple render targets store lighting information, such as surface normal, specular color and specular exponent, as well as depth and diffuse color information. The set of combined render targets is referred to as a G-Buffer.
See "6800 Leagues Under the Sea" (Hargreaves & Harris, 2004) for a primer on deferred shading and G-buffers.
I'm relatively new to WebGL, and OpenGL too for that matter, but in recent days I've filled up most my time writing a little game for it. However, when I wanted to implement something like heat waves, or any sort of distortion, I was left stuck.
Now, I can make a texture ripple using the fragment shader, but I feel like I'm missing something when it comes to distorting the content behind an object. Is there any way to grab the color of a pixel that's already been rendered within the fragment shader?
I've tried rendering to a texture and then using that as the object's texture, but it appears that if you choose to render your scene to a texture, you cannot also render it to the screen. And beyond that, if you want to render to a texture, that texture must be a power of two (which many screen resolutions do not quite fit).
Any help would be appreciated.
You're going to have to render to a texture and draw that texture onto the screen while distorting it. Also, there's no requirement that framebuffer objects must be of a power-of-two size in OpenGL ES 2.0 (the graphics API WebGL is based on), but non-power-of-two textures can't have mipmapping or texture wrapping (they have to use CLAMP_TO_EDGE).
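As a sketch of the distortion pass: assume the scene has already been rendered into a texture (sceneTexture) via a framebuffer object, and a full-screen quad is then drawn with a fragment shader like the one below, which offsets the texture lookup to create the heat-wave wobble. uScene, uTime and vTexCoord are illustrative names, and uTime is a uniform you would update every frame:

    const distortFs = `
      precision mediump float;
      uniform sampler2D uScene;   // the texture the scene was rendered into
      uniform float uTime;
      varying vec2 vTexCoord;
      void main() {
        // Small sinusoidal offset of the texture coordinate = heat-wave wobble.
        // With CLAMP_TO_EDGE wrapping, lookups pushed past the edge just clamp.
        vec2 offset = vec2(sin(vTexCoord.y * 40.0 + uTime) * 0.005, 0.0);
        gl_FragColor = texture2D(uScene, vTexCoord + offset);
      }`;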
I believe you can modify individual canvas pixels directly. That might be a good way to ripple a small area, but it might not be GPU-accelerated.