In a straightforward procedure of using textures in WebGL (and I believe OpenGL), it makes sense to do something like the following:
Activate the texture unit w/ gl.activeTexture()
Bind the texture w/ gl.bindTexture()
Setup the parameters w/ gl.texParameteri() and friends
Upload the data w/ gl.texImage2D()
Assign the correct unit to the sampler w/ gl.uniform1i()
Draw
This is also the approach taken by various books and tutorials for learning the technique.
However, in more advanced usage where a texture may be re-used across different shaders, or where it's desirable to split loading from rendering, it seems like step 1 (activating the texture unit) might be unnecessary until render time.
Does the texture unit really affect the setup of the texture itself?
In other words - is the following approach ok?
PREP (for each texture, before any render calls)
Do not activate the texture unit (will always be default 0 here)
Bind the texture w/ gl.bindTexture()
Setup the parameters w/ gl.texParameteri() and friends
Upload the data w/ gl.texImage2D()
RENDER (on each tick)
Activate the desired texture unit w/ gl.activeTexture()
Bind the texture w/ gl.bindTexture()
Assign the correct unit to the sampler w/ gl.uniform1i()
Draw
In general, the active texture unit does not have any influence on the texture preparation process. You just have to make sure that the same texture unit stays active during the whole setup.
This can also be seen in the OpenGL 4.5 Direct State Access API, where you don't have to bind the texture at all for preparation.
Note that you could also avoid setting the sampler uniform (gl.uniform1i) every frame, unless you are using the same shader with different texture units. In today's OpenGL, I would also advise using layout(binding = x) in the shader instead of setting the sampler uniform from application code.
Edit: To explain what was meant by "make sure that the same texture unit is set during the whole process":
The chosen texture unit does not have a direct influence, but commands like texParameteri or texImage2D operate on the texture bound to the currently active texture unit. What you shouldn't do is something like this:
gl.activeTexture(X);
gl.bindTexture(gl.TEXTURE_2D, T1);
gl.texParameteri(...);
gl.activeTexture(Y);
gl.texImage2D(...);
because gl.texImage2D would not operate on the T1 texture anymore since T1 is only bound to texture unit X, but not to texture unit Y. Basically, you can choose any texture unit X you want for the setup process, but you should not change the texture unit (without rebinding the texture) in between.
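To make the PREP/RENDER split from the question concrete, here is a minimal sketch of the safe pattern, assuming a WebGL 1 context named gl, an already-loaded image, and a linked program with a sampler uniform called "u_sampler" (all names are illustrative):

function prepTexture(gl, image) {
  // Whichever unit happens to be active (unit 0 by default) is fine here,
  // as long as it stays the same for the whole setup.
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  return texture;
}

function drawWithTexture(gl, program, texture, unit) {
  gl.useProgram(program);
  // At render time, bind the texture to whichever unit we want to sample from.
  gl.activeTexture(gl.TEXTURE0 + unit);
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.uniform1i(gl.getUniformLocation(program, "u_sampler"), unit);
  gl.drawArrays(gl.TRIANGLES, 0, 6);   // assumes a 6-vertex quad is already set up
}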
Trying to put together a puzzle:
Is it OK to use MTLRenderCommandEncoder not for rendering but for computing tasks?
If not, is it possible to do the vertex shader's job using MTLComputeCommandEncoder?
Use case:
Apply a 2x zoom effect and a shake effect (with an offset of 10% of the width) simultaneously to a stored video file.
You can process buffers (including vertex buffers) in compute shaders. There are plenty of techniques that use this: particle systems, Unity's custom tessellation presented at SIGGRAPH '22, cloth simulation, and many others.
In addition to compute shaders, OpenGL also has transform feedback, which lets you use part of the rendering pipeline (before rasterization). Metal, however, does not provide an API for it.
I assume color attachments exist in APIs other than Metal (I know they do in OpenGL), but I'm new to graphics programming and I'd like to know what exactly a color attachment is conceptually. All the drawing I've done involves setting properties on the first in an array of color attachments and then making a render pass. Is a color attachment literally just a buffer? Would the sole point of using multiple in a render pass be to draw the same image to multiple buffers / textures?
Edit: Pipeline states have arrays of color attachments as well, as I just recalled. If they are essentially buffers, what does that have to do with setting pipeline state?
Seeing as you have some knowledge of OpenGL, I'll put it in those terms:
A colour attachment is a texture that is attached to a framebuffer as a render target, typically used for off-screen rendering.
Colour attachments are used in several techniques, including reflection, refraction, and deferred shading.
In terms of a graphics pipeline, buffers with attachments will tend to be either sources of texture data, or end point render targets.
When you change the bound buffers you change the state of the pipeline, as computer graphics hardware is state based. You queue up state changes for the entire pipeline (use shader x, bind buffer y, set uniform z), execute those changes, then observe the result as rendered to the screen.
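Since you know some OpenGL, here is a rough WebGL sketch of creating and using such an attachment (sizes and names are illustrative, not from the question):

function createColorAttachment(gl, width, height) {
  // The texture that will receive the rendered pixels.
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0,
                gl.RGBA, gl.UNSIGNED_BYTE, null);   // allocate storage, no data yet
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);

  // The framebuffer the texture is attached to as colour attachment 0.
  const fbo = gl.createFramebuffer();
  gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
  gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                          gl.TEXTURE_2D, texture, 0);
  gl.bindFramebuffer(gl.FRAMEBUFFER, null);
  return { fbo, texture };
}

// Everything drawn while fbo is bound lands in texture, which a later
// pass can bind and sample like any other texture.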
Is it possible to process an MTLTexture in-place without osx_ReadWriteTextureTier2?
It seems like I can set two texture arguments to be the same texture. Is this supported behavior?
Specifically, I don't mind the texture cache not being updated after a write. I just want to modify a 3D texture in place (and sparsely). It's memory-prohibitive to keep two copies of the texture, and it's computationally expensive to copy the entire texture, especially when I might only be updating a small portion of it.
Per the documentation, regardless of feature availability, it is invalid to declare two separate texture arguments (one read, one write) in a function signature and then set the same texture for both.
Any Mac that supports osx_GPUFamily1_v2 supports function texture read-writes (by declaring the texture with access::read_write).
The distinction between "Tier 1" (which has no explicit constant) and osx_ReadWriteTextureTier2 is that the latter supports additional pixel formats for read-write textures.
If you determine that your target Macs don't support the kind of texture read-writes you need (because you need to deploy to OS X 10.11 or because you're using an incompatible pixel format for the tier of machine you're deploying to), you could operate on your texture one plane at a time, reading from your 3D texture, writing to a 2D texture, and then blitting the result back into the corresponding region in your 3D texture. It's more work, but it'll use much less than double the memory.
I have seen WebGL demos that
color rectangular surfaces
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
For a case with some rectangles with colors, and others with textures, it requires two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How to draw wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Is Quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding the glSubBuffer will allow replacing data currently in the buffer with new data.
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The question you ask is not just about WebGL, but also about OpenGL and 3D.
The most common way to interact is to set attributes at setup time and to set uniforms both at setup time and at run time.
In general, the answer to all of your questions is "use an engine".
Think of it this way: you have JavaScript, a CPU-based language; then you have WebGL, which is essentially a JS API for low-level communication with the GPU (remember: low level); and then you have shaders, the GPU programs you must provide yourself, which only work with specific data.
Doing anything more than "simple" requires a tool that lets you avoid using WebGL directly (and, very often, writing shaders directly). That tool is what we call an engine. An engine usually bundles together one set of capabilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call preset WebGL functions in a specific order, so you never have to touch the WebGL API yourself. An engine also provides fairly complicated logic to build only a single pair, or a few pairs, of shaders from a few simple engine API calls. The reason is that swapping the shader program at run time is costly.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I
have some rectangles with texture and some without? If not, I have to
create one vertex buffer for all objects with colors, and another with
textures. Can I attach a different texture to each rectangle in a
vector?
Let's take a buffer, which we call a vertex buffer. We put various data into the vertex buffer. The data does not go in as individual values, but as sets; each unique piece of data in a set is called an attribute. An attribute can have whatever meaning for its vertex the vertex shader or fragment shader code decides.
If we have a buffer full of triangle data, it is possible, for example, to add an attribute that says whether a specific vertex should be textured or not, and to do the texturing logic in the shader (see the sketch below). Note that the attribute layout must be the same for every vertex in the buffer, so textured triangles will take up the same amount of space as untextured ones.
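As a sketch of that idea (the attribute and uniform names are made up for illustration): every vertex carries a color, texture coordinates and a "use texture" flag, and the fragment shader blends between the flat color and the texel.

const vertexSrc = `
  attribute vec3 a_position;
  attribute vec4 a_color;
  attribute vec2 a_texCoord;
  attribute float a_useTexture;    // 0.0 = flat color, 1.0 = textured
  uniform mat4 u_matrix;
  varying vec4 v_color;
  varying vec2 v_texCoord;
  varying float v_useTexture;
  void main() {
    gl_Position = u_matrix * vec4(a_position, 1.0);
    v_color = a_color;
    v_texCoord = a_texCoord;
    v_useTexture = a_useTexture;
  }`;

const fragmentSrc = `
  precision mediump float;
  uniform sampler2D u_sampler;
  varying vec4 v_color;
  varying vec2 v_texCoord;
  varying float v_useTexture;
  void main() {
    vec4 texel = texture2D(u_sampler, v_texCoord);
    // Blend between the per-vertex color and the texture sample.
    gl_FragColor = mix(v_color, texel, v_useTexture);
  }`;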
For a case with some rectangles with colors, and others with textures,
it requires two different shader programs. All the demos I see have
only one, but clearly more complicated programs have multiple. How do
you switch between shaders?
Not true: even very complicated programs might have only one pair of shaders (one WebGL program). But it is still possible to change the program on the fly:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
WebGL API function useProgram
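For example, a sketch of switching between two already-linked programs per draw call (colorProgram and textureProgram are assumed to exist):

// Draw the flat-colored rectangles with one program...
gl.useProgram(colorProgram);
// ...set colorProgram's uniforms and attribute pointers here...
gl.drawElements(gl.TRIANGLES, colorIndexCount, gl.UNSIGNED_SHORT, 0);

// ...then switch programs and draw the textured rectangles.
gl.useProgram(textureProgram);
// ...set textureProgram's uniforms and attribute pointers here...
gl.drawElements(gl.TRIANGLES, textureIndexCount, gl.UNSIGNED_SHORT, 0);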
How to draw wireframe on and off? Can it be combined with textures? In
other words, is it possible to write a shader that can turn features
like wireframe on and off with a flag, or does it take two different
calls to two different shaders?
The WebGL API lets you draw wireframe-style by using a line primitive (e.g. gl.LINES or gl.LINE_STRIP) in the draw call instead of triangles. That choice is independent of the shader program, and you can switch it with each draw call. It is also possible to write a shader that draws as wireframe and control it with a flag (the flag can be either a uniform or an attribute).
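A sketch of that primitive-mode switch, assuming the same vertex buffer with a triangle index buffer and a separate edge index buffer (names are illustrative):

// Filled pass: draw the triangles as usual.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);
gl.drawElements(gl.TRIANGLES, triangleIndexCount, gl.UNSIGNED_SHORT, 0);

// "Wireframe" pass: the same vertices, drawn as lines via an edge index buffer.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, edgeIndexBuffer);
gl.drawElements(gl.LINES, edgeIndexCount, gl.UNSIGNED_SHORT, 0);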
All the demos I have seen use an index buffer with triangles. Is Quads
no longer supported in WebGL? Obviously for some things triangles
would be needed, but if I have a bunch of rectangles it would be nice
not to have to create an index of triangles.
WebGL supports only points, lines and triangles; quads are not supported. I guess it is because without quads the implementation stays simpler, and a quad is trivially built from two triangles.
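For rectangles, building the triangle index buffer is mechanical; a small sketch (illustrative, assuming 4 vertices per rectangle in order):

function rectIndices(rectCount) {
  // Each rectangle contributes 4 vertices and 6 indices (two triangles).
  const indices = new Uint16Array(rectCount * 6);
  for (let i = 0; i < rectCount; i++) {
    const v = i * 4;                        // first vertex of this rectangle
    indices.set([v, v + 1, v + 2,           // triangle 1
                 v, v + 2, v + 3], i * 6);  // triangle 2
  }
  return indices;
}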
For all three of the above scenarios, if I want to change the points,
the color, the texture, or the transparency, am I correct in
understanding the glSubBuffer will allow replacing data currently in
the buffer with new data.
I would say it is rare to update buffer data on the run, since it slows a program down a lot. The call is named gl.bufferSubData in WebGL (glBufferSubData in OpenGL), not glSubBuffer. Anyway, avoid it if you can ;)
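If you do need it, a minimal sketch of updating part of a buffer in place (buffer name and offset are illustrative):

gl.bindBuffer(gl.ARRAY_BUFFER, rectangleVertexBuffer);
// Overwrite one rectangle's vertices in place, starting at its byte offset.
gl.bufferSubData(gl.ARRAY_BUFFER, rectByteOffset, updatedRectVertices);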
Is it reasonable to have a single object maintaining these kinds of
objects and updating color and textures, or is this not a good design?
Yes, it is called a scene graph; it is widely used, and it can also be combined with other techniques such as display lists.
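A minimal sketch of what such an object might look like (purely illustrative, not a full scene graph):

class SceneNode {
  constructor({ mesh = null, texture = null, color = [1, 1, 1, 1] } = {}) {
    this.mesh = mesh;        // geometry buffers for this node, if any
    this.texture = texture;  // optional texture
    this.color = color;      // RGBA fallback / tint
    this.children = [];
  }
  draw(gl, drawMesh) {
    // drawMesh is whatever function actually issues the WebGL calls.
    if (this.mesh) drawMesh(gl, this.mesh, this.texture, this.color);
    for (const child of this.children) child.draw(gl, drawMesh);
  }
}

// Changing a rectangle's look is then just: node.color = [...]; node.texture = someTexture;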
Using webgl I need to perform 3 passes to render my scene. Each pass runs the same geometry and shaders but has differing values for some uniforms and textures.
I seem to have two choices. Have a single "program" and set all of the uniforms and textures for each pass. Or have 3 "programs", each containing the same shaders, set all the necessary uniforms/textures once per program, and then just switch programs for each pass. This means that I would do one useProgram call per pass instead of many setUniform calls per pass.
Is this second technique likely to be faster because it avoids very many setUniform calls, or is changing the program very expensive? I've done some trials, but with the very simple geometry I have at the moment I don't see any difference in performance, because setup costs overwhelm any differences.
Is there any reason to prefer one technique over the other?
Just send different values via glUniform if the shader programs are the same.
Switching between programs is generally slower than changing the value of a uniform.
Also, an uber shader program (with a list of uniforms like useLighting, useAlphaMap) is not a good idea in most cases.
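For illustration, a sketch of the single-program, per-pass-uniform approach the question describes (the program, uniform locations and per-pass textures are assumed to already exist; names are illustrative):

gl.useProgram(program);   // one program for all three passes
for (let pass = 0; pass < 3; pass++) {
  // Only the per-pass values change between draws.
  gl.uniform1f(passWeightLoc, passWeights[pass]);
  gl.activeTexture(gl.TEXTURE0);
  gl.bindTexture(gl.TEXTURE_2D, passTextures[pass]);
  gl.uniform1i(samplerLoc, 0);
  gl.drawElements(gl.TRIANGLES, indexCount, gl.UNSIGNED_SHORT, 0);
}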
#gman
We are talking about WebGL (GLES 2.0), where we don't have UBOs (uniform buffer objects).
#top
Summing up: try to avoid rebinding shader programs (but it's not the end of the world), and don't create one uber shader!
When you have large amounts of textures to rebind, texture atlasing should be the fastest solution: you don't need to rebind textures and you don't need to rebind programs. Textures can be switched by modifying uniforms representing texCoord offsets.
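A sketch of that atlas idea (uniform names and values are illustrative): the fragment shader offsets and scales the incoming texture coordinates into an atlas region, so a "texture switch" is just a uniform update.

const atlasFragmentSrc = `
  precision mediump float;
  varying vec2 v_texCoord;
  uniform sampler2D u_atlas;
  uniform vec4 u_region;   // xy = offset into the atlas, zw = scale of the region
  void main() {
    gl_FragColor = texture2D(u_atlas, u_region.xy + v_texCoord * u_region.zw);
  }`;

// Per draw call, "switching textures" is just updating the region uniform:
gl.uniform4f(regionLoc, 0.5, 0.0, 0.25, 0.25);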
Modifying such uniforms can be optimized even further:
You should consider moving frequently modified uniforms to attributes. Usually their data is provided via attribute pointers, but you can also use constant values when the attribute arrays are disabled: instead of the uniformXXX() functions, use the vertexAttribXXX() functions to specify the constant values.
I think the best example is a light position. Normally you would have to specify a new uniform value, every time the light position changes, for ALL programs that make use of it. In contrast, when using "attributed" uniforms you can specify the attribute value once, globally, when your light moves.
-pros:
This method is best suited for when you have many programs that would like to share uniforms; since we can't use uniform buffers in WebGL, it seems to be the only reasonable solution.
-cons:
Of course, the available space for such "attributed" uniforms is much smaller than for regular uniforms, but it can still speed things up a lot if you apply it to some part of your uniforms.
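A sketch of the trick (the attribute location and names are illustrative): bind the same attribute location in every program before linking, disable the attribute array, and set the constant value once.

const LIGHT_ATTRIB_LOCATION = 7;   // any location below gl.MAX_VERTEX_ATTRIBS
// Bind the same location in every program that declares `attribute vec3 a_lightPosition`
// (this must happen before gl.linkProgram).
gl.bindAttribLocation(programA, LIGHT_ATTRIB_LOCATION, "a_lightPosition");
gl.bindAttribLocation(programB, LIGHT_ATTRIB_LOCATION, "a_lightPosition");

// With the array disabled, the attribute delivers a single constant value.
gl.disableVertexAttribArray(LIGHT_ATTRIB_LOCATION);
gl.vertexAttrib3f(LIGHT_ATTRIB_LOCATION, light.x, light.y, light.z);
// Every program reading that attribute now sees the new light position,
// with no per-program uniform calls.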