What is a color attachment in Metal?

I assume color attachments exist in APIs other than Metal (I know OpenGL has them, for sure), but I'm new to graphics programming and I'd like to know what exactly a color attachment is, conceptually. All the drawing I've done involves setting properties on the first element in an array of color attachments and then making a render pass. Is a color attachment literally just a buffer? Would the sole point of using multiple attachments in a render pass be to draw the same image to multiple buffers / textures?
Edit: As I just recalled, pipeline states have arrays of color attachments as well. If attachments are essentially buffers, what do they have to do with setting pipeline state?

Seeing as you have some knowledge of OpenGL, I'll put it in those terms:
A colour attachment is a texture that is attached to a framebuffer as a render target: it is where the colour output of your fragment shader ends up. Attaching a texture you created yourself is how off-screen rendering works; the image that reaches the screen is itself written through a colour attachment (in Metal, the drawable's texture is set as the first colour attachment of the render pass).
Colour attachments are used in several techniques, including reflection, refraction, and deferred shading. With multiple colour attachments the fragment shader writes a different value to each one in a single pass, rather than the same image to several buffers; deferred shading uses this to output, say, albedo, normals, and depth at once.
In terms of a graphics pipeline, framebuffers with attachments tend to be either sources of texture data or end-point render targets. The pipeline state's colour attachment array describes each of those targets (its pixel format and blending settings), so that the compiled pipeline matches the attachments of the render pass it will draw into.
When you change the bound framebuffers you change the state of the pipeline, because graphics hardware is state based. You queue up state changes for the entire pipeline (use shader x, bind buffer y, set uniform z), execute those changes, then observe the result as rendered to the screen.
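For concreteness, here is a minimal WebGL sketch of the same idea (the gl context is assumed to exist already, and the size 512 is arbitrary): a colour attachment is just a texture hooked up to a framebuffer as the place the fragment shader's colour output lands.
const targetTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, targetTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 512, 512, 0,
              gl.RGBA, gl.UNSIGNED_BYTE, null);       // allocate storage, no data yet
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);

const fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, targetTex, 0); // targetTex is now colour attachment 0

// Any draw calls issued now render into targetTex instead of the screen.
// In WebGL 2 (or desktop GL / Metal) you can attach further textures as
// COLOR_ATTACHMENT1, 2, ... and have the fragment shader write a different
// value to each one; that is what deferred shading relies on.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);             // back to the default framebuffer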

Related

Texture units for preparing textures

In a straightforward procedure of using textures in WebGL (and I believe OpenGL), it makes sense to do something like the following:
Activate the texture unit w/ gl.activeTexture()
Bind the texture w/ gl.bindTexture()
Setup the parameters w/ gl.texParameteri() and friends
Upload the data w/ gl.texImage2D()
Assign the correct unit to the sampler w/ gl.uniform1i()
Draw
This is also the approach taken by various books and tutorials for learning the technique.
However - in more advanced usage where a texture may be re-used for different shaders or it's desirable to have a split between loading and rendering, it seems like step 1 (assigning the texture unit) might be unnecessary until render time.
Does the texture unit really affect the setup of the texture itself?
In other words - is the following approach ok?
PREP (for each texture, before any render calls)
Do not activate the texture unit (will always be default 0 here)
Bind the texture w/ gl.bindTexture()
Setup the parameters w/ gl.texParameteri() and friends
Upload the data w/ gl.texImage2D()
RENDER (on each tick)
Activate the desired texture unit w/ gl.activeTexture()
Bind the texture w/ gl.bindTexture()
Assign the correct unit to the sampler w/ gl.uniform1i()
Draw
In general, the active texture unit does not have any influence on the texture preparation process. You just have to make sure that the same texture unit stays active during the whole process.
This can also be seen from the OpenGL 4.5 Direct State Access API, where you don't have to bind the texture at all for preparation.
Note that you can also avoid setting the sampler uniform (gl.uniform1i) every frame, unless you are using the same shader with different texture units. In today's (desktop) OpenGL, I would also advise using layout(binding = x) on the sampler in the shader instead of setting the sampler uniform from application code.
Edit: To explain what was meant by "make sure that the same texture unit is set during the whole process":
The chosen texture unit does not have a direct influence. But all commands like texParameteri or texImage2D operate on the texture bound to the currently active texture unit. What you shouldn't do is something like this:
gl.activeTexture(X);
gl.bindTexture(T1);        // T1 is now bound to texture unit X
gl.texParameteri(...);     // affects T1, the texture bound to the active unit X
gl.activeTexture(Y);       // the active unit changes; nothing says T1 is bound to Y
gl.texImage2D(...);        // no longer affects T1
because gl.texImage2D would not operate on the T1 texture anymore since T1 is only bound to texture unit X, but not to texture unit Y. Basically, you can choose any texture unit X you want for the setup process, but you should not change the texture unit (without rebinding the texture) in between.
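For completeness, here is a minimal sketch of the PREP/RENDER split from the question, assuming a gl context and a linked program already exist (the variable and function names are made up). Setup never switches units mid-way; the unit is chosen only at render time.
function prepareTexture(image) {
  const tex = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D, tex);              // bound to the default unit 0
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
  gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  return tex;                                      // unit 0 was used for the whole setup
}

function drawWithTexture(tex, unit, samplerLocation) {
  gl.activeTexture(gl.TEXTURE0 + unit);            // pick the unit only at render time
  gl.bindTexture(gl.TEXTURE_2D, tex);
  gl.uniform1i(samplerLocation, unit);             // point the sampler at that unit
  // ... issue the draw call
}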

WebGL: How to interact between javascript and shaders, and how to use multiple shaders

I have seen demos on WebGL that
color rectangular surfaces
attach textures to the rectangles
draw wireframes
have semitransparent textures
What I do not understand is how to combine these effects into a single program, and how to interact with objects to change their look.
Suppose I want to create a scene with all the above, and have the ability to change the color of any rectangle, or change the texture.
I am trying to understand the organization of the code. Here are some short, related questions:
I can create a vertex buffer with corresponding color buffer. Can I have some rectangles with texture and some without?
If not, I have to create one vertex buffer for all objects with colors, and another with textures. Can I attach a different texture to each rectangle in a vector?
For a case with some rectangles with colors, and others with textures, it requires two different shader programs. All the demos I see have only one, but clearly more complicated programs have multiple. How do you switch between shaders?
How to draw wireframe on and off? Can it be combined with textures? In other words, is it possible to write a shader that can turn features like wireframe on and off with a flag, or does it take two different calls to two different shaders?
All the demos I have seen use an index buffer with triangles. Are quads no longer supported in WebGL? Obviously for some things triangles would be needed, but if I have a bunch of rectangles it would be nice not to have to create an index of triangles.
For all three of the above scenarios, if I want to change the points, the color, the texture, or the transparency, am I correct in understanding that glSubBuffer will allow replacing data currently in the buffer with new data?
Is it reasonable to have a single object maintaining these kinds of objects and updating color and textures, or is this not a good design?
The questions you ask are not just about WebGL, but about OpenGL and 3D graphics in general.
The most common way to interact is to set attributes at the start and uniforms both at the start and at run time.
In general, the answer to all of your questions is "use an engine".
Think of it this way: you have JavaScript, a CPU-based language; then you have WebGL, a JS API that allows low-level communication with the GPU (remember, low level); and then you have shaders, GPU programs you must provide yourself, which work only with the specific data you feed them.
Doing anything more than "simple" calls for a tool that lets you skip using WebGL directly (and very often also skip writing shaders directly). That tool is called an engine. An engine usually bundles together some set of abilities and skips others (the difference between a 2D and a 3D engine, for example). Engine functions call WebGL functions in a specific order, so you never have to touch the WebGL API yourself. An engine can also contain fairly complicated logic that builds only a single pair, or a few pairs, of shaders from a handful of simple engine API calls. The reason is that swapping the shader program at run time is expensive.
Your questions
I can create a vertex buffer with corresponding color buffer. Can I
have some rectangles with texture and some without? If not, I have to
create one vertex buffer for all objects with colors, and another with
textures. Can I attach a different texture to each rectangle in a
vector?
Let's take a buffer that we call a vertex buffer. We put various data into the vertex buffer. The data doesn't go in as individual values, but as one set per vertex. Each distinct piece of data in a set is called an attribute, and an attribute can have whatever meaning for its vertex the vertex shader or fragment shader code decides to give it.
If we have a buffer full of triangle data, it is possible, for example, to add an attribute that says whether a given vertex should be textured or not, and do the texturing logic in the shader (see the sketch below). Note that the attribute layout must be the same size for every vertex in the buffer, so textured triangles take up the same space as untextured ones.
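As a hypothetical sketch of that flag (the buffer vertexBuffer, the program, and the attribute names are all made up here), each vertex could carry x, y, u, v and a 0/1 "textured" value in one interleaved buffer:
const stride = 5 * 4;                               // 5 floats of 4 bytes per vertex
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
const posLoc  = gl.getAttribLocation(program, 'a_position');
const uvLoc   = gl.getAttribLocation(program, 'a_texCoord');
const flagLoc = gl.getAttribLocation(program, 'a_textured');
gl.enableVertexAttribArray(posLoc);
gl.vertexAttribPointer(posLoc, 2, gl.FLOAT, false, stride, 0);      // x, y
gl.enableVertexAttribArray(uvLoc);
gl.vertexAttribPointer(uvLoc, 2, gl.FLOAT, false, stride, 2 * 4);   // u, v
gl.enableVertexAttribArray(flagLoc);
gl.vertexAttribPointer(flagLoc, 1, gl.FLOAT, false, stride, 4 * 4); // textured flag
The fragment shader can then do something like mix(vertexColor, textureColor, texturedFlag) to pick between the two cases.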
For a case with some rectangles with colors, and others with textures,
it requires two different shader programs. All the demos I see have
only one, but clearly more complicated programs have multiple. How do
you switch between shaders?
Not necessarily: even very complicated programs might use only one pair of shaders (one WebGL program). But it is also possible to switch programs at run time with the WebGL API function useProgram:
https://www.khronos.org/registry/webgl/specs/latest/1.0/#5.14.9
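A minimal sketch of switching, assuming two already-linked programs and their vertex counts exist (all names here are made up):
gl.useProgram(colorProgram);
// ... set up attributes/uniforms for the flat-coloured rectangles ...
gl.drawArrays(gl.TRIANGLES, 0, colorVertexCount);

gl.useProgram(textureProgram);
// ... bind the texture, set its sampler uniform and attributes ...
gl.drawArrays(gl.TRIANGLES, 0, texturedVertexCount);
Program switches are not free, which is why engines try to batch together all the draws that share a program.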
How to draw wireframe on and off? Can it be combined with textures? In
other words, is it possible to write a shader that can turn features
like wireframe on and off with a flag, or does it take two different
calls to two different shaders?
WebGL has no fill/wireframe switch like desktop OpenGL's glPolygonMode. The usual way to draw a wireframe is to issue a second draw call over the same vertices with a line primitive (gl.LINES or gl.LINE_LOOP) and a separate index buffer; that is independent of the shader program, can be toggled per draw call, and combines fine with textures by drawing the filled, textured pass first and the line pass on top. It is also possible to write a shader that draws its own wireframe (typically using barycentric-coordinate attributes) and control it with a flag, which can be either a uniform or an attribute.
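A sketch of the two-pass approach, assuming the buffers and counts already exist (all names here are made up):
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleIndexBuffer);
gl.drawElements(gl.TRIANGLES, triangleIndexCount, gl.UNSIGNED_SHORT, 0); // filled / textured pass

if (showWireframe) {
  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, lineIndexBuffer);               // edge indices
  gl.drawElements(gl.LINES, lineIndexCount, gl.UNSIGNED_SHORT, 0);       // outline on top
}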
All the demos I have seen use an index buffer with triangles. Is Quads
no longer supported in WebGL? Obviously for some things triangles
would be needed, but if I have a bunch of rectangles it would be nice
not to have to create an index of triangles.
WebGL supports only points, lines, and triangles; quads are not supported (they have been dropped from modern desktop OpenGL as well). The hardware rasterizes triangles anyway, so leaving quads out keeps the API close to the hardware. For a bunch of rectangles you can draw each one as a four-vertex gl.TRIANGLE_STRIP, which needs no index buffer at all, or build a triangle index buffer once and reuse it.
For all three of the above scenarios, if I want to change the points,
the color, the texture, or the transparency, am I correct in
understanding the glSubBuffer will allow replacing data currently in
the buffer with new data.
The WebGL name for it is gl.bufferSubData, and yes, it replaces a range of data in an existing buffer with new data. Updating buffer contents at run time has a cost, because the data has to be re-uploaded to the GPU, so only update what actually changes; for per-object values like a colour or an alpha, a uniform is usually cheaper than rewriting vertex data.
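A small sketch of such an update, assuming colorBuffer was created earlier with gl.bufferData and holds 4 RGBA floats per vertex, 4 vertices per rectangle (the names and layout are made up):
gl.bindBuffer(gl.ARRAY_BUFFER, colorBuffer);
// Overwrite the colours of rectangle i (byte offset = i rects * 4 verts * 4 floats * 4 bytes).
gl.bufferSubData(gl.ARRAY_BUFFER, i * 4 * 4 * 4,
                 new Float32Array([r, g, b, a,  r, g, b, a,
                                   r, g, b, a,  r, g, b, a]));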
Is it reasonable to have a single object maintaining these kinds of
objects and updating color and textures, or is this not a good design?
Yes. That is called a scene graph; it is widely used and can be combined with other techniques such as display lists.

Saving opengl object to disk for later usage, including texture

I'm trying to create OpenGL ES objects at run time, meaning I let the user select the texture for certain objects and later use the created object again with the same chosen texture (the user can select where to place the texture, in effect masking the texture and using the masked part as the object's texture).
For this, saving the mesh (vertices) alone will not do.
Is there any way to save the entire thing?
I'm using OpenGL ES 2.0.
What you see onscreen is just the result of a bunch of inputs (VBOs, textures, other attribs and uniforms) that you passed to OpenGL from the Objective-C program you have running on the CPU. You must already have a shader set up that takes inputs based on the users' touches to allow the user to pick where to place the texture in the first place. So, whatever the values you passed to the shader last are what you want to save.
Just save the value of those attribs and uniforms that need to persist to the disk, along with the texture and vertices, and then use that data to reconstitute the object.
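As a rough sketch of the idea (written here in JavaScript/JSON terms purely for illustration; on iOS you would write the equivalent dictionary to a plist or file, and every name below is made up):
const savedObject = {
  vertices:    Array.from(meshVertices),       // the mesh
  texCoords:   Array.from(maskedTexCoords),    // where the user placed the mask
  textureFile: 'chosen_texture.png',           // which source image was chosen
  uniforms:    { tintColor: [1, 1, 1, 1], opacity: 0.8 },
};
const fileContents = JSON.stringify(savedObject);
// To reconstitute: parse the file, re-upload vertices and texCoords,
// reload the texture image, and re-set the uniforms before drawing.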
OpenGL is just a drawing API. Sort of like brushes, pencils and a palette you use to draw on the canvas given to your program by the operating system.
What you ask for is like asking the brush to understand the picture of a face drawn on a canvas. OpenGL can't do it. Period.
What you actually should do is ask yourself: "What inputs did I combine in my program in which way, to actually draw that image? What's the minimal set of parameters required to recreate those steps?" And then you write those to a file.

OpenGL photoshop overlay blend mode

I'm trying to implement a particle system (using OpenGL ES 2.0) where each particle is made up of a quad with a simple texture;
the red pixels are transparent. Each particle will have a random alpha value from 50% to 100%.
Now the tricky part: I would like each particle to have a blend mode much like Photoshop's "overlay". I tried many different combinations with glBlendFunc(), but without luck.
I don't understand how I could implement this in a fragment shader, since I would need information about the current color of the fragment already in the framebuffer, so that I can calculate a new color based on the current and texture colors.
I also thought about using a framebuffer object, but I guess I would need to re-render the framebuffer object into a texture for each particle, every frame, since I need the calculated fragment color where particles overlap each other.
I've found the math and other information regarding the overlay calculation, but I have a hard time figuring out which direction to go to implement this.
http://www.pegtop.net/delphi/articles/blendmodes/
Photoshop blending mode to OpenGL ES without shaders
I'm hoping to have an effect like this:
You can get information about the current fragment color in the framebuffer on an iOS device. Programmable blending has been available through the EXT_shader_framebuffer_fetch extension since iOS 6.0 (on every device supported by that release). Just declare that extension in your fragment shader (by putting the directive #extension GL_EXT_shader_framebuffer_fetch : require at the top) and you'll get current fragment data in gl_LastFragData[0].
And then, yes, you can use that in the fragment shader to implement any blending mode you like, including all the Photoshop-style ones. Here's an example of a Difference blend:
// compute srcColor earlier in shader or get from varying
gl_FragColor = abs(srcColor - gl_LastFragData[0]);
You can also use this extension for effects that don't blend two colors. For example, you can convert an entire scene to grayscale -- render it normally, then draw a quad with a shader that reads the last fragment data and processes it:
mediump float luminance = dot(gl_LastFragData[0], vec4(0.30,0.59,0.11,0.0));
gl_FragColor = vec4(luminance, luminance, luminance, 1.0);
You can do all sorts of blending modes in GLSL without framebuffer fetch, but that requires rendering to multiple textures, then drawing a quad with a shader that blends the textures. Compared to framebuffer fetch, that's an extra draw call and a lot of schlepping pixels back and forth between shared and tile memory -- this method is a lot faster.
On top of that, there's no saying that framebuffer data has to be color... if you're using multiple render targets in OpenGL ES 3.0, you can read data from one and use it to compute data that you write to another. (Note that the extension works differently in GLSL 3.0, though. The above examples are GLSL 1.0, which you can still use in an ES3 context. See the spec for how to use framebuffer fetch in a #version 300 es shader.)
I suspect you want this configuration:
Source: GL_SRC_ALPHA
Destination: GL_ONE
Equation: GL_FUNC_ADD
If not, it might be helpful if you could explain the math of the filter you're hoping to get.
[EDIT: the answer below is true for OpenGL and OpenGL ES pretty much everywhere except iOS since 6.0. See rickster's answer for information about EXT_shader_framebuffer_fetch which, in ES 3.0 terms, allows a target buffer to be flagged as inout, and introduces a corresponding built-in variable under ES 2.0. iOS 6.0 is over a year old at the time of writing so there's no particular excuse for my ignorance; I've decided not to delete the answer because it's potentially valid to those finding this question based on its opengl-es, opengl-es-2.0 and shader tags.]
To confirm briefly:
the OpenGL blend modes are implemented in hardware and occur after the fragment shader has concluded;
you can't programmatically specify a blend mode;
you're right that the only workaround is to ping pong, swapping the target buffer and a source texture for each piece of geometry (so you draw from the first to the second, then back from the second to the first, etc).
Per Wikipedia and the link you provided, Photoshop's overlay mode is defined so that, for a background value a and a foreground value b, the output f(a, b) is 2ab if a < 0.5 and 1 - 2(1 - a)(1 - b) otherwise.
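Written out per colour channel (values in 0..1), just to make the math concrete:
function overlay(a, b) {                   // a = background, b = foreground
  return a < 0.5 ? 2.0 * a * b
                 : 1.0 - 2.0 * (1.0 - a) * (1.0 - b);
}
// overlay(a, 0.5) === a  -- a mid-grey foreground leaves the background unchanged;
// darker foregrounds darken the result, lighter ones lighten it.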
So the blend mode changes per pixel depending on the colour already in the colour buffer. And each successive draw's decision depends on the state the colour buffer was left in by the previous.
So there's no way you can avoid writing that as a ping pong.
The closest you're going to get without all that expensive buffer swapping is probably, as Sorin suggests, to try to produce something similar using purely additive blending. You could juice that a little by adding a final ping-pong stage that converts all values from their linear scale to the S-curve that you'd see if you overlaid the same colour onto itself. That should give you the big variation where multiple circles overlap.

What are iOS OpenGL ES 2.0 offscreen framebuffer objects used for?

In iOS it is easy to setup OpenGL ES 2.0 render to textures and then use those textures for post processing passes or as textures for subsequent rendering passes. That seems like a fairly common approach across OpenGL implementations. All good.
According to Apple's OpenGL ES Programming Guide for iOS (see pages 28 and 29), you can also create and draw to multiple offscreen framebuffer objects. They suggest you would do this in order to perform offscreen image processing, but I can't find any description of how you would access the buffers for image processing or any other purpose after rendering to them.
Can these offscreen buffers be used with non-OpenGL frameworks for image processing? Can these buffers be read back by the CPU?
Does anyone have any pointers or examples?
Image processing is one possible use for offscreen framebuffer objects (FBOs), but there are other applications for this.
For image processing, you would most typically render to an FBO that is backed by a texture. Once you'd rendered to that texture, you can pull it into a second stage of your image-processing pipeline or use it as a texture in some part of your 3-D scene. As an example of this, my GPUImage open source framework uses a texture-backed offscreen FBO for each filter stage applied to incoming images or video frames; the next stage then pulls in that texture and renders its filtered result to its own texture-backed FBO.
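The same pattern, sketched in WebGL terms (createTextureFBO, the filter programs, and drawWithProgram are hypothetical helpers standing in for "render a full-screen quad with this shader and input texture"):
const stageA = createTextureFBO(width, height);    // { fbo, texture }
const stageB = createTextureFBO(width, height);

gl.bindFramebuffer(gl.FRAMEBUFFER, stageA.fbo);
drawWithProgram(blurProgram, sourceTexture);       // pass 1: source -> stageA's texture

gl.bindFramebuffer(gl.FRAMEBUFFER, stageB.fbo);
drawWithProgram(contrastProgram, stageA.texture);  // pass 2: stageA -> stageB's texture

gl.bindFramebuffer(gl.FRAMEBUFFER, null);
drawWithProgram(displayProgram, stageB.texture);   // final pass to the screen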
As I said, there can be other applications for offscreen rendering. These include deferred lighting (of which Apple has a fairly impressive example in the 2011 WWDC OpenGL ES videos) and calculating cached lookup values for later reference. I use the latter as an optimization in my Molecules application to accelerate the mapping of ambient-occlusion lighting textures onto the surfaces of my atom spheres: rather than performing a set of calculations for each fragment of each atom, I render them once for a generic sphere and then look up the results in the atom fragment shader.
