WebGL blending is blending to white - webgl

I have written a library for WebGL. It has many tests, and it runs them all perfectly fine, including the one I have for blending.
The blending test does a standard SRC_ALPHA, ONE_MINUS_SRC_ALPHA blend, and it renders the expected result:
The image is produced by intersecting 1000 lines, all with an alpha value of 0.2. Notice that it does not wash out to white at any point.
This is also the current blend state, as reported by the Spector.js extension for Chrome:
Blend State
BLEND: true
BLEND_COLOR: 0, 0, 0, 0
BLEND_DST_ALPHA: ONE_MINUS_SRC_ALPHA
BLEND_DST_RGB: ONE_MINUS_SRC_ALPHA
BLEND_EQUATION_ALPHA: FUNC_ADD
BLEND_EQUATION_RGB: FUNC_ADD
BLEND_SRC_ALPHA: SRC_ALPHA
BLEND_SRC_RGB: SRC_ALPHA
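For reference, this state is what you get from the standard source-over setup (a sketch of the equivalent WebGL calls, not the library's actual code):
gl.enable(gl.BLEND);
gl.blendEquation(gl.FUNC_ADD);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
// Per fragment this computes:
//   out.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)
//   out.a   = src.a * src.a   + dst.a   * (1 - src.a)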
Now I use this same library to render the same edges in a layout, and they render like so:
These edges also have an opacity of 0.2, and this is the Spector blend state:
Blend State
BLEND: true
BLEND_COLOR: 0, 0, 0, 0
BLEND_DST_ALPHA: ONE_MINUS_SRC_ALPHA
BLEND_DST_RGB: ONE_MINUS_SRC_ALPHA
BLEND_EQUATION_ALPHA: FUNC_ADD
BLEND_EQUATION_RGB: FUNC_ADD
BLEND_SRC_ALPHA: SRC_ALPHA
BLEND_SRC_RGB: SRC_ALPHA
I am beating my head on the table trying to figure out what the difference between the two scenarios could be.
The shader logic simply hands a per-vertex color to the fragment shader, so there are no premultiplied alphas involved.
I just need any thoughts on what else could be affecting the blending in such murderous fashion. I can post any extra information needed.
EDIT: To show the exact same test in this environment, here is the wheel rendering, and somehow it still adds up to a white washout:

It seems there was some bad, potentially undefined behavior in my library:
I was grabbing the GL context twice via canvas.getContext(...), and each call had the potential to pass different context attributes for things like premultipliedAlpha and alpha.
When I fixed that issue, this bizarre blending error went away. So I will assume the premultipliedAlpha setting was somehow inconsistent between the two context grabs.
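For anyone hitting the same thing, here is a minimal sketch of the pitfall (the attribute values are hypothetical; the point is that only the first call counts):
// First call creates the context; its attributes win.
const gl1 = canvas.getContext('webgl', { premultipliedAlpha: true, alpha: true });
// A later call on the same canvas returns the SAME context object,
// and the new attributes are silently ignored.
const gl2 = canvas.getContext('webgl', { premultipliedAlpha: false, alpha: false });
console.log(gl1 === gl2);                 // true
console.log(gl1.getContextAttributes());  // still the first call's attributes
So whichever code path grabs the context first decides how the browser composites the canvas, which is easy to miss when debugging blending.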

Related

Alpha blending of black visibly goes through white color

I'm trying to fade out an object in the scene, but I noticed that it first brightens, almost to white, before disappearing as the alpha channel reaches 0.
As a test, I draw a square that is entirely black (0, 0, 0) and linearly interpolate its alpha channel from 1 to 0.
This is the rectangle at full alpha.
This is the same rectangle when the alpha value is 0.1, that is vec4(0, 0, 0, 0.1). It's brighter than the background itself.
Blending mode used:
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
As far as I understand, this mode should just lerp between the background pixel and the incoming source pixel. I just don't see how the output pixel could become brighter when mixing anything with (0, 0, 0).
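To spell out that expectation (a worked example, assuming the destination is the canvas backbuffer):
// Source-over for this square: out = src.rgb * src.a + dst.rgb * (1 - src.a)
const srcAlpha = 0.1;                                // vec4(0, 0, 0, 0.1)
const blend = (s, d) => s * srcAlpha + d * (1 - srcAlpha);
console.log(blend(0, 0.5));                          // 0.45 - darker than the 0.5 background
// In general blend(0, d) = 0.9 * d <= d, so the square should only ever darken the background.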
EDIT:
After doing some testing I feel I need to clarify a few more things.
This is WebGL, and I'm drawing into a canvas on a website. I don't know exactly how it works, but it looks as if every gl.drawElements() call is drawn to a separate buffer and possibly composited into a single image later on. When debugging I can see my square drawn into an entirely white buffer; this may be where the colour comes from.
But this means that blending doesn't happen against the backbuffer but against some buffer I didn't know existed. How do I blend into the backbuffer? Do I have to avoid browser composition by rendering to a texture and only then drawing it to the canvas?
EDIT 2:
I managed to get the expected result by setting separate blending for alpha and colour as follows:
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE);
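For reference, with separate blend functions the RGB and alpha channels are blended independently (standard WebGL semantics; the relevant change here is on the alpha line):
// gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE) gives:
//   out.rgb = src.rgb * src.a + dst.rgb * (1 - src.a)   // same source-over as before
//   out.a   = src.a * 1       + dst.a * 1               // dst alpha is never reduced
So the canvas keeps its full alpha where the square is drawn, and the browser no longer shows the page background through it.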
But I'd rather leave this question open, hoping that someone could clarify why it didn't work in the first place.
The issue is very well described here:
https://stackoverflow.com/a/35459524/9956631
My color was being blended with the canvas background. As I understand it, the blend overwrites the destination alpha, so you leave a see-through part of the canvas where your mesh is. The reason my blendFuncSeparate() fixed the issue is that it left the DST alpha intact.
To turn this off, you can disable alpha when fetching the GL context. To get OpenGL-like rendering you should also disable premultipliedAlpha:
const gl = canvas.getContext('webgl', {
  premultipliedAlpha: false,
  alpha: false
});
Edit:
To make sure my assumption is right, I set up a test.
Behind the canvas I placed a label. Then, on top of that, I drew my canvas with my (0, 0, 0, 0.5) colored square. Just like this:
<label style="
position: absolute;
width: 400px;
height: 400px;
top:445px;
left:500px;
z-index: -1; /* Behind... */
font-size: 80px;">LMAO</label>
<canvas id="glCanvas" style="z-index: 2;" width="1200" height="1200"></canvas>
As you can see, the label is visible where the square is rendered. This means the square is being blended with what's behind the canvas instead of with the current contents of the canvas (as one might assume).

Blending in Metal: alpha set to 0 is still opaque

I'm having trouble setting up blending in Metal. Even when starting with the Hello Triangle example provided by Apple, using the following code
pipelineStateDescriptor.colorAttachments[0].blendingEnabled = YES;
pipelineStateDescriptor.colorAttachments[0].sourceAlphaBlendFactor = MTLBlendFactorZero;
pipelineStateDescriptor.colorAttachments[0].destinationAlphaBlendFactor = MTLBlendFactorZero;
and the fragment function
fragment float4 fragmentShader(RasterizerData in [[stage_in]]) {
return float4(in.color.rgb, 0);
}
the triangle still draws completely opaque. What I want to achieve in the end is blending between two shapes by using different blending factors, but I thought I would start with a simple example to understand what is going on. What am I missing?
sourceAlphaBlendFactor and destinationAlphaBlendFactor are to do with constructing the blend for the alpha channel, i.e. they control the alpha that will be written into your destination buffer, which will not really be visible to you. You are probably more interested in the RGB that is written into the framebuffer.
Try setting values for sourceRGBBlendFactor and destinationRGBBlendFactor instead. For traditional alpha blending, set sourceRGBBlendFactor to MTLBlendFactorSourceAlpha and destinationRGBBlendFactor to MTLBlendFactorOneMinusSourceAlpha.

SceneKit - How to use different blend modes - e.g. additive blending? [duplicate]

I can't see an obvious way to change the blending function (glBlendFunc) for a SceneKit node or geometry. It doesn't seem to be part of the material, and it isn't very obvious from the SceneKit documentation how it organises render passes.
Do I need to make a render delegate for the node that just changes the GL blending mode, or do I need to somehow set up different render passes, etc.? (It's not obvious from the documentation how I even control things like render passes.)
Will It Blend? - SceneKit
Yes! In iOS 9 and OS X 10.11 (currently in beta), blendMode is an attribute on materials, so you can render any SceneKit content with additive, multiplicative, or other kinds of blending.
But while you're still supporting earlier OS versions... SceneKit in iOS 8.x and OS X 10.8 through 10.10 doesn't offer an API for blend modes.
There are a couple of options you can look at for working around this.
1. Set the GL state yourself
If you call glBlendFunc and friends before SceneKit draws, SceneKit will render using the blend state you've selected. The trick is setting the state at an appropriate time for drawing your blended content and leaving the state as SceneKit expects for un-blended content.
If you set your GL state in renderer:willRenderScene:atTime: and unset it in renderer:didRenderScene:atTime:, you'll apply blending to the entire scene. Probably not what you want. And you can't use a node renderer delegate for only the node you want blended because then SceneKit won't render your node content.
If you can find a good way to wedge those calls in, though, they should work. Try rendering related nodes with a custom program and set your state in handleBindingOfSymbol:usingBlock:, maybe?
2. Use Programmable Blending (iOS only)
The graphics hardware in iOS devices supports reading the color value of a destination fragment in the shader. You can combine this value with the color you intend to write in any number of ways — for example, you can create Photoshop-style blend modes.
In SceneKit, you can use this with a fragment shader modifier — read from gl_LastFragData and write to _output. The example here uses that to do a simple additive blend.
#pragma transparent
#extension GL_EXT_shader_framebuffer_fetch : require
#pragma body
_output.color = gl_LastFragData[0] + _output.color;
From what I can tell after several hours of experimenting, there is no way to actually set the blend mode used to render a piece of geometry, or to control the overall blend mode used to render a pass using SCNTechnique.
SceneKit appears to have only two blending modes: one where blending is off, used when it considers the material opaque, and a "transparent" blending mode (GL_ONE, GL_ONE_MINUS_SRC_ALPHA), used when it considers the material transparent. This is bad news if you want to render things like glows, because there doesn't seem to be any way to get the (GL_ONE, GL_ONE) blend mode you'd want for rendering light beams or glows.
However, I've found a hack to get around this which doesn't give you proper control over blending, but which works if you're wanting to render glowing things like light beams:
Because SceneKit uses the GL_ONE, GL_ONE_MINUS_SRC_ALPHA blending mode, all you should have to do is render your geometry with an alpha channel of 0. Unfortunately, it's not that simple, because the default SceneKit shader discards fragments with an alpha channel of 0, so nothing would actually get rendered. A quick-and-dirty workaround is to use a diffuse colour map whose alpha channel is 1 out of 255 (assuming an 8-bit-per-channel map). Because the alpha is nearly 0, pretty much all of the background will show through. This mostly works, but because the alpha isn't quite zero it still produces noticeable artefacts in bright areas.
To work around that problem, you can use a standard texture map with a solid alpha channel, but attach a shader modifier at SCNShaderModifierEntryPointFragment which simply sets the alpha channel of the output colour to zero. This works because fragment shader modifiers run after the zero-alpha culling.
Here's that shader modifier in its entirety:
#pragma transparent
#pragma body
_output.color.a = 0;
note the "#pragma transparent" declaration in the first line - this is necessary to force SceneKit to use its transparent blending mode even when it otherwise wouldn't.
This is not a complete solution, because it's not real control over blending - it's only a useful hack for producing light beam glows etc - and the shading process certainly isn't as optimal as it could be, but it works well for this case.

Alpha blending in Direct3D 9. Some of the primitives aren't rendering behind the texture

I enabled alpha blending in my game, but some of the primitives aren't rendering behind the transparent texture.
Here are my render states:
d3ddevice->SetRenderState(D3DRS_LIGHTING, true);
d3ddevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_CW);
d3ddevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
d3ddevice->SetRenderState(D3DRS_AMBIENT, D3DCOLOR_XRGB(15, 15, 15));
d3ddevice->SetRenderState(D3DRS_NORMALIZENORMALS, TRUE);
d3ddevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
d3ddevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
d3ddevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
d3ddevice->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD);
d3ddevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
d3ddevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
d3ddevice->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
d3ddevice->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
d3ddevice->SetTextureStageState( 0, D3DTSS_TEXTURETRANSFORMFLAGS, D3DTTFF_DISABLE );
d3ddevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_NONE);
d3ddevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_NONE);
d3ddevice->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);
What happens here is that the transparent objects still get written to the depth buffer and therefore block objects which lie behind them from being rendered. There are (at least) two ways to solve this.
Sort all transparent objects in your scene so that they are rendered from back to front. You need to do this in your own code; D3D won't handle it for you. Here is an MSDN article on this.
It looks like all the objects you are rendering are either fully transparent or fully opaque at any given point. In that case, there is a simpler way to deal with transparency: you can use alpha testing to discard all fragments whose alpha is 0 (or below a certain threshold, say 128). Here is an article on how to do this. If you are using shaders, you can use the discard command to throw away all transparent fragments.

Render to multisample buffer and resolved frame buffer independently

I'm doing some graph drawing using GL_LINE_STRIPs, and I'm using a multisampled buffer so the lines don't look so jagged. The problem is that I have some lines in the background of the graph that act as a legend. The multisampling messes those lines up: they are meant to be exactly 1 pixel thick, but multisampling sometimes spreads a line over 2 pixels that are slightly dimmer than the original colour, making the legend lines look different from each other.
Is it possible to render those legend lines directly into the resolved framebuffer and then have the multisampled content drawn on top? That would effectively leave the background legend lines un-multisampled while still multisampling the graph lines.
Is this possible? I just want to know before I dive into this and later find out it can't be done. If you have some demo code to show me, that would be great as well.
It would be much easier if the legend came last: you could just resolve the MSAA buffers into the view framebuffer and then render the legend normally into the resolved buffer afterwards. But the other way around isn't possible, since the multisample resolve simply overwrites any previous contents of the target framebuffer; it won't do any blending or depth testing.
The only way to actually render the MSAA content on top would be to first resolve it into another FBO and draw that FBO's texture on top of the legend. For the legend not to get completely overwritten, you will have to use alpha blending. So you basically clear the MSAA buffers to an alpha of 0 before rendering, then render the graph into them. Then you resolve those buffers and draw the resulting texture on top of the legend, using alpha blending to only overwrite the parts where the graph was actually drawn.
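The question is about plain OpenGL, but since most of this page is WebGL, here is a rough WebGL2 analog of that approach (width, height, the sample count, and the draw helpers are placeholders, not a tested implementation):
const gl = canvas.getContext('webgl2');

// Multisampled FBO that the graph is rendered into.
const msaaFbo = gl.createFramebuffer();
const msaaRb = gl.createRenderbuffer();
gl.bindRenderbuffer(gl.RENDERBUFFER, msaaRb);
gl.renderbufferStorageMultisample(gl.RENDERBUFFER, 4, gl.RGBA8, width, height);
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.RENDERBUFFER, msaaRb);

// Single-sample FBO with a texture to resolve into.
const resolveFbo = gl.createFramebuffer();
const resolveTex = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, resolveTex);
gl.texStorage2D(gl.TEXTURE_2D, 1, gl.RGBA8, width, height);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.bindFramebuffer(gl.FRAMEBUFFER, resolveFbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, resolveTex, 0);

// 1. Render the graph into the MSAA buffer, cleared to alpha 0.
gl.bindFramebuffer(gl.FRAMEBUFFER, msaaFbo);
gl.clearColor(0, 0, 0, 0);
gl.clear(gl.COLOR_BUFFER_BIT);
drawGraphLines();                 // hypothetical helper

// 2. Resolve the MSAA buffer into the texture.
gl.bindFramebuffer(gl.READ_FRAMEBUFFER, msaaFbo);
gl.bindFramebuffer(gl.DRAW_FRAMEBUFFER, resolveFbo);
gl.blitFramebuffer(0, 0, width, height, 0, 0, width, height,
                   gl.COLOR_BUFFER_BIT, gl.NEAREST);

// 3. Render the 1-pixel legend lines straight into the default framebuffer.
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
drawLegendLines();                // hypothetical helper

// 4. Composite the resolved graph on top with alpha blending. After the
//    resolve, edge pixels carry colour already scaled by coverage, so a
//    premultiplied blend keeps the legend visible where alpha is 0.
gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
drawFullscreenQuad(resolveTex);   // hypothetical helper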
