Alpha blending of black visibly passes through white - WebGL

I'm trying to fade out an object in the scene, but I noticed that it first brightens almost to white before disappearing as the alpha channel reaches 0.
For a test, I set up a square that's entirely black (0, 0, 0) and linearly interpolate its alpha channel from 1 to 0.
This is the rectangle.
Same rectangle, but with an alpha value of 0.1, that is vec4(0, 0, 0, 0.1). It's brighter than the background itself.
Blending mode used:
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA)
As far as I understand, this mode should simply lerp between the destination pixel and the incoming source pixel. I just don't see how the output pixel could become brighter when mixing anything with (0, 0, 0).
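For reference, that blend mode computes, per channel, out = src * srcAlpha + dst * (1 - srcAlpha). A minimal sketch of the arithmetic (plain numbers, not actual GL calls) confirms that blending black can only darken the destination:

```javascript
// The math gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA) performs
// per channel: out = src * srcAlpha + dst * (1 - srcAlpha).
function blend(src, srcAlpha, dst) {
  return src * srcAlpha + dst * (1 - srcAlpha);
}

// Black (0) at alpha 0.1 over a mid-gray destination (0.5):
console.log(blend(0.0, 0.1, 0.5)); // 0.45 -- darker, never brighter
```

So if the result on screen is brighter than the destination, something other than this equation must be involved.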
EDIT:
After doing some testing I feel I need to clarify a few more things.
This is WebGL, and I'm drawing into a canvas on a website. I don't know exactly how it works, but it looks as if every draw call (gl.drawElements()) were drawn to a separate buffer and possibly composited later into a single image. When debugging I can see my square drawn into an entirely white buffer, which may be where the colour comes from.
But this means that blending doesn't happen against the backbuffer, but against some buffer I didn't know existed. How do I blend into the back buffer? Do I have to avoid browser composition by rendering to a texture and only then drawing that to the canvas?
EDIT 2:
I managed to get the expected result by setting separate blending for alpha and colour as follows:
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE);
But I'd rather leave this question open hoping that someone could clarify why it didn't work in the first place.

The issue is very well described here:
https://stackoverflow.com/a/35459524/9956631
My color was being blended with the canvas background. As I understand it, the default blending overwrites the destination alpha, so you leave a see-through part of the canvas where your mesh is. The reason my blendFuncSeparate() call fixed the issue is that it left the DST alpha intact.
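A numeric sketch of that explanation (illustrative arithmetic only, not real GL calls): with the default factors, the destination alpha drifts below 1, letting the page behind the canvas show through, while ONE, ONE keeps it saturated at 1:

```javascript
// How the canvas's destination alpha evolves under the two alpha
// blend factor pairs (FUNC_ADD, clamped to [0, 1]).
function blendAlpha(srcA, dstA, srcFactor, dstFactor) {
  return Math.min(1, srcA * srcFactor + dstA * dstFactor);
}

const srcA = 0.1;
const dstA = 1.0; // canvas starts fully opaque

// Default (SRC_ALPHA, ONE_MINUS_SRC_ALPHA): 0.1*0.1 + 1.0*0.9 = 0.91
const defaultAlpha = blendAlpha(srcA, dstA, srcA, 1 - srcA);

// Separate (ONE, ONE): 0.1*1 + 1.0*1 = 1.1, clamped to 1.0
const separateAlpha = blendAlpha(srcA, dstA, 1, 1);

console.log(defaultAlpha, separateAlpha);
```

With a destination alpha of 0.91, the compositor lets 9% of whatever is behind the canvas (typically a white page) shine through, which matches the brightening I saw.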
To turn this off, you can disable alpha when fetching the GL context. To get OpenGL-like rendering you should also disable premultipliedAlpha:
const gl = canvas.getContext('webgl', {
    premultipliedAlpha: false,
    alpha: false
});
Edit:
To make sure my assumption is right, I set up a test.
Behind the canvas I placed a label. Then, on top of that, I draw my canvas with my (0, 0, 0, 0.5) square on top. Just like this:
<label style="
position: absolute;
width: 400px;
height: 400px;
top:445px;
left:500px;
z-index: -1; /* behind the canvas */
font-size: 80px;">LMAO</label>
<canvas id="glCanvas" style="z-index: 2;" width="1200" height="1200"></canvas>
As you can see, the label is visible where the square is rendered. This means it is being blended with what's behind the canvas instead of with the current contents of the canvas (as one might assume).

Related

Webgl Blending is blending to white

I have written a library for WebGL. It has many tests and passes them all, including the one I have for blending.
I've set the test to do a standard blend of Src_Alpha, One_minus_Src_Alpha, and the test properly renders the result:
This image is produced by intersecting 1000 lines, all with an alpha value of 0.2. Notice how the image does not produce washed-out white colors at any point.
This is also the current blend state, as captured by the Spector.js WebGL plugin for Chrome:
Blend State
BLEND: true
BLEND_COLOR: 0, 0, 0, 0
BLEND_DST_ALPHA: ONE_MINUS_SRC_ALPHA
BLEND_DST_RGB: ONE_MINUS_SRC_ALPHA
BLEND_EQUATION_ALPHA: FUNC_ADD
BLEND_EQUATION_RGB: FUNC_ADD
BLEND_SRC_ALPHA: SRC_ALPHA
BLEND_SRC_RGB: SRC_ALPHA
Now I use this same library to render the same edges in a layout, and they render like so:
These edges have an opacity of 0.2, and this is the Spector blend state:
Blend State
BLEND: true
BLEND_COLOR: 0, 0, 0, 0
BLEND_DST_ALPHA: ONE_MINUS_SRC_ALPHA
BLEND_DST_RGB: ONE_MINUS_SRC_ALPHA
BLEND_EQUATION_ALPHA: FUNC_ADD
BLEND_EQUATION_RGB: FUNC_ADD
BLEND_SRC_ALPHA: SRC_ALPHA
BLEND_SRC_RGB: SRC_ALPHA
I am beating my head against the table trying to figure out what the difference between the two scenarios could be.
The shader logic simply passes a color from the vertex to the fragment shader, so there is no alpha premultiplication in the shader.
I just need any thoughts on what else could be affecting the blending in such murderous fashion. I can post any extra information needed.
EDIT: To show the same exact test in this environment, here is the wheel rendering, somehow washing out toward white:
It turns out there was some bad, potentially undefined behavior in my library:
I was grabbing the GL context twice with canvas.getContext(...), and each call had the potential to pass different context attributes for things like premultipliedAlpha and alpha.
When I fixed that issue, this bizarre blending error went away. So I will assume the premultiplied-alpha setting was somehow inconsistent between the two context grabs.
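A hypothetical guard against that class of bug is to funnel every context request through a single cached getter, so two call sites can never request the context with conflicting attributes (createContextGetter and the mock canvas below are made up for illustration):

```javascript
// Fetch the WebGL context exactly once and reuse it everywhere.
function createContextGetter(canvas, attributes) {
  let gl = null;
  return function () {
    if (gl === null) {
      gl = canvas.getContext('webgl', attributes);
    }
    return gl;
  };
}

// Mock canvas to illustrate: getContext is only ever invoked once,
// no matter how many times the getter is called.
let calls = 0;
const mockCanvas = { getContext: () => { calls += 1; return { id: 'gl' }; } };
const getGL = createContextGetter(mockCanvas, { premultipliedAlpha: false });
getGL();
getGL();
console.log(calls); // 1
```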

How do I provide offsets for the output texture or framebuffer in WebGL?

If I want to use a certain region of my input texture and not the entire thing, I can use gl.texSubImage2D() and provide x and y offsets.
What would be the equivalent for the output texture, given that in my fragment shader I do not have control over which texels I'm writing to?
Would a call to gl.viewport() do the trick? Do I need to change the canvas dimensions for that?
As @J. van Langen points out, gl.scissor will work. You need to enable the scissor test with gl.enable(gl.SCISSOR_TEST), then set the rectangle to clip to with gl.scissor(x, y, width, height).
Example:
const gl = document.querySelector("canvas").getContext("webgl");
gl.enable(gl.SCISSOR_TEST);
gl.scissor(50, 25, 150, 75);
gl.clearColor(1, 0, 0, 1);
gl.clear(gl.COLOR_BUFFER_BIT); // notice the entire canvas is not cleared
canvas { border: 1px solid black; }
<canvas></canvas>
Then again, it depends on your definition of "offset". The scissor rectangle just clips; it does not offset (which is usually a translation). As you mentioned, you can use gl.viewport to offset. gl.viewport does not affect gl.clear; it only affects how vertices assigned to gl_Position get translated back into screen coordinates, and how those vertices get clipped.
It's important to note that it clips vertices, not pixels, so a 10-pixel point (as in gl_PointSize = 10.0) drawn at the edge of the viewport will draw partially outside the viewport. Therefore, if you set the viewport to something smaller than the full size of whatever you are drawing to, you'd usually also enable and set the scissor rectangle.
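The offsetting gl.viewport performs can be sketched as plain arithmetic, following the standard NDC-to-window transform (xw = (xNdc + 1) * width / 2 + x, and likewise for y):

```javascript
// How gl.viewport(x, y, width, height) maps the vertex shader's
// clip-space output (NDC, -1..1) into window coordinates.
function ndcToWindow(xNdc, yNdc, x, y, width, height) {
  return [
    (xNdc + 1) * width / 2 + x,
    (yNdc + 1) * height / 2 + y,
  ];
}

// With gl.viewport(50, 25, 150, 75), NDC (0, 0) lands at the center
// of the viewport rectangle rather than the center of the canvas:
console.log(ndcToWindow(0, 0, 50, 25, 150, 75)); // [ 125, 62.5 ]
```

This is why the viewport acts as a translation-plus-scale for rendered geometry, while the scissor rectangle only discards pixels.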

How do you re-stroke a path to another color with exact same result?

I'm developing an app which involves drawing lines. Every time the user moves the finger, that point is added to a path and also added to the CGContext, as in the example below.
CGContextMoveToPoint(cacheContext, point1.x, point1.y);
CGContextAddCurveToPoint(cacheContext, ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, point2.x, point2.y);
CGPathMoveToPoint(path, NULL, point1.x, point1.y);
CGPathAddCurveToPoint(path, NULL, ctrl1_x, ctrl1_y, ctrl2_x, ctrl2_y, point2.x, point2.y);
Now, when I want to add it and stroke it in black, I use the following code:
CGContextSetStrokeColorWithColor(cacheContext, [UIColor blackColor].CGColor);
CGContextAddPath(cacheContext, path);
CGContextStrokePath(cacheContext);
However, the line that gets stroked this time will be a bit smaller than the one that was drawn before. This results in a slight border around the stroked path. So my question is: how can I get the stroked path to be identical to the path that was drawn into the CGContext?
The issue is due to anti-aliasing. The path is a geometric ideal. The bitmap generated by stroking the path with a given width, color, etc. is imperfect. The ideal shape covers some pixels completely, but only covers others partially.
The result without anti-aliasing (and assuming an opaque color) is to fully paint pixels which mostly lie within the ideal shape and don't touch the pixels which mostly lie outside of it. That leaves visible jaggies on anything other than vertical or horizontal lines. If you later draw the same path with the same stroke parameters again, exactly the same pixels will be affected and, since they are being fully painted, you can completely replace the old drawing with the new.
With anti-aliasing, any pixel which is only partially within the ideal shape is not completely painted with the new color. Rather, the stroke color is applied in proportion to the percentage of the pixel which is within the ideal shape. The color that was already in that pixel is retained in inverse proportion. For example, a pixel which is 43% within the ideal shape will get a color which is 43% of the stroke color plus 57% of the prior color.
That means that stroking the path a second time with a different color will not completely replace the color from a previous stroke. If you fill a bitmap with white and then stroke a path in red, some of the pixels along the edge will mix a little red with a little of the white to give light red or pink. If you then stroke that path in blue, the pixels along the edge will mix a little blue with a little of the color that was there, which is a light red or pink. That will give a magenta-ish color.
You can disable anti-aliasing using CGContextSetShouldAntialias(), but then you risk getting jaggies. You would have to do this around both strokings of the path.
Alternatively, you can clear the context to some background color before redrawing the path. But for that, you need to be able to completely redraw everything you want to appear.

Sketch App Paint Blending OpenGLES

Can anyone suggest why my low-opacity painting does this weird blending, while the SketchBookX app does it perfectly?
In both images attached, the vertical strokes on the left are done at full opacity and the strokes on the right at low opacity. The top image is mine: as you can see, the low-opacity strokes on the right turn an orange-red color and don't blend/mesh with the full-opacity strokes. But the SketchBookX app blends perfectly and maintains the same color.
I'm using glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) and have tried many variations with no luck, so I'm starting to think there are other things that are giving me this problem.
Do I need to handle this problem in the fragment shader? I currently have this: gl_FragColor = color * rotatedTexture; I'm using PNGs for brush textures.
UPDATE: I'm getting the same results without using a texture: gl_FragColor = color;
I want it to be like mixing ink, not like mixing light :)
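One thing worth checking with glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA): that mode assumes the source RGB is already premultiplied by alpha. A numeric sketch of what happens if it isn't (the brush color and background values below are made up for illustration):

```javascript
// GL_ONE, GL_ONE_MINUS_SRC_ALPHA per channel:
// out = src + dst * (1 - srcAlpha), clamped to [0, 1].
function blendOne(srcRGB, srcA, dstRGB) {
  return srcRGB.map((s, i) => Math.min(1, s + dstRGB[i] * (1 - srcA)));
}

const brush = [1.0, 0.3, 0.1]; // orange-red brush color
const alpha = 0.1;             // low-opacity stroke
const dst = [0.2, 0.2, 0.2];   // a previously painted dark stroke

// Straight (non-premultiplied) color: nearly the full brush color is
// added, so the "10%" stroke reads as a saturated orange-red streak.
const wrong = blendOne(brush, alpha, dst);

// Premultiplied color (rgb * alpha): a subtle 10% tint, as expected.
const right = blendOne(brush.map((c) => c * alpha), alpha, dst);

console.log(wrong, right);
```

If that matches the symptom, the fix would be to multiply the fragment color's RGB by its alpha before output (or switch to a non-premultiplied blend function), but that's an assumption to verify against your pipeline.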

Firemonkey semi-transparent Image3D is sometimes opaque

I create a FireMonkey app with 3 semi-transparent TImage3Ds.
Here's the code and the screen. All seems well.
procedure TForm1.Form3DCreate(Sender: TObject);
// create a new semi-transparent timage3d
// object with color and Z position.
procedure NewImage ( const nColor : tColor;
const nZ : integer );
begin
// create the image
with tImage3D . Create ( self ) do
begin
// put it on the screen
Parent := self;
// set the size
Width := 10;
Height := 10;
// set the image to a single pixel.
Bitmap . Width := 1;
Bitmap . Height := 1;
// set the Alpha to $80 to make it
// semi-transparent
Bitmap . Pixels [ 0, 0 ] := $80000000 + nColor;
// set the z position
Position . Z := nZ;
end;
end;
begin
NewImage ( claRed, +10 );
NewImage ( claGreen, 0 );
NewImage ( claBlue, -10 );
end;
Now reverse the order. Now they are opaque.
begin
NewImage ( claRed, -10 );
NewImage ( claGreen, 0 );
NewImage ( claBlue, +10 );
end;
What am I missing?
FireMonkey (as of now) doesn’t support rendering semi-transparent objects in 3D.
FireMonkey only supports blending of semi-transparent objects (either through the Opacity property or because of their texture, for instance a semi-transparent PNG image), but blending alone is not enough to get it right in 3D with a Z-buffer (which is what FMX, and most 3D apps, are using).
For a technical explanation, you can read about transparency sorting; the article is about OpenGL, but it applies to DirectX too.
So to get correct rendering, you need to have your semi-transparent objects sorted back-to-front from the camera's point of view.
You can get more details and some code in this post to work-around the issue:
Rendering semi-transparent object in FireMonkey
but keep in mind it'll just be a workaround.
Ideally this should be handled by the FireMonkey scene-graph, as it is rendering-dependent, otherwise, you end up having to change the scene-graph structure, which can have various other side-effects, and is even more problematic if you have more than one camera looking at the same scene.
Also, the sorting approach will only work with convex objects that don’t intersect, and for which you don’t have triple-overlap, as in:
For which there exists no correct sorting (none of the elements is in front of the others).
As you already discovered you have to draw transparent objects from back to front.
When drawing a transparent object, the object is drawn, and blended with the pixels that are behind it.
So this happens when you draw from back to front:
You draw the red image, it is blended with the white background. You can tell by the "pink" instead of pure red colour that it is blended with the white background. Then you draw the green image, it is blended with the already drawn white background and red image. Finally you draw the blue image, which is blended with the already drawn objects.
But now we draw from front to back:
We draw the red plane first. It is blended with the white background, which you can see because it is pink instead of red. Now we draw the green plane. It is blended with the white background; you can tell by the colour, which is not a pure, deep green. But the renderer sees that a part of it falls behind the red plane, so it doesn't draw that part. You might think: the red plane is transparent, so the renderer should draw behind it! Well, no: the renderer only keeps track of the depth / z-order of the pixels in the z-buffer / depth-buffer; it doesn't know whether a pixel is transparent or not. The same story goes for the blue plane: only the part that is not obscured by other objects is drawn.
What is this depth buffer you speak of?
In the depth-buffer, the depth of every pixel is stored. When you draw a pixel at 2,2 with a z of 1, the depth-buffer at 2,2 is updated to 1. Now when you draw a line from 1,2 to 3,2 with a z of 3, the renderer will only draw the pixels where the depth-buffer holds a value >= 3. So pixel 1,2 is drawn (and the depth-buffer at 1,2 is set to 3). Pixel 2,2 is not drawn, because the depth-buffer indicates that that pixel was already drawn at a lesser depth (1 vs 3). Pixel 3,2 is drawn and the depth-buffer at 3,2 is set to 3.
So the depth-buffer is used to keep track of the z-order of every pixel to prevent overwriting that pixel with a pixel that is farther away.
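The pixel-by-pixel walkthrough above can be sketched as a tiny simulation (same convention as the text: smaller z is closer, and a pixel is drawn only where the stored depth is >= the incoming z):

```javascript
// Minimal depth-buffer simulation for the three pixels in the example.
const FAR = Infinity;
const depth = { '1,2': FAR, '2,2': FAR, '3,2': FAR };
const drawn = [];

function drawPixel(x, y, z) {
  const key = `${x},${y}`;
  if (depth[key] >= z) { // depth test passes
    depth[key] = z;      // depth write
    drawn.push(key + '@' + z);
  }
}

drawPixel(2, 2, 1);                           // point at 2,2 with z = 1
[1, 2, 3].forEach((x) => drawPixel(x, 2, 3)); // line 1,2 -> 3,2 at z = 3

console.log(drawn); // [ '2,2@1', '1,2@3', '3,2@3' ] -- 2,2 is skipped
```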
If you want to draw transparent objects the right way, see this answer.
Excerpt from that answer:
First draw opaque objects.
Disable depth-buffer writes (so the depth-buffer is not updated), but keep depth-buffer checking enabled.
Draw transparent objects. Because the depth-buffer is not updated you don't have the problem of transparent objects obscuring each other. Because depth-buffer checking is enabled, you don't draw behind opaque objects.
I don't know whether FireMonkey supports disabling depth-buffer writes; you'll have to find that out yourself.
