How can I set WebGL opacity without losing PNG transparency?

I use WebGL to draw 2D pictures with two triangles; it's faster than the canvas 2D context. I keep things very simple, because I only use it to draw pictures, no 3D.
Now I am trying to add opacity to my fragment shader, because the WebGL context doesn't have a .globalAlpha property.
Here is my code:
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
uniform sampler2D u_image;
varying vec2 v_texCoord;
uniform float alpha;
void main()
{
gl_FragColor = vec4(texture2D(u_image, vec2(v_texCoord.s, v_texCoord.t)).rgb, alpha);
}
</script>
...
webglalpha = gl.getUniformLocation(gl.program, "alpha");
...
gl.uniform1f(webglalpha, d_opacity);
d_opacity is the opacity value my drawing function gets as an argument.
The opacity changes, but now my transparent PNGs get a black background, even though the PNG transparency used to work before, when I was still using
gl_FragColor = texture2D(u_image, vec2(v_texCoord.s, v_texCoord.t));
How can I set the opacity without losing the PNGs' transparency?

You need to multiply the overriding alpha by the original alpha in the texture:
void main()
{
vec4 color = texture2D(u_image, vec2(v_texCoord.s, v_texCoord.t));
gl_FragColor = vec4(color.rgb, alpha * color.a);
}
For example, when texel A has a PNG opacity of 0.5 and texel B a PNG opacity of 0.3, you want these to become 0.25 and 0.15 respectively when you set your global alpha to 0.5, not 0.5 for both.
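For the alpha to actually show through, blending must also be enabled on the WebGL context. A minimal sketch of the JavaScript side, assuming a non-premultiplied texture setup (the uniform name alpha and the variable d_opacity are from the question; program is assumed to be the linked shader program):
gl.enable(gl.BLEND);
// standard "over" compositing for non-premultiplied textures
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);

// pass the desired global opacity to the shader before drawing
var webglalpha = gl.getUniformLocation(program, "alpha");
gl.uniform1f(webglalpha, d_opacity); // e.g. 0.5 draws the picture at half opacity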

Related

Position scaled video texture over image texture background

I have this working scaled, masked video texture over an image texture background. However, it is positioned in the bottom-left corner. I tried some tricks multiplying the coords, but it doesn't seem to make much difference. I'll probably have to make a lot of the values changeable uniforms, but hardcoded is OK for now.
What values can be used to change the video texture coords so it displays in the top-right or bottom-right corner?
The video is a webcam stream, with BodyPix data providing the mask.
The alpha in the mix comes from the BodyPix data and needs to be scaled by 255 to display properly.
Fragment example
precision mediump float;
uniform sampler2D background;
uniform sampler2D frame;
uniform sampler2D mask;
uniform float texWidth;
uniform float texHeight;
void main(void) {
vec2 texCoord = gl_FragCoord.xy / vec2(texWidth,texHeight);
vec2 frameuv = texCoord * vec2(texWidth, texHeight) / vec2(200.0, 200.0);
vec4 texel0 = texture2D(background, texCoord);
vec4 frameTex = texture2D(frame, frameuv.xy);
vec4 maskTex = texture2D(mask, frameuv.xy);
gl_FragColor = mix(texel0, frameTex, step(frameuv.x, 1.0) * step(frameuv.y, 1.0) * maskTex.a * 255.);
}
https://jsfiddle.net/danrossi303/82tpoy94/3/
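One possible way to move the region, as a sketch only: keep the same 200x200 video size, subtract the region's bottom-left pixel position from gl_FragCoord before dividing (gl_FragCoord has its origin in the bottom-left), and clamp the region test on both sides, since frameuv can now go negative. For the top-right corner, for example:
precision mediump float;
uniform sampler2D background;
uniform sampler2D frame;
uniform sampler2D mask;
uniform float texWidth;
uniform float texHeight;
void main(void) {
    vec2 texCoord = gl_FragCoord.xy / vec2(texWidth, texHeight);
    // bottom-left pixel of the 200x200 region, placed in the top-right corner
    vec2 origin = vec2(texWidth - 200.0, texHeight - 200.0);
    vec2 frameuv = (gl_FragCoord.xy - origin) / vec2(200.0, 200.0);
    vec4 texel0 = texture2D(background, texCoord);
    vec4 frameTex = texture2D(frame, frameuv);
    vec4 maskTex = texture2D(mask, frameuv);
    // only mix inside 0..1 on both axes
    float inside = step(0.0, frameuv.x) * step(0.0, frameuv.y)
                 * step(frameuv.x, 1.0) * step(frameuv.y, 1.0);
    gl_FragColor = mix(texel0, frameTex, inside * maskTex.a * 255.);
}
For the bottom-right corner, origin would be vec2(texWidth - 200.0, 0.0) instead.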

Is it possible to invert the mask for GPUImageMaskFilter?

I am masking a photo with a frame mask image. The mask was created for Core Graphics, which is inverted:
black means "visible" and white means "fully transparent", the opposite of Photoshop masks.
Is it possible to invert the mask for the filter so the effect is reversed?
It is very easy to invert an RGB color.
In a fragment shader, you can do it with an expression like this:
precision mediump float;
uniform sampler2D tex0; // photo
uniform sampler2D tex1; // mask
varying vec2 uv;        // texture coordinate from the vertex shader
void main()
{
    vec4 nowcolor = texture2D(tex1, uv);
    // invert the mask
    vec4 newmask = vec4(1.0, 1.0, 1.0, 1.0) - nowcolor;
    // and use the new mask here instead of the old mask texture,
    // e.g. as the photo's alpha (one possible way)
    gl_FragColor = vec4(texture2D(tex0, uv).rgb, newmask.r);
}

iOS OpenGL ES 2.0: adding textures with low opacity on device and on simulator

I have a problem with drawing textures multiple times in my program.
The blending mode is
glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glBlendEquation(GL_FUNC_ADD);
The value of the alpha channel is passed into the shader from CPU code.
precision highp float;
uniform sampler2D tShape;
uniform vec4 vColor;
uniform float sOpacity;
varying vec4 texCoords;
void main() {
float a = texture2D(tShape, texCoords.xy).x * sOpacity;
gl_FragColor = vec4(vColor.rgb, a);
}
It is calculated beforehand with
O = pow(O, 1.3);
for the best visual effect.
I draw with color (0, 0, 0) on a transparent black canvas (0, 0, 0, 0), but with very low opacity (input opacity -> value after pow):
0.03 -> 0.01048
0.06 -> 0.0258
0.09 -> 0.0437
0.12 -> 0.0635
...
I expect that the maximum value of a point's color will be (0, 0, 0, 1) (black, not transparent) after multiple draws, as it is on the simulator:
but it isn't so on the device:
Do you have any idea why this is?
UPDATE:
Manual blending also works incorrectly (and differently from the standard pipeline).
glBlendFunc(GL_ONE, GL_ZERO);
Fragment shader code:
#extension GL_EXT_shader_framebuffer_fetch : require
precision highp float;
uniform sampler2D tShape;
uniform vec4 vColor;
uniform float sOpacity;
varying vec4 texCoords;
void main() {
float a = texture2D(tShape, texCoords.xy).x * sOpacity;
gl_FragColor = vec4(vColor.rgb * a, a) + (gl_LastFragData[0] * (1.0 - a));
}
Result on the simulator:
Result on the device:
I'm trying to understand your approach, so I wrote down some equations.
This is how a new drawing is performed (if I didn't make any mistake):
color = {pencil_shape}.rgb * sourceAlpha + {old_paint}.rgb * (1 - sourceAlpha)
alpha = {pencil_shape}.a + {old_paint}.a * (1 - sourceAlpha)
So basically your alpha is getting closer to 1 on each frame, and your color is blended each time based on the source alpha of the *pencil_shape*.
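For reference, those two lines are just the question's blend state written out; a sketch of what the fixed-function blender computes per fragment, where src is what the shader outputs and dst is what is already in the framebuffer:
// glBlendFuncSeparate(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA,
//                     GL_ONE,       GL_ONE_MINUS_SRC_ALPHA)
// with glBlendEquation(GL_FUNC_ADD) is equivalent to:
vec4 blend(vec4 src, vec4 dst) {
    vec3 rgb = src.rgb * src.a + dst.rgb * (1.0 - src.a);
    float a = src.a * 1.0 + dst.a * (1.0 - src.a);
    return vec4(rgb, a);
}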
Questions:
Do you intend to use the alpha in the output image for anything?
Is your *pencil_shape* all black (0, 0, 0, 0)? (besides the corners, where I suppose it has some antialiasing effect)
After some experiments I've understood that the problem lies in the precision supported by the device, so on the iPad Air the problem shows up less than on the iPad 4 and 3.

GLSL Border Shader

I'm attempting to write a shader to put a border around text. It compiles and runs OK, but the border is grainy and aliased. Any suggestions for making it better, please?
This is the first shader I have written, so I don't have much knowledge of the API. Could any example code be commented, please? I don't want to just copy and paste an answer; I'd like to understand it.
Additionally, I have a problem when I make the text smaller, which is best explained by this image. Any suggestions on fixing this too, please?
The edge detection could be better too. So far I take advantage of the fact that the antialiased texture I use for input has edges with 0.0 < alpha < 1.0.
So far I have this:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform vec4 u_borderColor;
uniform float u_width;
uniform float u_height;
void main()
{
mediump vec4 total = vec4(0.0);
float alpha = texture2D(u_texture, v_texCoord).a;
// If somewhere between completely transparent and completely opaque
if (alpha > 0.0 && alpha < 1.0)
{
total.rgb = u_borderColor.rgb;
total.a *= u_borderColor.a;
gl_FragColor = u_borderColor;
}
else
{
gl_FragColor = texture2D(u_texture, v_texCoord);
}
}
Why aren't you setting the value of gl_FragColor to total? As it stands, the alpha is no longer maintained correctly.
Next, total.a = 1.0 - alpha; (from outer to inner)
or a variant like
total.a = abs(0.5 - alpha); (blend both sides)
would probably make more sense when blending.
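Putting that together with the question's shader, a minimal sketch of the border branch (same uniforms as in the question; u_width and u_height are left unused here, just as in the original):
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 v_texCoord;
uniform sampler2D u_texture;
uniform vec4 u_borderColor;
void main()
{
    vec4 texColor = texture2D(u_texture, v_texCoord);
    float alpha = texColor.a;
    if (alpha > 0.0 && alpha < 1.0)
    {
        // on the anti-aliased edge: draw the border color and actually
        // write the blended alpha out (fading from outer to inner)
        vec4 total = vec4(u_borderColor.rgb, (1.0 - alpha) * u_borderColor.a);
        gl_FragColor = total;
    }
    else
    {
        gl_FragColor = texColor;
    }
}
The abs(0.5 - alpha) variant would simply replace the (1.0 - alpha) term.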
In the code I see that you're setting gl_FragColor = u_borderColor when 0 < alpha < 1, while total stays at (0, 0, 0, 0). First, change total to some other, contrasting color; your code works for a plain image, but text has different bends, which makes it grainy, so the rendering itself needs changes that can't be suggested without seeing more code.
Do suggest if I am wrong.
Thank you.

Opengles fragment shader achieve the effect

I want to achieve a smooth merge effect of the image at the centre cut. I achieved the centre cut with the code below.
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main(){
vec4 CurrentColor = vec4(0.0);
if(textureCoordinate.y < 0.5){
CurrentColor = texture2D(videoFrame,vec2(textureCoordinate.x,(textureCoordinate.y-0.125)));
} else{
CurrentColor = texture2D(videoFrame,vec2(textureCoordinate.x,(textureCoordinate.y+0.125)));
}
gl_FragColor = CurrentColor;
}
The above code gives the effect to below image.
Actual:
Centre cut:
Desired Output:
What I want is for the sharp cut not to be there; there should be a smooth gradient merging both halves.
Do you want an actual blur there, or just a linear blend? Blurring involves a blurring kernel, whereas a blend would be a simple interpolation between the two halves, depending on the y-coordinate.
This is the code for a linear blend:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main(){
float steepness = 20.0; /* controls the width of the blending zone; larger values => sharper transition */
vec4 a = texture2D(videoFrame,vec2(textureCoordinate.x,(textureCoordinate.y-0.125)));
vec4 b = texture2D(videoFrame,vec2(textureCoordinate.x,(textureCoordinate.y+0.125)));
/* blend factor: 0 below the seam, ramping to 1 just above y = 0.5 (clamped to stay in [0, 1]) */
float t = clamp((textureCoordinate.y - 0.5) * steepness, 0.0, 1.0);
vec4 final = mix(a, b, t); /* smoothstep(0.0, 1.0, t) gives an even softer ramp */
gl_FragColor = final;
}
Doing an actual blur is a bit more complicated, as you have to apply that blurring kernel. Basically it involves two nested loops, iterating over the neighbouring texels and summing them up according to some distribution (most flexibly by supplying that distribution through an additional texture, which also allows adding some bokeh).
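As a rough illustration of that idea only (not tied to the question's seam-blend), a single-pass 5x5 box blur; texelSize is a made-up uniform holding 1.0 / the texture's size in pixels, and a Gaussian would replace the equal weights with distance-based ones:
precision highp float;
varying vec2 textureCoordinate;
uniform sampler2D videoFrame;
uniform vec2 texelSize; // hypothetical uniform: 1.0 / texture size in pixels
void main() {
    vec4 sum = vec4(0.0);
    // two nested loops over the neighbouring texels, summed with equal weight
    for (int i = -2; i <= 2; i++) {
        for (int j = -2; j <= 2; j++) {
            sum += texture2D(videoFrame, textureCoordinate + vec2(float(i), float(j)) * texelSize);
        }
    }
    gl_FragColor = sum / 25.0;
}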
