This works great for green screen: my background is green, and this code turns green into alpha.
lowp vec4 textureColor = texture2D(u_samplers2D[0], vTextu);
lowp float rbAverage = textureColor.r * 0.5 + textureColor.b * 0.5;
lowp float gDelta = textureColor.g - rbAverage;
textureColor.a = 1.0 - smoothstep(0.0, 0.25, gDelta);
textureColor.a = textureColor.a * textureColor.a * textureColor.a;
gl_FragColor = textureColor;
How do I change the code so that it uses a black background instead of green? I'm thinking I could take how dark the red, green, and blue values are and use that as the alpha? Any pointers would be appreciated.
You can calculate how close the value is to black. The simplest way is to take the maximum of the r, g, and b values, pick a threshold above which you consider the color fully opaque, and map the alpha below that threshold into [0..1], going from 0 at pure black up to 1 at the threshold. In pseudo code:
float brightness = max(max(textureColor.r, textureColor.g), textureColor.b);
float threshold = 0.1;
// 0.0 at pure black, ramping up to 1.0 at the threshold and above
float alpha = brightness > threshold ? 1.0 : brightness / threshold;
gl_FragColor = vec4(textureColor.rgb, alpha);
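Plugged into your original shader, that could look something like this (a sketch using your existing u_samplers2D[0] and vTextu, with smoothstep for a slightly softer edge):
lowp vec4 textureColor = texture2D(u_samplers2D[0], vTextu);
// distance from black: the brightest of the three channels
lowp float brightness = max(max(textureColor.r, textureColor.g), textureColor.b);
// fully transparent at pure black, fully opaque at and above the 0.1 threshold
textureColor.a = smoothstep(0.0, 0.1, brightness);
gl_FragColor = textureColor;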
lowp vec4 textureColor = texture2D(u_samplers2D, vTextu);
lowp float gtemp = smoothstep(0.0, 0.5, textureColor.r);
gl_FragColor = vec4(1.0, 1.0, 1.0, gtemp);
This works for my situation. I was using a greyscale image with a black background and just needed the white elements. The problem I had was that I was applying the alpha to grey pixels, but now I'm using white and applying the alpha to that. I can adjust the smoothstep values to change the effect, and also multiply in another alpha to fade the image in or out.
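For example, the fade could be an extra uniform multiplied into the alpha (u_fade here is a hypothetical uniform in the 0.0 - 1.0 range, not part of the original code):
uniform lowp float u_fade; // hypothetical: 0.0 = fully faded out, 1.0 = fully visible

lowp vec4 textureColor = texture2D(u_samplers2D, vTextu);
lowp float gtemp = smoothstep(0.0, 0.5, textureColor.r);
gl_FragColor = vec4(1.0, 1.0, 1.0, gtemp * u_fade);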
How to do color compositing in WebGL? I am not sure what to search for exactly; maybe I am missing a word. I want to be able to draw a color or texture as an overlay. For example, here I tried to add a black overlay with 0.2 opacity, but instead of using the color directly I am subtracting the opacity. If I want to use a different color as the overlay, I will have to use a different workaround for it. How can I overlay/composite different colors or textures without any workarounds? Shadertoy link
void mainImage( out vec4 fragColor, in vec2 fragCoord ) {
    vec2 uv = fragCoord/iResolution.xy;
    vec4 col = texture(iChannel0, uv);
    vec4 overlay = vec4(0., 0., 0., 0.2);
    col -= overlay.a;
    fragColor = col;
}
If you want to darken the texture color towards black, you could use a simple multiplication:
vec4 col = texture(iChannel0, uv);
col *= 0.8; // 20% darker
Or if you'd like to use a color, you could also multiply by that color:
vec4 col = texture(iChannel0, uv);
vec4 tint = vec4(1.0, 0.5, 0.0, 1.0); // Orange
col *= tint;
There are a lot of ways to blend colors together. Another alternative for creating smooth gradients between colors is the mix() function. There is no one answer.
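For the overlay in the question, the usual way to composite a translucent color over the texture is to blend towards the overlay color by its alpha with mix(); this works for any overlay color without a per-color workaround. A sketch of mainImage with that change:
void mainImage( out vec4 fragColor, in vec2 fragCoord ) {
    vec2 uv = fragCoord/iResolution.xy;
    vec4 col = texture(iChannel0, uv);
    vec4 overlay = vec4(0., 0., 0., 0.2);
    // "source over" compositing: move col towards the overlay color by its opacity
    col.rgb = mix(col.rgb, overlay.rgb, overlay.a);
    fragColor = col;
}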
Sorry for such a long title.
This is a followup on my old question, which led me to drawing a beveled circle looking like this:
If you look closely at this image, you can see the pixelation around the edge.
I wanted to smooth this circle with anti-aliasing, and I ended up with this:
If you look closely at this image, you can see a white border around the edge.
I'd like to remove this white border if possible, but my shader is a mess at this point:
#extension GL_OES_standard_derivatives : enable
void main(void) {
    lowp vec2 cxy = 2.0 * gl_PointCoord - 1.0;
    lowp float radius = dot(cxy, cxy);
    const lowp vec3 ambient = vec3(0.5, 0.2, 0.1);
    const lowp vec3 lightDiffuse = vec3(1, 0.5, 0.2);
    lowp vec3 normal = vec3(cxy, sqrt(1.0 - radius));
    lowp vec3 lightDir = normalize(vec3(0, -1, -0.5));
    lowp float color = max(dot(normal, lightDir), 0.0);
    lowp float delta = fwidth(radius);
    lowp float alpha = 1.0 - smoothstep(1.0 - delta, 1.0 + delta, radius);
    gl_FragColor = vec4(ambient + lightDiffuse * color, alpha);
}
If anyone could figure out how to remove the white border, that'd be fantastic!
In WebGL the default is to use pre-multiplied alpha (see WebGL and Alpha).
When initializing the canvas, you have to specify that it has no alpha:
gl = document.getElementById("canvas").getContext("webgl", { alpha: false });
See further the answers to these questions:
WebGL: How to correctly blend alpha channel png
Alpha rendering difference between OpenGL and WebGL
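Alternatively, you can keep the alpha channel and output pre-multiplied color from the fragment shader, with blending set up for pre-multiplied alpha (blendFunc ONE, ONE_MINUS_SRC_ALPHA), so nothing brighter than the circle's own color gets blended in at the edge. A sketch of how the last line of the shader above would change:
// pre-multiply the RGB by the coverage alpha before output
gl_FragColor = vec4((ambient + lightDiffuse * color) * alpha, alpha);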
I am trying to display sharp contours from a texture in WebGL.
I pass a texture to my fragment shader, then I use local derivatives to display the contours/outline; however, it is not as smooth as I would expect it to be.
Just printing the texture without processing works as expected:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
gl_FragColor = color;
With local derivatives, it misses some edges:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
float maxColor = length(color.rgb);
gl_FragColor.r = abs(dFdx(maxColor));
gl_FragColor.g = abs(dFdy(maxColor));
gl_FragColor.a = 1.;
In theory, your code is right.
But in practice, most GPUs compute derivatives on blocks of 2x2 pixels.
So for all 4 pixels of such a block, the dFdx and dFdy values will be the same.
(detailed explanation here)
This will cause some kind of aliasing and you will miss some pixels for the contour of the shape randomly (this happens when the transition from black to the shape color occurs at the border of a 2x2 block).
To fix this and get the real per-pixel derivative, you can compute it yourself instead. It would look like this:
// get tex coordinates
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
// compute the U & V step needed to read neighbor pixels
// for that you need to pass the texture dimensions to the shader,
// so let's say those are texWidth and texHeight
float step_u = 1.0 / texWidth;
float step_v = 1.0 / texHeight;
// read current pixel
vec4 centerPixel = texture2D(uTextureFilled, texc);
// read nearest right pixel & nearest bottom pixel
vec4 rightPixel = texture2D(uTextureFilled, texc + vec2(step_u, 0.0));
vec4 bottomPixel = texture2D(uTextureFilled, texc + vec2(0.0, step_v));
// now manually compute the derivatives
float _dFdX = length(rightPixel - centerPixel) / step_u;
float _dFdY = length(bottomPixel - centerPixel) / step_v;
// display
gl_FragColor.r = _dFdX;
gl_FragColor.g = _dFdY;
gl_FragColor.a = 1.;
A few important things:
texture should not use mipmaps
texture min & mag filtering should be set to GL_NEAREST
texture clamp mode should be set to clamp (not repeat)
And here is a ShaderToy sample demonstrating this:
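A minimal Shadertoy-style sketch of the same idea (not the linked sample itself, and without the division by the step so the contour values stay in a displayable range) could look like this:
void mainImage( out vec4 fragColor, in vec2 fragCoord )
{
    vec2 uv = fragCoord / iResolution.xy;
    // one-texel steps in U and V, taken from the channel's resolution
    vec2 texelStep = 1.0 / iChannelResolution[0].xy;
    vec4 centerPixel = texture(iChannel0, uv);
    vec4 rightPixel  = texture(iChannel0, uv + vec2(texelStep.x, 0.0));
    vec4 bottomPixel = texture(iChannel0, uv + vec2(0.0, texelStep.y));
    // manual per-pixel differences, displayed as red/green contours
    fragColor = vec4(length(rightPixel - centerPixel),
                     length(bottomPixel - centerPixel),
                     0.0, 1.0);
}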
I am currently using this fragment shader in WebGL to apply highlights/shadows adjustments to photo textures.
The shader itself was pulled directly from the excellent GPUImage library for iOS.
uniform sampler2D inputImageTexture;
varying highp vec2 textureCoordinate;
uniform lowp float shadows;
uniform lowp float highlights;
const mediump vec3 luminanceWeighting = vec3(0.3, 0.3, 0.3);
void main()
{
    lowp vec4 source = texture2D(inputImageTexture, textureCoordinate);
    mediump float luminance = dot(source.rgb, luminanceWeighting);
    mediump float shadow = clamp((pow(luminance, 1.0/(shadows+1.0)) + (-0.76)*pow(luminance, 2.0/(shadows+1.0))) - luminance, 0.0, 1.0);
    mediump float highlight = clamp((1.0 - (pow(1.0-luminance, 1.0/(2.0-highlights)) + (-0.8)*pow(1.0-luminance, 2.0/(2.0-highlights)))) - luminance, -1.0, 0.0);
    lowp vec3 result = vec3(0.0, 0.0, 0.0) + ((luminance + shadow + highlight) - 0.0) * ((source.rgb - vec3(0.0, 0.0, 0.0))/(luminance - 0.0));
    gl_FragColor = vec4(result.rgb, source.a);
}
This shader, as it stands, will only reduce highlights, on a scale of 0.0 - 1.0. However, I would like it to also brighten the highlights on a scale of 1.0 - 2.0.
The aim is a complete filter that reduces the image's highlights when the highlights uniform is less than 1.0 and increases the intensity of the highlights when it is above 1.0. The same goes for darkening with the shadows uniform.
Highlights:
0.0 (duller) ---- 1.0 (default, original pixel values) ---- 2.0 (brighter)
I have tried simply changing the clamp on the highlights variable to (0.0, 2.0), and although this does indeed increase the brightness of the highlights when the uniform is above 1.0, it also seriously messes up the colors.
My understanding of image processing and constructing fragment shaders is extremely weak at best, as you may be able to tell.
I'm just hoping someone can point me in the right direction.
EDIT:
Here are some example screenshots:-
The current filter with highlights set to 1.00 (basically the source image)
The current filter with highlights set to 0.00; as you can see, the highlights get flattened/removed.
And finally, here is what happens when I change the clamp in the fragment shader to allow values above 1.00 and set the highlights value to 2.00.
I simply wish to be able to boost the highlights, making them brighter/more defined, i.e. the opposite of setting the value to 0.00.
I don't really understand the shadow and highlight equations, but I can see that they are set up to never enhance shadows and highlights, but rather to wash them out. So we need a secondary step for enhancement.
For the highlights, I think to handle brighter colors, you need to blend towards white instead of adding something, so you don't get hue-shifts. I used a basic contrast equation to pick out the highlights, and then cubed it to clip out the midtones and shadows. The whiteTarget is just pulling out the top half of the 0.0-2.0 range to use as a multiplier to determine the strength of the brightening effect.
For the shadows, we are changing our range from 0.0-1.0 (where 0 is unchanged and 1 is washed out) to 0.0-2.0 (where 1 is unchanged and 2 is washed out). Therefore, the +1.0's in the shadow equation should be removed. Then for the 0.0-1.0 range, I just copied what I did for the highlights, except blending toward black. Maybe that can be optimized to avoid a mix function (not sure).
So here is my unoptimized version of the shader, set up so both shadows and highlights are on 0.0-2.0 scales, with 1.0 being nominal. You might want to play around with the lines where I cube the luminance, and also with the value I used for contrast (currently 1.5), but it seems pretty good to me the way it is now; I adjusted it to try to avoid any ugly overlap between the shadow and highlight ranges when the input parameters are at the two extremes.
uniform sampler2D inputImageTexture;
varying highp vec2 textureCoordinate;
uniform lowp float shadows;
uniform lowp float highlights;
const mediump vec3 luminanceWeighting = vec3(0.3, 0.3, 0.3);
void main()
{
    lowp vec4 source = texture2D(inputImageTexture, textureCoordinate);
    mediump float luminance = dot(source.rgb, luminanceWeighting);
    //(shadows+1.0) changed to just shadows:
    mediump float shadow = clamp((pow(luminance, 1.0/shadows) + (-0.76)*pow(luminance, 2.0/shadows)) - luminance, 0.0, 1.0);
    mediump float highlight = clamp((1.0 - (pow(1.0-luminance, 1.0/(2.0-highlights)) + (-0.8)*pow(1.0-luminance, 2.0/(2.0-highlights)))) - luminance, -1.0, 0.0);
    lowp vec3 result = vec3(0.0, 0.0, 0.0) + ((luminance + shadow + highlight) - 0.0) * ((source.rgb - vec3(0.0, 0.0, 0.0))/(luminance - 0.0));
    // blend toward white if highlights is more than 1
    mediump float contrastedLuminance = ((luminance - 0.5) * 1.5) + 0.5;
    mediump float whiteInterp = contrastedLuminance*contrastedLuminance*contrastedLuminance;
    mediump float whiteTarget = clamp(highlights, 1.0, 2.0) - 1.0;
    result = mix(result, vec3(1.0), whiteInterp*whiteTarget);
    // blend toward black if shadows is less than 1
    mediump float invContrastedLuminance = 1.0 - contrastedLuminance;
    mediump float blackInterp = invContrastedLuminance*invContrastedLuminance*invContrastedLuminance;
    mediump float blackTarget = 1.0 - clamp(shadows, 0.0, 1.0);
    result = mix(result, vec3(0.0), blackInterp*blackTarget);
    gl_FragColor = vec4(result, source.a);
}
By the way, any idea why the original result line keeps adding 0's to everything? Seems like it could be simplified to
vec3 result = (luminance + shadow + highlight) * source.rgb / luminance;
But maybe it's a trick to cast to lowp within the calculation instead of after the calculation. Just a guess.
I'm trying to create a fragment shader to recolor a 2D grayscale sprite but leave white and near-white fragments intact (i.e. don't recolor pure white fragments, and only slightly recolor near-white fragments). I'm not sure how to do this without using a conditional branch, which results in poor performance on certain hardware.
The existing shader in the game engine just performs a simple multiplication:
#ifdef GL_ES
precision lowp float;
#endif
varying vec4 v_fragmentColor;
varying vec2 v_texCoord;
uniform sampler2D CC_Texture0;
void main()
{
    vec4 texColor = texture2D(CC_Texture0, v_texCoord);
    gl_FragColor = texColor * v_fragmentColor;
}
I think that, in order to avoid the conditional, I need some sort of continuous mathematical function that recolors fragments with RGB values greater than, say, (0.9, 0.9, 0.9) less strongly than fragments whose values are below (0.9, 0.9, 0.9).
Any help would be great!
I would do something like this: calculate the fully recolored pixel, then mix it with the original based on a function of the luminance. Here's an idea:
vec4 texColor = texture2D(CC_Texture0, v_texCoord);
const vec4 kLumWeights = vec4(.2126, .7152, .0722, 0.0); // Rec. 709 luminance weights
float luminance = dot (texColor, kLumWeights);
vec4 recolored = texColor * v_fragmentColor;
const float kThreshold = 0.8;
float mixAmount = (luminance - kThreshold) / (1.0 - kThreshold); // Everything below kThreshold becomes 0, and from kThreshold to 1.0 becomes 0 to 1.0
mixAmount = clamp (mixAmount, 0.0, 1.0);
gl_FragColor = mix (recolored, texColor, mixAmount);
Let me know if that works.
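If you want a softer roll-off around the threshold, the linear ramp plus clamp could also be written as a single smoothstep:
float mixAmount = smoothstep (kThreshold, 1.0, luminance); // 0.0 at or below kThreshold, easing up to 1.0 at pure white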