OpenGL/GLSL alpha masking - iOS

I'm implementing a paint app using OpenGL/GLSL.
There is a feature where the user draws a "mask" with a brush that uses a pattern image, while the background changes according to the brush position. Take a look at the video to understand: video
I used CALayer's mask (iOS stuff) to achieve this effect (the one on the video), but that implementation is very costly and the fps is pretty low. So I decided to use OpenGL instead.
For the OpenGL implementation, I use the stencil buffer for masking, i.e.:
glEnable(GL_STENCIL_TEST);
// Always pass the test; write 1 into the stencil buffer wherever the mask is drawn
glStencilFunc(GL_ALWAYS, 1, 0);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
// Draw mask (brush pattern)
// Then only pass fragments where the stencil value equals 1
glStencilFunc(GL_EQUAL, 1, 255);
// Draw gradient background
// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
The problem: the stencil buffer doesn't work with alpha, which is why I can't use semi-transparent patterns for the brushes.
The question: how can I achieve the effect from the video using OpenGL/GLSL, but without the stencil buffer?

Since your background is already generated (from the comments), you can simply use two textures in the shader to draw each of the segments. You will need to redraw all of them until the user lifts their finger, though.
So assume you have a texture footprintTextureID with a white footprint on it (with an alpha channel) and a background texture backgroundTextureID. You need to bind both textures to separate texture units with glActiveTexture and pass the two samplers as uniforms to the shader.
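A minimal sketch of the binding, assuming the sampler uniform locations footprintTextureUniform and backgroundTextureUniform were fetched earlier with glGetUniformLocation:
// Footprint on texture unit 0, background on unit 1
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, footprintTextureID);
glUniform1i(footprintTextureUniform, 0);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, backgroundTextureID);
glUniform1i(backgroundTextureUniform, 1);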
Now in your vertex shader you will need to generate the relative texture coordinates from the position. There should be a line similar to gl_Position = computedPosition; so you need to add another varying value:
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (computedPosition.y+1.0)*0.5);
or, if you need to flip vertically:
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (-computedPosition.y+1.0)*0.5);
(The reason for this equation is that the output vertices are in the interval [-1, 1] while textures use [0, 1]: [-1, 1] + 1 = [0, 2], then [0, 2] * 0.5 = [0, 1].)
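Putting it together, a minimal vertex shader sketch under those assumptions (the attribute names position and inputFootprintCoordinate, and the identity transform, are illustrative; substitute your own):
attribute vec4 position;
attribute vec2 inputFootprintCoordinate;
varying vec2 footprintTextureCoordinate;
varying vec2 backgroundTextureCoordinates;
void main() {
    vec4 computedPosition = position; // apply your own transforms here
    footprintTextureCoordinate = inputFootprintCoordinate;
    // map clip space [-1, 1] to texture space [0, 1]
    backgroundTextureCoordinates = vec2((computedPosition.x + 1.0) * 0.5,
                                        (computedPosition.y + 1.0) * 0.5);
    gl_Position = computedPosition;
}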
OK, so assuming you bound all of these correctly, you now only need to multiply the colors in the fragment shader to get the blended color:
uniform sampler2D footprintTexture;
varying lowp vec2 footprintTextureCoordinate;
uniform sampler2D backgroundTexture;
varying lowp vec2 backgroundTextureCoordinates;
void main() {
    lowp vec4 footprintColor = texture2D(footprintTexture, footprintTextureCoordinate);
    lowp vec4 backgroundColor = texture2D(backgroundTexture, backgroundTextureCoordinates);
    gl_FragColor = footprintColor*backgroundColor;
}
If you wanted, you could multiply with just the alpha value from the footprint, but that only loses flexibility. As long as the footprint texture is white it makes no difference, so it is your choice.

Stencil is a boolean on/off test, so, as you say, it can't cope with alpha.
The only GL technique that works with alpha is blending, but due to the color change between frames you can't simply flatten this into a single layer in a single pass.
To my mind it sounds like you need to maintain multiple independent layers in off-screen buffers, and then blend them together per frame to form what is shown on screen. This gives you complete independence for how you update each layer per frame.
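For example, a rough sketch of one such off-screen layer on ES 2.0 (width, height, and defaultFramebuffer are assumed to come from your existing setup):
// Create a texture-backed FBO to accumulate the brush mask
GLuint maskTexture, maskFBO;
glGenTextures(1, &maskTexture);
glBindTexture(GL_TEXTURE_2D, maskTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glGenFramebuffers(1, &maskFBO);
glBindFramebuffer(GL_FRAMEBUFFER, maskFBO);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                       GL_TEXTURE_2D, maskTexture, 0);
// ...accumulate brush strokes into maskFBO as the user draws...

// Per frame: draw the gradient background to the screen, then blend the
// mask layer over it (assuming premultiplied alpha)
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
// draw the background quad here
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// draw a fullscreen quad sampling maskTexture here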

Related

How do I draw a polygon in an info-beamer node.lua program?

I have started experimenting with the info-beamer software for the Raspberry Pi. It appears to have support for displaying PNGs, text, and video, but when I see GLSL primitives, my first instinct is to draw a texture-mapped polygon.
Unfortunately, I can't find the documentation that would allow me to draw so much as a single triangle using the shaders. I have made a few toys using GLSL, so I'm familiar with the pipeline of setting transform matrices and drawing triangles that are filtered by the vertex and fragment shaders.
I have grepped around in info-beamer-nodes on GitHub for examples of GL drawing, but the relevant examples have so far escaped my notice.
How do I use info-beamer's GLSL shaders on arbitrary UV mapped polygons?
Based on the comment by the author of info-beamer, it is clear that functions to draw arbitrary triangles are not available in info-beamer 0.9.1.
The specific effect I was going to attempt was a rectangle that faded to transparent at the margins. Fortunately the 30c3-room/ example in the info-beamer-nodes sources illustrates a technique where we draw an image as a rectangle that is filtered by the GL fragment shader. The 1x1 white PNG is a perfectly reasonable template whose color can be replaced by the calculations of the shader in my application.
While arbitrary triangles are not available, UV-mapped rectangles (and rotated rectangles) are supported and are suitable for many use cases.
I used the following shader:
uniform sampler2D Texture;
varying vec2 TexCoord;
uniform float margin_h;
uniform float margin_v;
void main()
{
    // Normalized distance to the nearer vertical/horizontal edge, scaled so
    // alpha reaches 1.0 at margin_h/margin_v from the edge (the output alpha
    // is clamped to [0, 1], so values above 1.0 are harmless)
    float q = min((1.0-TexCoord.s)/margin_h, TexCoord.s/margin_h);
    float r = min((1.0-TexCoord.t)/margin_v, TexCoord.t/margin_v);
    float p = min(q,r);
    gl_FragColor = vec4(0,0,0,p);
}
and this Lua in my node.render():
y = phase * 30 + center.y
shader:use {
    margin_h = 0.03;
    margin_v = 0.2;
}
white:draw(x-20, y-20, x+700, y+70)
shader:deactivate()
font:write(x, y, "bacon "..(phase), 50, 1,1,0,1)

Why is a texture coordinate of 1.0 getting beyond the edge of the texture?

I'm doing a color lookup using a texture to apply an effect to a picture. My lookup is a gradient map: I take the luminance of a fragment from the first texture and look it up on a second texture. The 2nd texture is 256x256, with gradients running horizontally and several different gradients from top to bottom: 32 horizontal stripes, each 8 pixels tall. My lookup on x is the luminance; on y it selects a gradient, and I target the center of a stripe to avoid crossover.
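For concreteness, targeting the center of a stripe works out to something like this sketch (gradientIndex, the 0-based stripe selector, is an illustrative name):
// 256 px tall LUT, 32 stripes of 8 px each:
// normalized y at the center of stripe gradientIndex
float stripeCenterY = (gradientIndex * 8.0 + 4.0) / 256.0;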
My fragment shader looks like this:
lowp vec4 source = texture2D(u_textureSampler, v_fragmentTexCoord0);
// W is the luminance weight vector (a const vec3 defined elsewhere in the shader)
float luminance = 1.0 - dot(source.rgb, W);
lowp vec2 texPos;
texPos.x = clamp(luminance, 0.0, 1.0);
// the y value selects which gradient to use by supplying a T value
// this would be more efficient in the vertex shader
texPos.y = clamp(u_value4, 0.0, 1.0);
lowp vec4 newColor1 = texture2D(u_textureSampler2, texPos);
It works well, but I was getting distortion in the whitest parts of the whites and the blackest parts of the blacks. Basically it looked like it grabbed that newColor from a completely different place on texture 2, or possibly was getting nothing for those fragments. I added the clamps in the shader to try to keep it from sampling outside the edge of the lookup texture, but that didn't help. Am I not using clamp correctly?
Finally I considered that it might have something to do with my source texture or the way it's loaded. I ended up fixing it by adding:
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
So.. WHY?
It's a little annoying to have to clamp the textures, because it means I have to write an exception in my code when I'm loading lookup tables.
If my texPos.x and .y are clamped to 0-1, how is it pulling a sample beyond the edge?
Also, do I have to make the above clamp calls when creating the texture, or can I make them when I'm about to use the texture?
This is the correct behavior of the texture sampler.
Let me explain. When you use textures with GL_LINEAR sampling, the GPU returns a weighted average of the texel and its neighbors (that's why you don't see pixelation as with GL_NEAREST; pixels are blurred instead). A coordinate of exactly 1.0 falls half a texel beyond the last texel center, so even a clamped coordinate forces the sampler to pull in a neighboring texel.
With GL_REPEAT wrap mode, texture coordinates wrap around from 1 back to 0, so at the extreme coordinates the sampler blends with texels from the opposite side of the texture. GL_CLAMP_TO_EDGE prevents this wrapping, so edge texels are no longer blended with texels from the opposite side.
As for when to set it: glTexParameteri modifies the state of the currently bound texture object, so you can call it at creation time or at any later point while the texture is bound, before you sample from it.
Hope my explanation is clear.
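For reference, a sketch of setting that state at texture-creation time (lutTexture is an illustrative handle); since these are per-texture-object settings, the same calls work equally well later, whenever the texture is bound:
glBindTexture(GL_TEXTURE_2D, lutTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);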

Blending issue porting from OpenGLES 1.0 to 2.0 (iOS)

I'm porting a very simple piece of code from OpenGLES 1.0 to OpenGLES 2.0.
In the original version, I have blending enabled with
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I'm using the same code in my ES 2.0 implementation as I need to blend the newly rendered quads with what was in the render buffer (I'm retaining the render buffer, I can't re-render the scene).
I'm using a texture that serves as an alpha mask: it contains only white pixels, with the alpha forming a radial gradient that goes from 1 at the center to 0 at the outside. I give my vertices the same color, say red with an alpha of 100/255. My background is transparent black. Below that, I have a plain white surface (a UIView). I render 4 quads.
Result with OpenGLES 1.0 (desired result)
My observations tell me that the fragment shader should simply be:
gl_FragColor = DestinationColor * texture2D(Texture, TexCoordOut);
(I got to that conclusion by trying different values for my vertices and the texture. That's also what I've read in some tutorials.)
I'm trying to write OpenGL ES 2.0 code (including vertex and fragment shaders) that gives me exactly the same result as OpenGL ES 1.0, nothing more, nothing less. I don't need/want to do any kind of blending in the fragment shader except applying the vertex color to the texture. Using the simple shader above, here's the result I got:
I tried pretty much every combination of *, +, and mix I could think of, but I couldn't reproduce the same result. This is the closest I've got so far, but it's definitely not right (and it doesn't make much sense either):
varying lowp vec4 DestinationColor;
varying lowp vec2 TexCoordOut;
uniform sampler2D Texture;
void main(void) {
    lowp vec4 texture0Color = texture2D(Texture, TexCoordOut);
    gl_FragColor.rgb = mix(texture0Color.rgb, DestinationColor.rgb, texture0Color.a);
    gl_FragColor.a = texture0Color.a * DestinationColor.a;
}
This shader gives me the following:
By reading this and this, one can construct the blending function.
Since you're using glBlendFunc and not glBlendFuncSeparate, all 4 channels are blended the same way. With the GL_FUNC_ADD equation the output O is O = s*S + d*D, where s and d are the blend factors and S and D are the source and destination colors.
The s and d factors are set by glBlendFunc. GL_ONE sets s to (1, 1, 1, 1), and GL_ONE_MINUS_SRC_ALPHA sets d to (1-As, 1-As, 1-As, 1-As), where As is the alpha value of the source. Therefore, your blend is doing this (in vector form):
O = S + (1-As) * D, which in GLSL is O = mix(D, S, As), or:
gl_FragColor = mix(destinationColor, sourceColor, sourceColor.a); (where destinationColor stands for what is already in the framebuffer).
If the result doesn't look similar, verify that you're not enabling GL_BLEND elsewhere or using any other OpenGL state that may change the appearance of the final result. If that doesn't help, please post screenshots of the different outputs.
This can't be done easily with shaders, since blending has to read the current framebuffer state.
You can achieve it by rendering to a texture and passing that to the shader; if you can obtain the color that would be in the framebuffer, you are OK.
Your equation is:
gl_FragColor = texture0Color + (1.0 - texture0Color.a) * wouldBeFramebufferColor;
But why do you want to do it in shaders? AFAIK, blending was not removed in OpenGL ES 2.0.
Stupid mistake. I only needed to normalize the vertex color in the vertex shader as I'm passing unsigned bytes in.
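As an alternative to normalizing in the shader, glVertexAttribPointer can do the conversion for you; a sketch (the Vertex struct and colorAttrib location are illustrative):
// Passing GL_TRUE for 'normalized' makes GL map ubyte 0..255 to 0.0..1.0,
// so no division by 255.0 is needed in the shader
glVertexAttribPointer(colorAttrib, 4, GL_UNSIGNED_BYTE, GL_TRUE,
                      sizeof(Vertex), (const GLvoid *)offsetof(Vertex, color));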

Does the input texture to a fragment shader change as the shader runs?

I'm trying to implement the Atkinson dithering algorithm in a fragment shader in GLSL using our own Brad Larson's GPUImage framework. (This might be one of those things that is impossible, but I don't know enough to determine that yet, so I'm just going ahead and doing it anyway.)
The Atkinson algorithm dithers grayscale images into pure black and white, as seen on the original Macintosh. Basically, I need to investigate a few pixels around my pixel, determine how far away from pure black or white each is, and use that to calculate a cumulative "error"; that error value plus the original value of the given pixel determines whether it should be black or white. The problem is that, as far as I can tell, the error value is (almost?) always zero or imperceptibly close to it. What I'm thinking might be happening is that the texture I'm sampling is the same one I'm writing to, so the error ends up being zero (or close to it) because most/all of the pixels I'm sampling are already black or white.
Is this correct, or are the textures that I'm sampling from and writing to distinct? If the former, is there a way to avoid that? If the latter, then might you be able to spot anything else wrong with this code? 'Cuz I'm stumped, and perhaps don't know how to debug it properly.
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec3 dimensions;
void main()
{
    highp vec2 relevantPixels[6];
    relevantPixels[0] = vec2(textureCoordinate.x, textureCoordinate.y - 2.0);
    relevantPixels[1] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y - 1.0);
    relevantPixels[2] = vec2(textureCoordinate.x, textureCoordinate.y - 1.0);
    relevantPixels[3] = vec2(textureCoordinate.x + 1.0, textureCoordinate.y - 1.0);
    relevantPixels[4] = vec2(textureCoordinate.x - 2.0, textureCoordinate.y);
    relevantPixels[5] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y);
    highp float err = 0.0;
    for (mediump int i = 0; i < 6; i++) {
        highp vec2 relevantPixel = relevantPixels[i];
        // #todo Make sure we're not sampling a pixel out of scope. For now this
        // doesn't seem to be a failure (?!).
        lowp vec4 pointColor = texture2D(inputImageTexture, relevantPixel);
        err += ((pointColor.r - step(.5, pointColor.r)) / 8.0);
    }
    lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
    lowp float hue = step(.5, textureColor.r + err);
    gl_FragColor = vec4(hue, hue, hue, 1.0);
}
There are a few problems here, but the largest one is that Atkinson dithering can't be performed in an efficient manner within a fragment shader. This kind of dithering is an inherently sequential process, dependent on the results for the fragments above and before it. A fragment shader can only write to its own fragment in OpenGL ES, not to neighboring ones as is required in that Python implementation you point to.
For potential shader-friendly dither implementations, see the question "Floyd–Steinberg dithering alternatives for pixel shader."
You also normally can't write to and read from the same texture, although Apple did add some extensions in iOS 6.0 that let you write to a framebuffer and read from that written value in the same render pass.
As to why you're seeing odd error results, the coordinate system within a GPUImage filter is normalized to the range 0.0 - 1.0. When you try to offset a texture coordinate by adding 1.0, you're reading past the end of the texture (which is then clamped to the value at the edge by default). This is why you see me using texelWidth and texelHeight values as uniforms in other filters that require sampling from neighboring pixels. These are calculated as a fraction of the overall image width and height.
I'd also not recommend doing texture coordinate calculation within the fragment shader, as that will lead to a dependent texture read and really slow down the rendering. Move that up to the vertex shader, if you can.
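A sketch of that vertex-shader approach, following the texelWidth/texelHeight convention mentioned above (the attribute and varying names are illustrative, and only two of the six neighbors are shown):
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform float texelWidth;   // 1.0 / image width in pixels
uniform float texelHeight;  // 1.0 / image height in pixels
varying vec2 textureCoordinate;
varying vec2 leftTextureCoordinate;
varying vec2 topTextureCoordinate;
void main() {
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
    leftTextureCoordinate = textureCoordinate - vec2(texelWidth, 0.0);
    topTextureCoordinate = textureCoordinate - vec2(0.0, texelHeight);
}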
Finally, to answer your title question: usually you can't modify a texture as it is being read, but the iOS texture cache mechanism sometimes allows you to overwrite texture values while a shader is working its way through a scene. This usually leads to bad tearing artifacts.
#GarrettAlbright For the 1-Bit Camera app I ended up simply iterating over the image data using raw memory pointers and (rather) tightly optimized C code. I looked into NEON intrinsics and the Accelerate framework, but any parallelism really screws up an algorithm of this nature, so I didn't use them.
I also toyed with the idea of doing a decent enough approximation of the error distribution on the GPU first, and then doing the thresholding in another pass, but I never got anything except a rather ugly noise dither from those experiments. There are some papers around covering other ways of approaching diffusion dithering on the GPU.

Smooth color blending in OpenGL

I'm trying to achieve the following blending when the texture at one vertex merges with another:
Here's what I currently have:
I've enabled blending and am specifying the blending function as:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I can see that the stroke drawn in the paper app is made up of small circles that merge with the same texture before and after them, with some blending effect on the color and the alpha.
How do I achieve the desired effect?
UPDATE:
What I think is happening is that in the intersected region of the two textures, the alpha channel gets modified (additively or by some other custom function) while the texture itself is not drawn again in the intersection; the rest of the region gets the rest of the texture drawn. Like so:
I'm not entirely sure of how to achieve this result, though.
You shouldn't need blending for this (and it won't work the way you want).
I think as long as you define your texture coordinate in the screen space, it should be seamless between two separate circles.
To do this, instead of using a texture coordinate passed through the vertex shader, just use the position of the fragment to sample the texture, plus or minus some scaling:
vec2 texcoord = gl_FragCoord.xy / vec2(xresolution_in_pixels, yresolution_in_pixels);
gl_FragColor = texture2D(papertexture, texcoord);
If you don't have access to GLSL, you could do something instead with the stencil buffer. Just draw all your circles into the stencil buffer, use the combined region as a stencil mask, and then draw a fullscreen quad of your texture. The color will be seamlessly deposited at the union of all the circles.
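A hedged sketch of that stencil fallback (drawCircles and drawFullscreenQuad stand in for your own draw calls):
// Pass 1: write the union of the circles into the stencil buffer only
glEnable(GL_STENCIL_TEST);
glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
glStencilFunc(GL_ALWAYS, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
drawCircles();
// Pass 2: draw the textured fullscreen quad only where the stencil is set
glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
glStencilFunc(GL_EQUAL, 1, 0xFF);
glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
drawFullscreenQuad();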
You can achieve this effect with max blending for alpha, or manually (with blending off) in a shader using framebuffer fetch (OpenGL ES 2.0):
#extension GL_EXT_shader_framebuffer_fetch : require
precision highp float;
uniform sampler2D texture;
uniform vec4 color;
varying vec2 texCoords;
void main() {
    // Read the alpha already in the framebuffer (framebuffer fetch)
    float res_a = gl_LastFragData[0].a;
    float a = texture2D(texture, texCoords).a;
    // Keep the larger of the two alphas, then write premultiplied color
    res_a = max(a, res_a);
    gl_FragColor = vec4(color.rgb * res_a, res_a);
}
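For the max-blending route, a sketch of the blend state (GL_MAX is not in core ES 2.0; it comes from the EXT_blend_minmax extension, exposed as GL_MAX_EXT):
glEnable(GL_BLEND);
// RGB: the usual premultiplied-alpha "over"; alpha: keep the maximum.
// Blend factors are ignored for a channel that uses a min/max equation.
glBlendEquationSeparate(GL_FUNC_ADD, GL_MAX_EXT);
glBlendFuncSeparate(GL_ONE, GL_ONE_MINUS_SRC_ALPHA, GL_ONE, GL_ONE);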
Result:
