WebGL fragment shader has a discrete jump for low alpha values (0.045 to 0.044)

I am testing out some stuff in WebGL and I have the following simple fragment shader:
precision mediump float;
void main() {
gl_FragColor=vec4(1.,0.,0.,.044);
}
When I set the alpha value to 0.045, the triangles appear as expected (red with a low level of opacity), but when I decrease it to 0.044 they disappear. Ultimately I am trying to get the triangles to fade out gradually, but there is a discrete jump around these values.
I believe this is related to some precision error (e.g. https://developer.apple.com/forums/thread/125263), but I am not sure how to fix it.
Just for reference, I am using the following:
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
Here is an example in JSFiddle where you can change the number in the fragment shader at the top of the JavaScript: https://jsfiddle.net/pasr2uqz/
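One way to narrow down whether the mediump literal is the culprit (a diagnostic sketch only, not a confirmed fix) is to request highp where the device supports it and drive the alpha from a uniform, so it can be swept smoothly from JavaScript without recompiling the shader; uAlpha here is an assumed name:
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
uniform float uAlpha; // set per frame with gl.uniform1f, e.g. fading from 0.1 down to 0.0
void main() {
    gl_FragColor = vec4(1., 0., 0., uAlpha);
}
If the jump disappears with highp, the precision explanation from the thread above looks likely; if it persists, the quantization is probably happening elsewhere.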

Related

OpenGL/GLSL alpha masking

I'm implementing a paint app by using OpenGL/GLSL.
There is a feature where the user draws a "mask" using a brush with a pattern image, and meanwhile the background changes according to the brush position. Take a look at the video to understand: video
I used CALayer's mask (iOS stuff) to achieve this effect (in the video), but that implementation is very costly and the fps is pretty low. So I decided to use OpenGL instead.
For the OpenGL implementation, I use the stencil buffer for masking, i.e.:
glEnable(GL_STENCIL_TEST);
glStencilFunc(GL_ALWAYS, 1, 0);
glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);
// Draw mask (brush pattern)
glStencilFunc(GL_EQUAL, 1, 255);
// Draw gradient background
// Display the buffer
glBindRenderbuffer(GL_RENDERBUFFER, viewRenderbuffer);
[context presentRenderbuffer:GL_RENDERBUFFER];
The problem: the stencil buffer doesn't work with alpha, which is why I can't use semi-transparent patterns for the brushes.
The question: how can I achieve the effect from the video using OpenGL/GLSL but without the stencil buffer?
Since your background is already generated (from the comments), you can simply use two textures in the shader to draw each of the segments. You will need to redraw all of them until the user lifts their finger, though.
So assume you have a texture with a white footprint and an alpha channel, footprintTextureID, and a background texture, backgroundTextureID. You need to bind both textures to two texture units with glActiveTexture (e.g. units 1 and 2) and pass the two samplers as uniforms to the shader.
Now in your vertex shader you will need to generate the relative texture coordinates from the position. There should be a line similar to gl_Position = computedPosition; so you need to add another varying value:
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (computedPosition.y+1.0)*0.5);
or if you need to flip vertically
backgroundTextureCoordinates = vec2((computedPosition.x+1.0)*0.5, (-computedPosition.y+1.0)*0.5);
(The reason for this equation is that the output vertices are in the interval [-1,1] but texture coordinates use [0,1]: [-1,1]+1 = [0,2], then [0,2]*0.5 = [0,1].)
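Putting it together, a minimal vertex shader sketch might look like the following; the attribute and matrix names, and the footprint coordinate attribute, are assumptions for illustration (your shader already computes computedPosition somehow), and it assumes computedPosition.w is 1.0, i.e. 2D/orthographic drawing:
attribute vec4 position;                 // assumed attribute name
attribute vec2 footprintTexCoord;        // assumed per-vertex footprint coordinate
uniform mat4 modelViewProjectionMatrix;  // assumed uniform
varying vec2 footprintTextureCoordinate;
varying vec2 backgroundTextureCoordinates;
void main() {
    vec4 computedPosition = modelViewProjectionMatrix * position;
    gl_Position = computedPosition;
    footprintTextureCoordinate = footprintTexCoord;
    // Map clip space [-1,1] to texture space [0,1]:
    backgroundTextureCoordinates = vec2((computedPosition.x + 1.0) * 0.5,
                                        (computedPosition.y + 1.0) * 0.5);
}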
OK, so assuming you have bound all of these correctly, you now only need to multiply the colors in the fragment shader to get the blended color:
uniform sampler2D footprintTexture;
varying lowp vec2 footprintTextureCoordinate;
uniform sampler2D backgroundTexture;
varying lowp vec2 backgroundTextureCoordinates;
void main() {
lowp vec4 footprintColor = texture2D(footprintTexture, footprintTextureCoordinate);
lowp vec4 backgroundColor = texture2D(backgroundTexture, backgroundTextureCoordinates);
gl_FragColor = footprintColor*backgroundColor;
}
If you wanted, you could multiply by the alpha value from the footprint instead, but that only loses flexibility. As long as the footprint texture is white it makes no difference, so it is your choice.
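For completeness, a sketch of that alpha-only variant (same uniforms and varyings as the shader above); it masks the background purely by the footprint's alpha and ignores the footprint's RGB:
uniform sampler2D footprintTexture;
varying lowp vec2 footprintTextureCoordinate;
uniform sampler2D backgroundTexture;
varying lowp vec2 backgroundTextureCoordinates;
void main() {
    lowp float mask = texture2D(footprintTexture, footprintTextureCoordinate).a;
    gl_FragColor = texture2D(backgroundTexture, backgroundTextureCoordinates) * mask;
}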
The stencil test is a boolean on/off test, so as you say it can't cope with alpha.
The only GL technique that works with alpha is blending, but because the color changes between frames you can't simply flatten this into a single layer in a single pass.
To my mind it sounds like you need to maintain multiple independent layers in off-screen buffers, and then blend them together each frame to form what is shown on screen. This gives you complete independence in how you update each layer per frame.
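As a rough sketch of that per-frame composite, assuming each layer has already been rendered into its own offscreen texture and the background layer is opaque (the sampler and varying names here are invented for illustration):
precision mediump float;
uniform sampler2D backgroundLayer; // assumed: offscreen texture with the gradient background
uniform sampler2D strokeLayer;     // assumed: offscreen texture with the accumulated strokes (RGBA)
varying vec2 vTexCoord;
void main() {
    vec4 base = texture2D(backgroundLayer, vTexCoord);
    vec4 top  = texture2D(strokeLayer, vTexCoord);
    // Standard "over" composite of the stroke layer onto an opaque background.
    gl_FragColor = vec4(mix(base.rgb, top.rgb, top.a), 1.0);
}
More layers just mean more samplers, or repeated passes with blending enabled.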

How to draw a star in iOS OpenGL ES 2.0

This question has been asked before, but quite a few years ago from what my searches turn up. The answer was always to use texture mapping, but what I really want to do is represent the star as a single vertex. You may think I'm copping out with a simplistic method, but in fact a single point source of light actually looks pretty good and realistic. However, I want to process that point of light with something like a Gaussian blur to give it a little more body when zooming in, or for brighter stars. I was going to texture map a Gaussian blur image, but if I understand things correctly I would then have to draw each star with four vertices. Maybe not so difficult, but I don't want to go there if I can just process a single vertex. Would a vertex shader do this? Can GLKBaseEffect get me there? Any suggestions?
Thanks.
You can use point sprites.
Draw Calls
You use a texture containing the image of the star, and use the typical setup to bind a texture, bind it to a sampler uniform in the shader, etc.
You draw a single vertex for each star, with GL_POINTS as the primitive type passed as the first argument to glDrawArrays()/glDrawElements(). No texture coordinates are needed.
Vertex Shader
In the vertex shader, you transform the vertex as you normally would, and also set the built-in gl_PointSize variable:
uniform float PointSize;
attribute vec4 Position;
void main() {
gl_Position = ...; // Transform Position attribute;
gl_PointSize = PointSize;
}
For the example, I used a uniform for the point size, which means that all stars will have the same size. Depending on the desired effect, you could also calculate the size based on the distance, or use an additional vertex attribute to specify a different size for each star.
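For example, a sketch of the per-star variant using an extra attribute for the size (the attribute and uniform names here are assumptions):
attribute vec4 Position;
attribute float StarSize;                // assumed per-vertex size attribute
uniform mat4 ModelViewProjectionMatrix;  // assumed uniform supplied by the app
void main() {
    gl_Position = ModelViewProjectionMatrix * Position;
    // Could also attenuate with distance, e.g. gl_PointSize = StarSize / gl_Position.w;
    gl_PointSize = StarSize;
}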
Fragment Shader
In the fragment shader, you can now access the built-in gl_PointCoord variable to get the relative coordinates of the fragment within the point sprite. If your point sprite is a simple texture image, you can use it directly as the texture coordinates.
uniform sampler2D SpriteTex;
void main() {
gl_FragColor = texture2D(SpriteTex, gl_PointCoord);
}
Additional Material
I answered a somewhat similar question here: Render large circular points in modern OpenGL. Since it was for desktop OpenGL, and not for a textured sprite, this seemed worth a separate answer. But some of the steps are shared, and might be explained in more detail in the other answer.
I've been busy educating myself on this and trying it out, but I'm getting strange results. It seems to work with regard to the vertex transform, because I see the points moved out on the screen, but point size and colour are not being affected. The colour seems to be some sort of default yellow with some shading between vertices.
What bothers me too is that I get error messages about built-ins in the vertex shader. Here are the vertex/fragment shaders and the error messages:
// Vertex shader
precision mediump float;
precision lowp int;
attribute float Pointsize;
varying vec4 color_out;
void main()
{
gl_PointSize = Pointsize;
gl_Position = gl_ModelViewMatrix * gl_Vertex;
color_out = vec4(0.0, 1.0, 0.0, 1.0); // output only green for test
}
// Fragment shader
precision mediump float;
precision lowp int;
varying vec4 color_out;
void main()
{
gl_FragColor = color_out;
}
Here are the error messages:
ERROR: 0:24: Use of undeclared identifier 'gl_ModelViewMatrix'
ERROR: 0:24: Use of undeclared identifier 'gl_Vertex'
ERROR: One or more attached shaders not successfully compiled
It seems the transform is being passed from my iOS code, where I'm using GLKBaseEffect, as in the following lines:
self.effect.transform.modelviewMatrix = modelViewMatrix;
[self.effect prepareToDraw];
But I'm not sure exactly what's happening, especially with the shader compile errors.
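For what it's worth, gl_ModelViewMatrix and gl_Vertex are desktop GLSL built-ins that were never part of OpenGL ES 2.0, which is why the compiler rejects them, and as far as I know GLKBaseEffect only prepares its own generated shader program, so its transform never reaches a custom shader. A sketch of the same vertex shader written for GLSL ES, with the matrix and position supplied by the application (the names here are assumptions):
precision mediump float;
precision lowp int;
attribute vec4 Position;                 // assumed attribute, bound from app code
attribute float Pointsize;
uniform mat4 ModelViewProjectionMatrix;  // assumed uniform, set with glUniformMatrix4fv
varying vec4 color_out;
void main()
{
    gl_PointSize = Pointsize;
    gl_Position = ModelViewProjectionMatrix * Position;
    color_out = vec4(0.0, 1.0, 0.0, 1.0); // output only green for test
}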

Does the input texture to a fragment shader change as the shader runs?

I'm trying to implement the Atkinson dithering algorithm in a fragment shader in GLSL using our own Brad Larson's GPUImage framework. (This might be one of those things that is impossible, but I don't know enough to determine that yet, so I'm just going ahead and doing it anyway.)
The Atkinson algorithm dithers grayscale images into pure black and white, as seen on the original Macintosh. Basically, I need to investigate a few pixels around my pixel, determine how far away from pure black or white each is, and use that to calculate a cumulative "error"; that error value plus the original value of the given pixel determines whether it should be black or white. The problem is that, as far as I can tell, the error value is (almost?) always zero or imperceptibly close to it. What I'm thinking might be happening is that the texture I'm sampling is the same one that I'm writing to, so that the error ends up being zero (or close to it) because most/all of the pixels I'm sampling are already black or white.
Is this correct, or are the textures that I'm sampling from and writing to distinct? If the former, is there a way to avoid that? If the latter, then might you be able to spot anything else wrong with this code? 'Cuz I'm stumped, and perhaps don't know how to debug it properly.
varying highp vec2 textureCoordinate;
uniform sampler2D inputImageTexture;
uniform highp vec3 dimensions;
void main()
{
highp vec2 relevantPixels[6];
relevantPixels[0] = vec2(textureCoordinate.x, textureCoordinate.y - 2.0);
relevantPixels[1] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y - 1.0);
relevantPixels[2] = vec2(textureCoordinate.x, textureCoordinate.y - 1.0);
relevantPixels[3] = vec2(textureCoordinate.x + 1.0, textureCoordinate.y - 1.0);
relevantPixels[4] = vec2(textureCoordinate.x - 2.0, textureCoordinate.y);
relevantPixels[5] = vec2(textureCoordinate.x - 1.0, textureCoordinate.y);
highp float err = 0.0;
for (mediump int i = 0; i < 6; i++) {
highp vec2 relevantPixel = relevantPixels[i];
// #todo Make sure we're not sampling a pixel out of scope. For now this
// doesn't seem to be a failure (?!).
lowp vec4 pointColor = texture2D(inputImageTexture, relevantPixel);
err += ((pointColor.r - step(.5, pointColor.r)) / 8.0);
}
lowp vec4 textureColor = texture2D(inputImageTexture, textureCoordinate);
lowp float hue = step(.5, textureColor.r + err);
gl_FragColor = vec4(hue, hue, hue, 1.0);
}
There are a few problems here, but the largest one is that Atkinson dithering can't be performed efficiently within a fragment shader. This kind of dithering is a sequential process, dependent on the results of the fragments above and behind it. A fragment shader can only write to one fragment in OpenGL ES, not to neighboring ones as is required in the Python implementation you point to.
For potential shader-friendly dither implementations, see the question "Floyd–Steinberg dithering alternatives for pixel shader."
You also normally can't write to and read from the same texture, although Apple did add some extensions in iOS 6.0 that let you write to a framebuffer and read from that written value in the same render pass.
As to why you're seeing odd error results, the coordinate system within a GPUImage filter is normalized to the range 0.0 - 1.0. When you try to offset a texture coordinate by adding 1.0, you're reading past the end of the texture (which is then clamped to the value at the edge by default). This is why you see me using texelWidth and texelHeight values as uniforms in other filters that require sampling from neighboring pixels. These are calculated as a fraction of the overall image width and height.
I'd also not recommend doing texture coordinate calculation within the fragment shader, as that will lead to a dependent texture read and really slow down the rendering. Move that up to the vertex shader, if you can.
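Putting those two points together, a sketch of how the neighbour coordinates could be precomputed in the vertex shader using texel-sized offsets, with texelWidth = 1/width and texelHeight = 1/height passed in as uniforms. This only fixes the coordinate handling, not the sequential-dependence problem described above, and the attribute names follow GPUImage's usual convention but are assumptions here:
// Vertex shader
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
uniform highp float texelWidth;    // 1.0 / image width in pixels
uniform highp float texelHeight;   // 1.0 / image height in pixels
varying highp vec2 textureCoordinate;
varying highp vec2 relevantPixels[6];
void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
    relevantPixels[0] = textureCoordinate + vec2( 0.0,              -2.0 * texelHeight);
    relevantPixels[1] = textureCoordinate + vec2(-texelWidth,       -texelHeight);
    relevantPixels[2] = textureCoordinate + vec2( 0.0,              -texelHeight);
    relevantPixels[3] = textureCoordinate + vec2( texelWidth,       -texelHeight);
    relevantPixels[4] = textureCoordinate + vec2(-2.0 * texelWidth,  0.0);
    relevantPixels[5] = textureCoordinate + vec2(-texelWidth,        0.0);
}
// Fragment shader
varying highp vec2 textureCoordinate;
varying highp vec2 relevantPixels[6];
uniform sampler2D inputImageTexture;
void main()
{
    highp float err = 0.0;
    for (mediump int i = 0; i < 6; i++) {
        lowp vec4 pointColor = texture2D(inputImageTexture, relevantPixels[i]);
        err += (pointColor.r - step(0.5, pointColor.r)) / 8.0;
    }
    lowp float hue = step(0.5, texture2D(inputImageTexture, textureCoordinate).r + err);
    gl_FragColor = vec4(hue, hue, hue, 1.0);
}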
Finally, to answer your title question, usually you can't modify a texture as it is being read, but the iOS texture cache mechanism sometimes allows you to overwrite texture values as a shader is working its way through a scene. This leads to bad tearing artifacts usually.
@GarrettAlbright For the 1-Bit Camera app I ended up simply iterating over the image data using raw memory pointers and (rather) tightly optimized C code. I looked into NEON intrinsics and the Accelerate framework, but any parallelism really screws up an algorithm of this nature, so I didn't use it.
I also toyed around with the idea of doing a decent enough approximation of the error distribution on the GPU first, and then doing the thresholding in another pass, but I never got anything but a rather ugly noise dither from those experiments. There are some papers around covering other ways of approaching diffusion dithering on the GPU.

Smooth color blending in OpenGL

I'm trying to achieve the following blending when the texture at one vertex merges with another:
Here's what I currently have:
I've enabled blending and am specifying the blending function as:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
I can see that the image drawn in the paper app is made up of a small circle that merges with the same texture before and after it, with some blending effect on the color and the alpha.
How do I achieve the desired effect?
UPDATE:
What I think is happening is that the intersected region of the two textures is having its alpha channel modified (either additively or by some other custom function) while the texture itself is not being drawn again in the intersected region. The rest of the region has the rest of the texture drawn. Like so:
I'm not entirely sure of how to achieve this result, though.
You shouldn't need blending for this (and it won't work the way you want).
I think as long as you define your texture coordinates in screen space, it should be seamless between two separate circles.
To do this, instead of using a texture coordinate passed through the vertex shader, just use the position of the fragment to sample the texture, plus or minus some scaling:
vec2 texcoord = gl_FragCoord.xy / vec2(xresolution_in_pixels, yresolution_in_pixels);
gl_FragColor = texture2D(papertexture, texcoord);
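A slightly more complete sketch of that fragment shader, with the resolution passed in as a uniform (the uniform name is assumed):
precision mediump float;
uniform sampler2D papertexture;
uniform vec2 resolution;   // assumed uniform: viewport size in pixels
void main() {
    vec2 texcoord = gl_FragCoord.xy / resolution;
    gl_FragColor = texture2D(papertexture, texcoord);
}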
If you don't have access to GLSL, you could do something instead with the stencil buffer. Just draw all your circles into the stencil buffer, use the combined region as a stencil mask, and then draw a fullscreen quad of your texture. The color will be seamlessly deposited at the union of all the circles.
You can achieve this effect with max blending for the alpha channel, or manually (with blending disabled) in a shader using framebuffer fetch (OpenGL ES 2.0):
#extension GL_EXT_shader_framebuffer_fetch : require
precision highp float;
uniform sampler2D texture;
uniform vec4 color;
varying vec2 texCoords;
void main() {
float res_a = gl_LastFragData[0].a;
float a = texture2D(texture, texCoords).a;
res_a = max(a, res_a);
gl_FragColor = vec4(color.rgb * res_a, res_a);
}
Result:

What is a good way to manage state when swapping out Direct3D 11 shaders?

I currently have two shaders: a processing shader (both vertex and pixel), which calculates lighting and projection transformations and renders to a texture, and a postprocessing shader (again both vertex and pixel), which reads the rendered texture and outputs it to the screen.
Once I've rendered my scene to a texture, I swap the Immediate Context's vertex and pixel shaders for my postprocessing ones, but I'm not sure how I should manage the state (e.g. my texture parameters and my constant buffers). Swapping shaders and then manually resetting the constant buffers and textures twice each frame seems incredibly wasteful, and it kind of defeats the point of constant buffers in the first place, but as far as I can see you can't set the data on the shader object; it has to be passed to the context.
What do other people suggest for fairly simple and efficient ways of managing variables and textures when swapping in and out shaders?
Since you've had no answers from D3D11 experts, I offer my limited D3D9 experience, for what it's worth.
... twice each frame seems incredibly wasteful ...
"Seems" is a suspicious word there. Have you measured the performance hit? Twice per frame doesn't sound too bad.
For textures, I assign my texture registers according to purpose. I use registers < s4 for material-specific textures, and >= s4 for render target textures which are assigned only once at the beginning of the game. So for example my main scene shader has the following:
sampler2D DiffuseMap : register(s0);
sampler2D DetailMap : register(s1);
sampler2D DepthMap : register(s4);
sampler2D ShadowMap : register(s5);
sampler2D LightMap : register(s6);
sampler2D PrevFrame : register(s7);
So my particle shader has a reduced set, but the DepthMap texture is still in the same slot...
sampler2D Tex : register(s0);
sampler2D DepthMap : register(s4);
I don't know how much of this is applicable to D3D11, but I hope it's of some use.
