WebGL: is there a way to load dynamic buffers in fragment shaders?

I have a fragment shader that can draw an arc based on a set of parameters. The idea was to make the shader resolution independent, so I pass the center of the arc and the bounding radii as pixel values on the screen. You can then just render the shader by setting your vertex positions in the shape of a square. This is the shader:
precision mediump float;
#define PI 3.14159265359
#define _2_PI 6.28318530718
#define PI_2 1.57079632679
// prog vars
uniform vec2 u_resolution;
float smOOth = 1.3;
vec3 bkgd = vec3( 0.0 ); // will be a sampler
// inputs
vec2 center = u_resolution / 2.;
vec2 R = vec2( 100., 80. );
float ang1 = 1.0 * PI;
float ang2 = 0.8 * PI;
vec3 color = vec3( 0., 1.0, 0. );
void main () {
    // get the dist from the current pixel to the center
    float r = distance( gl_FragCoord.xy, center );
    if ( r < R.x && r > R.y ) {
        // If we are in the radius, do some trig to find the angle and
        // normalize it to [0, 2 PI).
        float theta = -( atan( gl_FragCoord.y - center.y,
                               center.x - gl_FragCoord.x ) ) + PI;
        // This is to make sure the angles are clipped at 2 pi, but if you pass
        // the values already clipped, then you can safely delete this and make
        // the code more efficient.
        ang1 = mod( ang1, _2_PI );
        ang2 = mod( ang2, _2_PI );
        float angSum = ang1 + ang2;
        bool thetaCond;
        vec2 thBound; // short for theta bounds: used to calculate smoothing
                      // at the edges of the circle.
        if ( angSum > _2_PI ) {
            thBound = vec2( ang2, angSum - _2_PI );
            thetaCond = ( theta > ang2 && theta < _2_PI ) ||
                        ( theta < thBound.y );
        } else {
            thBound = vec2( ang2, angSum );
            thetaCond = theta > ang2 && theta < angSum;
        }
        if ( thetaCond ) {
            float angOpMult = 10000. / ( R.x - R.y ) / smOOth;
            float opacity = smoothstep( 0.0, 1.0, ( R.x - r ) / smOOth ) -
                            smoothstep( 1.0, 0.0, ( r - R.y ) / smOOth ) -
                            smoothstep( 1.0, 0.0, ( theta - thBound.x )
                                                  * angOpMult ) -
                            smoothstep( 1.0, 0.0, ( thBound.y - theta )
                                                  * angOpMult );
            gl_FragColor = vec4( mix( bkgd, color, opacity ), 1.0 );
        } else
            discard;
    } else
        discard;
}
I figured this way of drawing a circle would yield better quality circles and be less hassle than loading a bunch of vertices and drawing triangle fans, even though it probably isn't as efficient. This works fine, but I don't just want to draw one fixed circle. I want to draw any circle I would want on the screen. So I had an idea to set the 'inputs' to varyings and pass a buffer with parameters to each of the vertices of a given bounding square. So my vertex shader looks like this:
attribute vec2 a_square;
attribute vec2 a_center;
attribute vec2 a_R;
attribute float a_ang1;
attribute float a_ang2;
attribute vec3 a_color;
varying vec2 center;
varying vec2 R;
varying float ang1;
varying float ang2;
varying vec3 color;
void main () {
    gl_Position = vec4( a_square, 0.0, 1.0 );
    center = a_center;
    R = a_R;
    ang1 = a_ang1;
    ang2 = a_ang2;
    color = a_color;
}
'a_square' is just the vertex for the bounding square that the circle would sit in.
Next, I define a buffer for the inputs for one test circle (in JS). One of the problems with doing it this way is that the circle parameters have to be repeated for each vertex, and for a box, this means four times. 'pw' and 'ph' are the width and height of the canvas, respectively.
var circleData = new Float32Array( [
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
] );
Then I simply load my data into a gl buffer (circleBuffer) and bind the appropriate attributes to it.
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.circleBuffer );
gl.vertexAttribPointer( bkgd.aCenter, 2, gl.FLOAT, false, 0 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aCenter );
gl.vertexAttribPointer( bkgd.aR, 2, gl.FLOAT, false, 2 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aR );
gl.vertexAttribPointer( bkgd.aAng1, 1, gl.FLOAT, false, 4 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng1 );
gl.vertexAttribPointer( bkgd.aAng2, 1, gl.FLOAT, false, 5 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng2 );
gl.vertexAttribPointer( bkgd.aColor, 3, gl.FLOAT, false, 6 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aColor );
When I load my page, I do see a circle, but the radii seem to be the only attributes that actually respond to the buffer data. The angles, center, and color are not reflecting the values they are supposed to, and I have absolutely no idea why the radii are the only things that actually work.
Nonetheless, this seems to be an inefficient way to load arguments into a fragment shader to draw a circle, as I have to reload the values for every vertex of the box, and then the GPU interpolates those values for no reason. Is there a better way to pass something like an attribute buffer to a fragment shader, or in general to use a fragment shader in this way? Or should I just use vertices to draw my circle instead?

If you're only drawing circles you can use instanced drawing to avoid repeating the info.
See this Q&A: what does instancing do in webgl
Or this article
Instancing lets you use some data per instance, as in per circle.
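For example, a minimal sketch with the WebGL1 ANGLE_instanced_arrays extension, reusing the question's attribute locations and 9-float layout (squareBuffer and numCircles are assumed names):
var ext = gl.getExtension( 'ANGLE_instanced_arrays' );

// the 4 quad corners stay per-vertex
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.squareBuffer );
gl.enableVertexAttribArray( bkgd.aSquare );
gl.vertexAttribPointer( bkgd.aSquare, 2, gl.FLOAT, false, 0, 0 );

// one 9-float record per circle, advanced once per instance instead of per vertex
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.circleBuffer );
var attribs = [
    [ bkgd.aCenter, 2, 0 ],
    [ bkgd.aR,      2, 2 ],
    [ bkgd.aAng1,   1, 4 ],
    [ bkgd.aAng2,   1, 5 ],
    [ bkgd.aColor,  3, 6 ],
];
for ( var i = 0; i < attribs.length; ++i ) {
    var loc = attribs[i][0], size = attribs[i][1], offset = attribs[i][2];
    gl.enableVertexAttribArray( loc );
    gl.vertexAttribPointer( loc, size, gl.FLOAT, false, 9 * floatSiz, offset * floatSiz );
    ext.vertexAttribDivisorANGLE( loc, 1 ); // step this attribute per circle, not per vertex
}

// draws the 4-vertex strip once per circle
ext.drawArraysInstancedANGLE( gl.TRIANGLE_STRIP, 0, 4, numCircles );
The vertex shader stays the same; only the attribute setup and the draw call change.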
You can also use a texture to store the per-circle data, or all of the data. See this Q&A: How to do batching without UBOs?
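The upload side of that idea might look like this (a sketch; it assumes the OES_texture_float extension is available, and the 3-texels-per-circle layout is made up for illustration):
gl.getExtension( 'OES_texture_float' );
var circleParams = new Float32Array( [
    pw / 2, ph / 2, 440, 280,            // texel 0: center.xy, R.xy
    Math.PI * 1.2, Math.PI * 0.2, 0, 0,  // texel 1: ang1, ang2, padding
    1, 0, 0, 0,                          // texel 2: color.rgb, padding
] );
var tex = gl.createTexture();
gl.bindTexture( gl.TEXTURE_2D, tex );
// 3 RGBA float texels wide, one row per circle
gl.texImage2D( gl.TEXTURE_2D, 0, gl.RGBA, 3, 1, 0, gl.RGBA, gl.FLOAT, circleParams );
// NEAREST + clamp: required for a non-power-of-two texture in WebGL1,
// and it keeps the values from being filtered
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE );
// the fragment shader then fetches the parameters with texture2D() using a circle index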
Whether either is more or less efficient depends on the GPU/driver/OS/browser. If you need to draw thousands of circles, these techniques might be worth it. Most apps draw a variety of things, so they would choose a more generic solution unless they had a special need to draw thousands of circles.
Also, it may not be efficient because you're still calling the fragment shader for every pixel that is in the square but not in the circle. That's about 30% more calls to the fragment shader than using triangles, and that assumes your code draws quads that fit the circles. At a glance, your actual code appears to draw full-canvas quads, which is terribly inefficient.
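For reference, a quad that just fits the question's test circle could be computed like this (a sketch; pixel coordinates map to clip space as pixel / resolution * 2 - 1, with pw/ph and the circle values taken from the question):
var cx = pw / 2, cy = ph / 2, r = 440;  // center and outer radius in pixels
var x0 = ( cx - r ) / pw * 2 - 1, x1 = ( cx + r ) / pw * 2 - 1;
var y0 = ( cy - r ) / ph * 2 - 1, y1 = ( cy + r ) / ph * 2 - 1;
// 4 corners in TRIANGLE_STRIP order
var square = new Float32Array( [ x0, y0, x1, y0, x0, y1, x1, y1 ] );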

Related

Draw an ellipse curve in a fragment shader

I want a smooth curve in WebGL, so I tried to draw a rectangle in the vertex shader and send normalized positions to the fragment shader, along with a Size parameter which is the real size of the square:
in vec2 Position;
in vec2 Size;
out vec2 vPosition;
out vec2 vSize;
void main() {
    vSize = Size;
    vPosition = Position;
    gl_Position = vec4( Position * Size, 0.0, 1.0 );
}
where size = [width, height] of the square and is the same for every vertex, and position =
[
    -1, -1,
    -1,  1,
     1, -1,
     1,  1,
]
so my rectangle will be drawn at [2 * width, 2 * height], but I can do my geometric operations in the fragment shader on a normalized 2 x 2 square.
But I have a problem drawing an ellipse (or a circle with these sizes) in the fragment shader: when I make a hollow circle with a thickness parameter, the thickness in the horizontal direction is not the same as in the vertical direction. I know this is because I use the same size (2 x 2) for both directions while on screen they differ, and as you can see, the thickness is not uniform around the ring.
I want a geometric formula to calculate the thickness in each direction so that I can draw a hollow ellipse.
Thanks in advance.
If you put heavy mathematical computing in your fragment shader, it will be slow.
The trick can be to use an approximation that can be visually acceptable.
Your problem is that the thickness is different on the vertical and the horizontal axis.
What you need is to discard fragments if the radius of the current point is greater than 1 or lower than radiusMin. Let uniWidth and uniHeight be the size of your rectangle.
* When y is zero, on the horizontal axis, radiusMin = 1.0 - BORDER / uniWidth.
* When x is zero, on the vertical axis, radiusMin = 1.0 - BORDER / uniHeight.
The trick is to interpolate between these two radii using the mix() function.
Take a look at my live example below to convince yourself that the result is not that bad.
Here is the fragment shader to do such a computation:
precision mediump float;
uniform float uniWidth;
uniform float uniHeight;
varying vec2 varCoords;
const float BORDER = 32.0;
void main() {
    float x = varCoords.x;
    float y = varCoords.y;
    float radius = x * x + y * y;
    if( radius > 1.0 ) discard;
    radius = sqrt( radius );
    float radiusH = 1.0 - BORDER / uniWidth;
    float radiusV = 1.0 - BORDER / uniHeight;
    float radiusAverage = (radiusH + radiusV) * 0.5;
    float minRadius = 0.0;
    x = abs( x );
    y = abs( y );
    if( x > y ) {
        minRadius = mix( radiusH, radiusAverage, y / x );
    }
    else {
        minRadius = mix( radiusV, radiusAverage, x / y );
    }
    if( radius < minRadius ) discard;
    gl_FragColor = vec4(1, .5, 0, 1);
}
Here is a live example: https://jsfiddle.net/7rh2eog1/5/
There is an implicit formula for the x and y that lie in the blue area of a hollow ellipse with a thickness parameter. Since we know the thickness and we have the size of our view, we can make two ellipses with size1 = Size - vec2(thickness) and size2 = Size + vec2(thickness),
and then test length(position / size2) < 1.0 < length(position / size1).
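As a fragment shader, that test might look like this (a sketch in WebGL1-style GLSL, assuming vPosition/vSize varyings like the question's and a thickness uniform in pixels):
precision mediump float;
varying vec2 vPosition;   // normalized quad coords in [-1, 1]
varying vec2 vSize;       // half-size of the quad in pixels
uniform float thickness;  // ring thickness in pixels
void main() {
    vec2 p = vPosition * vSize;            // fragment position in pixels
    vec2 size1 = vSize - vec2(thickness);  // inner ellipse radii
    vec2 size2 = vSize + vec2(thickness);  // outer ellipse radii
    // keep only fragments between the two ellipses
    if (length(p / size1) < 1.0 || length(p / size2) > 1.0) discard;
    gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0);
}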

Edge/outline detection from texture in fragment shader

I am trying to display sharp contours from a texture in WebGL.
I pass a texture to my fragment shader, then I use local derivatives to display the contours/outline; however, it is not as smooth as I would expect it to be.
Just printing the texture without processing works as expected:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
gl_FragColor = color;
With local derivatives, it misses some edges:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
float maxColor = length(color.rgb);
gl_FragColor.r = abs(dFdx(maxColor));
gl_FragColor.g = abs(dFdy(maxColor));
gl_FragColor.a = 1.;
In theory, your code is right.
But in practice most GPUs compute derivatives on 2x2 blocks of pixels, so for all 4 pixels of such a block the dFdX and dFdY values will be the same.
(detailed explanation here)
This will cause some kind of aliasing, and you will randomly miss some pixels on the contour of the shape (this happens when the transition from black to the shape color occurs at the border of a 2x2 block).
To fix this and get the real per-pixel derivative, you can instead compute it yourself; it would look like this:
// get tex coordinates
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
// compute the U & V step needed to read neighbor pixels
// for that you need to pass the texture dimensions to the shader,
// so let's say those are texWidth and texHeight
float step_u = 1.0 / texWidth;
float step_v = 1.0 / texHeight;
// read current pixel
vec4 centerPixel = texture2D(uTextureFilled, texc);
// read nearest right pixel & nearest bottom pixel
vec4 rightPixel = texture2D(uTextureFilled, texc + vec2(step_u, 0.0));
vec4 bottomPixel = texture2D(uTextureFilled, texc + vec2(0.0, step_v));
// now manually compute the derivatives
float _dFdX = length(rightPixel - centerPixel) / step_u;
float _dFdY = length(bottomPixel - centerPixel) / step_v;
// display
gl_FragColor.r = _dFdX;
gl_FragColor.g = _dFdY;
gl_FragColor.a = 1.;
A few important things:
* the texture should not use mipmaps
* texture min & mag filtering should be set to GL_NEAREST
* texture clamp mode should be set to clamp (not repeat)
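In WebGL those settings translate to something like the following (a sketch; texture stands for the texture object behind uTextureFilled):
gl.bindTexture( gl.TEXTURE_2D, texture );
// no mipmaps: sample with NEAREST filtering only
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST );
// clamp instead of repeat so neighbor reads at the edges don't wrap around
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE );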
And here is a ShaderToy sample demonstrating this:

How to draw 2 nested rectangles using ONLY 4 vertices in the buffer in Webgl?

I know how to use a uniform variable to move the rectangle around, but I don't know how to make it smaller or bigger to fit one into the other. Any help is appreciated. Thank you!
var vertices =
[
    vec2( 0.0, 0.0 ),
    vec2( 0.4, 0.0 ),
    vec2( 0.0, 0.4 ),
    vec2( 0.4, 0.4 )
];
gl.viewport( 0, 0, canvas.width, canvas.height );
gl.clearColor( 0.9, 0.9, 0.9, 1.0 );
var program = initShaders( gl, "vertex-shader", "fragment-shader" );
gl.useProgram( program );
// Create a buffer for the vertex shader in the GPU.
var bufferId = gl.createBuffer();
// Tell the GPU to expect data for this buffer
gl.bindBuffer( gl.ARRAY_BUFFER, bufferId );
// Send data into the buffer.
gl.bufferData( gl.ARRAY_BUFFER, flatten(vertices), gl.STATIC_DRAW );
// Set up the buffer for use
var vPosition = gl.getAttribLocation( program, "myvPosition" );
// myvPosition (identified using vPosition) will correspond to 2 floats per vertex
gl.vertexAttribPointer( vPosition, 2, gl.FLOAT, false, 0, 0 );
// Enable use of the vertex buffer with myvPosition
gl.enableVertexAttribArray( vPosition );
// Get an index to each uniform variable in the GPU's shader
var xIndex = gl.getUniformLocation( program, "xAdjust" );
var yIndex = gl.getUniformLocation( program, "yAdjust" );
var rIndex = gl.getUniformLocation( program, "red" );
var gIndex = gl.getUniformLocation( program, "green" );
var bIndex = gl.getUniformLocation( program, "blue" );
gl.uniform1f( xIndex, -0.25 ); // move to the left
gl.uniform1f( gIndex, 1.0 );
gl.clear( gl.COLOR_BUFFER_BIT ); // note new place to put clear
render();
gl.uniform1f( xIndex, +0.25 ); // move to the right
gl.uniform1f( rIndex, 1.0 );
render();
};
function render()
{
gl.drawArrays( gl.TRIANGLE_STRIP, 0, 4 );
}
For example, I can change the value in gl.uniform1f( xIndex, … ) to move the rectangle along the x axis.
Time to learn about transformation matrices. There is a lot of math, but I will try to explain it as simply as possible.
Let's pick a new 1x1 square:
var vertices =
[
vec2(0, 0),
vec2(1, 0),
vec2(0, 1),
vec2(1, 1)
];
Now if you would like to move it to the left by 1 (as you did), you want to subtract 1 from the x of all your vertices. This looks simple.
If you want to rotate it, it is much more complicated. Imagine your object had 50000 vertices and not just 4 => super complicated!
So people invented a procedure that is now widely used: we create a transformation matrix for each object we have. In 2D, the matrix is 3x3. In 3D, the matrix is 4x4.
How does the matrix work? First you create your vertices, then initialize the matrix with
// js example
var model1M = mat3.create([
1, 0, 0,
0, 1, 0,
0, 0, 1]);
Which means "no transformation done" yet. Then you translate, rotate, scale your object by operations with matrix. Remember, transformation order is important!!
move & rotate != rotate & move
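For instance, with the glMatrix library recommended below, the two orders produce different matrices (a quick sketch):
var a = mat3.create();
mat3.translate(a, a, [1, 0]);
mat3.rotate(a, a, Math.PI / 2);  // move, then rotate

var b = mat3.create();
mat3.rotate(b, b, Math.PI / 2);
mat3.translate(b, b, [1, 0]);    // rotate, then move
// a and b are different matrices, so they place the square in different spots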
Once you want to render, you send the matrix to the shader.
// this is how you send 1 float value
gl.uniform1f( xIndex, -0.25 ); // move to the left
// this is how we send 3x3 matrix
var mvmi = gl.getUniformLocation( program, "modelViewMatrix" );
gl.uniformMatrix3fv(mvmi, false, model1M);
And in shader:
// a 2D position is a vec2; extend it to a vec3 so the 3x3 matrix can multiply it
vec3 transformed = modelViewMatrix * vec3( position, 1.0 );
gl_Position = vec4( transformed.xy, 0.0, 1.0 );
and it's done.
The problem is that mat3 doesn't exist in plain JS. The math for the transformations:
http://upload.wikimedia.org/wikipedia/commons/2/2c/2D_affine_transformation_matrix.svg
You could implement all the math yourself, but it is easier to just download a library, for example http://glmatrix.net/, and include gl-matrix-min.js. Then follow the documentation at http://glmatrix.net/docs/2.2.0/symbols/mat3.html .
Simple cookbook:
var DEG_TO_RAD = 0.0174532925;
// create a matrix; you don't have to type the numbers in
var modelMatrix = mat3.create();
// move
mat3.translate(modelMatrix, modelMatrix, [-0.5, -0.5]);
// rotate by 45 degrees
mat3.rotate(modelMatrix, modelMatrix, 45*DEG_TO_RAD);
// make square smaller
mat3.scale(modelMatrix, modelMatrix, [0.4, 0.4]);
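Tying this back to the question: with the shader change above, the same 4 vertices can be drawn twice with different matrices to get the two nested rectangles (a sketch reusing the question's rIndex/gIndex uniforms and render() function):
var mvmi = gl.getUniformLocation( program, "modelViewMatrix" );

var outer = mat3.create();                   // identity: the rectangle as-is
gl.uniformMatrix3fv( mvmi, false, outer );
gl.uniform1f( gIndex, 1.0 );                 // green outer rectangle
render();

var inner = mat3.create();
mat3.translate( inner, inner, [0.1, 0.1] );  // nudge it inside the outer one
mat3.scale( inner, inner, [0.5, 0.5] );      // and shrink it to half size
gl.uniformMatrix3fv( mvmi, false, inner );
gl.uniform1f( rIndex, 1.0 );                 // red inner rectangle
render();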

Adding projection matrix to opengl es point sprites particle effect vertex shader

I have been learning OpenGL ES from the OpenGL ES 2.0 Programming Guide. They have a particle effect that looks like an explosion. I am trying to enhance their example code by adding a mat4 projection matrix to the vertex shader. The shader compiles and works, but I am having problems getting the effect to be positioned correctly when taking the projection into account. The code I have is as follows:
const char* ParticleExplosionVertexShader = STRINGIFY (
    uniform float u_time;
    uniform vec3 u_centerPosition;
    uniform mat4 Projection;
    attribute float a_lifetime;
    attribute vec3 a_startPosition;
    attribute vec3 a_endPosition;
    varying float v_lifetime;
    void main()
    {
        if ( u_time <= a_lifetime )
        {
            gl_Position.xyz = a_startPosition + (u_time * a_endPosition);
            gl_Position.xyz += u_centerPosition;
            gl_Position.w = 1.0;
        }
        else
            gl_Position = vec4( -1000, -1000, 0, 0 );
        v_lifetime = 1.0 - ( u_time / a_lifetime );
        v_lifetime = clamp ( v_lifetime, 0.0, 1.0 );
        gl_PointSize = ( v_lifetime * v_lifetime ) * 40.0;
    }
);
I am able to add the projection to this line without any errors, but unfortunately here it's not really required, as that code is placing the object off screen at the end of its lifetime:
gl_Position = Projection * vec4( -1000, -1000, 0, 0 );
I have also tried changing the line
gl_Position.xyz += u_centerPosition;
to
gl_Position += Projection * u_centerPosition;
But I have had no luck getting it to be placed where I want it.
Am I doing something wrong? Or is there a reason the book didn't use a projection matrix, such as it's not something one should do with point sprites?
Any help or pointers to what I should look into will be appreciated.
Thanks
Edit: Please let me know if you need more information from me
What about multiplying the whole gl_Position by the modelview-projection matrix, as with any normal geometry?
Also, you will probably need to modify the line that calculates gl_PointSize; for example, try dividing it by gl_Position.w (after the multiplication by the modelview-projection matrix), otherwise the sprites will all have the same size (is that what you are trying to fix?).
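Applied to the question's shader, that suggestion might look like this (a sketch; it assumes a standard perspective Projection and keeps the w = 0 trick for expired particles):
void main()
{
    if ( u_time <= a_lifetime )
    {
        vec4 pos = vec4( a_startPosition + ( u_time * a_endPosition )
                         + u_centerPosition, 1.0 );
        gl_Position = Projection * pos;  // project the whole position once
    }
    else
        gl_Position = vec4( -1000.0, -1000.0, 0.0, 0.0 );  // w = 0: always clipped
    v_lifetime = clamp( 1.0 - ( u_time / a_lifetime ), 0.0, 1.0 );
    // dividing by w makes distant sprites smaller; the max() avoids the w = 0 case
    gl_PointSize = ( v_lifetime * v_lifetime ) * 40.0 / max( gl_Position.w, 0.0001 );
}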

OpenGLES Shadow Volume

I successfully implemented shadow volumes on iOS.
However, I have the following issue: how can I clamp the vertex position to the far plane, the way NV_depth_clamp does, in GLSL? This is my vertex shader code:
void main( void ) {
    highp vec3 eyepos = vec3( MODELVIEW * vec4( VERTEX, 1.0 ) );
    normal = normalize( NORMALMATRIX * NORMAL );
    highp vec3 ldir = normalize( LIGHTPOS - eyepos );
    highp float ndotl = max( dot( normal, ldir ), 0.0 );
    // How can I clip that to the far plane automatically!??!!?
    if( ndotl > 0.0 ) gl_Position = PROJECTION * vec4( eyepos + ( ldir * -2000.0 ), 1.0 );
    else gl_Position = PROJECTION * vec4( eyepos, 1.0 );
}
Second, while searching for the issue above, I found that the z-fail shadow volume method (which is what I implemented) is patented. Is that true? Does that mean I can't use it in a commercial application on the App Store?
TIA!
Cheers. At the far clip plane, z/w = 1. So you need to transform both eyepos and ldir by the projection, and then add just enough ldir to eyepos that it ends up on the far plane. This might be tricky, though, because the far clip plane may clip your polygons if they lie exactly on it, so some tweaking might be required.
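In terms of the question's shader, that might look like this (a sketch; it assumes the extrusion direction is not parallel to the far plane):
highp vec4 E = PROJECTION * vec4( eyepos, 1.0 );
highp vec4 D = PROJECTION * vec4( -ldir, 0.0 );  // extrude away from the light
if ( ndotl > 0.0 ) {
    // choose t so that ( E + t * D ).z / ( E + t * D ).w == 1.0, i.e. the far plane
    highp float t = ( E.w - E.z ) / ( D.z - D.w );
    gl_Position = E + t * D;
} else {
    gl_Position = E;
}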
