OpenGL ES Shadow Volume - iOS

I successfully implemented shadow volumes on iOS.
However, I have one remaining issue: how can I clamp the vertex position to the far plane in GLSL, the way NV_depth_clamp does? This is my vertex shader code:
void main( void ) {
highp vec3 eyepos = vec3( MODELVIEW * vec4( VERTEX, 1.0 ) );
normal = normalize( NORMALMATRIX * NORMAL );
highp vec3 ldir = normalize( LIGHTPOS - eyepos );
highp float ndotl = max( dot( normal, ldir ), 0.0 );
// How can I clip that to the far plane automatically!??!!?
if( ndotl > 0.0 ) gl_Position = PROJECTION * vec4( eyepos + ( ldir * -2000.0 ), 1.0 );
else gl_Position = PROJECTION * vec4( eyepos, 1.0 );
}
Second, while searching for the issue above, I found that the z-fail shadow volume method (which is what I implemented) is patented. Is that true? Does that mean I can't use it in a commercial application on the App Store?
TIA!

At the far clip plane, z/w = 1. So you need to transform both eyepos and ldir by the projection matrix, and then add as much ldir to eyepos as it takes to end up at the far plane. This might be tricky though, because the far clip plane may clip your polygons if they lie exactly on it, so some tweaking might be required.
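A minimal vertex-shader sketch of that idea, reusing the question's names (untested; it assumes clipDir.z != clipDir.w for the extruded vertices):
// Move the extruded vertex along -ldir until it lands on the far plane.
highp vec4 clipPos = PROJECTION * vec4( eyepos, 1.0 );
highp vec4 clipDir = PROJECTION * vec4( ldir, 0.0 );
// Solve ( clipPos - t * clipDir ).z == ( clipPos - t * clipDir ).w for t,
// i.e. z/w = 1 at the far plane.
highp float t = ( clipPos.z - clipPos.w ) / ( clipDir.z - clipDir.w );
// Back off slightly (the "tweaking" mentioned above) so the far plane
// does not clip the polygon.
gl_Position = clipPos - ( t * 0.999 ) * clipDir;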

Related

WebGL: is there a way to load dynamic buffers in fragment shaders?

I have a fragment shader that can draw an arc based on a set of parameters. The idea was to make the shader resolution-independent, so I pass the center of the arc and the bounding radii as pixel values on the screen. You can then render the shader by simply setting your vertex positions in the shape of a square. This is the shader:
precision mediump float;
#define PI 3.14159265359
#define _2_PI 6.28318530718
#define PI_2 1.57079632679
// prog vars
uniform vec2 u_resolution; // must be declared before it is used below
float smOOth = 1.3;
vec3 bkgd = vec3( 0.0 ); // will be a sampler
// inputs
vec2 center = u_resolution / 2.;
vec2 R = vec2( 100., 80. );
float ang1 = 1.0 * PI;
float ang2 = 0.8 * PI;
vec3 color = vec3( 0., 1.0, 0. );
void main () {
// get the dist from the current pixel to the coord.
float r = distance( gl_FragCoord.xy, center );
if ( r < R.x && r > R.y ) {
// If we are within the radii, do some trig to find the angle and
// normalize it to [0, 2 * PI)
float theta = -( atan( gl_FragCoord.y - center.y,
center.x - gl_FragCoord.x ) ) + PI;
// This is to make sure the angles are wrapped at 2 * PI, but if you pass
// the values already wrapped, then you can safely delete this and make
// the code more efficient.
ang1 = mod( ang1, _2_PI );
ang2 = mod( ang2, _2_PI );
float angSum = ang1 + ang2;
bool thetaCond;
vec2 thBound; // short for theta bounds: used to calculate smoothing
// at the edges of the circle.
if ( angSum > _2_PI ) {
thBound = vec2( ang2, angSum - _2_PI );
thetaCond = ( theta > ang2 && theta < _2_PI ) ||
( theta < thBound.y );
} else {
thBound = vec2( ang2, angSum );
thetaCond = theta > ang2 && theta < angSum;
}
if ( thetaCond ) {
float angOpMult = 10000. / ( R.x - R.y ) / smOOth;
float opacity = smoothstep( 0.0, 1.0, ( R.x - r ) / smOOth ) -
smoothstep( 1.0, 0.0, ( r - R.y ) / smOOth ) -
smoothstep( 1.0, 0.0, ( theta - thBound.x )
* angOpMult ) -
smoothstep( 1.0, 0.0, ( thBound.y - theta )
* angOpMult );
gl_FragColor = vec4( mix( bkgd, color, opacity ), 1.0 );
} else
discard;
} else
discard;
}
I figured this way of drawing a circle would yield better quality circles and be less hassle than loading a bunch of vertices and drawing triangle fans, even though it probably isn't as efficient. This works fine, but I don't just want to draw one fixed circle. I want to draw any circle I would want on the screen. So I had an idea to set the 'inputs' to varyings and pass a buffer with parameters to each of the vertices of a given bounding square. So my vertex shader looks like this:
attribute vec2 a_square;
attribute vec2 a_center;
attribute vec2 a_R;
attribute float a_ang1;
attribute float a_ang2;
attribute vec3 a_color;
varying vec2 center;
varying vec2 R;
varying float ang1;
varying float ang2;
varying vec3 color;
void main () {
gl_Position = vec4( a_square, 0.0, 1.0 );
center = a_center;
R = a_R;
ang1 = a_ang1;
ang2 = a_ang2;
color = a_color;
}
'a_square' is just the vertex for the bounding square that the circle would sit in.
Next, I define a buffer for the inputs for one test circle (in JS). One of the problems with doing it this way is that the circle parameters have to be repeated for each vertex, and for a box, this means four times. 'pw' and 'ph' are the width and height of the canvas, respectively.
var circleData = new Float32Array( [
    // center.x, center.y, R.x, R.y, ang1, ang2, color.r, color.g, color.b
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
    pw / 2, ph / 2, 440, 280, Math.PI * 1.2, Math.PI * 0.2, 1000, 0, 0,
] );
Then I simply load my data into a gl buffer (circleBuffer) and bind the appropriate attributes to it.
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.circleBuffer );
gl.vertexAttribPointer( bkgd.aCenter, 2, gl.FLOAT, false, 0 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aCenter );
gl.vertexAttribPointer( bkgd.aR, 2, gl.FLOAT, false, 2 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aR );
gl.vertexAttribPointer( bkgd.aAng1, 1, gl.FLOAT, false, 4 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng1 );
gl.vertexAttribPointer( bkgd.aAng2, 1, gl.FLOAT, false, 5 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aAng2 );
gl.vertexAttribPointer( bkgd.aColor, 3, gl.FLOAT, false, 6 * floatSiz, 9 * floatSiz );
gl.enableVertexAttribArray( bkgd.aColor );
When I load my page, I do see a circle, but it seems the radii are the only attributes that actually respond to the values I pass in. The angles, center, and color do not reflect the values they are supposed to have, and I have absolutely no idea why the radii are the only things that work.
Nonetheless, this seems to be an inefficient way to load arguments into a fragment shader to draw a circle, as I have to reload the values for every vertex of the box, and then the GPU interpolates those values for no reason. Is there a better way to pass something like an attribute buffer to a fragment shader, or in general to use a fragment shader in this way? Or should I just use vertices to draw my circle instead?
If you're only drawing circles, you can use instanced drawing so the per-circle info isn't repeated.
See this Q&A: what does instancing do in webgl
Or this article
Instancing lets you use some data per instance, as in per circle.
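A rough sketch of the instanced approach in WebGL1 (the ANGLE_instanced_arrays extension must be checked for at runtime; bkgd.squareBuffer and bkgd.aSquare are hypothetical names following the question's conventions):
var ext = gl.getExtension( 'ANGLE_instanced_arrays' );

// The quad vertices stay a normal per-vertex attribute.
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.squareBuffer );
gl.vertexAttribPointer( bkgd.aSquare, 2, gl.FLOAT, false, 0, 0 );
gl.enableVertexAttribArray( bkgd.aSquare );

// Circle data is stored once per circle: 9 floats, advanced per instance.
gl.bindBuffer( gl.ARRAY_BUFFER, bkgd.circleBuffer );
gl.vertexAttribPointer( bkgd.aCenter, 2, gl.FLOAT, false, 9 * floatSiz, 0 * floatSiz );
gl.enableVertexAttribArray( bkgd.aCenter );
ext.vertexAttribDivisorANGLE( bkgd.aCenter, 1 ); // advance once per instance
// ...same pattern for aR, aAng1, aAng2, aColor, with offsets 2, 4, 5, 6...

// Draw one 4-vertex quad per circle.
ext.drawArraysInstancedANGLE( gl.TRIANGLE_STRIP, 0, 4, numCircles );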
You can also use a texture to store the per circle data or all data. See this Q&A: How to do batching without UBOs?
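A sketch of the texture approach for the vertex shader (it assumes vertex texture fetch is supported and a float texture is available via OES_texture_float; u_dataTex, u_numCircles, and a_circleId are hypothetical names):
attribute vec2 a_square;
attribute float a_circleId; // which circle this vertex belongs to
uniform sampler2D u_dataTex; // one row of texels per circle
uniform float u_numCircles;
varying vec2 center;
varying vec2 R;
void main() {
    gl_Position = vec4( a_square, 0.0, 1.0 );
    // row v holds this circle's data; texel 0 = [cx, cy, Rx, Ry]
    float v = ( a_circleId + 0.5 ) / u_numCircles;
    vec4 t0 = texture2D( u_dataTex, vec2( 0.5, v ) );
    center = t0.xy;
    R = t0.zw;
    // ...fetch more texels the same way for the angles and color...
}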
Whether either is more or less efficient depends on the GPU/driver/OS/browser. If you need to draw thousands of circles this might be efficient. Most apps draw a variety of things, so they would choose a more generic solution unless they had a special need to draw thousands of circles.
Also it may not be efficient because you're still calling the fragment shader for every pixel that is inside the square but outside the circle. That's roughly 30% more fragment shader invocations than using triangles, and that assumes your code draws quads that fit the circles. At a glance, your actual code draws full-canvas quads, which is terribly inefficient.

converting pixels to clipspace

Instead of giving -1 to 1 values to my shaders, I would prefer giving them pixel values, like for the 2D canvas context. So, following what I read, I added a uniform variable which I set to the size of the canvas, and I divide by it.
But I must be missing something. The rendering is way too big...
gl_.resolutionLocation = gl.getUniformLocation( gl_.program , "u_resolution" );
gl.uniform4f(gl_.resolutionLocation , game.w , game.h , game.w , game.h );
My vertex shader :
attribute vec4 position;
attribute vec2 texcoord;
uniform vec4 u_resolution;
uniform mat4 u_matrix;
varying vec3 v_texcoord;
void main() {
vec4 zeroToOne = position / u_resolution ;
gl_Position = u_matrix * zeroToOne ;
v_texcoord = vec3(texcoord.xy, 1) * abs(position.x);
v_texcoord = v_texcoord/u_resolution.xyz ;
}
My fragment shader :
precision mediump float;
varying vec3 v_texcoord;
uniform sampler2D tex;
uniform float alpha;
void main()
{
gl_FragColor = texture2DProj(tex, v_texcoord);
gl_FragColor.rgb *= gl_FragColor.a ;
}
If you want to stay in pixels with code like the code you have, then you'd want to apply the conversion to clip space after you've done everything in pixels.
In other words the code would be something like
rotatedPixelPosition = rotationMatrix * pixelPosition
clipSpacePosition = (rotatedPixelPosition / resolution) * 2.0 - 1.0;
So in other words you'd want
vec4 rotatedPosition = u_matrix * position;
vec2 zeroToOne = rotatedPosition.xy / u_resolution.xy;
vec2 zeroToTwo = zeroToOne * 2.0;
vec2 minusOneToPlusOne = zeroToTwo - 1.0;
vec2 clipspacePositiveYDown = minusOneToPlusOne * vec2(1, -1);
gl_Position = vec4(clipspacePositiveYDown, 0, 1);
If you do that and you set u_matrix to the identity, then if position is in pixels you should see those positions at pixel positions. If u_matrix is strictly a rotation matrix, the positions will rotate around the top left corner, since rotation always happens around 0 and the conversion above puts 0 at the top left corner.
But really there's no reason to convert from pixels to clip space by hand. You can instead convert and rotate all in the same matrix. This article covers that process. It starts with translation, rotation, scale, and conversion from pixels to clip space with no matrices, then converts it into something that does all of that combined using a single matrix.
Effectively
matrix = scaleYByMinus1Matrix *
subtract1FromXYMatrix *
scaleXYBy2Matrix *
scaleXYBy1OverResolutionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
And then in your shader you only need
gl_Position = u_matrix * vec4(position, 0, 1);
Those top 4 matrices are easy to compute as a single matrix, often called an orthographic projection, in which case the chain simplifies to
matrix = projectionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
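As a sketch, such a 2D orthographic projection matrix can be built directly in JavaScript (column-major order, as WebGL expects; the function name is made up):
function projection2D( width, height ) {
    // Maps x: 0..width to -1..1 and y: 0..height to 1..-1 (Y down).
    return [
        2 / width, 0, 0, 0,
        0, -2 / height, 0, 0,
        0, 0, 1, 0,
        -1, 1, 0, 1,
    ];
}
// usage:
// gl.uniformMatrix4fv( matrixLocation, false,
//                      projection2D( gl.canvas.width, gl.canvas.height ) );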
There's also this article, which reproduces the Canvas2D matrix stack in WebGL.

Edge/outline detection from texture in fragment shader

I am trying to display sharp contours from a texture in WebGL.
I pass a texture to my fragment shader, then I use local derivatives to display the contours/outline. However, the result is not as smooth as I would expect.
Just printing the texture without processing works as expected:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
gl_FragColor = color;
With local derivatives, it misses some edges:
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
vec4 color = texture2D(uTextureFilled, texc);
float maxColor = length(color.rgb);
gl_FragColor.r = abs(dFdx(maxColor));
gl_FragColor.g = abs(dFdy(maxColor));
gl_FragColor.a = 1.;
In theory, your code is right.
But in practice most GPUs compute derivatives on blocks of 2x2 pixels, so for all 4 pixels of such a block the dFdx and dFdy values will be the same (detailed explanation here).
This causes a kind of aliasing: you will randomly miss some pixels of the shape's contour (it happens whenever the transition from black to the shape color occurs at the border of a 2x2 block).
To fix this and get the true per-pixel derivative, you can instead compute it yourself. That would look like this:
// get tex coordinates
vec2 texc = vec2(((vProjectedCoords.x / vProjectedCoords.w) + 1.0 ) / 2.0,
((vProjectedCoords.y / vProjectedCoords.w) + 1.0 ) / 2.0 );
// compute the U & V step needed to read neighbor pixels
// for that you need to pass the texture dimensions to the shader,
// so let's say those are texWidth and texHeight
float step_u = 1.0 / texWidth;
float step_v = 1.0 / texHeight;
// read current pixel
vec4 centerPixel = texture2D(uTextureFilled, texc);
// read nearest right pixel & nearest bottom pixel
vec4 rightPixel = texture2D(uTextureFilled, texc + vec2(step_u, 0.0));
vec4 bottomPixel = texture2D(uTextureFilled, texc + vec2(0.0, step_v));
// now manually compute the derivatives
float _dFdX = length(rightPixel - centerPixel) / step_u;
float _dFdY = length(bottomPixel - centerPixel) / step_v;
// display
gl_FragColor.r = _dFdX;
gl_FragColor.g = _dFdY;
gl_FragColor.a = 1.;
A few important things:
texture should not use mipmaps
texture min & mag filtering should be set to GL_NEAREST
texture clamp mode should be set to clamp (not repeat)
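In WebGL, those settings would look something like this (a sketch, assuming the texture is currently bound to gl.TEXTURE_2D):
// NEAREST filtering, no mipmaps, clamped wrap mode.
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE );
gl.texParameteri( gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE );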
And here is a ShaderToy sample demonstrating this:

spherical mapping artefacts

I have some problems with sphere-map texturing in WebGL.
My texture:
Now I texture a sphere. Everything is OK if the sphere is in front of the camera:
The sphere is a unit sphere (r = 1), defined with longitudes and latitudes.
But I get some artefacts if I translate the sphere by -2.5 in the x-direction (without rotating the camera):
This image is without mipmapping. And the following image is with mipmapping:
Vertices and normals seem to be OK.
Vertex shader:
precision highp float;
uniform mat4 mvMatrix; // matrix to transform a vertex from model space to view space
uniform mat4 mvpMatrix; // matrix to transform a vertex from model space to clip space
uniform mat3 mvNMatrix; // matrix to transform a vertex normal from model space to view space
attribute vec4 mV; // vertex in model space
attribute vec3 mVN; // vertex normal in model space
varying vec2 vN;
void main(void)
{
vec3 e = normalize( vec3( mvMatrix * mV ) );
vec3 n = normalize( mvNMatrix * mVN );
vec3 r = reflect( e, n );
//float d = dot(n, e);
//vec3 r = e - 2.0 * d * n;
float m = 2.0 * sqrt(
pow( r.x, 2.0 ) +
pow( r.y, 2.0 ) +
pow( r.z + 1.0, 2.0 )
);
vN.s = r.x / m + 0.5;
vN.t = r.y / m + 0.5;
gl_Position = mvpMatrix * mV;
}
And fragment shader:
precision highp float;
uniform sampler2D uSampler;
varying vec2 vN;
void main(void)
{
vec3 base = texture2D( uSampler, vN ).rgb;
gl_FragColor = vec4( base, 1.0 );
}
Does anybody know why I get these artefacts? I am working with Windows and Firefox.
Your surface normals are backward, or you're culling the wrong side in general. You've effectively made an azimuthal projection, but you're rendering the inside of the sphere instead of the outside, so you're seeing more than 180 degrees, including the 'bridge' that separates front and back. See also: map projections, which show that an azimuthal mapping covers at most what is within line of sight; everything beyond the true equator is excluded.
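If it is indeed a winding/culling problem, a quick way to test that in WebGL is to cull back faces explicitly (a sketch; counter-clockwise front faces are the default):
gl.enable( gl.CULL_FACE );
gl.cullFace( gl.BACK );
gl.frontFace( gl.CCW );
// If the sphere then disappears completely, the winding is reversed:
// try gl.frontFace( gl.CW ), or negate the normals instead.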

Adding projection matrix to opengl es point sprites particle effect vertex shader

I have been learning OpenGL ES from the OpenGL ES 2.0 Programming Guide. It has a particle effect that looks like an explosion. I am trying to enhance their example code by adding a mat4 projection matrix to the vertex shader. The shader compiles and works, but I am having problems getting the effect positioned correctly once the projection is taken into account. The code I have is as follows:
const char* ParticleExplosionVertexShader = STRINGIFY (
uniform float u_time;
uniform vec3 u_centerPosition;
uniform mat4 Projection;
attribute float a_lifetime;
attribute vec3 a_startPosition;
attribute vec3 a_endPosition;
varying float v_lifetime;
void main()
{
if ( u_time <= a_lifetime )
{
gl_Position.xyz = a_startPosition + (u_time * a_endPosition);
gl_Position.xyz += u_centerPosition;
gl_Position.w = 1.0;
}
else
gl_Position = vec4( -1000, -1000, 0, 0 );
v_lifetime = 1.0 - ( u_time / a_lifetime );
v_lifetime = clamp ( v_lifetime, 0.0, 1.0 );
gl_PointSize = ( v_lifetime * v_lifetime ) * 40.0;
}
);
I am able to apply the projection to the line below without any errors, but unfortunately it's not really needed there, as that code just places the particle off screen at the end of its lifetime:
gl_Position = Projection * vec4( -1000, -1000, 0, 0 );
I have also tried changing the line
gl_Position.xyz += u_centerPosition;
to
gl_Position += Projection * u_centerPosition;
But I have had no luck getting it placed where I want it.
Am I doing something wrong? Or is there a reason the book didn't use a projection matrix, such as it not being something you should do with point sprites?
Any help or pointers to what I should look into will be appreciated.
Thanks
Edit: Please let me know if you need more information from me
What about multiplying the whole gl_Position by the modelview-projection matrix, as with any normal geometry?
Also, you will probably need to modify the line that calculates gl_PointSize: for example, try dividing it by gl_Position.w (after the multiplication by the modelview-projection matrix), otherwise the sprites will all have the same size regardless of distance (is that what you are trying to fix?).
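A sketch of the shader's main() with both suggestions applied (untested; it reuses the question's Projection uniform as the combined modelview-projection matrix):
void main()
{
    if ( u_time <= a_lifetime )
    {
        vec3 pos = a_startPosition + ( u_time * a_endPosition ) + u_centerPosition;
        gl_Position = Projection * vec4( pos, 1.0 );
        v_lifetime = clamp( 1.0 - ( u_time / a_lifetime ), 0.0, 1.0 );
        // Dividing by w makes distant sprites smaller after projection.
        gl_PointSize = ( v_lifetime * v_lifetime ) * 40.0 / gl_Position.w;
    }
    else
    {
        gl_Position = vec4( -1000.0, -1000.0, 0.0, 1.0 );
        v_lifetime = 0.0;
        gl_PointSize = 0.0;
    }
}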
