OpenGL ES 2.0 Point light shader - iOS

I'm currently trying to write a shader that should include a simple point light in OpenGL ES 2.0, but it's not quite working.
I built my own small scene graph, and each object (currently only boxes) can have its own translation/rotation/scale; rendering works fine. Each box assigns its own modelView and normal matrix, and all of them use the same projection matrix.
For each object I pass the matrices and the light position to the shader as uniforms.
If the object does not rotate, the light works fine, but as soon as the object rotates, the light seems to rotate with the object instead of staying at the same position.
Here is some code.
First, creating the matrices:
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 0.1f, 100.0f);
GLKMatrix4 modelViewMatrix = GLKMatrix4MakeTranslation(0.0f, 0.0f, 0.0f);
Each of the nodes computes its own transformation matrix containing the translation/rotation/scale and multiplies it with the modelViewMatrix:
modelViewMatrix = GLKMatrix4Multiply(modelViewMatrix, transformation);
This matrix is passed to the shader, and after the object has been rendered, the old matrix is restored.
The normal matrix is calculated as follows:
GLKMatrix3 normalMatrix = GLKMatrix3InvertAndTranspose(GLKMatrix4GetMatrix3(modelViewMatrix), NULL);
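As a quick CPU-side sanity check of that inverse-transpose (a sketch in plain JavaScript rather than GLKit; invertTranspose3 is a hypothetical helper): for a uniform scale by 2, the normal matrix should be a uniform scale by 0.5.

```javascript
// Compute the normal matrix (inverse-transpose) of a 3x3 matrix.
// Matrices are row-major arrays of 9 numbers; a plain stand-in for
// GLKMatrix3InvertAndTranspose, for sanity-checking only.
function invertTranspose3(m) {
  const [a, b, c, d, e, f, g, h, i] = m;
  const det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g);
  // Inverse via the adjugate divided by the determinant.
  const inv = [
    (e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det,
    (f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det,
    (d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det,
  ];
  // Transpose the inverse to get the normal matrix.
  return [inv[0], inv[3], inv[6], inv[1], inv[4], inv[7], inv[2], inv[5], inv[8]];
}

// A uniform scale by 2: its normal matrix is a uniform scale by 0.5.
const n = invertTranspose3([2, 0, 0, 0, 2, 0, 0, 0, 2]);
```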
Vertex-Shader:
attribute vec4 Position;
attribute vec2 TexCoordIn;
attribute vec3 Normal;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
uniform vec3 lightPosition;
varying vec2 TexCoordOut;
varying vec3 n, PointToLight;
void main(void) {
gl_Position = modelViewProjectionMatrix * Position;
n = normalMatrix * Normal;
PointToLight = ((modelViewMatrix * vec4(lightPosition,1.0)) - (modelViewMatrix * Position)).xyz;
// Pass texCoord
TexCoordOut = TexCoordIn;
}
Fragment-Shader:
varying lowp vec2 TexCoordOut;
varying highp vec3 n, PointToLight;
uniform sampler2D Texture;
void main(void) {
gl_FragColor = texture2D(Texture, TexCoordOut);
highp vec3 nn = normalize(n);
highp vec3 L = normalize(PointToLight);
lowp float NdotL = clamp(dot(nn, L), -0.8, 1.0); // use the normalized normal
gl_FragColor *= (NdotL+1.)/2.;
}
I guess PointToLight is computed incorrectly, but I can't figure out what's going wrong.

I finally figured out what went wrong.
Instead of multiplying the lightPosition by the modelViewMatrix, I just need to multiply it by the viewMatrix, which contains only the camera transform and not the transform of the box:
PointToLight = ((viewMatrix * vec4(lightPosition,1.0)) - (viewMatrix * modelMatrix * Position)).xyz;
Now it works fine.
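The same fix can be sanity-checked on the CPU. A minimal JavaScript sketch (tiny column-major helpers, illustrative only): transforming the light by the full model-view matrix drags it along with the object's rotation, while transforming it by the view matrix alone leaves it fixed in the world.

```javascript
// Transform a column vector by a 4x4 matrix stored column-major
// (the GLKit/OpenGL convention). For illustration only.
function transform(m, v) {
  const out = [0, 0, 0, 0];
  for (let row = 0; row < 4; row++) {
    for (let col = 0; col < 4; col++) {
      out[row] += m[col * 4 + row] * v[col];
    }
  }
  return out;
}

// Rotation of 90 degrees about Y, column-major.
const rotY90 = [0, 0, -1, 0,  0, 1, 0, 0,  1, 0, 0, 0,  0, 0, 0, 1];
const identity = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0,  0, 0, 0, 1];

const lightWorld = [3, 0, 0, 1];
const view = identity; // camera at the origin, for simplicity

// Wrong: the full model-view matrix rotates the light with the object.
const wrong = transform(rotY90, lightWorld); // light moves to (0, 0, -3)
// Right: the view matrix alone leaves the light at (3, 0, 0).
const right = transform(view, lightWorld);
```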

Related

converting pixels to clipspace

Instead of giving -1 to 1 values to my shaders, I would prefer giving them pixel values, as with the 2D canvas context. So, following what I read, I added a uniform variable which I set to the size of the canvas, and I divide by it.
But I must be missing something: the rendering is way too big...
gl_.resolutionLocation = gl.getUniformLocation( gl_.program , "u_resolution" );
gl.uniform4f(gl_.resolutionLocation , game.w , game.h , game.w , game.h );
My vertex shader :
attribute vec4 position;
attribute vec2 texcoord;
uniform vec4 u_resolution;
uniform mat4 u_matrix;
varying vec3 v_texcoord;
void main() {
vec4 zeroToOne = position / u_resolution ;
gl_Position = u_matrix * zeroToOne ;
v_texcoord = vec3(texcoord.xy, 1) * abs(position.x);
v_texcoord = v_texcoord/u_resolution.xyz ;
}
My fragment shader :
precision mediump float;
varying vec3 v_texcoord;
uniform sampler2D tex;
uniform float alpha;
void main()
{
gl_FragColor = texture2DProj(tex, v_texcoord);
gl_FragColor.rgb *= gl_FragColor.a ;
}
If you want to stay in pixels with code like the code you have then you'd want to apply the conversion to clip space after you've done everything in pixels.
In other words the code would be something like
rotatedPixelPosition = rotationMatrix * pixelPosition
clipSpacePosition = (rotatedPixelPosition / resolution) * 2.0 - 1.0;
So in other words you'd want
vec4 rotatedPosition = u_matrix * position;
vec2 zeroToOne = rotatedPosition.xy / u_resolution.xy;
vec2 zeroToTwo = zeroToOne * 2.0;
vec2 minusOneToPlusOne = zeroToTwo - 1.0;
vec2 clipspacePositiveYDown = minusOneToPlusOne * vec2(1, -1);
gl_Position = vec4(clipspacePositiveYDown, 0, 1);
If you do that and set u_matrix to the identity, then positions given in pixels will appear at those pixel positions. If u_matrix is strictly a rotation matrix, the positions will rotate around the top-left corner, since rotation always happens around 0 and the conversion above puts 0 at the top-left corner.
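That pixel-to-clip-space conversion is easy to sanity-check on the CPU. A sketch (pixelToClip is a hypothetical helper, assuming a y-down pixel origin at the top left):

```javascript
// Convert a pixel coordinate (y-down, origin at top-left) to WebGL
// clip space (y-up, origin at center).
function pixelToClip(x, y, width, height) {
  const clipX = (x / width) * 2 - 1;
  const clipY = ((y / height) * 2 - 1) * -1; // flip Y so pixel y goes down
  return [clipX, clipY];
}
```

With an 800x600 canvas, (0, 0) maps to (-1, 1) and (800, 600) maps to (1, -1).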
But really there's no reason to convert from pixels to clip space by hand. You can instead convert and rotate all in the same matrix. This article covers that process: it starts with translation, rotation, scale, and converting from pixels to clip space with no matrices, and converts that to something that does all of it with a single matrix.
Effectively
matrix = scaleYByMinusMatrix *
subtract1FromXYMatrix *
scaleXYBy2Matrix *
scaleXYBy1OverResolutionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
And then in your shader you only need
gl_Position = u_matrix * vec4(position, 0, 1);
Those top 4 matrices are easy to compute as a single matrix, often called an orthographic projection, in which case it simplifies to
matrix = projectionMatrix *
translationInPixelSpaceMatrix *
rotationInPixelSpaceMatrix *
scaleInPixelSpaceMatrix;
There's also this article, which reproduces the matrix stack from Canvas2D in WebGL.
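The combined projection matrix is small enough to write out directly. A JavaScript sketch in the style of 3x3 "m3" helpers for 2D WebGL (names are assumptions):

```javascript
// 3x3 matrices in column-major order, as commonly used for 2D WebGL.
// Maps pixel space (y-down) to clip space (y-up) in one matrix.
function projection(width, height) {
  return [
    2 / width, 0, 0,
    0, -2 / height, 0,
    -1, 1, 1,
  ];
}

// Transform a 2D point by a 3x3 matrix (implicit w = 1).
function transformPoint(m, x, y) {
  return [
    m[0] * x + m[3] * y + m[6],
    m[1] * x + m[4] * y + m[7],
  ];
}

// Pixel (0, 0) lands at clip (-1, 1); the center of an 800x600
// canvas lands at clip (0, 0).
const topLeft = transformPoint(projection(800, 600), 0, 0);
const center = transformPoint(projection(800, 600), 400, 300);
```

Multiplying this projection matrix by translation, rotation, and scale matrices gives the single u_matrix the answer describes.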

WebGL custom shadow mapping shader code not working

I've seen different methods for shadow mapping all around the internet, but I've only seen one method for mapping the shadows cast by a point light source: cube mapping. Even though I've heard of it, I've never seen an actual explanation of it.
I started writing this code before I had heard of cube mapping. My goal with this code was to map the shadow depths from spherical coordinates to a 2D texture.
I've simplified the coloring of the fragments for now in order to better visualize what's happening.
Basically, the models are a sphere of radius 2.0 at (0.0, 0.0, -5.0) and a hyperboloid of height 1.0 at (0.0, 0.0, -2.0), with the light source at (0.0, 0.0, 8.0).
If I scale (see the code) the depth values by an inverse factor of less than 9.6, both objects appear completely colored with the ambient color. Greater than 9.6, and they gradually become normally textured. I tried to make an example in jsfiddle, but I couldn't get textures to work.
The method isn't working at all, and I'm lost.
<script id="shadow-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
varying float vDepth;
uniform vec3 uLightLocation;
uniform mat4 uMMatrix;
void main(void){
const float I_PI = 0.318309886183790671537767; //Inverse pi
vec4 aPos = uMMatrix * vec4(aVertexPosition, 1.0); //The actual position of the vertex
vec3 position = aPos.xyz - uLightLocation; //The position of the vertex relative to the light source i.e. "the vector"
float len = length(position);
float theta = 2.0 * acos(position.y/len) * I_PI - 1.0; //The angle of the vector from the xz plane bound between -1 and 1
float phi = atan(position.z, position.x) * I_PI; //The angle of the vector on the xz plane bound between -1 and 1
vDepth = len; //Divided by some scale. The depth of the vertex from the light source
gl_Position = vec4(phi, theta, len, 1.0);
}
</script>
<script id="shadow-fs" type="x-shader/x-fragment">
precision mediump float;
varying float vDepth;
void main(void){
gl_FragColor = vec4(vDepth, 0.0, 0.0, 1.0); //Records the depth in the red channel of the fragment color
}
</script>
<script id="shader-vs" type="x-shader/x-vertex">
attribute vec3 aVertexPosition;
attribute vec3 aVertexNormal;
attribute vec2 aTextureCoord;
uniform mat4 uMVMatrix;
uniform mat4 uPMatrix;
uniform mat4 uMMatrix;
uniform mat3 uNMatrix;
varying vec2 vTextureCoord;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
varying vec4 aPos;
void main(void) {
aPos = uMMatrix * vec4(aVertexPosition, 1.0); //The actual position of the vertex
vPosition = uMVMatrix * uMMatrix * vec4(aVertexPosition, 1.0);
gl_Position = uPMatrix * vPosition;
vTextureCoord = aTextureCoord;
vTransformedNormal = normalize(uNMatrix * mat3(uMMatrix) * aVertexNormal);
}
</script>
<script id="shader-fs" type="x-shader/x-fragment">
precision mediump float;
varying vec2 vTextureCoord;
varying vec3 vTransformedNormal;
varying vec4 vPosition;
varying vec4 aPos;
uniform sampler2D uSampler;
uniform sampler2D uShSampler;
uniform vec3 uLightLocation;
uniform vec3 uAmbientColor;
uniform vec4 uLightColor;
void main(void) {
const float I_PI = 0.318309886183790671537767;
vec3 position = aPos.xyz - uLightLocation; //The position of the vertex relative to the light source i.e. "the vector"
float len = length(position);
float theta = acos(position.y/len) * I_PI; //The angle of the vector from the xz axis bound between 0 and 1
float phi = 0.5 + 0.5 * atan(position.z, position.x) * I_PI; //The angle of the vector on the xz axis bound between 0 and 1
float posDepth = len; //Divided by some scale. The depth of the vertex from the light source
vec4 shadowMap = texture2D(uShSampler, vec2(phi, theta)); //The color at the texture coordinates of the current vertex
float shadowDepth = shadowMap.r; //The depth of the vertex closest to the light source
if (posDepth > shadowDepth){ //Check if this vertex is further away from the light source than the closest vertex
gl_FragColor = vec4(uAmbientColor, 1.0);
}
else{
gl_FragColor = texture2D(uSampler, vec2(vTextureCoord.s, vTextureCoord.t));
}
}
</script>
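The phi/theta mapping used in the lookup can be mirrored on the CPU to check where a given direction lands in the shadow texture. A sketch (dirToShadowUV is a hypothetical helper) reproducing the fragment shader's math, mapping both angles into [0, 1]:

```javascript
const I_PI = 1 / Math.PI; // inverse pi, as in the shaders

// Map a direction from the light to (phi, theta) texture coordinates
// in [0, 1]^2, mirroring the fragment shader's shadow-map lookup.
function dirToShadowUV(x, y, z) {
  const len = Math.hypot(x, y, z);
  const theta = Math.acos(y / len) * I_PI;          // 0 at +y, 1 at -y
  const phi = 0.5 + 0.5 * Math.atan2(z, x) * I_PI;  // wraps around the y axis
  return [phi, theta];
}

// The +y direction maps to theta = 0; the +x direction maps to
// phi = 0.5, theta = 0.5.
const up = dirToShadowUV(0, 1, 0);
const px = dirToShadowUV(1, 0, 0);
```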

WebGl Spotlight Shader

I have been trying to program a basic WebGL spotlight shader, but no matter how hard I try, I cannot get the spotlight's position to be relative to the world. The code that I am currently using is below. I have tried almost every coordinate frame I can to get this working, but no matter what I do, I only get partially correct results.
For example, if I switch to world coordinates the spotlight's position is correct, but it reflects off only one object; if I use view space the light works, but its position is relative to the camera.
In its current state, the spotlight seems to be relative to each object's frame. (Not sure why.) Any help in solving this issue is greatly appreciated.
Vertex Shader:
attribute vec4 vPosition;
attribute vec4 vNormal;
attribute vec2 vTexCoord;
varying vec3 L, E, N,D;
varying vec2 fTexCoord;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
uniform mat4 NormalMatrix;
uniform mat4 View;
uniform vec4 lightPosition;
uniform vec4 lightDirection;
uniform vec3 Eye;
void main(){
L= (modelViewMatrix * lightPosition).xyz; //Light position in eye coordinates
E = (modelViewMatrix * vPosition).xyz; //Vertex position in eye coordinates.
//Normal position in eye coordinate. Transpose(Inverse(modelViewMatrix) * vNormal.
N=(NormalMatrix * vNormal).xyz;
D=lightDirection.xyz;//Light direction
fTexCoord=vTexCoord;
gl_Position = projectionMatrix * modelViewMatrix * vPosition;
}
Fragment Shader:
precision mediump float;
uniform vec4 lDiffuseColor;
uniform vec4 lSpecular;
uniform vec4 lAmbientColor;
uniform float lShininess;
varying vec3 L,E,N,D;
const float lExponent=2.0;
const float lCutoff=0.867;
vec3 lWeight=vec3(0,0,0);
void main(){
vec3 vtoLS=normalize(L - E);//Vector to light source from vertex.
float Ks=pow(max(dot(normalize(N),vtoLS),0.0),lShininess);
vec3 specular=Ks* lSpecular.xyz;
float diffuseWeight=max(dot(normalize(N), -vtoLS),0.0);
vec3 diffuse=diffuseWeight * lDiffuseColor.xyz;
if(diffuseWeight >0.0){
float lEffect= dot(normalize(D),normalize(-vtoLS));
if(lEffect > lCutoff){
lEffect= pow(lEffect,Ks);
vec3 reflection= normalize(reflect(-vtoLS,normalize(N)));
vec3 vEye=-normalize(E);
float rdotv=max(dot(reflection,vEye),0.0);
float specularWeight=pow(rdotv,lShininess);
lWeight= (lEffect * diffuse.xyz + lEffect * specular.xyz) + vec3(0.5,0,0);
}
}
lWeight+=lAmbientColor.xyz;
gl_FragColor=vec4(lWeight.rgb,1);
}
Current Output: http://sta.sh/012uh5hwwlse
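Independently of the coordinate-frame problem, the cone test itself can be reproduced on the CPU for debugging. A sketch (spotFactor is a hypothetical helper) using the same lCutoff of 0.867, i.e. a cone of roughly 30 degrees:

```javascript
// Spotlight cone test: returns the cosine of the angle between the
// spot direction and the direction to the fragment, and whether the
// fragment is inside the cutoff, mirroring the lEffect check above.
const lCutoff = 0.867; // cos(~30 degrees)

function spotFactor(spotDir, toFragment) {
  const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
  const norm = (v) => {
    const l = Math.hypot(v[0], v[1], v[2]);
    return [v[0] / l, v[1] / l, v[2] / l];
  };
  const lEffect = dot(norm(spotDir), norm(toFragment));
  return { lEffect, lit: lEffect > lCutoff };
}

// A fragment straight down the spot axis is lit; one at 90 degrees
// off-axis is not.
const onAxis = spotFactor([0, 0, -1], [0, 0, -1]);
const offAxis = spotFactor([0, 0, -1], [1, 0, 0]);
```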

How can I take advantage of lookup tables in my Blinn-Phong lighting shader?

I'm experimenting with some lighting techniques on iOS and I've been able to produce some effects that I'm pleased with by taking advantage of iOS' OpenGL ES extensions for depth lookup textures and a relatively simple Blinn-Phong shader:
The above shows 20 Suzanne monkeys being rendered at full-screen retina with multi-sampling and the following shader. I'm doing multi-sampling because it is only adding 1ms per frame. My current average render time is 30ms total (iPad 3), which is far too slow for 60fps.
Vertex shader:
//Position
uniform mat4 mvpMatrix;
attribute vec4 position;
uniform mat4 depthMVPMatrix;
uniform mat4 vpMatrix;
//Shadow out
varying vec3 ShadowCoord;
//Lighting
attribute vec3 normal;
varying vec3 normalOut;
uniform mat3 normalMatrix;
varying vec3 vertPos;
uniform vec4 lightColor;
uniform vec3 lightPosition;
void main() {
gl_Position = mvpMatrix * position;
//Used for handling shadows
ShadowCoord = (depthMVPMatrix * position).xyz;
ShadowCoord.z -= 0.01;
//Lighting calculations
normalOut = normalize(normalMatrix * normal);
vec4 vertPos4 = vpMatrix * position;
vertPos = vertPos4.xyz / vertPos4.w;
}
Fragment shader:
#extension GL_EXT_shadow_samplers : enable
precision lowp float;
uniform sampler2DShadow shadowTexture;
varying vec3 normalOut;
uniform vec3 lightPosition;
varying vec3 vertPos;
varying vec3 ShadowCoord;
uniform vec4 fillColor;
uniform vec3 specColor;
void main() {
vec3 normal = normalize(normalOut);
vec3 lightDir = normalize(lightPosition - vertPos);
float lambertian = max(dot(lightDir,normal), 0.0);
vec3 reflectDir = reflect(-lightDir, normal);
vec3 viewDir = normalize(-vertPos);
float specAngle = max(dot(reflectDir, viewDir), 0.0);
float specular = pow(specAngle, 16.0);
gl_FragColor = vec4((lambertian * fillColor.xyz + specular * specColor) * shadow2DEXT(shadowTexture, ShadowCoord), fillColor.w);
}
I've read that it is possible to use textures as lookup tables to reduce computation in the fragment shader; however, the linked example seems to be doing full Phong lighting rather than Blinn-Phong (I'm not doing anything with surface tangents). Furthermore, when running the sample, the lighting seemed fairly banded (the background in mine, which is a solid color plus Phong shading, looks slightly banded as a result of compression; it looks far smoother on the device).
Is it possible to use a lookup texture in my case, or am I going to have to move down to 30fps (which I can just about achieve), turn off multi-sampling, and limit Phong shading to the monkeys rather than the full screen? In a real-world (i.e. game) scenario, am I going to need to be doing Phong shading across the entire screen anyway?
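One way the lookup-table idea applies here: the specular pow() can be baked into a small 1D texture indexed by the dot product, replacing the pow call with a texture fetch. A CPU sketch of building and sampling such a table (names are assumptions; in practice the table would be uploaded as a texture and sampled in the fragment shader):

```javascript
// Bake pow(x, shininess) into a 256-entry lookup table, as would be
// uploaded to a 256x1 luminance texture.
function buildSpecularLUT(shininess, size = 256) {
  const lut = new Float32Array(size);
  for (let i = 0; i < size; i++) {
    lut[i] = Math.pow(i / (size - 1), shininess);
  }
  return lut;
}

// Nearest-neighbor lookup, standing in for a texture fetch with
// NEAREST filtering.
function lookupSpecular(lut, x) {
  const i = Math.min(lut.length - 1,
                     Math.max(0, Math.round(x * (lut.length - 1))));
  return lut[i];
}

const lut = buildSpecularLUT(16.0);
```

Whether the texture fetch actually beats pow() depends on the GPU; on some hardware pow is cheap and the extra bandwidth loses, so it is worth profiling both.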

OpenGL ES performance 2.0 vs 1.1 (iPad)

In my simple 2D game I see a 2x framerate drop when using the ES 2.0 implementation for drawing. Is that expected, or should 2.0 be faster when used properly?
P.S. If you are interested in details. I use very simple shaders:
vertex program:
uniform vec2 u_xyscale;
uniform vec2 u_st_to_uv;
attribute vec2 a_vertex;
attribute vec2 a_texcoord;
attribute vec4 a_diffuse;
varying vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
v_diffuse = a_diffuse;
// convert texture coordinates from ST space to UV.
v_texcoord = a_texcoord * u_st_to_uv;
// transform XY coordinates from screen space to clip space.
gl_Position.xy = a_vertex * u_xyscale + vec2( -1.0, 1.0 );
gl_Position.zw = vec2( 0.0, 1.0 );
}
fragment program:
precision mediump float;
uniform sampler2D t_bitmap;
varying lowp vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
vec4 color = texture2D( t_bitmap, v_texcoord );
gl_FragColor = v_diffuse * color;
}
"color" is a mediump variable, due to the default precision that you specified. This forces the implementation to convert the lowp sampled result to mediump. It also requires that diffuse be converted to mediump to perform the multiplication.
You can fix this by declaring "color" as lowp.
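Applied to the fragment program above, that fix is a one-word change. A sketch with the shader source embedded as a JavaScript string (the embedding is just for illustration; the GLSL is what matters):

```javascript
// Fragment shader with "color" declared lowp, so the lowp texture2D
// result is not promoted to mediump before the multiply.
const fragmentSrc = `
precision mediump float;
uniform sampler2D t_bitmap;
varying lowp vec4 v_diffuse;
varying vec2 v_texcoord;
void main(void)
{
    lowp vec4 color = texture2D( t_bitmap, v_texcoord );
    gl_FragColor = v_diffuse * color;
}
`;
```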
