I'm working on a simple Phong shader in WebGL, and I think I'm getting close, but something is still wrong. Dead giveaway: if I have a billboard and have it roll (so it spins like a wheel), the part of the billboard that is lit up spins with it :(. This confuses me, because it seems like a problem with the model matrix, but the transform puts all the positions and rotations in the right place; only the lighting is wrong. Ditto with the view matrix: I can move around and look freely and everything is located where it should be, just lit wrong.
Here are my shaders (declarations omitted for space, and with the light-position transform moved from the CPU into the shader for clarity). If you prefer reading on GitHub: https://github.com/nickgeorge/quantum/blob/master/index.html#L41
<script id="fragment-shader" type="x-shader/x-fragment">
void main(void) {
vec3 lightWeighting;
if (!uUseLighting) {
lightWeighting = vec3(1.0, 1.0, 1.0);
} else {
vec3 lightDirection = normalize(vLightPosition.xyz - vPosition.xyz);
float directionalLightWeighting = max(0.0, dot(
normalize(vTransformedNormal),
lightDirection));
lightWeighting = uAmbientColor + uPointLightingColor * directionalLightWeighting;
}
vec4 fragmentColor;
if (uUseTexture) {
fragmentColor = texture2D(uSampler,
vec2(vTextureCoord.s, vTextureCoord.t));
} else {
fragmentColor = uColor;
}
gl_FragColor = vec4(fragmentColor.rgb * lightWeighting, fragmentColor.a);
}
</script>
<script id="vertex-shader" type="x-shader/x-vertex">
void main(void) {
vPosition = uModelMatrix * vec4(aVertexPosition, 1.0);
// TODO: Move back to CPU
vLightPosition = uModelMatrix * vec4(uPointLightingLocation, 1.0);
gl_Position = uPerspectiveMatrix * uViewMatrix * vPosition;
vTextureCoord = aTextureCoord;
vTransformedNormal = normalize(uNormalMatrix * aVertexNormal);
}
</script>
Thanks a lot, and let me know if there's anything else useful to add.
Dunno what the policy is around here for answering your own question, but in case someone stumbles upon this with the same problem:
I was transforming my light position by the model matrix. This doesn't make any sense, because the light position is already in world coordinates, so there's no need to transform it at all.
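For reference, a minimal sketch of the corrected vertex shader (declarations still omitted, as above): the only change is passing the light position through untouched instead of multiplying it by uModelMatrix.

  void main(void) {
    vPosition = uModelMatrix * vec4(aVertexPosition, 1.0);
    // uPointLightingLocation is already in world space, so no model transform
    vLightPosition = vec4(uPointLightingLocation, 1.0);
    gl_Position = uPerspectiveMatrix * uViewMatrix * vPosition;
    vTextureCoord = aTextureCoord;
    vTransformedNormal = normalize(uNormalMatrix * aVertexNormal);
  }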
I'm playing around with WebGL and scripted a simple flat-shaded cube.
I have a shader that takes a projection matrix, a model-view matrix and a normal matrix; nothing fancy:
(...)
void main(void) {
  gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
  vTextureCoord = aTextureCoord;

  vec3 transformedNormal = uNMatrix * aVertexNormal;
  float directionalLightWeighting = max(dot(transformedNormal, uLightingDirection), 0.0);
  vLightWeighting = uAmbientColor + uDirectionalColor * directionalLightWeighting;
}
Everything is fine and the flat shading looks good, but as soon as I resize the cube (the mat4.scale call below), the shading no longer has any effect on the scene. If I scale the computed normal matrix down by the inverse factor, it works again.
The code follows this schema (drawing pseudo-routine):
projection = mat4.ortho
// set up general camera view
view = mat4.lookAt
// set up cube position / scaling / rotation on view matrix
mat4.translate(view)
mat4.scale(view) // remove for nice shading ..
mat4.rotate(view)
// normalFromMat4 returns upper-left 3x3 inverse transpose
normal = mat4.normalFromMat4 ( view )
pass projection, view, normal to shader
gl.drawElements
I am using gl-matrix as my math library.
Any ideas where my mistake lies?
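One thing worth double-checking, since it is not shown in the snippet above: whether the transformed normal ever gets re-normalized. normalFromMat4 returns the inverse transpose of the upper-left 3x3, so a uniform scale s shows up in it as a factor of 1/s, and without re-normalization the diffuse dot product gets scaled by that same factor. A minimal sketch of the usual fix, assuming the normal's length is not needed anywhere else:

  // re-normalize after the normal-matrix multiply so any scale factor drops out
  vec3 transformedNormal = normalize(uNMatrix * aVertexNormal);
  float directionalLightWeighting = max(dot(transformedNormal, uLightingDirection), 0.0);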
I'm trying to follow the suggestion in Apple's OpenGL ES Programming Guide section on instanced drawing: Use Instanced Drawing to Minimize Draw Calls. I started with the example project that Xcode generates for a Game app with OpenGL and Swift, converted it to OpenGL ES 3.0, and added some instanced drawing to duplicate the cube.
This works fine when I use the gl_InstanceID technique and simply generate an offset from that. But when I try to use the 'instanced arrays' technique to pass data in via a buffer I am not seeing any results.
My updated vertex shader looks like this:
#version 300 es

in vec4 position;
in vec3 normal;
layout(location = 5) in vec2 instOffset;

out lowp vec4 colorVarying;

uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;

void main()
{
    vec3 eyeNormal = normalize(normalMatrix * normal);
    vec3 lightPosition = vec3(0.0, 0.0, 1.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);

    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    colorVarying = diffuseColor * nDotVP;

    // gl_Position = modelViewProjectionMatrix * position + vec4(float(gl_InstanceID) * 1.5, float(gl_InstanceID) * 1.5, 1.0, 1.0);
    gl_Position = modelViewProjectionMatrix * position + vec4(instOffset, 1.0, 1.0);
}
and in my setupGL() method I have added the following:
//glGenVertexArraysOES(1, &instArray) // EDIT: WRONG
//glBindVertexArrayOES(instArray) // EDIT: WRONG
let kMyInstanceDataAttrib = 5
glGenBuffers(1, &instBuffer)
glBindBuffer(GLenum(GL_ARRAY_BUFFER), instBuffer)
glBufferData(GLenum(GL_ARRAY_BUFFER), GLsizeiptr(sizeof(GLfloat) * instData.count), &instData, GLenum(GL_STATIC_DRAW))
glEnableVertexAttribArray(GLuint(kMyInstanceDataAttrib))
glVertexAttribPointer(GLuint(kMyInstanceDataAttrib), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0/*or 8?*/, BUFFER_OFFSET(0))
glVertexAttribDivisor(GLuint(kMyInstanceDataAttrib), 1);
along with some simple instance offset data:
var instData: [GLfloat] = [
1.5, 1.5,
2.5, 2.5,
3.5, 3.5,
]
I am drawing the same way as with the gl_InstanceID technique:
glDrawArraysInstanced(GLenum(GL_TRIANGLES), 0, 36, 3)
But it seems to have no effect. I just get a single cube, and it doesn't even seem to fail if I remove the buffer setup, so I suspect my setup is missing something.
EDIT: Fixed the code by removing two bogus lines from init.
I had an unnecessary gen and bind for a second vertex array object; the instance-attribute setup has to happen while the vertex array object used to draw the cube is bound. The code as edited above now works.
I'm trying to create a fragment shader to recolor a 2D grayscale sprite but leave white and near-white fragments intact (i.e., don't recolor pure white fragments, and only slightly recolor near-white fragments). I'm not sure how to do this without using a conditional branch, which results in poor performance on certain hardware.
The existing shader in the game engine just performs a simple multiplication:
#ifdef GL_ES
precision lowp float;
#endif

varying vec4 v_fragmentColor;
varying vec2 v_texCoord;

uniform sampler2D CC_Texture0;

void main()
{
    vec4 texColor = texture2D(CC_Texture0, v_texCoord);
    gl_FragColor = texColor * v_fragmentColor;
}
I think that in order to avoid the conditional, I need some sort of continuous mathematical function that recolors fragments with RGB values above, say, (0.9, 0.9, 0.9) less strongly than it recolors fragments below that threshold.
Any help would be great!
I would do something like this: calculate the fully recolored pixel, then mix it with the original based on the fragment's luminance. Here's an idea:
vec4 texColor = texture2D(CC_Texture0, v_texCoord);
const vec4 kLumWeights = vec4(.2126, .7152, .0722, 0.0); // Rec. 709 luminance weights
float luminance = dot (texColor, kLumWeights);
vec4 recolored = texColor * v_fragmentColor;
const float kThreshold = 0.8;
float mixAmount = (luminance - kThreshold) / (1.0 - kThreshold); // Everything below kThreshold becomes 0, and from kThreshold to 1.0 becomes 0 to 1.0
mixAmount = clamp (mixAmount, 0.0, 1.0);
gl_FragColor = mix (recolored, texColor, mixAmount);
Let me know if that works.
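As a small variation on the same idea (not required, just a design choice): GLSL's built-in smoothstep does the ramp and clamp in one call, with a slightly softer transition near the two edges.

// smoothstep clamps to [0, 1] and eases in/out between the two edges
float mixAmount = smoothstep(kThreshold, 1.0, luminance);
gl_FragColor = mix(recolored, texColor, mixAmount);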
I am learning 3D programming from the book Learning Modern 3D Graphics Programming, but I am having no luck with the shaders and GLES 2.0 on iOS. I am working from the Xcode 4 OpenGL Game template, with changes to match the first example in the book.
The first shaders in the book will not compile, giving lots of different errors. The first vertex shader:
#version 330

layout(location = 0) in vec4 position;

void main()
{
    gl_Position = position;
}
It complains about the version statement and refuses to allow layout as a qualifier. I finally managed to get this to build:
attribute vec4 position;

void main()
{
    gl_Position = position;
}
Again, the first fragment shader refuses to build due to the version statement, and will not allow the out variable at global scope:
#version 330

out vec4 outputColor;

void main()
{
    outputColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
}
With the error
ERROR: 0:10: Invalid qualifiers 'out' in global variable context
Okay, so I managed to get the first example (a simple triangle) to work with the following shaders.
vertex shader:

attribute vec4 position;

void main()
{
    gl_Position = position;
}

fragment shader:

#version 100

void main()
{
    gl_FragColor = vec4(1.0, 1.0, 1.0, 1.0);
}
So those worked and I tried the first coloured example in the next chapter.
#version 330

out vec4 outputColor;

void main()
{
    float lerpValue = gl_FragCoord.y / 500.0f;

    outputColor = mix(vec4(1.0f, 1.0f, 1.0f, 1.0f),
                      vec4(0.2f, 0.2f, 0.2f, 1.0f), lerpValue);
}
Even after working around the problems fixed earlier (the version statement, and f suffixes on floats not being allowed), the shader still refuses to build, with this error:
ERROR: 0:13: 'float' : declaration must include a precision qualifier for type
Effectively, it is complaining about the float declaration.
I have tried googling for an explanation of the differences, but nothing relevant comes up. I have also read through the Apple docs looking for advice and found no help. I am not sure where else to look, or what I am really doing wrong.
Add this at the top of the fragment shader: precision mediump float;
or precision highp float;
depending on your needs.
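For example, the gradient shader from the book might look something like this in GLSL ES 1.00 (a sketch based on the snippets above: no #version 330, no out variable, no f suffixes, and a default float precision declared up front):

precision mediump float;

void main()
{
    float lerpValue = gl_FragCoord.y / 500.0;

    gl_FragColor = mix(vec4(1.0, 1.0, 1.0, 1.0),
                       vec4(0.2, 0.2, 0.2, 1.0), lerpValue);
}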
I have been learning OpenGL ES from the OpenGL ES 2.0 Programming Guide. It has a particle effect that looks like an explosion. I am trying to enhance the example code by adding a mat4 projection matrix to the vertex shader; the shader compiles and works, but I am having problems getting the effect positioned correctly once the projection is taken into account. The code I have is as follows:
const char* ParticleExplosionVertexShader = STRINGIFY (
    uniform float u_time;
    uniform vec3 u_centerPosition;
    uniform mat4 Projection;

    attribute float a_lifetime;
    attribute vec3 a_startPosition;
    attribute vec3 a_endPosition;

    varying float v_lifetime;

    void main()
    {
        if (u_time <= a_lifetime)
        {
            gl_Position.xyz = a_startPosition + (u_time * a_endPosition);
            gl_Position.xyz += u_centerPosition;
            gl_Position.w = 1.0;
        }
        else
            gl_Position = vec4(-1000, -1000, 0, 0);

        v_lifetime = 1.0 - (u_time / a_lifetime);
        v_lifetime = clamp(v_lifetime, 0.0, 1.0);
        gl_PointSize = (v_lifetime * v_lifetime) * 40.0;
    }
);
I am able to add the projection to this line without any errors, but unfortunately it's not really needed here, as that line just places the particle off-screen at the end of its lifetime:
gl_Position = Projection * vec4( -1000, -1000, 0, 0 );
I have also tried changing the line
gl_Position.xyz += u_centerPosition;
to
gl_Position += Projection * u_centerPosition;
But I have had no luck getting it to position things the way I want.
Am I doing something wrong? Or is there a reason the book didn't use a projection matrix, such as it not being something you should do with point sprites?
Any help or pointers to what I should look into will be appreciated
Thanks
Edit: Please let me know if you need more information from me
What about multiplying the whole gl_Position by the model-view-projection matrix, as with any normal geometry?
Also, you will probably need to modify the line that calculates gl_PointSize; for example, try dividing it by gl_Position.w (after the multiplication by the model-view-projection matrix), otherwise the sprites will all have the same size (is that what you are trying to fix?).
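A minimal sketch of what that could look like with the Projection uniform from the question (treating it here as the full model-view-projection; the w divide assumes a perspective projection, and the dead-particle branch uses w = 1.0 instead of 0.0 so the divide stays well defined):

void main()
{
    if (u_time <= a_lifetime)
    {
        vec4 worldPos = vec4(a_startPosition + (u_time * a_endPosition) + u_centerPosition, 1.0);
        // transform the whole position, as with any other geometry
        gl_Position = Projection * worldPos;
    }
    else
    {
        gl_Position = vec4(-1000.0, -1000.0, 0.0, 1.0);
    }

    v_lifetime = 1.0 - (u_time / a_lifetime);
    v_lifetime = clamp(v_lifetime, 0.0, 1.0);

    // shrink the sprite with distance by dividing by w
    gl_PointSize = ((v_lifetime * v_lifetime) * 40.0) / gl_Position.w;
}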