I am still getting used to OpenGL with shaders; I was using OpenGL ES 1.0 before, but it's time to update my knowledge! Now I have a problem with the simple shaders I'm looking at, and I have searched for two days straight without finding a solution.
The problem is this: I render some cubes from a VBO with the layout (Vx, Vy, Vz, NormalX, NormalY, NormalZ, ColorR, ColorG, ColorB, ColorA). This works nicely when I render without the shader, but I have to use the shader for translation and such (I know it can be done without, but bear with me). Here is my vertex shader, the default from the OpenGL template in Xcode:
attribute vec4 position;
attribute vec3 normal;

uniform vec3 translation;
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;

varying lowp vec4 colorVarying;

void main()
{
    vec3 eyeNormal = normalize(normalMatrix * normal);
    vec3 lightPosition = vec3(0.0, 0.0, 10.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);

    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));

    colorVarying = diffuseColor * nDotVP;

    // w must be 0.0 here so the addition doesn't change position's w component
    gl_Position = modelViewProjectionMatrix * (position + vec4(translation, 0.0));
}
And the fragment shader, also default:
varying lowp vec4 colorVarying;

void main()
{
    gl_FragColor = colorVarying;
}
Now this ALWAYS renders every triangle in the same color (defined by diffuseColor), without regard for the colors in the VBO. I have tried and failed with other fragment shaders, like gl_FragColor = gl_FrontColor;, but gl_FrontColor/gl_Color and friends aren't included in OpenGL ES and are deprecated in desktop OpenGL 3.x. I have also looked at code using texture samplers, but since I'm using colors rather than textures, it gets a bit complicated for a beginner.
So my question is this: how do I have my fragment shader find the material color of the fragment currently being shaded?
If I should pass the colors to the shaders in an array, how would I do that, and how would I then reference it for the fragment currently being shaded?
(A couple of 'also's: I tried using no fragment shader at all, but OpenGL ES doesn't allow a vertex shader on its own. I also tried simply removing the gl_FragColor = colorVarying; line, but that leaves the colors really screwed up.)
You need to add a colour attribute to your shader:
attribute vec4 position;
attribute vec3 normal;
attribute vec4 colour;
...and use that attribute instead of diffuseColor.
You must also tell OpenGL where to find that vertex attribute within your VBO using glVertexAttribPointer (I assume you are doing this for the position and normal attributes already).
Related
I have a vertex shader which works fine on Windows with OpenGL. I want to use the same shader on an iPad which supports OpenGL ES2.0.
Compilation of the shader fails with:
Invalid storage qualifiers 'out' in global variable context
From what I have read, the 'out' keyword requires GLSL 1.50, which the iPad doesn't support. Is there an equivalent to 'out' that I can use to pass the color into my fragment shader?
attribute vec4 vPosition;
attribute vec4 vColor;

uniform mat4 MVP;

out vec4 pass_Color;

void main()
{
    gl_Position = MVP * vPosition;
    pass_Color = vColor;
}
This vertex shader is used by me to create gradient blends, so I'm assigning a color to each vertex of a triangle and then the fragment shader interpolates the color between each vertex. That's why I'm not passing a straight color directly into the fragment shader.
Solved! In GLSL ES 1.0 that I'm using, I need to use 'varying' instead of 'in' and 'out'. Here's the working shader:
attribute vec4 vPosition;
attribute vec4 vColor;

uniform mat4 MVP;

varying vec4 pass_Color;

void main()
{
    gl_Position = MVP * vPosition;
    pass_Color = vColor;
}
When I render my content into an FBO with a texture bound to it and then draw that texture onto a fullscreen quad using a basic shader, performance drops dramatically.
For example, rendering to the screen directly (with the basic shader) is fast, but rendering to a texture first and then drawing that texture on a fullscreen quad (with the same basic shader; normally this would be something like a blur or bloom pass) is far slower.
Does anyone have an idea how to speed this up? The current performance is unusable. I'm also using GLKit for the basic OpenGL setup.
You need to use precision qualifiers where appropriate:
lowp - for colors, texture coordinates, normals, etc.
highp - for matrices and vertices/positions
See the GLSL ES quick reference card (the "Qualifiers" section on page 3) for the guaranteed range of each precision.
// BasicShader.vsh
precision mediump float;

attribute highp vec2 position;
attribute lowp vec2 texCoord;
attribute lowp vec4 color;

varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;

uniform highp mat4 projectionMat;
uniform highp mat4 worldMat;

void main() {
    // Projection is applied last (column-vector convention).
    highp mat4 worldProj = projectionMat * worldMat;
    gl_Position = worldProj * vec4(position, 0.0, 1.0);
    textureCoord = texCoord;
    textureColor = color;
}
// BasicShader.fsh
precision mediump float;

varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;

uniform sampler2D sampler;

void main() {
    lowp vec4 color = texture2D(sampler, textureCoord);
    gl_FragColor = color * textureColor;
}
This is very likely caused by inefficient OpenGL ES API usage.
You should attach a real device and capture an OpenGL ES frame (it really needs a real device; the frame-capture option isn't available in the simulator).
The frame capture will flag memory and other warnings, with suggestions on how to fix them, alongside each API call. Step through and fix each one; performance should improve considerably.
Here are a couple of references for getting this done:
Debugging openGL ES frame
Xcode tools overview
I'm experimenting with some lighting techniques on iOS, and I've been able to produce some effects I'm pleased with by taking advantage of iOS's OpenGL ES extensions for depth lookup textures and a relatively simple Blinn-Phong shader:
The screenshot above shows 20 Suzanne monkeys being rendered full-screen at retina resolution with multi-sampling and the following shader. I use multi-sampling because it only adds 1 ms per frame. My current average render time is 30 ms total (iPad 3), which is far too slow for 60 fps.
Vertex shader:
// Position
uniform mat4 mvpMatrix;
uniform mat4 depthMVPMatrix;
uniform mat4 vpMatrix;
attribute vec4 position;

// Shadow out
varying vec3 ShadowCoord;

// Lighting
attribute vec3 normal;
varying vec3 normalOut;
uniform mat3 normalMatrix;
varying vec3 vertPos;
uniform vec4 lightColor;
uniform vec3 lightPosition;

void main() {
    gl_Position = mvpMatrix * position;

    // Used for handling shadows
    ShadowCoord = (depthMVPMatrix * position).xyz;
    ShadowCoord.z -= 0.01;

    // Lighting calculations
    normalOut = normalize(normalMatrix * normal);
    vec4 vertPos4 = vpMatrix * position;
    vertPos = vertPos4.xyz / vertPos4.w;
}
Fragment shader:
#extension GL_EXT_shadow_samplers : enable
precision lowp float;

uniform sampler2DShadow shadowTexture;
uniform vec3 lightPosition;
uniform vec4 fillColor;
uniform vec3 specColor;

varying vec3 normalOut;
varying vec3 vertPos;
varying vec3 ShadowCoord;

void main() {
    vec3 normal = normalize(normalOut);
    vec3 lightDir = normalize(lightPosition - vertPos);

    float lambertian = max(dot(lightDir, normal), 0.0);
    vec3 reflectDir = reflect(-lightDir, normal);
    vec3 viewDir = normalize(-vertPos);

    float specAngle = max(dot(reflectDir, viewDir), 0.0);
    float specular = pow(specAngle, 16.0);

    gl_FragColor = vec4((lambertian * fillColor.xyz + specular * specColor)
                        * shadow2DEXT(shadowTexture, ShadowCoord), fillColor.w);
}
I've read that it is possible to use textures as lookup tables to reduce computation in the fragment shader; however, the linked example seems to be doing full Phong lighting rather than Blinn-Phong (I'm not doing anything with surface tangents). Furthermore, when running the sample, the lighting seemed fairly banded (the background in my screenshot, which is a solid color plus Phong shading, looks slightly banded only because of image compression; it is far smoother on the device).
Is it possible to use a lookup texture in my case, or will I have to drop to 30fps (which I can just about achieve), turn off multi-sampling, and limit Phong shading to the monkeys rather than the full screen? In a real-world (i.e. game) scenario, would I need to do Phong shading across the entire screen anyway?
I can't get my shader to render with color. My shader works when I don't set the color using the Color attribute.
My vertex struct is:
typedef struct
{
    GLKVector3 Position; // Position
    GLKVector4 Color;    // 32-bit color
    GLKVector3 Normal;   // For lighting
    GLKVector2 TexCoord; // For texturing
} Vertex;
I have given the colors for all vertices as [1,0,0,1]
My vertex shader is this:
attribute vec3 Position;
attribute vec4 Color;
attribute vec3 Normal;
attribute vec2 TexCoord;

uniform mat4 ModelViewMatrix;
uniform mat4 ProjectionMatrix;

varying vec4 DestinationColor;

void main(void)
{
    gl_Position = ProjectionMatrix * ModelViewMatrix * vec4(Position, 1.0);
    DestinationColor = Color;
}
And my Fragment Shader is this:
precision mediump float;

varying lowp vec4 DestinationColor;

void main(void)
{
    gl_FragColor = DestinationColor;
}
And it displays nothing.
It doesn't even work if I change the fragment shader to say gl_FragColor = vec4(1, 0, 0, 1); unless I comment out the line in the vertex shader that sets DestinationColor.
Please help; I have been stuck on this for a while now.
I found the answer to this problem, but I can't access my old account (bobjamin), so I am using this new one.
The solution was fairly simple.
First, I should mention that drhass' suggestion did help, in that it allowed me to set a static color from the vertex shader and have it display. The real problem, however, was the attribute name Color, which was apparently clashing with something reserved.
The answer was to rename the attribute Color to SourceColor, and everything worked fine!
As shown below, the error is very strange. I am using OpenGL ES 2.0 and shaders in my iPad program, but something seems to be wrong in the code or the project configuration: the model is drawn with no color at all (just black).
2012-12-01 14:21:56.707 medicare[6414:14303] Program link log:
WARNING: Could not find vertex shader attribute 'color' to match BindAttributeLocation request.
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
[Switching to process 6414 thread 0x1ad0f]
And I use glBindAttribLocation to pass position, normal and color data like this:
// This needs to be done prior to linking.
glBindAttribLocation(_program, INDEX_POSITION, "position");
glBindAttribLocation(_program, INDEX_NORMAL, "normal");
glBindAttribLocation(_program, INDEX_COLOR, "color"); //pass color to shader
There are two shaders in my project. So any good solutions to this odd error? Thanks a lot!
My vertex shader:
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;

attribute vec4 position;
attribute vec3 normal;
attribute vec4 color;

varying lowp vec4 DestinationColor;

void main()
{
    //vec4 a_Color = vec4(0.9, 0.4, 0.4, 1.0);
    vec4 a_Color = color;
    vec3 u_LightPos = vec3(1.0, 1.0, 2.0);
    float distance = 2.4;

    vec3 eyeNormal = normalize(normalMatrix * normal);
    float diffuse = max(dot(eyeNormal, u_LightPos), 0.0); // remove approx. ambient light
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));

    DestinationColor = a_Color * diffuse; // average between ambient and diffuse: a_Color * (diffuse + 0.3) / 2.0;
    gl_Position = modelViewProjectionMatrix * position;
}
And my fragment shader is:
varying lowp vec4 DestinationColor;

void main()
{
    gl_FragColor = DestinationColor;
}
Very simple. Thanks a lot!
I think there are a few things wrong here. First, your use of attribute might not be right: an attribute is an element that changes for each vertex. Do you have the color as an element in your vertex data structure? If not, the shader isn't going to work right.
And I use glBindAttribLocation to pass position and normal data like
this:
No, you don't. glBindAttribLocation "associates a generic vertex attribute index with a named attribute variable". It doesn't pass data; it associates an index (a GLint) with the variable. You pass the data in later with glVertexAttribPointer.
I don't even use the bind; I set up the attribute this way:
glAttributes[PROGNAME][A_vec3_vertexPosition] = glGetAttribLocation(glPrograms[PROGNAME], "a_vertexPosition");
glEnableVertexAttribArray(glAttributes[PROGNAME][A_vec3_vertexPosition]);
and then later, before calling glDrawElements, pass it a pointer so it can fetch the data:
glVertexAttribPointer(glAttributes[PROGNAME][A_vec3_vertexPosition], 3, GL_FLOAT, GL_FALSE, stride, (void *) 0);
There I'm using a two-dimensional array of ints called glAttributes to hold all of my attribute indexes, but you can use plain GLints like you are now.
The error message tells you what's wrong. In your vertex shader you declare:
attribute vec4 color;
But then further down you use a_Color:
DestinationColor = a_Color * diffuse;
Be consistent with your variable names. I now put a_, v_ and u_ in front of all of mine to keep straight what kind of variable each one is. (What you're prefixing with a_ there is really just a local copy of the attribute.)
I also suspect that the error messages are not from the same version of the shaders and code you posted, because of this one:
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
colorVarying doesn't even appear in this version of your vertex shader, which makes the message confusing. Repost the current version of the shaders and the errors you get from them, and it will be easier to help you.