I am learning about 3D programming with the book Learning Modern 3D Graphics Programming, but I am having no luck with the shaders under OpenGL ES 2.0 on iOS. I am working from the Xcode 4 OpenGL game template, though with changes to match the first example in the book.
The first shaders in the book fail to compile, with lots of different errors. The first vertex shader:
#version 330
layout(location = 0) in vec4 position;
void main()
{
gl_Position = position;
}
The compiler complains about the version statement and refuses to allow layout as a qualifier. I finally managed to get this version to build:
attribute vec4 position;
void main()
{
gl_Position = position;
}
Again, the first fragment shader refuses to build because of the version statement, and the out declaration is not allowed at global scope:
#version 330
out vec4 outputColor;
void main()
{
outputColor = vec4(1.0f, 1.0f, 1.0f, 1.0f);
}
With the error:
ERROR: 0:10: Invalid qualifiers 'out' in global variable context
Okay, so I managed to get the first example (a simple triangle) to work with the following shaders.
fragment shader
#version 100
void main()
{
gl_FragColor = vec4(1.0,1.0,1.0,1.0);
}
vertex shader
attribute vec4 position;
void main()
{
gl_Position = position;
}
Those worked, so I tried the first coloured example in the next chapter.
#version 330
out vec4 outputColor;
void main() {
float lerpValue = gl_FragCoord.y / 500.0f;
outputColor = mix(vec4(1.0f, 1.0f, 1.0f, 1.0f),
vec4(0.2f, 0.2f, 0.2f, 1.0f), lerpValue);
}
Even after working around the problems fixed earlier (the version statement, the f suffixes on floats not being allowed), the shader still refuses to build, with this error:
ERROR: 0:13: 'float' : declaration must include a precision qualifier for type
Effectively it is complaining about the float declaration.
I have tried googling for an explanation of the differences, but nothing relevant comes up. I have also read through the Apple docs looking for advice and found no help. I am not sure where else to look, or what I am really doing wrong.
Add this at the top of the shader:
precision mediump float;
or:
precision highp float;
depending on your needs. OpenGL ES fragment shaders have no default precision for float, so you must declare one before any float variable can be declared.
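For example, here is the gradient fragment shader from the question ported to GLSL ES 1.00 (a sketch: the #version 330 line is dropped, the f suffixes are removed, the output variable becomes the built-in gl_FragColor, and the 500.0 divisor still assumes a viewport roughly 500 pixels tall):
precision mediump float;
void main()
{
    float lerpValue = gl_FragCoord.y / 500.0;
    gl_FragColor = mix(vec4(1.0, 1.0, 1.0, 1.0),
                       vec4(0.2, 0.2, 0.2, 1.0), lerpValue);
}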
Related
In web tools such as Shadertoy, my fragment shader source is included in a main() I don't control or see. It would be the same if I were distributing a GLSL library.
My problem is ensuring compatibility between WebGL2 and WebGL1: I would like to write GLSL fallbacks to emulate missing WebGL2 built-ins if the browser or OS is only WebGL1 capable.
Is there a way to test the current WebGL/GLSL version from the shader, as we do for the availability of an extension?
(BTW, testing extensions is getting complicated now that some are included in the core language: e.g., GL_EXT_shader_texture_lod is undefined in WebGL2 despite the function being there. So being able to test the GLSL version is crucial.)
AFAICT there's no good way to test. The spec says the preprocessor macro __VERSION__ will be set to the version as an integer, e.g. 300 for GLSL version 3.00, so:
#if __VERSION__ == 300
// use 300 es stuff
#else
// use 100 es stuff
#endif
The problem is that for WebGL2, when using 300 es shaders, the very first line of the shader must be:
#version 300 es
So you can't do this:
#if IMAGINARY_WEBGL2_FLAG
#version 300 es // BAD!! This has to be the first line
...
#else
...
#endif
So, given that you already have to update the first line, why not just have two shaders, one for WebGL1 and another for WebGL2? Alternatively, all major engines generate their shaders, so it should be pretty trivial to generate either WebGL1 or WebGL2 versions in your code if you want to go down that path.
In the first place, there's no reason to use WebGL2 shader features if you can get by with WebGL1, and if you are using WebGL2 features then they're not really the same shader anymore, are they? They'd need different setup, different inputs, etc.
Let's pretend we could do it all in GLSL, though. What would you want it to look like?
// IMAGINARY WHAT IF EXAMPLE ....
#if WEBGL2
#version 300 es
#define texture2D texture
#define textureCube texture
#else
#define in varying
#define out varying
#endif
in vec4 position;
in vec2 texcoord;
out vec2 v_texcoord;
uniform sampler2D u_tex;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * (position + texture2D(u_tex, texcoord));
v_texcoord = texcoord;
}
Let's assume you wanted to do that. You could do it in JavaScript (not suggesting this way, just showing an example):
const webgl2Header = `#version 300 es
#define texture2D texture
#define textureCube texture
`;
const webglHeader = `
#define in varying
#define out varying
`;
function prepShader(gl, source) {
const isWebGL2 = gl.texImage3D !== undefined;
const header = isWebGL2 ? webgl2Header : webglHeader;
return header + source;
}
const vs = `
in vec4 position;
in vec2 texcoord;
out vec2 v_texcoord;
uniform sampler2D u_tex;
uniform mat4 u_matrix;
void main() {
gl_Position = u_matrix * (position + texture2D(u_tex, texcoord));
v_texcoord = texcoord;
}
`;
const vsSrc = prepShader(gl, vs);
You can make your JavaScript substitutions as complicated as you want; some shader libraries do exactly this kind of preprocessing.
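Note that a fragment shader needs a slightly bigger header, because fragment shaders require a default float precision and GLSL ES 3.00 removes gl_FragColor. A sketch of what those headers could look like, written so the shader body uses WebGL1-style varying and gl_FragColor (the fragColor name and the gl_FragColor define are my own shim, not part of either spec):
const webgl2FragHeader = `#version 300 es
precision highp float;
#define texture2D texture
#define textureCube texture
#define varying in
out vec4 fragColor;
#define gl_FragColor fragColor
`;
const webgl1FragHeader = `
precision highp float;
`;
You would prepend these with the same kind of prepShader function as above.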
When I render my content onto an FBO with a texture bound to it, and then render that bound texture to a fullscreen quad using a basic shader, the performance drops ridiculously.
For example, rendering to the screen directly (with the basic shader) is fast, while rendering to the texture first and then drawing the texture on a fullscreen quad (with the same basic shader; normally it would be something like blur or bloom) is drastically slower.
Anyone got an idea how to speed this up? The current performance is not usable. Also, I'm using GLKit for the basic OpenGL stuff.
You need to use the appropriate precision wherever it matters:
lowp - for colors, texture coords, normals, etc.
highp - for matrices and vertices/positions
For a quick reference, check the ranges of the precision qualifiers on page 3 of the OpenGL ES 2.0 quick reference card, under "Qualifiers".
// BasicShader.vsh
precision mediump float;
attribute highp vec2 position;
attribute lowp vec2 texCoord;
attribute lowp vec4 color;
varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;
uniform highp mat4 projectionMat;
uniform highp mat4 worldMat;
void main() {
highp mat4 worldProj = projectionMat * worldMat; // projection must be applied last
gl_Position = worldProj * vec4(position, 0.0, 1.0);
textureCoord = texCoord;
textureColor = color;
}
// BasicShader.fsh
precision mediump float;
varying lowp vec2 textureCoord;
varying lowp vec4 textureColor;
uniform sampler2D sampler;
void main() {
lowp vec4 Color = texture2D(sampler, textureCoord);
gl_FragColor = Color * textureColor;
}
This is very likely caused by poorly performing OpenGL ES API calls.
You should attach a real device and do an OpenGL ES frame capture. (It really does need a real device; the frame-capture option is not available with a simulator.)
The frame capture will show memory and other warnings, along with suggestions to fix them, alongside each API call. Step through and fix each of these; the performance should improve considerably.
Here are a couple of references on how to do this:
Debugging OpenGL ES frames
Xcode tools overview
I am still getting used to OpenGL with shaders; I was using OpenGL ES 1.0 before, but it's time to update my knowledge! Now I have a problem with the simple shaders I'm looking at, and I have searched for two days straight with no luck finding a solution.
The problem is this: I render some cubes with a VBO in the form (Vx, Vy, Vz, NormalX, NormalY, NormalZ, ColorR, ColorG, ColorB, ColorA), and this works nicely when I render without the shader, but I have to use the shader for translation and such (I know it can be done without, but bear with me). Here is my vertex shader, the default from the OpenGL template in Xcode:
attribute vec4 position;
attribute vec3 normal;
uniform vec3 translation;
varying lowp vec4 colorVarying;
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
void main()
{
vec3 eyeNormal = normalize(normalMatrix * normal);
vec3 lightPosition = vec3(0.0, 0.0, 10.0);
vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);
float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));
colorVarying = diffuseColor * nDotVP;
gl_Position = modelViewProjectionMatrix * (position + vec4(translation, 0.0)); // w must stay 1.0
}
And the fragment shader, also default:
varying lowp vec4 colorVarying;
void main()
{
gl_FragColor = colorVarying;
}
Now this ALWAYS renders whatever triangles I draw in the same color (defined by diffuseColor), without regard for the colors in the VBO. I have tried and failed with other fragment shaders, such as gl_FragColor = gl_FrontColor;, but gl_FrontColor/gl_Color etc. aren't included in OpenGL ES and are deprecated in OpenGL 3.x anyway. I have also looked at code using texture samplers, but since I'm not using textures but colors, it gets a bit complicated for a beginner.
So my question is this: how would I have my fragment shader find the material color of the current fragment being shaded?
If I should pass the colors to the shaders in an array, how would I do that, and how would I then reference it for the fragment currently being shaded?
(Some 'also's: I tried not using a fragment shader, but OpenGL ES doesn't allow using only a vertex shader. I also tried simply removing gl_FragColor = colorVarying;, but that leaves the colors really screwed up.)
You need to add a colour attribute to your shader:
attribute vec4 position;
attribute vec3 normal;
attribute vec4 colour;
...and use that attribute instead of diffuseColor.
You must also tell OpenGL where to find that vertex attribute within your VBO using glVertexAttribPointer (I assume you are doing this for the position and normal attributes already).
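For the interleaved layout described in the question (three position floats, three normal floats, four colour floats per vertex), the pointer setup could look something like this sketch. The ATTRIB_POSITION/ATTRIB_NORMAL/ATTRIB_COLOUR indices are hypothetical names; they would be whatever you bound with glBindAttribLocation (or queried with glGetAttribLocation):
// 10 floats per vertex: 3 position + 3 normal + 4 colour
const GLsizei stride = 10 * sizeof(GLfloat);

glEnableVertexAttribArray(ATTRIB_POSITION);
glVertexAttribPointer(ATTRIB_POSITION, 3, GL_FLOAT, GL_FALSE, stride, (void *)0);

glEnableVertexAttribArray(ATTRIB_NORMAL);
glVertexAttribPointer(ATTRIB_NORMAL, 3, GL_FLOAT, GL_FALSE, stride, (void *)(3 * sizeof(GLfloat)));

glEnableVertexAttribArray(ATTRIB_COLOUR);
glVertexAttribPointer(ATTRIB_COLOUR, 4, GL_FLOAT, GL_FALSE, stride, (void *)(6 * sizeof(GLfloat)));
In the vertex shader the hard-coded constant then becomes vec4 diffuseColor = colour;.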
As shown below, the error is very strange. I use OpenGL ES 2.0 and shaders in my iPad program, but it seems something is wrong with the code or project configuration: the model is drawn with no color at all (black).
2012-12-01 14:21:56.707 medicare[6414:14303] Program link log:
WARNING: Could not find vertex shader attribute 'color' to match BindAttributeLocation request.
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
[Switching to process 6414 thread 0x1ad0f]
And I use glBindAttribLocation to pass position and normal data like this:
// This needs to be done prior to linking.
glBindAttribLocation(_program, INDEX_POSITION, "position");
glBindAttribLocation(_program, INDEX_NORMAL, "normal");
glBindAttribLocation(_program, INDEX_COLOR, "color"); //pass color to shader
There are two shaders in my project. Any good solutions to this odd error? Thanks a lot!
My vertex shader:
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
attribute vec4 position;
attribute vec3 normal;
attribute vec4 color;
varying lowp vec4 DestinationColor;
void main()
{
//vec4 a_Color = vec4(0.9, 0.4, 0.4, 1.0);
vec4 a_Color = color;
vec3 u_LightPos = vec3(1.0, 1.0, 2.0);
float distance = 2.4;
vec3 eyeNormal=normalize(normalMatrix * normal);
float diffuse = max(dot(eyeNormal, u_LightPos), 0.0); // remove approx ambient light
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
DestinationColor = a_Color * diffuse; // average between ambient and diffuse a_Color * (diffuse + 0.3)/2.0;
gl_Position = modelViewProjectionMatrix * position;
}
And my fragment shader is:
varying lowp vec4 DestinationColor;
void main()
{
gl_FragColor = DestinationColor;
}
Very simple. Thanks a lot!
I think there are a few things wrong here. First, your use of attribute might not be right. An attribute is an element that changes for each vertex. Do you have the color as an element in your vertex data structure? Because if not, the shader isn't going to work right.
And I use glBindAttibLocation to pass position and normal data like
this:
No, you don't. glBindAttribLocation "associates a generic vertex attribute index with a named attribute variable". It doesn't pass data; it associates an index (a GLint) with the variable. You pass the data in later with glVertexAttribPointer.
I don't even use the bind; I do it this way. Set up the attribute:
glAttributes[PROGNAME][A_vec3_vertexPosition] = glGetAttribLocation(glPrograms[PROGNAME], "a_vertexPosition");
glEnableVertexAttribArray(glAttributes[PROGNAME][A_vec3_vertexPosition]);
and then later, before calling glDrawElements, pass your pointer to it so it can get the data:
glVertexAttribPointer(glAttributes[PROGNAME][A_vec3_vertexPosition], 3, GL_FLOAT, GL_FALSE, stride, (void *) 0);
There I'm using a two-dimensional array of ints called glAttributes to hold all of my attribute indexes, but you can use plain GLints as you are now.
The error message tells you what's wrong. In your vertex shader you say:
attribute vec4 color;
But then down below you also have an a_Color:
DestinationColor = a_Color * diffuse;
Be consistent with your variable names. I put a_, v_, and u_ in front of all of mine now to keep straight what kind of variable each is. (Note that a_Color in your shader is only a local copy of the color attribute, so the a_ prefix is misleading there.)
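For illustration, the declarations in the posted vertex shader with that convention applied would look something like this (a sketch; the strings passed to glBindAttribLocation would have to be renamed to match):
attribute vec4 a_position;
attribute vec3 a_normal;
attribute vec4 a_color;

uniform mat4 u_modelViewProjectionMatrix;
uniform mat3 u_normalMatrix;

varying lowp vec4 v_destinationColor;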
I also suspect that the error message was not produced by the same version of the shader and code that you posted, because of this warning:
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
The warning about colorVarying is confusing, since that name doesn't even appear in this version of your vertex shader. Repost the current version of the shaders and the error messages you get from them, and it will be easier to help you.
I'm trying to combine two textures using shaders in OpenGL ES 2.0.
As you can see in the screenshot, I am trying to create a reflection of the needle on the object behind it using dynamic environment mapping.
But the reflection of the needle looks semi-transparent; it blends with my environment map.
Here is my fragment shader:
varying highp vec4 R;
uniform samplerCube cube_map1;
uniform samplerCube cube_map2;
void main()
{
mediump vec3 output_color1;
mediump vec3 output_color2;
output_color1 = textureCube(cube_map1 , R.xyz).rgb;
output_color2 = textureCube(cube_map2 , R.xyz).rgb;
gl_FragColor = mix(vec4(output_color1,1.0),vec4(output_color2,1.0),0.5);
}
but, "mix" method cause a blending two textures.
I'm also checked Texture Combiners examples but it didn't help either.
is there any way to combine two textures without blend each other.
thanks.
Judging from the comments, my guess is you want to draw the needle on top of the landscape picture. I'd simply render it as an overlay, but since you want to do it in a shader, maybe this would work:
void main()
{
mediump vec3 output_color1;
mediump vec3 output_color2;
output_color1 = textureCube(cube_map1 , R.xyz).rgb;
output_color2 = textureCube(cube_map2 , R.xyz).rgb;
if ( length( output_color1 ) > 0.0 )
gl_FragColor = vec4(output_color1,1.0);
else
gl_FragColor = vec4(output_color2,1.0);
}