Why doesn't this OpenGL ES array instancing example work?

I'm trying to follow the suggestion in the Use Instanced Drawing to Minimize Draw Calls section of Apple's OpenGL ES Programming Guide. I started with the example project that Xcode generates for a Game app with OpenGL and Swift, converted it to OpenGL ES 3.0, and added some instanced drawing to duplicate the cube.
This works fine when I use the gl_InstanceID technique and simply generate an offset from it. But when I try to use the 'instanced arrays' technique to pass the data in via a buffer, I see no results.
My updated vertex shader looks like this:
#version 300 es
in vec4 position;
in vec3 normal;
layout(location = 5) in vec2 instOffset;
out lowp vec4 colorVarying;
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
void main()
{
    vec3 eyeNormal = normalize(normalMatrix * normal);
    vec3 lightPosition = vec3(0.0, 0.0, 1.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);
    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));
    colorVarying = diffuseColor * nDotVP;
    // gl_Position = modelViewProjectionMatrix * position + vec4(float(gl_InstanceID) * 1.5, float(gl_InstanceID) * 1.5, 1.0, 1.0);
    gl_Position = modelViewProjectionMatrix * position + vec4(instOffset, 1.0, 1.0);
}
and in my setupGL() method I have added the following:
//glGenVertexArraysOES(1, &instArray) // EDIT: WRONG
//glBindVertexArrayOES(instArray) // EDIT: WRONG
let kMyInstanceDataAttrib = 5
glGenBuffers(1, &instBuffer)
glBindBuffer(GLenum(GL_ARRAY_BUFFER), instBuffer)
glBufferData(GLenum(GL_ARRAY_BUFFER), GLsizeiptr(sizeof(GLfloat) * instData.count), &instData, GLenum(GL_STATIC_DRAW))
glEnableVertexAttribArray(GLuint(kMyInstanceDataAttrib))
glVertexAttribPointer(GLuint(kMyInstanceDataAttrib), 2, GLenum(GL_FLOAT), GLboolean(GL_FALSE), 0/*or 8?*/, BUFFER_OFFSET(0))
glVertexAttribDivisor(GLuint(kMyInstanceDataAttrib), 1);
along with some simple instance offset data:
var instData: [GLfloat] = [
1.5, 1.5,
2.5, 2.5,
3.5, 3.5,
]
I am drawing the same way with the above as with the instance id technique:
glDrawArraysInstanced(GLenum(GL_TRIANGLES), 0, 36, 3)
But it seems to have no effect. I just get the single cube and it doesn't even seem to fail if I remove the buffer setup, so I suspect my setup is missing something.
EDIT: Fixed the code by removing two bogus lines from init.

I had an unnecessary gen and bind for the attribute vertex array. The code as edited above now works.
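For reference, the semantics of glVertexAttribDivisor(attrib, 1) can be sketched outside of GL: with a divisor of 1, the attribute advances once per instance instead of once per vertex, so instance i reads the i-th vec2 in the instance buffer. A minimal CPU-side sketch of that lookup (plain C++; the helper name is mine, not a GL API):

```cpp
#include <array>
#include <vector>

// Per-instance offset data, matching instData in the question: one vec2 per instance.
static const std::vector<float> instData = {1.5f, 1.5f, 2.5f, 2.5f, 3.5f, 3.5f};

// With glVertexAttribDivisor(attrib, 1), instance i reads element i of the
// buffer; every vertex of that instance sees the same vec2.
std::array<float, 2> offsetForInstance(int instance) {
    return {instData[2 * instance], instData[2 * instance + 1]};
}
```

So all 36 vertices of instance 1 receive instOffset = (2.5, 2.5), which is what moves each cube copy.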

Related

OpenGl ES on iOS lightshading removes color

I am still getting used to OpenGL with shaders; I was using OpenGL ES 1.0 before, but it's time to update my knowledge! Now I have a problem with the simple shaders I'm looking at, and I have searched for two days straight without finding a solution.
The problem is this: I render some cubes from a VBO in the format (Vx, Vy, Vz, NormalX, NormalY, NormalZ, ColorR, ColorG, ColorB, ColorA), and this works nicely when I render without the shader, but I have to use the shader for translation and such (I know it can be done without, but bear with me). Here is my vertex shader, the default from the OpenGL template in Xcode:
attribute vec4 position;
attribute vec3 normal;
uniform vec3 translation;
varying lowp vec4 colorVarying;
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
void main()
{
    vec3 eyeNormal = normalize(normalMatrix * normal);
    vec3 lightPosition = vec3(0.0, 0.0, 10.0);
    vec4 diffuseColor = vec4(0.4, 0.4, 1.0, 1.0);
    float nDotVP = max(0.0, dot(eyeNormal, normalize(lightPosition)));
    colorVarying = diffuseColor * nDotVP;
    gl_Position = modelViewProjectionMatrix * (position + vec4(translation, 1));
}
And the fragment shader, also default:
varying lowp vec4 colorVarying;
void main()
{
    gl_FragColor = colorVarying;
}
Now this ALWAYS renders whatever triangles I draw in the same color (defined by diffuseColor), ignoring the colors in the VBO. I have tried and failed with other fragment shaders like gl_FragColor = gl_FrontColor;, but gl_FrontColor/gl_Color etc. aren't included in OpenGL ES and are deprecated in OpenGL 3.x. I have also looked at code using texture samplers, but since I'm using colors rather than textures, it gets a bit complicated for a beginner.
So my question is this: how do I have my fragment shader find the material color of the fragment currently being shaded?
If I should pass the colors in an array to the shaders, how would I do that, and how would I reference it for the fragment currently being shaded?
(Some 'also's: I tried not using a fragment shader, but OpenGL doesn't allow a vertex shader alone. I also tried simply removing gl_FragColor = colorVarying;, but that leaves the colors really screwed up.)
You need to add a colour attribute to your shader:
attribute vec4 position;
attribute vec3 normal;
attribute vec4 colour;
...and use that attribute instead of diffuseColor.
You must also tell OpenGL where to find that vertex attribute within your VBO using glVertexAttribPointer (I assume you are doing this for the position and normal attributes already).
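Concretely, for the interleaved layout in the question (3 position floats, 3 normal floats, 4 colour floats), the stride is 10 floats per vertex and the colour starts 6 floats in. The byte values you would hand to glVertexAttribPointer for each attribute can be sketched like this (plain C++; the layout counts come from the question, the helper names are mine):

```cpp
#include <cstddef>

// Interleaved vertex: Vx,Vy,Vz, Nx,Ny,Nz, R,G,B,A -> 10 floats per vertex.
constexpr std::size_t kPosFloats = 3, kNormFloats = 3, kColourFloats = 4;

// Stride: size of one whole vertex in bytes.
constexpr std::size_t strideBytes() {
    return (kPosFloats + kNormFloats + kColourFloats) * sizeof(float);
}

// Offsets: bytes from the start of a vertex to each attribute.
constexpr std::size_t normalOffsetBytes() { return kPosFloats * sizeof(float); }
constexpr std::size_t colourOffsetBytes() { return (kPosFloats + kNormFloats) * sizeof(float); }
```

Each glVertexAttribPointer call then uses the same stride but a different offset, e.g. the colour attribute with size 4, stride strideBytes(), and pointer offset colourOffsetBytes().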

How do I draw gradients of discrete integer values from 0-255 in WebGL without making a buffer for each color

Hello, I am using Dart and WebGL to write a neural net visualization; the neurons' outputs range from 0 to 1. I want to display each neuron's output as a color, using a sampling depth of 255 values in the red spectrum. I have learned basic WebGL, and I know that I need to bind a color to an array and then read it in a GPU program. My program draws a red triangle for a neuron with output close to 1 and a white one for a neuron whose output is close to 0. My question is: how do I draw the colors in between white and red without creating a GL buffer for each of the 255 values? I assume I need to do something in the GPU program itself: bind the neuron's output value and have the GPU program convert it into a vec4 color.
A link to my current FULL code is here: https://github.com/SlightlyCyborg/dart-neuralnet/blob/master/web/part1.dart
Also here is segment of my code:
program = new GlProgram('''
  precision mediump float;
  varying vec4 vColor;
  void main(void) {
    gl_FragColor = vColor;
  }
''', '''
  attribute vec3 aVertexPosition;
  attribute vec4 aVertexColor;
  uniform mat4 uMVMatrix;
  uniform mat4 uPMatrix;
  varying vec4 vColor;
  void main(void) {
    gl_Position = uPMatrix * uMVMatrix * vec4(aVertexPosition, 1.0);
    vColor = aVertexColor;
  }
''', ['aVertexPosition', 'aVertexColor'], ['uMVMatrix', 'uPMatrix']);
gl.useProgram(program.program);
Here is where I bind the buffer for the on_neuron_color
gl.bindBuffer(ARRAY_BUFFER, on_color_buff);
gl.bufferDataTyped(ARRAY_BUFFER, new Float32List.fromList([
1.0, 0.0, 0.0, 1.0,
1.0, 0.0, 0.0, 1.0,
1.0, 0.0, 0.0, 1.0
]), STATIC_DRAW);
And here is where I draw using that color:
gl.bindBuffer(ARRAY_BUFFER,tri_buff);
gl.vertexAttribPointer(program.attributes['aVertexPosition'], 3, FLOAT, false, 0, 0);
gl.bindBuffer(ARRAY_BUFFER, on_color_buff);
gl.vertexAttribPointer(program.attributes['aVertexColor'], 4, FLOAT, false, 0, 0);
setMatrixUniforms();
gl.drawArrays(TRIANGLE_STRIP, 0, 3);
I don't understand what you're really trying to do but....
If you change your fragment shader to
precision mediump float;
uniform vec4 uColor;
void main(void) {
  gl_FragColor = uColor;
}
then you can set the color WebGL draws with using
gl.uniform4f(program.uniforms['uColor'], r, g, b, a);
or
gl.uniform4fv(program.uniforms['uColor'], arrayOf4Floats);
You don't need any color buffers and you can remove all references to color from your vertex shader.
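If you want the in-between shades computed on the CPU before setting the uniform, a 0-1 activation maps to a white-to-red RGBA directly, with no per-colour buffers. A sketch (plain C++; the exact white-to-red mapping is my assumption about the desired gradient):

```cpp
#include <array>

// Map a neuron output in [0, 1] to a colour between white (output 0)
// and pure red (output 1): red stays at 1, green/blue fade out.
std::array<float, 4> activationToColour(float activation) {
    return {1.0f, 1.0f - activation, 1.0f - activation, 1.0f};
}
```

The four resulting floats are exactly what you would pass to gl.uniform4f / gl.uniform4fv per neuron before each draw call.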

Output of vertex shader 'colorVarying' not read by fragment shader

As shown below, the error is very strange. I am using OpenGL ES 2.0 and shaders in my iPad program, but something seems wrong with the code or the project configuration. The model is drawn with no color at all (black).
2012-12-01 14:21:56.707 medicare[6414:14303] Program link log:
WARNING: Could not find vertex shader attribute 'color' to match BindAttributeLocation request.
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
[Switching to process 6414 thread 0x1ad0f]
And I use glBindAttribLocation to pass position and normal data like this:
// This needs to be done prior to linking.
glBindAttribLocation(_program, INDEX_POSITION, "position");
glBindAttribLocation(_program, INDEX_NORMAL, "normal");
glBindAttribLocation(_program, INDEX_COLOR, "color"); //pass color to shader
There are two shaders in my project. So any good solutions to this odd error? Thanks a lot!
My vertex shader:
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
attribute vec4 position;
attribute vec3 normal;
attribute vec4 color;
varying lowp vec4 DestinationColor;
void main()
{
    //vec4 a_Color = vec4(0.9, 0.4, 0.4, 1.0);
    vec4 a_Color = color;
    vec3 u_LightPos = vec3(1.0, 1.0, 2.0);
    float distance = 2.4;
    vec3 eyeNormal = normalize(normalMatrix * normal);
    float diffuse = max(dot(eyeNormal, u_LightPos), 0.0); // remove approx ambient light
    diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
    DestinationColor = a_Color * diffuse; // average between ambient and diffuse: a_Color * (diffuse + 0.3) / 2.0;
    gl_Position = modelViewProjectionMatrix * position;
}
And my fragment shader is:
varying lowp vec4 DestinationColor;
void main()
{
    gl_FragColor = DestinationColor;
}
Very simple. Thanks a lot!
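As an aside, the attenuation arithmetic in that vertex shader is easy to sanity-check on the CPU: with distance = 2.4, the diffuse term is scaled by 1 / (1 + 0.25 * 2.4 * 2.4), roughly 0.41, so the colour should be dimmed but clearly not black. A sketch mirroring the shader's math (plain C++; the function name is mine):

```cpp
#include <algorithm>
#include <cmath>

// Mirrors the shader: clamp N.L to zero, then attenuate by 1 / (1 + 0.25 * d^2).
float attenuatedDiffuse(float nDotL, float distance) {
    float diffuse = std::max(nDotL, 0.0f);
    return diffuse * (1.0f / (1.0f + 0.25f * distance * distance));
}
```

If this factor is nonzero but the output is black, the problem is the colour input (a_Color), not the lighting term, which matches the missing-attribute warning.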
I think there are a few things wrong here. First, your use of attribute might not be right. An attribute is an element that changes for each vertex: do you have the color as an element in your vertex data? If not, the shader isn't going to work right.
And I use glBindAttribLocation to pass position and normal data like
this:
No, you don't. glBindAttribLocation "associates a generic vertex attribute index with a named attribute variable". It doesn't pass data; it associates an index (a GLint) with the variable. You pass the data in later with glVertexAttribPointer.
I don't even use the bind; I do it this way - set up the attribute:
glAttributes[PROGNAME][A_vec3_vertexPosition] = glGetAttribLocation(glPrograms[PROGNAME], "a_vertexPosition");
glEnableVertexAttribArray(glAttributes[PROGNAME][A_vec3_vertexPosition]);
and then later, before calling glDrawElements, pass your pointer to it so it can get the data:
glVertexAttribPointer(glAttributes[PROGNAME][A_vec3_vertexPosition], 3, GL_FLOAT, GL_FALSE, stride, (void *) 0);
There I'm using a two-dimensional array of ints called glAttributes to hold all of my attribute indexes, but you can use plain GLints like you are now.
The error message tells you what's wrong. In your vertex shader you say:
attribute vec4 color;
But then down below you also have an a_Color:
DestinationColor = a_Color * diffuse;
Be consistent with your variable names. I now put a_, v_, and u_ in front of all of mine to keep straight what kind of variable each is. The a_Color there is just a local copy inside the shader, not the attribute itself.
I also suspect that the error messages are not from the same version of the shaders and code that you posted, because of this error:
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
The warning about colorVarying is confusing when that name doesn't even appear in this version of your vertex shader. Repost the current version of the shaders and the error messages you get from them, and it will be easier to help you.

GLSL Shaders compile but don't draw anything on Windows

I'm trying to port some OpenGL rendering code I wrote for iOS to a Windows app. The code runs fine on iOS, but on Windows it doesn't draw anything. I've narrowed the problem down to this bit of code, as the fixed-function stuff (such as glutSolidTorus) draws fine, but when shaders are enabled, nothing works.
Here's the rendering code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);

// Set the vertex buffer as current
this->vertexBuffer->MakeActive();

// Get a reference to the vertex description to save copying
const AT::Model::VertexDescription & vd = this->vertexBuffer->GetVertexDescription();

std::vector<GLuint> handles;

// Loop over the vertex descriptions
for (int i = 0, stride = 0; i < vd.size(); ++i)
{
    // Get a handle to the vertex attribute on the shader object using the name of the current vertex description
    GLint handle = shader.GetAttributeHandle(vd[i].first);

    // If the handle is not an OpenGL 'Does not exist' handle
    if (handle != -1)
    {
        glEnableVertexAttribArray(handle);
        handles.push_back(handle);

        // Set the pointer to the vertex attribute, with the vertex's element count,
        // the size of a single vertex and the start position of the first attribute in the array
        glVertexAttribPointer(handle, vd[i].second, GL_FLOAT, GL_FALSE,
                              sizeof(GLfloat) * (this->vertexBuffer->GetSingleVertexLength()),
                              (GLvoid *)stride);
    }

    // Add to the stride value with the size of the number of floats the vertex attr uses
    stride += sizeof(GLfloat) * (vd[i].second);
}

// Draw the indexed elements using the current vertex buffer
glDrawElements(GL_TRIANGLES,
               this->vertexBuffer->GetIndexArrayLength(),
               GL_UNSIGNED_SHORT, 0);

glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);

glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_INDEX_ARRAY);

// Disable the vertex attribute arrays
for (int i = 0; i < handles.size(); ++i)
{
    glDisableVertexAttribArray(handles[i]);
}
It's inside a function that takes a shader as a parameter, and the vertex description is a list of pairs mapping attribute names to element counts. Uniforms are set outside this function, and I enable the shader for use before passing it in. Here are the two shader sources:
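The offset bookkeeping in that loop can be sketched in isolation: each attribute's byte offset is the running total of the float counts of the attributes before it, and the stride is the full vertex size. A standalone version (plain C++; the three-attribute layout matches the shaders in the question, the helper is mine):

```cpp
#include <cstddef>
#include <string>
#include <utility>
#include <vector>

// (name, float count) pairs, like the question's VertexDescription.
using VertexDescription = std::vector<std::pair<std::string, int>>;

// Byte offset of attribute `index` within one interleaved vertex:
// the sum of the byte sizes of every attribute before it.
std::size_t attributeOffset(const VertexDescription& vd, std::size_t index) {
    std::size_t offset = 0;
    for (std::size_t i = 0; i < index; ++i)
        offset += sizeof(float) * vd[i].second;
    return offset;
}
```

For {position: 3, texCoord: 2, normal: 3}, this gives offsets of 0, 12, and 20 bytes with a 32-byte stride, which is what the loop's accumulating `stride` variable computes.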
Vertex:
attribute vec3 position;
attribute vec2 texCoord;
attribute vec3 normal;
// Uniforms
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
uniform mat3 NormalMatrix;
/// OUTPUTS
varying vec2 o_texCoords;
varying vec3 o_normals;
// Vertex Shader
void main()
{
    // Do the normal position transform
    gl_Position = Projection * View * Model * vec4(position, 1.0);
    // Transform the normals to world space
    o_normals = NormalMatrix * normal;
    // Pass texture coords on for interpolation
    o_texCoords = texCoord;
}
Fragment:
varying vec2 o_texCoords;
varying vec3 o_normals;
/// Fragment Shader
void main()
{
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
I'm running OpenGL 2.1 with shading language 1.2. I'd be most appreciative of any help anyone can give me.
I see that you are assigning black as the output color for the fragment in your fragment shader. Try changing it to something like
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
and see if the objects in the scene are colored green.
I came back to this recently, and it turned out I wasn't checking for errors during rendering: glDrawElements() was raising error 1285 (GL_OUT_OF_MEMORY). That led me to check whether the vertex buffer objects contained any data, and it turns out I wasn't properly deep-copying them in a wrapper class; as a result, they were being deleted before any rendering happened. Fixing this sorted the issue.
Thank you for your suggestions.

Passing array of vec2 to Fragment Shader Opengl es 2.0

I am trying to pass an array of vec2 to a fragment shader, but I can't seem to work out how.
In my application I have the following array:
GLfloat myMatrix[] = { 100.0, 100.0,
200.0, 200.0 };
glUniformMatrix2fv(matrixLocation, 2, 0, myMatrix);
and in my fragment shader I am trying to access those values like so:
uniform vec2 myMatrix[2];
gl_FragColor = gl_FragCoord.xy + myMatrix[0].xy;
However, the fragment color does not change, even though it should; it does change if I hard-code it to
gl_FragColor = gl_FragCoord.xy + vec2( 100.0, 100.0 ).xy;
Any ideas how I can pass these vec2 values into the shader?
Thanks in advance.
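For context on the data layout: a uniform vec2 myMatrix[2] is just two consecutive vec2s, so the four floats above pack as myMatrix[0] = (100, 100) and myMatrix[1] = (200, 200). An array of vec2 uniforms is normally uploaded with glUniform2fv(location, count, data) rather than glUniformMatrix2fv, which is for mat2. The packing can be sketched like this (plain C++; the helper name is mine):

```cpp
#include <array>
#include <vector>

// Interpret a flat float array as consecutive vec2s, the way
// glUniform2fv(location, 2, data) would fill a `uniform vec2 m[2]`.
std::array<float, 2> vec2At(const std::vector<float>& data, int index) {
    return {data[2 * index], data[2 * index + 1]};
}
```

So myMatrix[0].xy in the shader would see (100, 100) once the upload call matches the uniform's type.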