GLSL shaders compile but don't draw anything on Windows (code ported from iOS)

I'm trying to port some OpenGL rendering code I wrote for iOS to a Windows app. The code runs fine on iOS, but on Windows it doesn't draw anything. I've narrowed the problem down to this bit of code: fixed-function drawing (such as glutSolidTorus) works fine, but as soon as shaders are enabled, nothing is drawn.
Here's the rendering code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);
// Set the vertex buffer as current
this->vertexBuffer->MakeActive();
// Get a reference to the vertex description to save copying
const AT::Model::VertexDescription & vd = this->vertexBuffer->GetVertexDescription();
std::vector<GLuint> handles;
// Loop over the vertex descriptions
for (int i = 0, stride = 0; i < vd.size(); ++i)
{
    // Get a handle to the vertex attribute on the shader object using the name of the current vertex description
    GLint handle = shader.GetAttributeHandle(vd[i].first);
    // If the handle is not an OpenGL 'Does not exist' handle
    if (handle != -1)
    {
        glEnableVertexAttribArray(handle);
        handles.push_back(handle);
        // Set the pointer to the vertex attribute, with the vertex's element count,
        // the size of a single vertex and the start position of the first attribute in the array
        glVertexAttribPointer(handle, vd[i].second, GL_FLOAT, GL_FALSE,
                              sizeof(GLfloat) * (this->vertexBuffer->GetSingleVertexLength()),
                              (GLvoid *)stride);
    }
    // Add to the stride value with the size of the number of floats the vertex attr uses
    stride += sizeof(GLfloat) * (vd[i].second);
}
// Draw the indexed elements using the current vertex buffer
glDrawElements(GL_TRIANGLES,
               this->vertexBuffer->GetIndexArrayLength(),
               GL_UNSIGNED_SHORT, 0);
glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_INDEX_ARRAY);
// Disable the vertex attribute arrays
for (int i = 0; i < handles.size(); ++i)
{
    glDisableVertexAttribArray(handles[i]);
}
It's inside a function that takes a shader as a parameter, and the vertex description is a list of pairs mapping attribute names to element counts. Uniforms are set outside this function, and the shader program is made active before it's passed in. Here are the two shader sources:
Vertex:
attribute vec3 position;
attribute vec2 texCoord;
attribute vec3 normal;
// Uniforms
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
uniform mat3 NormalMatrix;
/// OUTPUTS
varying vec2 o_texCoords;
varying vec3 o_normals;
// Vertex Shader
void main()
{
// Do the normal position transform
gl_Position = Projection * View * Model * vec4(position, 1.0);
// Transform the normals to world space
o_normals = NormalMatrix * normal;
// Pass texture coords on for interpolation
o_texCoords = texCoord;
}
Fragment:
varying vec2 o_texCoords;
varying vec3 o_normals;
/// Fragment Shader
void main()
{
gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
I'm running OpenGL 2.1 with GLSL 1.20. I'd be most appreciative of any help anyone can give me.

I see that you are assigning black as the output color in your fragment shader. Try changing that to something like
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
and see if the objects in the scene are colored green.

I came back to this recently and realised I wasn't checking for errors during rendering; it was giving me error 1285 (GL_OUT_OF_MEMORY) after calling glDrawElements(). This led me to check whether the vertex buffer objects contained any data, and it turns out I wasn't properly deep copying them in a wrapper class, so they were being deleted before any rendering happened. Fixing this sorted the issue.
Thank you for your suggestions.
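For anyone hitting something similar, here is a minimal error-checking sketch (the CheckGLError name is just a made-up helper) that surfaces errors like the 1285 above right after the call that caused them:
#include <cstdio>   // fprintf
// (plus your usual GL headers)

// Drain the GL error queue and report anything found; call after suspect GL calls.
static void CheckGLError(const char * label)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
    {
        // 1285 is 0x0505, i.e. GL_OUT_OF_MEMORY, which is what showed up here.
        std::fprintf(stderr, "GL error 0x%04X after %s\n", err, label);
    }
}

// Usage:
// glDrawElements(GL_TRIANGLES, this->vertexBuffer->GetIndexArrayLength(), GL_UNSIGNED_SHORT, 0);
// CheckGLError("glDrawElements");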

Related

Does a Vertex Shader VAO need a VBO?

I am trying to use a VAO with a vertex shader. This works, but only if I set the length of the buffer data to 0. My understanding is that a VBO is not required here, because my vertex shader generates the vertices of the quad itself. However, if I attempt to create the VAO without binding a buffer at all, it also crashes.
As I mentioned, this works, however I am concerned because in Apple's Instruments, the OpenGL Expert reports a severe error:
Draw Call Exceeded Array Buffer Bounds
No Buffer Data - DYFKNoBufferData
Here is the code for generating the VAO:
glGenVertexArrays(1, &vaoID); // Create our Vertex Array Object
glBindVertexArray(vaoID); // Bind VAO
GLfloat vertices[12]; // Vertices for our square
vertices[0] = -0.5; vertices[1] = 0.5; vertices[2] = 0.0; // Top left corner
vertices[3] = -0.5; vertices[4] = -0.5; vertices[5] = 0.0; // Bottom left corner
vertices[6] = 0.5; vertices[7] = 0.5; vertices[8] = 0.0; // Top Right corner
vertices[9] = 0.5; vertices[10] = -0.5; vertices[11] = 0.0; // Bottom right corner
glGenBuffers(1, &fboTextureVboID); // Create our Vertex Buffer Object
glBindBuffer(GL_ARRAY_BUFFER, fboTextureVboID); // Bind VBO
// As long as I set the buffer data length to 0
// then my glDrawArrays(GL_TRIANGLE_STRIP, 0, 4) call works
// otherwise I get EXC_BAD_ACCESS
glBufferData(GL_ARRAY_BUFFER, 0, vertices, GL_STATIC_DRAW);
// configure vertex attributes
glEnableVertexAttribArray (...
glVertexAttribPointer(...
...
glEnableVertexAttribArray(0); // Make our Vertex Array Object Inactive
glBindVertexArray(0); // Make our Vertex Buffer Object Inactive
Drawing with:
glUseProgram(vertexShaderProgram);
glBindVertexArray(vaoID);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
Can I safely ignore Apple's errors? I am trying to use a VAO for the vertex shader because I would like to eliminate all the vertex attribute bindings in my drawing code. Or is there a better way to do this with a shader with or without a VAO?
EDIT:
Here is my vertex shader source:
#version 300 es
uniform lowp mat4 uProjectionMatrix;
in lowp vec4 a_position;
in lowp vec2 a_texCoord;
out lowp vec2 v_texCoord;
void main()
{
gl_Position = uProjectionMatrix * a_position;
v_texCoord = a_texCoord;
}
And fragment shader source:
#version 300 es
precision mediump float;
uniform lowp sampler2D uTexture;
in lowp vec2 v_texCoord;
out lowp vec4 fragmentColor;
void main()
{
fragmentColor = texture( uTexture, v_texCoord );
}
You can pick one of two things.
It is perfectly legal to have a VAO that has no attached buffer objects. However, this does not mean "create a buffer object, but don't put anything in it". It means not to attach buffer objects to the VAO. You just call glGenVertexArrays to generate the vertex array, and you're done.
No calls to glEnableVertexAttribArray. No calls to glVertexAttribPointer. If you're not using vertex arrays at all, you should not be making these calls at all.
It is also perfectly legal to have a VAO that contains buffer objects. These work like normal.
What you can't do is create a buffer object that has no storage allocated, then try to use it for vertex data. That's exactly what happens when you give glBufferData a size of 0 (or leave the call out entirely).
So you have to pick one side of the road or the other. Either your VAO uses one or more buffers, or it doesn't. If it uses a buffer, those buffers have to have storage. If it doesn't, then it won't care.
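For reference, here is a rough sketch of the buffer-less path under the question's #version 300 es setup, with the quad corners generated in the vertex shader from gl_VertexID (quadProgram and the shader snippet in the comments are placeholders, not code from the question):
// No VBO, no glVertexAttribPointer, no glEnableVertexAttribArray -- just an empty VAO.
GLuint vaoID = 0;
glGenVertexArrays(1, &vaoID);

// The vertex shader builds the quad itself, e.g.:
//   const vec2 corners[4] = vec2[](vec2(-0.5,  0.5), vec2(-0.5, -0.5),
//                                  vec2( 0.5,  0.5), vec2( 0.5, -0.5));
//   gl_Position = uProjectionMatrix * vec4(corners[gl_VertexID], 0.0, 1.0);

glUseProgram(quadProgram);          // quadProgram: placeholder for your linked program
glBindVertexArray(vaoID);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glBindVertexArray(0);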

WebGL: Access buffer from shader

I need to access a buffer from my shader. The buffer is created from an array. (In the real scenario, the array has 10k+ (variable) numbers.)
var myBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, myBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Uint8Array([1,2,3,4,5,6,7]), gl.STATIC_DRAW);
How do I send it so it's usable by the shader?
precision mediump float;
uniform uint[] myBuffer;//???
void main() {
gl_FragColor = vec4(myBuffer[0],myBuffer[1],0,1);
}
Normally, if it were an attribute, it'd be
gl.vertexAttribPointer(myBuffer, 2, gl.UNSIGNED_BYTE, false, 4, 0);
but I need to be able to access the whole array from any shader pixel, so it's not a vertex attribute.
Use a texture if you want random access to lots of data in a shader.
If you have 10000 values you might make a texture that's 100x100 pixels. You can then get each value from the texture with something like
uniform sampler2D u_texture;
vec2 textureSize = vec2(100.0, 100.0);
vec4 getValueFromTexture(float index) {
float column = mod(index, textureSize.x);
float row = floor(index / textureSize.x);
vec2 uv = vec2(
(column + 0.5) / textureSize.x,
(row + 0.5) / textureSize.y);
return texture2D(u_texture, uv);
}
Make sure your texture filtering is set to gl.NEAREST.
Of course if you make textureSize a uniform you could pass in the size of the texture.
As for why the + 0.5 is there (it samples at texel centers), see this answer
You can use normal gl.RGBA, gl.UNSIGNED_BYTE textures and add/multiply the channels together to get a large range of values. Or, you could use floating point textures if you don't want to mess with that; note that floating point textures have to be enabled first (in WebGL 1 that's the OES_texture_float extension).
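As a rough sketch of the setup side (written with the C API here; the WebGL calls have the same names on the gl context), 10000 byte-sized values packed into a 100x100 single-channel texture might look like:
GLubyte values[100 * 100];                      // your 10k values, one byte each
GLuint dataTex = 0;
glGenTextures(1, &dataTex);
glBindTexture(GL_TEXTURE_2D, dataTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, 100, 100, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, values);
// NEAREST so the shader reads exact texel values rather than interpolated ones.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
// In the shader, getValueFromTexture(index) above then returns the value in its
// .r channel, normalized to 0.0-1.0 (i.e. value / 255.0).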

Should the number of vertexes be equal to the number of texCoords?

My vertexShader:
attribute vec4 vertexPosition;
attribute vec2 vertexTexCoord;
varying vec2 texCoord;
uniform mat4 modelViewProjectionMatrix;
void main()
{
gl_Position = modelViewProjectionMatrix * vertexPosition;
texCoord = vertexTexCoord;
}
My fragmentShder:
precision mediump float;
varying vec2 texCoord;
uniform sampler2D texSampler2D;
void main()
{
gl_FragColor = texture2D(texSampler2D, texCoord);
}
Init Shader:
if (shader2D == nil) {
    shader2D = [[Shader2D alloc] init];
    shader2D.shaderProgramID = [ShaderUtils compileShaders:vertexShader2d :fragmentShader2d];
    if (0 < shader2D.shaderProgramID) {
        shader2D.vertexHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexPosition");
        shader2D.textureCoordHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexTexCoord");
        shader2D.mvpMatrixHandle = glGetUniformLocation(shader2D.shaderProgramID, "modelViewProjectionMatrix");
        shader2D.texSampler2DHandle = glGetUniformLocation(shader2D.shaderProgramID, "texSampler2D");
    }
    else {
        NSLog(@"Could not initialise shader2D");
    }
}
return shader2D;
Rendering:
GLKMatrix4 mvpMatrix;
mvpMatrix = [self position: position];
mvpMatrix = GLKMatrix4Multiply([QCARutils getInstance].projectionMatrix, mvpMatrix);
glUseProgram(shader.shaderProgramID);
glVertexAttribPointer(shader.vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)vertices);
glVertexAttribPointer(shader.textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)texCoords);
glEnableVertexAttribArray(shader.vertexHandle);
glEnableVertexAttribArray(shader.textureCoordHandle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, [texture textureID]);
glUniformMatrix4fv(shader.mvpMatrixHandle, 1, GL_FALSE, (const GLfloat*)&mvpMatrix);
glUniform1i(shader.texSampler2DHandle, 0);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, (const GLvoid*)indices);
glDisableVertexAttribArray(shader.vertexHandle);
glDisableVertexAttribArray(shader.textureCoordHandle);
It seems to work properly when each texture coordinate corresponds to one and only one vertex (number of texCoords == number of vertices).
My question: does OpenGL assign each texture coordinate to one and only one vertex? In other words, when texture coordinates and vertex coordinates are not in one-to-one correspondence, what will the rendering result be?
Yes, there needs to be a one-to-one correspondence between vertices and texCoords -- all information passed down the OpenGL pipeline is per-vertex, so every normal and every texCoord must have a vertex.
Note, however, that you can (and will often need to) have multiple texCoords, normals, or other per-vertex data for the same point in space: e.g. if you're wrapping a texture map around a sphere, there will be a "seam" where the ends of the rectangular texture meet. At those spots you'll need to have multiple vertices that occupy the same point.
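As a tiny hand-written illustration of that seam case (the data here is invented for the example), the same position shows up twice with different texCoords:
// v0 and v1 occupy the same point in space but carry different texCoords,
// so both ends of the texture can meet at the seam.
static const GLfloat vertices[] = {
    1.0f, 0.0f, 0.0f,   // v0 -- seam position
    1.0f, 0.0f, 0.0f,   // v1 -- same position, duplicated
    0.0f, 1.0f, 0.0f,   // v2
};
static const GLfloat texCoords[] = {
    0.0f, 0.5f,         // v0 -- left edge of the texture
    1.0f, 0.5f,         // v1 -- right edge of the texture
    0.5f, 0.0f,         // v2
};
// Triangles on one side of the seam index v0; triangles on the other side index v1.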

Output of vertex shader 'colorVarying' not read by fragment shader

As shown below, the error is very strange. I'm using OpenGL ES 2.0 and shaders in my iPad program, but something seems to be wrong with the code or the project configuration. The model is drawn with no color at all (black).
2012-12-01 14:21:56.707 medicare[6414:14303] Program link log:
WARNING: Could not find vertex shader attribute 'color' to match BindAttributeLocation request.
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
[Switching to process 6414 thread 0x1ad0f]
And I use glBindAttribLocation to pass position and normal data like this:
// This needs to be done prior to linking.
glBindAttribLocation(_program, INDEX_POSITION, "position");
glBindAttribLocation(_program, INDEX_NORMAL, "normal");
glBindAttribLocation(_program, INDEX_COLOR, "color"); //pass color to shader
There are two shaders in my project. So any good solutions to this odd error? Thanks a lot!
My vertex shader:
uniform mat4 modelViewProjectionMatrix;
uniform mat3 normalMatrix;
attribute vec4 position;
attribute vec3 normal;
attribute vec4 color;
varying lowp vec4 DestinationColor;
void main()
{
//vec4 a_Color = vec4(0.9, 0.4, 0.4, 1.0);
vec4 a_Color = color;
vec3 u_LightPos = vec3(1.0, 1.0, 2.0);
float distance = 2.4;
vec3 eyeNormal=normalize(normalMatrix * normal);
float diffuse = max(dot(eyeNormal, u_LightPos), 0.0); // remove approx ambient light
diffuse = diffuse * (1.0 / (1.0 + (0.25 * distance * distance)));
DestinationColor = a_Color * diffuse; // average between ambient and diffuse a_Color * (diffuse + 0.3)/2.0;
gl_Position = modelViewProjectionMatrix * position;
}
And my fragment shader is:
varying lowp vec4 DestinationColor;
void main()
{
gl_FragColor = DestinationColor;
}
Very simple. Thanks a lot!
I think there are a few things wrong here. First, your use of attribute might not be right. An attribute is a per-vertex input, an element that changes for each vertex: do you actually have color as an element in your vertex data structure? Because if not, the shader isn't going to work right.
And I use glBindAttribLocation to pass position and normal data like
this:
No, you don't. glBindAttribLocation "associates a generic vertex attribute index with a named attribute variable". It doesn't pass data. It associates an index (a GLint) with the variable. You pass the data in later with glVertexAttribPointer.
I don't even use the bind call. I do it this way: set up the attribute:
glAttributes[PROGNAME][A_vec3_vertexPosition] = glGetAttribLocation(glPrograms[PROGNAME], "a_vertexPosition");
glEnableVertexAttribArray(glAttributes[PROGNAME][A_vec3_vertexPosition]);
and then later, before calling glDrawElements, pass your pointer to it so it can get the data:
glVertexAttribPointer(glAttributes[PROGNAME][A_vec3_vertexPosition], 3, GL_FLOAT, GL_FALSE, stride, (void *) 0);
There I'm using a two-dimensional array of ints called glAttributes to hold all of my attribute indexes. But you can use GLints like you are now.
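Putting the two approaches side by side, a sketch using the question's INDEX_COLOR constant (program, colors, indexCount and indices are placeholders):
// Option A: fix the index yourself, before linking.
glBindAttribLocation(program, INDEX_COLOR, "color");
glLinkProgram(program);
GLint colorLoc = INDEX_COLOR;

// Option B: let the linker pick an index, then query it after linking.
// GLint colorLoc = glGetAttribLocation(program, "color");

// Either way, the actual per-vertex data only arrives at draw time:
glEnableVertexAttribArray(colorLoc);
glVertexAttribPointer(colorLoc, 4, GL_FLOAT, GL_FALSE, 0, colors);  // colors: per-vertex vec4 data
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, indices);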
The error message tells you what's wrong. In your vertex shader you say:
attribute vec4 color;
But then down below you also have an a_Color:
DestinationColor = a_Color * diffuse;
Be consistent with your variable names. I put a_, v_ and u_ in front of all of mine now to try to keep straight what kind of variable each is. What you're calling an a_ there is really a varying.
I also suspect that the error message was not from the same version of the shader and code that you posted because of the error:
WARNING: Output of vertex shader 'colorVarying' not read by fragment shader
And the error about colorVarying is confusing when it isn't even in this version of your vertex shader. Repost the current version of the shaders and the error messages you get from those and it will be easier to help you.

Render YpCbCr iPhone 4 Camera Frame to an OpenGL ES 2.0 Texture in iOS 4.3

I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture however winds up all black. My camera is configured as such:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
and I'm passing the pixel data to my texture like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));
My fragment shader is:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
lowp vec4 color;
color = texture2D(videoFrame, textureCoordinate);
lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
convertedColor += 1.164 * color.g; // Y
convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
gl_FragColor = vec4(convertedColor, 1.0);
}
and my vertex shader is
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
This works just fine when I'm working with a BGRA image, and my fragment shader only does
gl_FragColor = texture2D(videoFrame, textureCoordinate);
What if anything am I missing here? Thanks!
OK. We have a working success here. The key was passing the Y and the UV as two separate textures to the fragment shader. Here is the final shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 textureCoordinate;
uniform sampler2D videoFrame;
uniform sampler2D videoFrameUV;
const mat3 yuv2rgb = mat3(
1, 0, 1.2802,
1, -0.214821, -0.380589,
1, 2.127982, 0
);
void main() {
vec3 yuv = vec3(
1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
texture2D(videoFrameUV, textureCoordinate).r - 0.5,
texture2D(videoFrameUV, textureCoordinate).a - 0.5
);
vec3 rgb = yuv * yuv2rgb;
gl_FragColor = vec4(rgb, 1.0);
}
You'll need to create your textures like this:
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
and then pass them like this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glActiveTexture(GL_TEXTURE0);
glUniform1i(videoFrameUniform, 0);
glUniform1i(videoFrameUniformUV, 1);
Boy am I relieved!
P.S. The values for the yuv2rgb matrix are from here http://en.wikipedia.org/wiki/YUV and I copied code from here http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.
Your code appears to attempt to convert a 32-bit colour in 444-plus-unused-byte to RGBA. That's not going to work too well. I don't know of anything that outputs "YUVA", for one.
Also, I think the returned alpha channel is 0 for BGRA camera output, not 1, so I'm not sure why it works (IIRC to convert it to a CGImage you need to use AlphaNoneSkipLast).
The 420 "bi planar" output is structued something like this:
A header telling you where the planes are (used by CVPixelBufferGetBaseAddressOfPlane() and friends)
The Y plane: height × bytes_per_row_1 bytes (1 byte per pixel)
The Cb,Cr plane: height/2 × bytes_per_row_2 bytes (2 bytes per 2x2 block)
bytes_per_row_1 is approximately width, and bytes_per_row_2 is also approximately width (width/2 Cb,Cr pairs at 2 bytes each), but you'll want to use CVPixelBufferGetBytesPerRowOfPlane() for robustness (you also might want to check the results of ...GetHeightOfPlane and ...GetWidthOfPlane).
You might have luck treating it as a 1-component width*height texture and a 2-component width/2*height/2 texture. You'll probably want to check bytes-per-row and handle the case where it isn't simply width*number-of-components (although this is probably true for most of the video modes). AIUI, you'll also want to flush the GL context before calling CVPixelBufferUnlockBaseAddress().
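A small sketch of what the per-plane queries look like for the Y plane (the same CoreVideo calls named above, used between CVPixelBufferLockBaseAddress and CVPixelBufferUnlockBaseAddress; videoFrameTexture is the texture from the answer above):
size_t yWidth  = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
size_t yHeight = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
size_t yStride = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);
uint8_t *yBase = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);

glPixelStorei(GL_UNPACK_ALIGNMENT, 1);          // rows are not necessarily 4-byte aligned
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
if (yStride == yWidth) {
    // Rows are tightly packed, so the plane can be uploaded directly.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei)yWidth, (GLsizei)yHeight, 0,
                 GL_LUMINANCE, GL_UNSIGNED_BYTE, yBase);
}
// Otherwise copy the plane row by row into a tightly packed buffer first.
// Plane 1 (Cb,Cr) follows the same pattern with GL_LUMINANCE_ALPHA at width/2 x height/2.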
Alternatively, you can copy it all to memory into your expected format (optimizing this loop might be a bit tricky). Copying has the advantage that you don't need to worry about things accessing memory after you've unlocked the pixel buffer.
