OpenGL ES 2.0 Texture Won't Display - iOS

So I am currently learning OpenGL ES 2.0 on iOS and working on a maze game. The maze is randomly generated (so not a loaded model), and my struggle is texturing the walls and floor of the maze. My approach is to treat the maze as a series of cubes, and I have code that draws the individual faces of a cube separately (so I can create a path by simply leaving some faces out).
Using Capture GPU Frame, I have confirmed that the texture is indeed loading correctly, that the data in the frame buffers is correct, and that I'm not getting any errors. I can see my other lighting effects (so the face isn't completely black), but no texture appears.
Here is how I've defined my cube faces:
GLfloat rightCubeVertexData[] =
{
0.5f, -0.5f, -0.5f,
0.5f, -0.5f, 0.5f,
0.5f, 0.5f, -0.5f,
0.5f, 0.5f, -0.5f,
0.5f, -0.5f, 0.5f,
0.5f, 0.5f, 0.5f,
};
GLfloat rightCubeNormalData[] =
{
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
-1.0f, 0.0f, 0.0f,
};
GLfloat rightCubeTexCoords[] =
{
0.0, 0.0,
1.0, 0.0,
0.0, 1.0,
0.0, 1.0,
1.0, 0.0,
1.0, 1.0,
};
The other faces are defined essentially the same way, except each face is in a single array; splitting up the positions, normals, and tex coords was just something I tried. I'm just trying to get one face to texture, and then I'll expand to the rest.
Here is how I load the data into the buffers:
glGenVertexArraysOES(1, &_rightVertexArray);
glBindVertexArrayOES(_rightVertexArray);
glGenBuffers(3, _rightVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _rightVertexBuffer[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeVertexData), rightCubeVertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glBindBuffer(GL_ARRAY_BUFFER, _rightVertexBuffer[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeNormalData), rightCubeNormalData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glBindBuffer(GL_ARRAY_BUFFER, _rightVertexBuffer[2]);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeTexCoords), rightCubeTexCoords, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
Again, using three buffers was an experiment; the other faces are each defined in one buffer, with offsets.
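For comparison, here is a minimal sketch of what a single interleaved buffer for this face could look like (illustrative only, not the project's actual code; it reuses the BUFFER_OFFSET macro and GLKit attribute enums from above and assumes a hypothetical buffer handle vbo). The stride becomes the size of one whole vertex, and each attribute gets its own byte offset:
// Interleaved layout: position (3 floats), normal (3), texcoord (2) per vertex.
GLfloat rightCubeInterleaved[] =
{
//  position             normal              texcoord
    0.5f, -0.5f, -0.5f,  -1.0f, 0.0f, 0.0f,  0.0f, 0.0f,
    0.5f, -0.5f,  0.5f,  -1.0f, 0.0f, 0.0f,  1.0f, 0.0f,
    0.5f,  0.5f, -0.5f,  -1.0f, 0.0f, 0.0f,  0.0f, 1.0f,
    0.5f,  0.5f, -0.5f,  -1.0f, 0.0f, 0.0f,  0.0f, 1.0f,
    0.5f, -0.5f,  0.5f,  -1.0f, 0.0f, 0.0f,  1.0f, 0.0f,
    0.5f,  0.5f,  0.5f,  -1.0f, 0.0f, 0.0f,  1.0f, 1.0f,
};
GLsizei stride = 8 * sizeof(GLfloat); // 32 bytes per vertex
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeInterleaved), rightCubeInterleaved, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, stride, BUFFER_OFFSET(0));
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, stride, BUFFER_OFFSET(3 * sizeof(GLfloat)));
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, stride, BUFFER_OFFSET(6 * sizeof(GLfloat)));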
Here is how I load textures:
crateTexture = [self setupTexture:@"crate.jpg"];
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, crateTexture);
glUniform1i(uniforms[UNIFORM_TEXTURE], 0);
// Load in and set up texture image (adapted from Ray Wenderlich)
- (GLuint)setupTexture:(NSString *)fileName
{
CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
if (!spriteImage) {
NSLog(#"Failed to load image %#", fileName);
exit(1);
}
size_t width = CGImageGetWidth(spriteImage);
size_t height = CGImageGetHeight(spriteImage);
GLubyte *spriteData = (GLubyte *) calloc(width*height*4, sizeof(GLubyte));
CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
CGContextRelease(spriteContext);
GLuint texName;
glGenTextures(1, &texName);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
free(spriteData);
return texName;
}
Then, at the appropriate time, I simply call glDrawArrays to draw the face. I am completely stumped on this, and it is probably a very silly error, but any help anybody could provide would be much appreciated.
P.S. Here is my fragment shader
varying vec3 eyeNormal;
varying vec4 eyePos;
varying vec2 texCoordOut;
uniform sampler2D texture;
uniform vec3 flashlightPosition;
uniform vec3 diffuseLightPosition;
uniform vec4 diffuseComponent;
uniform float shininess;
uniform vec4 specularComponent;
uniform vec4 ambientComponent;
void main()
{
vec4 ambient = ambientComponent;
vec3 N = normalize(eyeNormal);
float nDotVP = max(0.0, dot(N, normalize(diffuseLightPosition)));
vec4 diffuse = diffuseComponent * nDotVP;
vec3 E = normalize(-eyePos.xyz);
vec3 L = normalize(flashlightPosition - eyePos.xyz);
vec3 H = normalize(L+E);
float Ks = pow(max(dot(N, H), 0.0), shininess);
vec4 specular = Ks*specularComponent;
if( dot(L, N) < 0.0 ) {
specular = vec4(0.0, 0.0, 0.0, 1.0);
}
gl_FragColor = (ambient + diffuse + specular) * texture2D(texture, texCoordOut);
//gl_FragColor = ambient + diffuse + specular;
gl_FragColor.a = 1.0;
}
And yes, all the uniform names are correct and correspond to something in the main code.
EDIT: Here is the vertex shader
precision mediump float;
attribute vec4 position;
attribute vec3 normal;
attribute vec2 texCoordIn;
varying vec3 eyeNormal;
varying vec4 eyePos;
varying vec2 texCoordOut;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
void main()
{
eyeNormal = (normalMatrix * normal);
eyePos = modelViewMatrix * position;
texCoordOut = texCoordIn;
gl_Position = modelViewProjectionMatrix * position;
}

To sum up the procedure worked out in the comments...
There is much that can go wrong when dealing with textures, and it is good to know how to pinpoint where the issue is.
Things to be careful with for the texture itself:
Check that you set the texture parameters, such as
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
Check that you set the uniform with glUniform1i(uniformName, 0), where the last parameter corresponds to the active texture unit and not the texture ID.
Other checks include verifying that the uniform name is correct and that the texture is bound. If possible, check in the debugger whether the texture is properly loaded.
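On the texture unit point in particular, here is a minimal sketch (with hypothetical names) of how the unit index and the texture name relate; myTexture is a texture object and textureUniform a sampler uniform location:
GLuint myTexture; // texture "name" returned by glGenTextures
glGenTextures(1, &myTexture);
glActiveTexture(GL_TEXTURE1);            // select texture unit 1
glBindTexture(GL_TEXTURE_2D, myTexture); // attach the texture to that unit
// The sampler uniform receives the UNIT index (1), never the texture name.
glUniform1i(textureUniform, 1);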
Next to that, there is a chance your texture coordinates are messed up, and this seems to be a very common issue. To debug it, it is best to replace the color sampled from the texture in your fragment shader with the texture coordinate itself, e.g. replace texture2D(texture, texCoordOut) with vec4(texCoordOut.x, texCoordOut.y, .0, 1.0). Since the texture coordinates should be in the range [0,1], you should see nice gradients between red and green in your scene. If you do not see them, your texture coordinates are messed up: if everything is black, your coordinates are all zero; if most of it is yellow, your coordinates are most likely too large.
In your case the texture coordinates rendered all black, which means they were all zero: you were always getting the first pixel from the texture, thus a constant color in your scene. What to check at this point is:
Are the coordinates you push to the GPU correct
Is the pointer set correctly as glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0)) (check all the parameters)
Is the attribute enabled glEnableVertexAttribArray(GLKVertexAttribTexCoord0)
Is the attribute location bound when the shader program is compiled and linked.
In your case you had forgotten to bind the texture coordinate attribute, which is quite easy to miss.
From the information given it was impossible to find this mistake directly, but the procedure above should let you pinpoint where an issue like this actually lies. It might be handy in the future as well.

It turns out I had forgotten to bind the attribute location when compiling the shader. I needed to add the line
glBindAttribLocation(_program, GLKVertexAttribTexCoord0, "texCoordIn");
to the load shaders method.
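For anyone hitting the same mistake: glBindAttribLocation only takes effect at the next glLinkProgram, so the binding calls must come before linking. A minimal sketch of the required ordering (the vertexShader and fragmentShader handles are hypothetical; the attribute names match the vertex shader above):
GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
// Bind ALL attribute locations before linking; bindings made after
// glLinkProgram have no effect until the program is linked again.
glBindAttribLocation(program, GLKVertexAttribPosition, "position");
glBindAttribLocation(program, GLKVertexAttribNormal, "normal");
glBindAttribLocation(program, GLKVertexAttribTexCoord0, "texCoordIn");
glLinkProgram(program);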

Related

Drawing 2D bitmap in OpenGL ES (iOS)

I've been struggling for hours trying to render a simple 2D bitmap in OpenGL ES (iOS). While in OpenGL I could simply use glDrawPixels, it doesn't exist in OpenGL ES, neither does glBegin. Seems like glVertexPointer is now deprecated too.
(Note: the bitmap I'm rendering is constantly changing at 60 FPS, so glDrawPixels is a better solution than using textures)
I failed to find any documented sample code that draws a bitmap using current APIs.
So to put it shortly: given an array of pixels (in RGBX format, for example), how do I render it, potentially scaled using nearest neighbor, using OpenGL ES?
The short answer is to render a textured quad and implement a model matrix to perform various transforms (e.g. scaling).
How to render a textured quad
First you'll need to build a VBO with your quad's vertex positions:
float[] positions = {
+0.5f, +0.5f, +0f, // top right
-0.5f, +0.5f, +0f, // top left
+0.5f, -0.5f, +0f, // bottom right
-0.5f, -0.5f, +0f // bottom left
};
int positionVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
glBufferData(GL_ARRAY_BUFFER, floatBuffer(positions), GL_STATIC_DRAW);
Then pass the necessary info to your vertex shader:
int positionAttribute = glGetAttribLocation(shader, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, false, 0, 0);
Now we'll do the same thing but with the quad's texture coordinates:
float[] texcoords = {
1f, 0f, // top right
0f, 0f, // top left
1f, 1f, // bottom right
0f, 1f // bottom left
};
int texcoordVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, texcoordVBO);
glBufferData(GL_ARRAY_BUFFER, floatBuffer(texcoords), GL_STATIC_DRAW);
int textureAttribute = glGetAttribLocation(shader, "texcoord");
glEnableVertexAttribArray(textureAttribute);
glVertexAttribPointer(textureAttribute, 2, GL_FLOAT, false, 0, 0);
You could interleave this data into a single VBO, but I'll leave that to the reader. Regardless, we've submitted all the quad vertex data to the GPU and told the shader how to access it.
Next we create our texture object, assuming we have an object called image:
int texture = glGenTextures();
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.getWidth(), image.getHeight(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.getPixels());
And pass that info to the shaders:
int textureUniform = glGetUniformLocation(shader, "image");
glUniform1i(textureUniform, 0);
Check out open.gl's page on Textures for more information.
Finally, the shaders:
vertex.glsl
attribute vec3 position;
attribute vec2 texcoord;
varying vec2 uv;
void main()
{
gl_Position = vec4(position, 1.0);
uv = texcoord;
}
fragment.glsl
varying vec2 uv;
uniform sampler2D image;
void main()
{
gl_FragColor = texture2D(image, uv);
}
Given no other GL state changes, this will render the textured quad.
Note: since I don't currently have access to an iOS development environment, this sample is written in Java. The principle is the same, however.
EDIT: How to build the shader program
A shader program is composed of a series of shaders. The bare minimum is a vertex and a fragment shader. This is how we would build a shader program from the two shaders above:
String vertexSource = loadShaderSource("vertex.glsl");
int vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, vertexSource);
glCompileShader(vertexShader);
String fragmentSource = loadShaderSource("fragment.glsl");
int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, fragmentSource);
glCompileShader(fragmentShader);
int shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glLinkProgram(shaderProgram);
Once created, you communicate with the program via glVertexAttribPointer and glUniform.
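The samples above omit error checking for brevity. Here is a hedged sketch of the status checks worth adding after each compile and link step, written in C to match the iOS code elsewhere on this page:
// Check the compile status of a shader (repeat for the fragment shader).
GLint compiled = 0;
glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &compiled);
if (compiled == GL_FALSE) {
    GLchar log[512];
    glGetShaderInfoLog(vertexShader, sizeof(log), NULL, log);
    NSLog(@"Vertex shader compile failed: %s", log);
}
// Check the link status of the program the same way.
GLint linked = 0;
glGetProgramiv(shaderProgram, GL_LINK_STATUS, &linked);
if (linked == GL_FALSE) {
    GLchar log[512];
    glGetProgramInfoLog(shaderProgram, sizeof(log), NULL, log);
    NSLog(@"Program link failed: %s", log);
}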

OpenGL ES triangles drawing mistake on iOS

I'm trying to draw multiple triangles using OpenGL ES on iOS. I create a vertex array of float values with the following structure
{x, y, z, r, g, b, a}
for each vertex. The final array for one triangle is:
{x1, y1, z1, r1, g1, b1, a1, x2, y2, z2, r2, g2, b2, a2, x3, y3, z3,
r3, g3, b3, a3}
Here is my update method:
-(void)update {
float aspect = fabsf(self.view.bounds.size.width / self.view.bounds.size.height);
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(65.0f), aspect, 1.0, 100.0);
self.effect.transform.projectionMatrix = projectionMatrix;
}
and render:
-(void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
[self drawShapes]; // here I fill vertices array
glClearColor(0.65f, 0.65f, 0.8f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
int numItems = 3 * trianglesCount;
glBindVertexArrayOES(vao);
[self.effect prepareToDraw];
glUseProgram(shaderProgram);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * itemSize, convertedVerts, GL_DYNAMIC_DRAW);
glVertexAttribPointer(vertexPositionAttribute, 3, GL_FLOAT, false, stride, 0);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(vertexColorAttribute, 4, GL_FLOAT, false, stride, (GLvoid*)(3 * sizeof(float)));
glEnableVertexAttribArray(GLKVertexAttribColor);
glDrawArrays(GL_TRIANGLES, 0, numItems);
}
Context setup. Here I bind my vertex array and generate vertex buffer:
-(void)setupContext
{
self.context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
if(!self.context) {
NSLog(#"Failed to create OpenGL ES Context");
}
GLKView *view = (GLKView *)self.view;
view.context = self.context;
view.drawableDepthFormat = GLKViewDrawableDepthFormat24;
[EAGLContext setCurrentContext:self.context];
self.effect = [[GLKBaseEffect alloc] init];
glEnable(GL_DEPTH_TEST);
glDisable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glGenVertexArraysOES(1, &vao);
glBindVertexArrayOES(vao);
glGenBuffers(1, &vbo);
}
Fragment and vertex shaders are pretty simple:
//fragment
varying lowp vec4 vColor;
void main(void) {
gl_FragColor = vColor;
}
//vertex
attribute vec3 aVertexPosition;
attribute vec4 aVertexColor;
varying lowp vec4 vColor;
void main(void) {
gl_Position = vec4(aVertexPosition, 1.0);
vColor = aVertexColor;
}
Result: the triangles aren't shown.
Where is the mistake? I guess the problem is with the projection matrix. Here is the GitHub link to the Xcode project.
Downloaded your code and tried it out. I see the purplish screen and no triangles, so I'm guessing that's the problem. I see two things that could be causing it:
1) You'll need to pass glBufferData the total number of bytes you're sending it, like this: glBufferData(GL_ARRAY_BUFFER, sizeof(float) * itemSize * numItems, convertedVerts, GL_DYNAMIC_DRAW);. Describing how to chunk that data remains the job of glVertexAttribPointer.
2) That doesn't seem to be the only thing, since I still can't get triangles to show up. I've never used GLKit before (I just have a little experience with OpenGL on the desktop). That said, if I replace GLKVertexAttribPosition and GLKVertexAttribColor with 0 and 1 respectively, and apply the glBufferData fix from 1, I see artifacts flashing on the simulator screen when I move the mouse. So there's got to be something fishy with those enum values and glVertexAttribPointer.
Edit - clarification for 2:
After changing the glBufferData line as described in 1, I also modified the glEnableVertexAttribArray lines so they looked like this:
glVertexAttribPointer(vertexPositionAttribute, 3, GL_FLOAT, false, stride, 0);
glEnableVertexAttribArray(0);
glVertexAttribPointer(vertexColorAttribute, 4, GL_FLOAT, false, stride, (GLvoid*)(3 * sizeof(float)));
glEnableVertexAttribArray(1);
After both of those changes I can see red triangles flickering on the screen. A step closer, since I couldn't see anything before. But I haven't been able to figure it out any further than that :(
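To put the two changes in one place, here is a hedged sketch of a consistent version of the upload and attribute setup (variable names are from the question; per the experiments above this gets triangles flickering, so it may not be the whole fix):
// Upload the full buffer: itemSize floats per vertex, numItems vertices.
glBufferData(GL_ARRAY_BUFFER, sizeof(float) * itemSize * numItems, convertedVerts, GL_DYNAMIC_DRAW);
// Use the SAME index in glVertexAttribPointer and glEnableVertexAttribArray,
// rather than mixing custom locations with the GLKVertexAttrib enums.
glVertexAttribPointer(vertexPositionAttribute, 3, GL_FLOAT, GL_FALSE, stride, 0);
glEnableVertexAttribArray(vertexPositionAttribute);
glVertexAttribPointer(vertexColorAttribute, 4, GL_FLOAT, GL_FALSE, stride, (GLvoid*)(3 * sizeof(float)));
glEnableVertexAttribArray(vertexColorAttribute);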

Should the number of vertices be equal to the number of texCoords?

My vertexShader:
attribute vec4 vertexPosition;
attribute vec2 vertexTexCoord;
varying vec2 texCoord;
uniform mat4 modelViewProjectionMatrix;
void main()
{
gl_Position = modelViewProjectionMatrix * vertexPosition;
texCoord = vertexTexCoord;
}
My fragmentShader:
precision mediump float;
varying vec2 texCoord;
uniform sampler2D texSampler2D;
void main()
{
gl_FragColor = texture2D(texSampler2D, texCoord);
}
Init Shader:
if (shader2D == nil) {
shader2D = [[Shader2D alloc] init];
shader2D.shaderProgramID = [ShaderUtils compileShaders:vertexShader2d :fragmentShader2d];
if (0 < shader2D.shaderProgramID) {
shader2D.vertexHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexPosition");
shader2D.textureCoordHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexTexCoord");
shader2D.mvpMatrixHandle = glGetUniformLocation(shader2D.shaderProgramID, "modelViewProjectionMatrix");
shader2D.texSampler2DHandle = glGetUniformLocation(shader2D.shaderProgramID,"texSampler2D");
}
else {
NSLog(#"Could not initialise shader2D");
}
}
return shader2D;
Rendering:
GLKMatrix4 mvpMatrix;
mvpMatrix = [self position: position];
mvpMatrix = GLKMatrix4Multiply([QCARutils getInstance].projectionMatrix, mvpMatrix);
glUseProgram(shader.shaderProgramID);
glVertexAttribPointer(shader.vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)vertices);
glVertexAttribPointer(shader.textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)texCoords);
glEnableVertexAttribArray(shader.vertexHandle);
glEnableVertexAttribArray(shader.textureCoordHandle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, [texture textureID]);
glUniformMatrix4fv(shader.mvpMatrixHandle, 1, GL_FALSE, (const GLfloat*)&mvpMatrix);
glUniform1i(shader.texSampler2DHandle, 0);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, (const GLvoid*)indices);
glDisableVertexAttribArray(shader.vertexHandle);
glDisableVertexAttribArray(shader.textureCoordHandle);
It seems to work properly when one texture coordinate corresponds to one and only one vertex coordinate (number of texCoords == number of vertices).
My question: does OpenGL assign a texture coordinate to one and only one vertex? In other words, when texture coordinates and vertex coordinates are not in one-to-one correspondence, what will the rendering result turn out to be?
Yes, there needs to be a one-to-one correspondence between vertices and texCoords -- all information passed down the OpenGL pipeline is per-vertex, so every normal and every texCoord must have a vertex.
Note, however, that you can (and will often need to) have multiple texCoords, normals, or other per-vertex data for the same point in space: e.g. if you're wrapping a texture map around a sphere, there will be a "seam" where the ends of the rectangular texture meet. At those spots you'll need to have multiple vertices that occupy the same point.
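As a tiny illustration of that seam case (hypothetical data): the same point in space appears twice with different texture coordinates, once for the triangle that ends the wrap and once for the triangle that starts it:
// Two vertices at the SAME position on the sphere's seam, but with
// different texture coordinates, so the rectangular texture can wrap.
GLfloat seamPositions[] = {
    0.0f, 1.0f, 0.0f, // vertex A: end of the wrap
    0.0f, 1.0f, 0.0f, // vertex B: start of the wrap (same point in space)
};
GLfloat seamTexCoords[] = {
    1.0f, 0.5f, // vertex A samples the right edge of the texture
    0.0f, 0.5f, // vertex B samples the left edge
};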

glGetAttribLocation/glGetUniformLocation returns 0, causing EXC_BAD_ACCESS on glDrawArrays call (iOS)

I am creating a simple OpenGL ES 2.0 application for iOS, and I get EXC_BAD_ACCESS whenever I call glDrawArrays. I found that this occurs when I have previously called glEnableVertexAttribArray for my two attributes (position and color). I then found that glGetAttribLocation was returning 1 for position and 0 for color, and that glGetUniformLocation was returning 0 for my MVP matrix. I am not sure if 0 is a valid value, or why glEnableVertexAttribArray appears to be causing EXC_BAD_ACCESS when glDrawArrays is called.
Here is my code:
compileShaders function:
-(void)compileShaders {
GLuint vertShader = [self compileShader:@"Shader" ofType:GL_VERTEX_SHADER];
GLuint fragShader = [self compileShader:@"Shader" ofType:GL_FRAGMENT_SHADER];
GLuint program = glCreateProgram();
glAttachShader(program, vertShader);
glAttachShader(program, fragShader);
glLinkProgram(program);
GLint success;
glGetProgramiv(program, GL_LINK_STATUS, &success);
if (success == GL_FALSE) {
GLchar messages[256];
glGetProgramInfoLog(program, sizeof(messages), 0, &messages[0]);
NSLog(#"%#", [NSString stringWithUTF8String:messages]);
exit(1);
}
glUseProgram(program);
_positionSlot = glGetAttribLocation(program, "position");
_colorSlot = glGetAttribLocation(program, "color"); //Returns 0
_mvpSlot = glGetUniformLocation(program, "MVP"); //Returns 0
if (!_positionSlot || !_colorSlot || !_mvpSlot) {
NSLog(#"Failed to retrieve the locations of the shader variables:\n Position:%i\n Color:%i\n MVP:%i", _positionSlot, _colorSlot, _mvpSlot); //Prints out the values of 1, 0, 0
}
glEnableVertexAttribArray(_positionSlot);
glEnableVertexAttribArray(_colorSlot);
}
My render function:
-(void)render:(CADisplayLink *)displayLink {
glClearColor(0.5, 0.5, 0.5, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_DEPTH_TEST);
GLfloat mvp[16] = {
1.0f, 0.0f, 0.0f, 0.0f,
0.0f, 1.0f, 0.0f, 0.0f,
0.0f, 0.0f, 1.0f, 0.0f,
0.0f, 0.0f, 0.0f, 1.0f,
};
glUniformMatrix4fv(_mvpSlot, 1, 0, mvp);
glViewport(0, 0, width, height);
glVertexAttribPointer(_positionSlot, 3, GL_FLOAT, GL_FALSE, 0, vertices);
glVertexAttribPointer(_colorSlot, 4, GL_UNSIGNED_BYTE, GL_FALSE, 0, colors);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
[_context presentRenderbuffer:GL_RENDERBUFFER];
}
And my vertex shader:
attribute vec4 position;
attribute vec4 color;
uniform mat4 MVP;
varying vec4 v_color;
void main(void) {
gl_Position = MVP * position;
v_color = color;
}
0 is a valid value, -1 is the "error" or "not found" value. The space for uniform locations and attrib locations is separate, so it's fine to have both a location 0 uniform and a location 0 attribute.
You should post the vertices and colors arrays so we can check that they have the right sizes; a size mismatch there is likely to cause a crash.
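For the record, a hedged sketch of the check the question's code should be doing instead of if (!_positionSlot || ...) (names mirror the question):
GLint positionSlot = glGetAttribLocation(program, "position");
GLint colorSlot = glGetAttribLocation(program, "color");
GLint mvpSlot = glGetUniformLocation(program, "MVP");
// -1 means "not found or optimized out"; 0 is a perfectly valid location,
// so testing with !slot wrongly rejects location 0.
if (positionSlot == -1 || colorSlot == -1 || mvpSlot == -1) {
    NSLog(@"Failed to retrieve a shader variable location");
}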
I have the same issue and no solution, but I have tried to debug extensively, so perhaps it is helpful to share.
I traced the issue down to the glCreateProgram() and glCreateShader() calls. They both return zero, which is the error return. However, in both cases glGetError() returns GL_NO_ERROR. In fact, no error pops up all the way through the compile and link chain following the create calls.
I have the same symptom that glGetAttribLocation (and the uniform lookups as well) always returns zero, rather than -1 for an error. I can even feed in broken shader files and it will not lead to a compile error or any log info. The string fed into the compile was checked to be correct.
I am completely dumbfounded by this behavior, because glCreateProgram clearly errors but doesn't set an error, and it is not clear to me at all why it errors. What is especially confusing is that a whole chain of GL function calls don't return errors. I get my first error when the program calls a GL function that operates on a uniform, such as glUniformMatrix4fv() or glUniform1i(). But clearly things are broken well before then, because all uniforms return as 0 even though they cannot all be zero.
Edit: I found the solution in my case. OpenGL ES seems to be quite flexible about when framebuffers are set up, but I needed to force framebuffer setup before starting with shaders to get rid of the issue. E.g., if one follows the old EAGLView template, framebuffers are created in layoutSubviews; however, that is too late for shaders that are typically initialized in, say, initWithCoder. Otherwise the symptoms are as described above.

Render YpCbCr iPhone 4 Camera Frame to an OpenGL ES 2.0 Texture in iOS 4.3

I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture however winds up all black. My camera is configured as such:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
and I'm passing the pixel data to my texture like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));
My fragment shader is:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
lowp vec4 color;
color = texture2D(videoFrame, textureCoordinate);
lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
convertedColor += 1.164 * color.g; // Y
convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
gl_FragColor = vec4(convertedColor, 1.0);
}
and my vertex shader is
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
gl_Position = position;
textureCoordinate = inputTextureCoordinate.xy;
}
This works just fine when I'm working with a BGRA image, and my fragment shader only does
gl_FragColor = texture2D(videoFrame, textureCoordinate);
What, if anything, am I missing here? Thanks!
OK. We have a working success here. The key was passing the Y and the UV as two separate textures to the fragment shader. Here is the final shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 textureCoordinate;
uniform sampler2D videoFrame;
uniform sampler2D videoFrameUV;
const mat3 yuv2rgb = mat3(
1, 0, 1.2802,
1, -0.214821, -0.380589,
1, 2.127982, 0
);
void main() {
vec3 yuv = vec3(
1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
texture2D(videoFrameUV, textureCoordinate).r - 0.5,
texture2D(videoFrameUV, textureCoordinate).a - 0.5
);
vec3 rgb = yuv * yuv2rgb;
gl_FragColor = vec4(rgb, 1.0);
}
You'll need to create your textures like this:
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
and then pass them like this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glActiveTexture(GL_TEXTURE0);
glUniform1i(videoFrameUniform, 0);
glUniform1i(videoFrameUniformUV, 1);
Boy am I relieved!
P.S. The values for the yuv2rgb matrix are from here http://en.wikipedia.org/wiki/YUV and I copied code from here http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.
Your code appears to attempt to convert a 32-bit colour in 444-plus-unused-byte to RGBA. That's not going to work too well. I don't know of anything that outputs "YUVA", for one.
Also, I think the returned alpha channel is 0 for BGRA camera output, not 1, so I'm not sure why it works (IIRC to convert it to a CGImage you need to use AlphaNoneSkipLast).
The 420 "bi planar" output is structued something like this:
A header telling you where the planes are (used by CVPixelBufferGetBaseAddressOfPlane() and friends)
The Y plane: height × bytes_per_row_1 × 1 bytes
The Cb,Cr plane: height/2 × bytes_per_row_2 × 2 bytes (2 bytes per 2x2 block).
bytes_per_row_1 is approximately width and bytes_per_row_2 is approximately width/2, but you'll want to use CVPixelBufferGetBytesPerRowOfPlane() for robustness (you also might want to check the results of ..GetHeightOfPlane and ...GetWidthOfPlane).
You might have luck treating it as a 1-component width*height texture and a 2-component width/2*height/2 texture. You'll probably want to check bytes-per-row and handle the case where it isn't simply width*number-of-components (although this is probably true for most of the video modes). AIUI, you'll also want to flush the GL context before calling CVPixelBufferUnlockBaseAddress().
Alternatively, you can copy it all to memory into your expected format (optimizing this loop might be a bit tricky). Copying has the advantage that you don't need to worry about things accessing memory after you've unlocked the pixel buffer.
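If you do copy, here is a minimal sketch of a row-pitch-aware copy of the Y plane (it assumes the pixel buffer is already locked with CVPixelBufferLockBaseAddress):
// Copy the Y plane row by row so padding bytes at the end of each source
// row (bytesPerRow > width) don't end up in the texture.
size_t width = CVPixelBufferGetWidthOfPlane(cameraFrame, 0);
size_t height = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);
uint8_t *src = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0);
uint8_t *packed = (uint8_t *)malloc(width * height);
for (size_t row = 0; row < height; row++) {
    memcpy(packed + row * width, src + row * bytesPerRow, width);
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, (GLsizei)width, (GLsizei)height, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, packed);
free(packed);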
