Drawing a 2D bitmap in OpenGL ES (iOS)

I've been struggling for hours trying to render a simple 2D bitmap in OpenGL ES (iOS). In desktop OpenGL I could simply use glDrawPixels, but it doesn't exist in OpenGL ES, and neither does glBegin. glVertexPointer seems to be deprecated too.
(Note: the bitmap I'm rendering is constantly changing at 60 FPS, so glDrawPixels is a better solution than using textures)
I failed to find any documented sample code that draws a bitmap using current APIs.
So, to put it shortly: given an array of pixels (in RGBX format, for example), how do I render it, potentially scaled using nearest neighbor, using OpenGL ES?

The short answer is to render a textured quad and implement a model matrix to perform various transforms (e.g. scaling).
How to render a textured quad
First you'll need to build a VBO with your quad's vertex positions:
float[] positions = {
    +0.5f, +0.5f, 0f, // top right
    -0.5f, +0.5f, 0f, // top left
    +0.5f, -0.5f, 0f, // bottom right
    -0.5f, -0.5f, 0f  // bottom left
};
int positionVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, positionVBO);
glBufferData(GL_ARRAY_BUFFER, floatBuffer(positions), GL_STATIC_DRAW);
Then pass the necessary info to your vertex shader:
int positionAttribute = glGetAttribLocation(shader, "position");
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, false, 0, 0);
Now we'll do the same thing but with the quad's texture coordinates:
float[] texcoords = {
    1f, 0f, // top right
    0f, 0f, // top left
    1f, 1f, // bottom right
    0f, 1f  // bottom left
};
int texcoordVBO = glGenBuffers();
glBindBuffer(GL_ARRAY_BUFFER, texcoordVBO);
glBufferData(GL_ARRAY_BUFFER, floatBuffer(texcoords), GL_STATIC_DRAW);
int textureAttribute = glGetAttribLocation(shader, "texcoord");
glEnableVertexAttribArray(textureAttribute);
glVertexAttribPointer(textureAttribute, 2, GL_FLOAT, false, 0, 0);
You could interleave this data into a single VBO; a sketch of that variant is below. Regardless, we've submitted all the quad vertex data to the GPU and told the shader how to access it.
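For reference, a sketch of the interleaved variant (written C-style here; positionAttribute and textureAttribute are the attribute locations queried above, and the stride and offsets are in bytes):

float interleaved[] = {
    // x      y     z     u     v
    +0.5f, +0.5f, 0.0f, 1.0f, 0.0f, // top right
    -0.5f, +0.5f, 0.0f, 0.0f, 0.0f, // top left
    +0.5f, -0.5f, 0.0f, 1.0f, 1.0f, // bottom right
    -0.5f, -0.5f, 0.0f, 0.0f, 1.0f  // bottom left
};
GLsizei stride = 5 * sizeof(float); // one vertex = 3 position + 2 texcoord floats
GLuint vbo;
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(interleaved), interleaved, GL_STATIC_DRAW);
glEnableVertexAttribArray(positionAttribute);
glVertexAttribPointer(positionAttribute, 3, GL_FLOAT, GL_FALSE, stride, (GLvoid *)0);
glEnableVertexAttribArray(textureAttribute);
glVertexAttribPointer(textureAttribute, 2, GL_FLOAT, GL_FALSE, stride, (GLvoid *)(3 * sizeof(float)));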
Next we create our texture object, assuming we have an object called image:
int texture = glGenTextures();
glBindTexture(GL_TEXTURE_2D, texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, image.getWidth(), image.getHeight(), 0, GL_RGB, GL_UNSIGNED_BYTE, image.getPixels());
And pass that info to the shaders:
int textureUniform = glGetUniformLocation(shader, "image");
glUniform1i(textureUniform, 0);
Check out open.gl's page on Textures for more information.
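Two of the question's constraints are worth addressing directly. For a bitmap that changes every frame, allocate the texture once with glTexImage2D and then overwrite it each frame with glTexSubImage2D, which avoids reallocating storage; nearest-neighbour scaling is just a filter parameter. A sketch, assuming width, height and pixels describe the new frame in the same GL_RGB layout used above:

glBindTexture(GL_TEXTURE_2D, texture);
// One-time setup: nearest-neighbour sampling, plus clamping (required
// for non-power-of-two textures in OpenGL ES 2.0).
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

// Every frame: replace the texel data in the existing storage.
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGB, GL_UNSIGNED_BYTE, pixels);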
Finally, the shaders:
vertex.glsl
attribute vec3 position;
attribute vec2 texcoord;
varying vec2 uv;
void main()
{
    gl_Position = vec4(position, 1.0);
    uv = texcoord;
}
fragment.glsl
varying vec2 uv;
uniform sampler2D image;
void main()
{
    gl_FragColor = texture2D(image, uv);
}
Given no other GL state changes, this will render your image as a textured quad.
Note: Since I don't have access to an iOS development environment currently this sample is written in Java. The principle is the same however.
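As for the scaling mentioned at the top, a sketch of the vertex shader extended with a model matrix (the model uniform name is my own; upload it with glUniformMatrix4fv and bake the scale into it):

attribute vec3 position;
attribute vec2 texcoord;
uniform mat4 model; // e.g. a scale matrix sizing the quad
varying vec2 uv;
void main()
{
    gl_Position = model * vec4(position, 1.0);
    uv = texcoord;
}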
EDIT: How to build the shader program
A shader program is composed of a series of shaders. The bare minimum is a vertex and a fragment shader. This is how we would build a shader program from the two shaders above:
String vertexSource = loadShaderSource("vertex.glsl");
int vertexShader = glCreateShader(GL_VERTEX_SHADER);
glShaderSource(vertexShader, vertexSource);
glCompileShader(vertexShader);

String fragmentSource = loadShaderSource("fragment.glsl");
int fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);
glShaderSource(fragmentShader, fragmentSource);
glCompileShader(fragmentShader);

int shaderProgram = glCreateProgram();
glAttachShader(shaderProgram, vertexShader);
glAttachShader(shaderProgram, fragmentShader);
glLinkProgram(shaderProgram);
Once created you would communicate with it via glVertexAttribPointer and glUniform.
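And to actually put pixels on screen, a sketch of the draw call; with the vertex ordering used above (top right, top left, bottom right, bottom left) the quad renders as a triangle strip:

glUseProgram(shaderProgram);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
// 4 vertices, two triangles: (TR, TL, BR) and (TL, BR, BL).
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);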

Related

OpenGL ES 2.0 Texture Won't Display

So I am currently learning OpenGL ES 2.0 with iOS and working on a maze game. The maze is randomly generated (so not a loaded model) and my struggle is texturing the walls and floor of the maze. My approach is to just treat the maze as a series of cubes, and I have code that draws the individual faces of a cube separately (so I can create a path by simply leaving some faces out).
Using Capture GPU Frame, I have confirmed that the texture is loading correctly, that the data in the frame buffers is correct, and that I'm not getting any errors. I can see my other lighting effects (so the face isn't completely black), but no texture appears.
Here is how I've defined my cube faces
GLfloat rightCubeVertexData[] =
{
    0.5f, -0.5f, -0.5f,
    0.5f, -0.5f,  0.5f,
    0.5f,  0.5f, -0.5f,
    0.5f,  0.5f, -0.5f,
    0.5f, -0.5f,  0.5f,
    0.5f,  0.5f,  0.5f,
};
GLfloat rightCubeNormalData[] =
{
    -1.0f, 0.0f, 0.0f,
    -1.0f, 0.0f, 0.0f,
    -1.0f, 0.0f, 0.0f,
    -1.0f, 0.0f, 0.0f,
    -1.0f, 0.0f, 0.0f,
    -1.0f, 0.0f, 0.0f,
};
GLfloat rightCubeTexCoords[] =
{
    0.0, 0.0,
    1.0, 0.0,
    0.0, 1.0,
    0.0, 1.0,
    1.0, 0.0,
    1.0, 1.0,
};
The other faces are defined essentially the same way, except each is in a single array; splitting up the positions, normals, and tex coords was just something I tried. I'm just trying to get one face to texture, and then I'll expand to the rest.
Here is how I load the data into the buffer
glGenVertexArraysOES(1, &_rightVertexArray);
glBindVertexArrayOES(_rightVertexArray);
glGenBuffers(3, _rightVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, _rightVertexBuffer[0]);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeVertexData), rightCubeVertexData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glBindBuffer(GL_ARRAY_BUFFER, _rightVertexBuffer[1]);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeNormalData), rightCubeNormalData, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribNormal);
glVertexAttribPointer(GLKVertexAttribNormal, 3, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
glBindBuffer(GL_ARRAY_BUFFER, _rightVertexBuffer[2]);
glBufferData(GL_ARRAY_BUFFER, sizeof(rightCubeTexCoords), rightCubeTexCoords, GL_STATIC_DRAW);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0));
Again, using three buffers was an experiment; the rest are defined in one buffer with an offset.
Here is how I load textures
crateTexture = [self setupTexture:@"crate.jpg"];
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, crateTexture);
glUniform1i(uniforms[UNIFORM_TEXTURE], 0);
// Load in and set up texture image (adapted from Ray Wenderlich)
- (GLuint)setupTexture:(NSString *)fileName
{
    CGImageRef spriteImage = [UIImage imageNamed:fileName].CGImage;
    if (!spriteImage) {
        NSLog(@"Failed to load image %@", fileName);
        exit(1);
    }
    size_t width = CGImageGetWidth(spriteImage);
    size_t height = CGImageGetHeight(spriteImage);
    GLubyte *spriteData = (GLubyte *) calloc(width*height*4, sizeof(GLubyte));
    CGContextRef spriteContext = CGBitmapContextCreate(spriteData, width, height, 8, width*4, CGImageGetColorSpace(spriteImage), kCGImageAlphaPremultipliedLast);
    CGContextDrawImage(spriteContext, CGRectMake(0, 0, width, height), spriteImage);
    CGContextRelease(spriteContext);
    GLuint texName;
    glGenTextures(1, &texName);
    glBindTexture(GL_TEXTURE_2D, texName);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, spriteData);
    free(spriteData);
    return texName;
}
Then, at the appropriate time, I simply call glDrawArrays to draw the face. I am completely stumped on this, and it is probably a very silly error, but any help anybody could provide would be much appreciated.
P.S. Here is my fragment shader
varying vec3 eyeNormal;
varying vec4 eyePos;
varying vec2 texCoordOut;
uniform sampler2D texture;
uniform vec3 flashlightPosition;
uniform vec3 diffuseLightPosition;
uniform vec4 diffuseComponent;
uniform float shininess;
uniform vec4 specularComponent;
uniform vec4 ambientComponent;
void main()
{
    vec4 ambient = ambientComponent;
    vec3 N = normalize(eyeNormal);
    float nDotVP = max(0.0, dot(N, normalize(diffuseLightPosition)));
    vec4 diffuse = diffuseComponent * nDotVP;
    vec3 E = normalize(-eyePos.xyz);
    vec3 L = normalize(flashlightPosition - eyePos.xyz);
    vec3 H = normalize(L + E);
    float Ks = pow(max(dot(N, H), 0.0), shininess);
    vec4 specular = Ks * specularComponent;
    if (dot(L, N) < 0.0) {
        specular = vec4(0.0, 0.0, 0.0, 1.0);
    }
    gl_FragColor = (ambient + diffuse + specular) * texture2D(texture, texCoordOut);
    //gl_FragColor = ambient + diffuse + specular;
    gl_FragColor.a = 1.0;
}
And yes, all the uniform names are correct and correspond to something in the main code.
EDIT: Here is the vertex shader
precision mediump float;
attribute vec4 position;
attribute vec3 normal;
attribute vec2 texCoordIn;
varying vec3 eyeNormal;
varying vec4 eyePos;
varying vec2 texCoordOut;
uniform mat4 modelViewProjectionMatrix;
uniform mat4 modelViewMatrix;
uniform mat3 normalMatrix;
void main()
{
    eyeNormal = (normalMatrix * normal);
    eyePos = modelViewMatrix * position;
    texCoordOut = texCoordIn;
    gl_Position = modelViewProjectionMatrix * position;
}
To sum up the procedure worked out in the comments...
There is much that can go wrong when dealing with textures, and it is good to know how to pinpoint where the issue lies.
What to check on the texture itself:
Check that you set the texture parameters, such as
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
Check that you set the uniform as glUniform1i(uniformName, 0), where the last parameter corresponds to the active texture unit, not the texture ID.
Other checks include whether the uniform name is correct and whether the texture is bound. And if possible, check in the debugger that the texture is properly loaded.
Next to that, there is a good chance your texture coordinates are messed up; this seems to be a very common issue. To debug it, replace the color sampled from the texture in your fragment shader with the texture coordinate itself, e.g. replace texture2D(texture, texCoordOut) with vec4(texCoordOut.x, texCoordOut.y, 0.0, 1.0). Since texture coordinates should be in the range [0,1], you should see smooth red and green gradients across your scene. If you do not, your texture coordinates are broken: if everything is black, your coordinates are all zero; if most of it is yellow, your coordinates are most likely too large.
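For copy-paste purposes, the whole debug fragment shader would look something like this:

varying vec2 texCoordOut;
void main()
{
    // Visualize the interpolated texture coordinates directly:
    // black = (0,0), red = u only, green = v only, yellow = (1,1).
    gl_FragColor = vec4(texCoordOut.x, texCoordOut.y, 0.0, 1.0);
}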
In your case the debug output was all black, which means the texture coordinates were all zero: you were always sampling the first texel and thus getting a constant color across the scene. What to check at this point is:
Are the coordinates you push to the GPU correct?
Is the pointer set correctly, i.e. glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 12, BUFFER_OFFSET(0))? (Check all the parameters.)
Is the attribute enabled with glEnableVertexAttribArray(GLKVertexAttribTexCoord0)?
Is the attribute location bound before the shader program is linked?
In your case you had forgotten to bind the texture coordinate attribute, which is quite easy to miss.
From the information you gave us it was impossible to spot this mistake directly, but note the procedure above for pinpointing where the issue actually lies. It might be handy in the future as well.
It turns out I had forgotten to bind the attribute location when compiling the shader. I needed to add the line
glBindAttribLocation(_program, GLKVertexAttribTexCoord0, "texCoordIn");
to the load shaders method.
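For anyone else hitting this: glBindAttribLocation only takes effect when the program is linked, so the call has to sit between glCreateProgram and glLinkProgram. A sketch of the ordering (the shader handles are assumed to be compiled already):

GLuint program = glCreateProgram();
glAttachShader(program, vertexShader);
glAttachShader(program, fragmentShader);
// Bindings are recorded now but only applied by the following link.
glBindAttribLocation(program, GLKVertexAttribPosition,  "position");
glBindAttribLocation(program, GLKVertexAttribNormal,    "normal");
glBindAttribLocation(program, GLKVertexAttribTexCoord0, "texCoordIn");
glLinkProgram(program);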

Should the number of vertices be equal to the number of texCoords?

My vertexShader:
attribute vec4 vertexPosition;
attribute vec2 vertexTexCoord;
varying vec2 texCoord;
uniform mat4 modelViewProjectionMatrix;
void main()
{
    gl_Position = modelViewProjectionMatrix * vertexPosition;
    texCoord = vertexTexCoord;
}
My fragmentShder:
precision mediump float;
varying vec2 texCoord;
uniform sampler2D texSampler2D;
void main()
{
    gl_FragColor = texture2D(texSampler2D, texCoord);
}
Init Shader:
if (shader2D == nil) {
    shader2D = [[Shader2D alloc] init];
    shader2D.shaderProgramID = [ShaderUtils compileShaders:vertexShader2d :fragmentShader2d];
    if (0 < shader2D.shaderProgramID) {
        shader2D.vertexHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexPosition");
        shader2D.textureCoordHandle = glGetAttribLocation(shader2D.shaderProgramID, "vertexTexCoord");
        shader2D.mvpMatrixHandle = glGetUniformLocation(shader2D.shaderProgramID, "modelViewProjectionMatrix");
        shader2D.texSampler2DHandle = glGetUniformLocation(shader2D.shaderProgramID, "texSampler2D");
    }
    else {
        NSLog(@"Could not initialise shader2D");
    }
}
return shader2D;
Rendering:
GLKMatrix4 mvpMatrix;
mvpMatrix = [self position: position];
mvpMatrix = GLKMatrix4Multiply([QCARutils getInstance].projectionMatrix, mvpMatrix);
glUseProgram(shader.shaderProgramID);
glVertexAttribPointer(shader.vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)vertices);
glVertexAttribPointer(shader.textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, (const GLvoid*)texCoords);
glEnableVertexAttribArray(shader.vertexHandle);
glEnableVertexAttribArray(shader.textureCoordHandle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, [texture textureID]);
glUniformMatrix4fv(shader.mvpMatrixHandle, 1, GL_FALSE, (const GLfloat*)&mvpMatrix);
glUniform1i(shader.texSampler2DHandle, 0);
glDrawElements(GL_TRIANGLES, numIndices, GL_UNSIGNED_SHORT, (const GLvoid*)indices);
glDisableVertexAttribArray(shader.vertexHandle);
glDisableVertexAttribArray(shader.textureCoordHandle);
It seems to work properly when one texture coordinate corresponds to one and only one vertex coordinate (number of texCoords == number of vertices).
My question: does OpenGL assign a texture coordinate to one and only one vertex? In other words, when texture coordinates and vertex coordinates are not in one-to-one correspondence, what will the rendering result turn out to be?
Yes, there needs to be a one-to-one correspondence between vertices and texCoords: all information passed down the OpenGL pipeline is per-vertex, so every normal and every texCoord must have a vertex.
Note, however, that you can (and will often need to) have multiple texCoords, normals, or other per-vertex data for the same point in space: e.g. if you're wrapping a texture map around a sphere, there will be a "seam" where the ends of the rectangular texture meet. At those spots you'll need to have multiple vertices that occupy the same point.
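To make the seam case concrete, a hypothetical sketch: the same point in space appears twice in the vertex data, and only the texture coordinate differs:

// The seam point is duplicated; positions match, texCoords do not.
GLfloat seamPositions[] = {
    0.0f, 1.0f, 0.0f, // vertex closing the wrap
    0.0f, 1.0f, 0.0f  // vertex starting the wrap
};
GLfloat seamTexCoords[] = {
    1.0f, 0.5f,       // right edge of the texture
    0.0f, 0.5f        // left edge of the texture
};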

GLSL Shaders compile but don't draw anything on Windows

I'm trying to port some OpenGL rendering code I wrote for iOS to a Windows app. The code runs fine on iOS, but on Windows it doesn't draw anything. I've narrowed the problem down to this bit of code, since fixed-function stuff (such as glutSolidTorus) draws fine, but when shaders are enabled, nothing works.
Here's the rendering code:
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_INDEX_ARRAY);

// Set the vertex buffer as current
this->vertexBuffer->MakeActive();

// Get a reference to the vertex description to save copying
const AT::Model::VertexDescription & vd = this->vertexBuffer->GetVertexDescription();

std::vector<GLuint> handles;

// Loop over the vertex descriptions
for (int i = 0, stride = 0; i < vd.size(); ++i)
{
    // Get a handle to the vertex attribute on the shader object using the name of the current vertex description
    GLint handle = shader.GetAttributeHandle(vd[i].first);

    // If the handle is not an OpenGL 'Does not exist' handle
    if (handle != -1)
    {
        glEnableVertexAttribArray(handle);
        handles.push_back(handle);
        // Set the pointer to the vertex attribute, with the vertex's element count,
        // the size of a single vertex and the start position of the first attribute in the array
        glVertexAttribPointer(handle, vd[i].second, GL_FLOAT, GL_FALSE,
                              sizeof(GLfloat) * (this->vertexBuffer->GetSingleVertexLength()),
                              (GLvoid *)stride);
    }
    // Add to the stride value with the size of the number of floats the vertex attr uses
    stride += sizeof(GLfloat) * (vd[i].second);
}

// Draw the indexed elements using the current vertex buffer
glDrawElements(GL_TRIANGLES,
               this->vertexBuffer->GetIndexArrayLength(),
               GL_UNSIGNED_SHORT, 0);

glBindBuffer(GL_ARRAY_BUFFER, 0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_INDEX_ARRAY);

// Disable the vertex attribute arrays
for (int i = 0; i < handles.size(); ++i)
{
    glDisableVertexAttribArray(handles[i]);
}
It's inside a function that takes a shader as a parameter, and the vertex description is a list of pairs mapping attribute names to element counts. Uniforms are being set outside this function. I'm enabling the shader for use before it's passed in to the function. Here are the two shader sources:
Vertex:
attribute vec3 position;
attribute vec2 texCoord;
attribute vec3 normal;
// Uniforms
uniform mat4 Model;
uniform mat4 View;
uniform mat4 Projection;
uniform mat3 NormalMatrix;
/// OUTPUTS
varying vec2 o_texCoords;
varying vec3 o_normals;
// Vertex Shader
void main()
{
    // Do the normal position transform
    gl_Position = Projection * View * Model * vec4(position, 1.0);
    // Transform the normals to world space
    o_normals = NormalMatrix * normal;
    // Pass texture coords on for interpolation
    o_texCoords = texCoord;
}
Fragment:
varying vec2 o_texCoords;
varying vec3 o_normals;
/// Fragment Shader
void main()
{
    gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);
}
I'm running OpenGL 2.1 with Shader language 1.2. I'd be most appreciative for any help anyone can give me.
I see that you are assigning black as the output color for the fragment in your fragment shader. Try changing that to something like
gl_FragColor = vec4(0.0, 1.0, 0.0, 1.0);
and see if the objects in the scene are colored green.
I came back to this recently, and it seems that I wasn't checking for errors during rendering; it was giving me a 1285 error (GL_OUT_OF_MEMORY) after calling glDrawElements(). This led me to check the vertex buffer objects to see if they contained any data, and it turns out I wasn't properly deep copying them in a wrapper class; as a result they were being deleted before any rendering happened. Fixing this sorted the issue.
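For reference, a minimal sketch of the kind of error polling that surfaced the 1285 (0x0505, GL_OUT_OF_MEMORY); the helper name is made up:

#include <stdio.h>

// Drain the GL error queue and report everything pending.
static void checkGLErrors(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04X (%u) at %s\n", err, err, where);
}

// Usage, right after the suspect call:
// glDrawElements(...);
// checkGLErrors("glDrawElements");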
Thank you for your suggestions.

Why isn't this OpenGL ES 2.0 shader working with my VBO on iOS?

If anyone can shed light on what's going wrong here, perhaps a misordering of gl commands or some other incompatible command sequence, I would be tremendously grateful for your assistance. I have been trying to get this code working all day with no success, despite much Google research and poring over examples in "OpenGL ES 2.0 Programming Guide".
I'm trying to use a Vertex Buffer Object and custom shaders in OpenGL ES 2.0 on iPhone. I am attempting to interleave vertex data from a series of custom structures of the following type:
typedef struct {
    float x, y;                   // Position.
    float radius;
    float colR, colG, colB, colA; // Color rgba components.
} VType;
The position, radius and color bytes are to be considered for vertex location, point size and color respectively. Ids for these are initialised:
ID_ATT_Pos = 0;
ID_ATT_Radius = 1;
ID_ATT_Color = 2;
// Note: I have also tried values of 1,2,3 but no difference.
The stride for these is specified in each glVertexAttribPointer call.
It is intended that each vertex be drawn at its x,y position with the specified color and a point size equal to its radius. Associated with each aforementioned attribute is a vertex shader attribute; these are "a_position", "a_color" and "a_radius". Here are the vertex and fragment shaders:
VertexShader.txt
attribute vec2 a_position;
attribute vec4 a_color;
attribute float a_radius;
varying vec4 v_color;
void main()
{
    gl_Position = vec4(a_position, 0.0, 1.0);
    gl_PointSize = a_radius;
    v_color = a_color;
}
FragmentShader.txt
#ifdef GL_FRAGMENT_PRECISION_HIGH
precision highp float;
#else
precision mediump float;
#endif
void main()
{
    gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
I wonder if a projection matrix is required in the vertex shader? All the points I create are 2D in the dimensions of the iPhone screen, and as you can see they each have 'z,w' appended as '0.0,1.0' in the vertex shader above.
Remaining core code to set up the VBO and render using glDrawElements is listed below. When running this code, it is visibly apparent that the glClear has been successful (and indeed an NSLog print confirms this), yet no VType vertices are drawn in the viewport by the DrawFrame code listed below. The vertex coordinates are well within screen dimensions, e.g. x,y: (92, 454).
Note that any undeclared variables in the following code are class properties, and of appropriate type so e.g. 'vao' is GLuint, 'vbos' is GLuint[2], 'program' is a GLuint program handle. I have also left out the boilerplate OpenGL setup code, which has been tested with different code internals and shown to work.
Load Shader Code
-(GLuint)loadShaderType:(GLenum)type From:(NSString*)shaderFile {
    GLuint shader;
    GLint compiled;
    // Load the shader source from the app bundle.
    NSString *filepath = [[NSBundle mainBundle] pathForResource:shaderFile ofType:@"txt"];
    const GLchar *shaderSrc = (GLchar *)[[NSString stringWithContentsOfFile:filepath encoding:NSUTF8StringEncoding error:nil] UTF8String];
    if (!shaderSrc) {
        NSLog(@"Failed to load shader %@", shaderFile);
        return 0;
    }
    // Create shader object.
    shader = glCreateShader(type);
    if (shader == 0) return 0;
    // Load shader source.
    glShaderSource(shader, 1, &shaderSrc, NULL);
    // Compile shader.
    glCompileShader(shader);
    // Check compile status.
    glGetShaderiv(shader, GL_COMPILE_STATUS, &compiled);
    if (!compiled) {
        GLint infoLen = 0;
        glGetShaderiv(shader, GL_INFO_LOG_LENGTH, &infoLen);
        if (infoLen > 1) {
            char *infoLog = (char*)malloc(sizeof(char)*infoLen);
            glGetShaderInfoLog(shader, infoLen, NULL, infoLog);
            NSLog(@"Error compiling shader:\n%s\n", infoLog);
            free(infoLog);
        }
        glDeleteShader(shader);
        return 0;
    }
    return shader;
}
Initialisation Code
GLfloat screenHeight = [[UIScreen mainScreen] bounds].size.height;
GLfloat screenWidth = [[UIScreen mainScreen] bounds].size.width;
glViewport(0, 0, screenWidth, screenHeight);

glGenVertexArraysOES(1, &vao);
glBindVertexArrayOES(vao);

// Generate buffers, bind to use now, set initial data.
glGenBuffers(2, vbos);
glBindBuffer(GL_ARRAY_BUFFER, vbos[0]);
glBufferData(GL_ARRAY_BUFFER, vxBufSize, squidVxs, GL_STATIC_DRAW);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, vbos[1]);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, ixBufSize, squidIxs, GL_STATIC_DRAW);

glEnableVertexAttribArray(ID_ATT_Pos);    // Pos
glVertexAttribPointer(ID_ATT_Pos, 2, GL_FLOAT, GL_FALSE, sizeof(VType), BUFFER_OFFSET(0));
glEnableVertexAttribArray(ID_ATT_Radius); // Radius
glVertexAttribPointer(ID_ATT_Radius, 1, GL_FLOAT, GL_FALSE, sizeof(VType), BUFFER_OFFSET(sizeof(float)*2));
glEnableVertexAttribArray(ID_ATT_Color);  // Color
glVertexAttribPointer(ID_ATT_Color, 4, GL_FLOAT, GL_FALSE, sizeof(VType), BUFFER_OFFSET(sizeof(float)*3));

GLuint shaders[2];
shaders[0] = [self loadShaderType:GL_VERTEX_SHADER From:@"VertexShader"];
shaders[1] = [self loadShaderType:GL_FRAGMENT_SHADER From:@"FragmentShader"];

program = glCreateProgram();
glAttachShader(program, shaders[0]);
glAttachShader(program, shaders[1]);
glBindAttribLocation(program, ID_ATT_Pos, "a_position");
glBindAttribLocation(program, ID_ATT_Radius, "a_radius");
glBindAttribLocation(program, ID_ATT_Color, "a_color");
glLinkProgram(program);

GLint linked;
glGetProgramiv(program, GL_LINK_STATUS, &linked);
if (!linked) {
    GLint infoLen = 0;
    glGetProgramiv(program, GL_INFO_LOG_LENGTH, &infoLen);
    if (infoLen > 1) {
        char* infoLog = (char*)malloc(sizeof(char)*infoLen);
        glGetProgramInfoLog(program, infoLen, NULL, infoLog);
        NSLog(@"Error linking program:\n%s\n", infoLog);
        free(infoLog);
    }
    glDeleteProgram(program);
}
DrawFrame Code
// Note: Framebuffer swapping is taken care of before/after these
// lines, and has been shown to work.
glClearColor(0.33f, 0.0f, 0.33f, 1.0f);
glClear(GL_COLOR_BUFFER_BIT);
glUseProgram(program);
glDrawElements(GL_POINTS, numPoints, GL_UNSIGNED_SHORT, BUFFER_OFFSET(0));
Let me know if any other information is needed, thank you for your time.
You do not necessarily need a projection matrix for your 2d vertices, but you definitely need to transform all 3 coordinates into the [-1,1] range (which you have already done for z by setting it to 0). These coordinates are then transformed by GL with the current viewport transformation (that should usually match your screen dimensions). So if the coordinates are in screen dimensions, then transform them into [-1,1] in the shader, or just use [-1,1] coordinates in the application (which would also be more resolution agnostic).
EDIT: And I don't know how OpenGL ES handles this, but in desktop GL (at least up to 2.1) you need to call glEnable(GL_VERTEX_PROGRAM_POINT_SIZE) for the vertex shader to be able to change the point size.
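To illustrate the coordinate fix, a sketch of the vertex shader mapping pixel coordinates to clip space itself (u_screenSize is an assumed uniform carrying the viewport size in pixels; the y flip makes top-left-origin screen coordinates come out right):

attribute vec2 a_position;   // in pixels
attribute float a_radius;
attribute vec4 a_color;
uniform vec2 u_screenSize;   // e.g. (320.0, 480.0)
varying vec4 v_color;
void main()
{
    vec2 ndc = (a_position / u_screenSize) * 2.0 - 1.0; // [0,size] -> [-1,1]
    gl_Position = vec4(ndc.x, -ndc.y, 0.0, 1.0);
    gl_PointSize = a_radius;
    v_color = a_color;
}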

Render YpCbCr iPhone 4 Camera Frame to an OpenGL ES 2.0 Texture in iOS 4.3

I'm trying to render a native planar image to an OpenGL ES 2.0 texture in iOS 4.3 on an iPhone 4. The texture however winds up all black. My camera is configured as such:
[videoOutput setVideoSettings:[NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
forKey:(id)kCVPixelBufferPixelFormatTypeKey]];
and I'm passing the pixel data to my texture like this:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, bufferWidth, bufferHeight, 0, GL_RGB_422_APPLE, GL_UNSIGNED_SHORT_8_8_REV_APPLE, CVPixelBufferGetBaseAddress(cameraFrame));
My fragment shader is:
varying highp vec2 textureCoordinate;
uniform sampler2D videoFrame;
void main() {
    lowp vec4 color;
    color = texture2D(videoFrame, textureCoordinate);
    lowp vec3 convertedColor = vec3(-0.87075, 0.52975, -1.08175);
    convertedColor += 1.164 * color.g; // Y
    convertedColor += vec3(0.0, -0.391, 2.018) * color.b; // U
    convertedColor += vec3(1.596, -0.813, 0.0) * color.r; // V
    gl_FragColor = vec4(convertedColor, 1.0);
}
and my vertex shader is
attribute vec4 position;
attribute vec4 inputTextureCoordinate;
varying vec2 textureCoordinate;
void main()
{
    gl_Position = position;
    textureCoordinate = inputTextureCoordinate.xy;
}
This works just fine when I'm working with a BGRA image, and my fragment shader only does
gl_FragColor = texture2D(videoFrame, textureCoordinate);
What if anything am I missing here? Thanks!
OK, we have a working solution here. The key was passing the Y and the UV planes as two separate textures to the fragment shader. Here is the final shader:
#ifdef GL_ES
precision mediump float;
#endif
varying vec2 textureCoordinate;
uniform sampler2D videoFrame;
uniform sampler2D videoFrameUV;
const mat3 yuv2rgb = mat3(
    1.0,  0.0,       1.2802,
    1.0, -0.214821, -0.380589,
    1.0,  2.127982,  0.0
);
void main() {
    vec3 yuv = vec3(
        1.1643 * (texture2D(videoFrame, textureCoordinate).r - 0.0625),
        texture2D(videoFrameUV, textureCoordinate).r - 0.5,
        texture2D(videoFrameUV, textureCoordinate).a - 0.5
    );
    vec3 rgb = yuv * yuv2rgb;
    gl_FragColor = vec4(rgb, 1.0);
}
You'll need to create your textures like this:
int bufferHeight = CVPixelBufferGetHeight(cameraFrame);
int bufferWidth = CVPixelBufferGetWidth(cameraFrame);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, bufferWidth, bufferHeight, 0, GL_LUMINANCE, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, bufferWidth/2, bufferHeight/2, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));
and then pass them like this:
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glActiveTexture(GL_TEXTURE0);
glUniform1i(videoFrameUniform, 0);
glUniform1i(videoFrameUniformUV, 1);
Boy am I relieved!
P.S. The values for the yuv2rgb matrix are from here http://en.wikipedia.org/wiki/YUV and I copied code from here http://www.ogre3d.org/forums/viewtopic.php?f=5&t=25877 to figure out how to get the correct YUV values to begin with.
Your code appears to attempt to convert a 32-bit colour in 444-plus-unused-byte format to RGBA. That's not going to work too well. I don't know of anything that outputs "YUVA", for one.
Also, I think the returned alpha channel is 0 for BGRA camera output, not 1, so I'm not sure why it works (IIRC to convert it to a CGImage you need to use AlphaNoneSkipLast).
The 420 "bi planar" output is structued something like this:
A header telling you where the planes are (used by CVPixelBufferGetBaseAddressOfPlane() and friends)
The Y plane: height rows of bytes_per_row_1 bytes (1 byte per pixel)
The Cb,Cr plane: height/2 rows of bytes_per_row_2 bytes (2 bytes per 2x2 pixel block)
bytes_per_row_1 is approximately width and bytes_per_row_2 is approximately width/2, but you'll want to use CVPixelBufferGetBytesPerRowOfPlane() for robustness (you also might want to check the results of ..GetHeightOfPlane and ...GetWidthOfPlane).
You might have luck treating it as a 1-component width*height texture and a 2-component (width/2)*(height/2) texture. You'll probably want to check bytes-per-row and handle the case where it isn't simply width * number-of-components (although this is probably true for most of the video modes). AIUI, you'll also want to flush the GL context before calling CVPixelBufferUnlockBaseAddress().
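A sketch of that two-texture upload with the per-plane sizes queried for robustness (the quick fix for row padding here is to upload the full stride as the texture width, which slightly over-scans the right edge unless you compensate in the texture coordinates; a per-row copy is the fully correct route):

CVPixelBufferLockBaseAddress(cameraFrame, 0);

size_t yHeight = CVPixelBufferGetHeightOfPlane(cameraFrame, 0);
size_t yStride = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 0);
glBindTexture(GL_TEXTURE_2D, videoFrameTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE,
             (GLsizei)yStride, (GLsizei)yHeight, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 0));

// Cb,Cr plane: 2 bytes per sample, so the width in texels is stride / 2.
size_t cHeight = CVPixelBufferGetHeightOfPlane(cameraFrame, 1);
size_t cStride = CVPixelBufferGetBytesPerRowOfPlane(cameraFrame, 1);
glBindTexture(GL_TEXTURE_2D, videoFrameTextureUV);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA,
             (GLsizei)(cStride / 2), (GLsizei)cHeight, 0,
             GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE,
             CVPixelBufferGetBaseAddressOfPlane(cameraFrame, 1));

glFlush(); // flush before unlocking, per the note above
CVPixelBufferUnlockBaseAddress(cameraFrame, 0);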
Alternatively, you can copy it all to memory into your expected format (optimizing this loop might be a bit tricky). Copying has the advantage that you don't need to worry about things accessing memory after you've unlocked the pixel buffer.
