GLKBaseEffect not loading texture (texture appears black on object) - iOS

I'm using GLKit in an OpenGL project. Everything is based on GLKView and GLKBaseEffect (no custom shaders). In my project I have several views that contain GLKViews for showing 3D objects, and occasionally several of those views can be "open" at once (i.e., they are in the modal view stack).
Until now everything worked great, but in a new view I was creating I needed a textured rectangle to simulate a measuring tape in my app's 3D world. For some unknown reason, in that view only, the texture isn't loaded correctly into the OpenGL context: GLKTextureLoader loads it without error, but the rectangle draws black, and when I capture an OpenGL frame in the debugger I can see that an empty texture is bound (there's a reference to a texture, but it's all zeroed out or null).
The shape I'm drawing is defined by the following (it was originally a triangle strip, but I switched to plain triangles to make sure that wasn't the issue):
static const GLfloat initTape[] = {
    -TAPE_WIDTH / 2.0f, 0, 0,
     TAPE_WIDTH / 2.0f, 0, 0,
    -TAPE_WIDTH / 2.0f, TAPE_INIT_LENGTH, 0,
     TAPE_WIDTH / 2.0f, 0, 0,
     TAPE_WIDTH / 2.0f, TAPE_INIT_LENGTH, 0,
    -TAPE_WIDTH / 2.0f, TAPE_INIT_LENGTH, 0,
};
static const GLfloat initTapeTex[] = {
    0, 0,
    1, 0,
    0, 1,
    1, 0,
    1, 1,
    0, 1,
};
I set up the effect like this:
effect.transform.modelviewMatrix = modelview;
effect.light0.enabled = GL_FALSE;

// Projection setup
GLfloat ratio = self.view.bounds.size.width / self.view.bounds.size.height;
GLKMatrix4 projection = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(self.fov), ratio, 0.1f, 1000.0f);
effect.transform.projectionMatrix = projection;

// Load the tape texture once
if (tapeTex == nil) {
    NSError* error;
    tapeTex = [GLKTextureLoader textureWithContentsOfFile:[[[NSBundle mainBundle] URLForResource:@"ruler_texture" withExtension:@"png"] path]
                                                  options:nil
                                                    error:&error];
}
effect.texture2d0.enabled = GL_TRUE;
effect.texture2d0.target = GLKTextureTarget2D;
effect.texture2d0.envMode = GLKTextureEnvModeReplace;
effect.texture2d0.name = tapeTex.name;
And the rendering loop is:
[effect prepareToDraw];
glDisable(GL_DEPTH_TEST);
glDisable(GL_CULL_FACE);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribTexCoord0);
glVertexAttribPointer(GLKVertexAttribPosition, COORDS, GL_FLOAT, GL_FALSE, 0, tapeVerts);
glVertexAttribPointer(GLKVertexAttribTexCoord0, 2, GL_FLOAT, GL_FALSE, 0, tapeTexCoord);
glDrawArrays(GL_TRIANGLES, 0, TAPE_VERTS);
glDisableVertexAttribArray(GLKVertexAttribPosition);
glDisableVertexAttribArray(GLKVertexAttribTexCoord0);
I've also tested the texture itself in another view with other objects and it works fine, so the texture file isn't at fault.
Any help would be greatly appreciated, as I've been stuck on this issue for over three days.
Update: Also, there are no GL errors during the rendering loop.

After many, many days I've finally found my mistake: when using multiple OpenGL contexts, it's important to create the GLKTextureLoader using a sharegroup, or else the textures aren't necessarily loaded into the right context.
Instead of using the class method textureWithContentsOfFile:, every context needs its own GLKTextureLoader initialized with context.sharegroup, and only that texture loader should be used for that view. (Textures actually can be shared between different contexts, but I didn't need that feature of sharegroups.)
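For example, a minimal sketch (assuming a view controller with context and effect properties; the file name mirrors the code above, and the asynchronous loading API is one option among several):

// Create one texture loader per context, tied to that context's sharegroup,
// so the texture is created in the context that will render it.
GLKTextureLoader *loader =
    [[GLKTextureLoader alloc] initWithSharegroup:self.context.sharegroup];
NSURL *textureURL = [[NSBundle mainBundle] URLForResource:@"ruler_texture"
                                            withExtension:@"png"];
[loader textureWithContentsOfURL:textureURL
                         options:nil
                           queue:NULL // NULL = run the handler on the main queue
               completionHandler:^(GLKTextureInfo *textureInfo, NSError *error) {
    if (textureInfo) {
        self.effect.texture2d0.name = textureInfo.name;
    } else {
        NSLog(@"Texture load failed: %@", error);
    }
}];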

This easy tutorial may help you: http://games.ianterrell.com/how-to-texturize-objects-with-glkit/

Related

Rendering cube on top of square having video feed as texture

I am trying to develop a POC which helps to visualize a 3D object on a camera feed. The kind of 3D object I have renders easily using this project, and I am referring to Apple's Camera Ripple code for showing the camera feed. Both of these are separate objects in the same context, and each uses its own shader program. I am confused about how to switch from one program to the other.
My glkView:drawInRect: method looks like this:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT);

    glUseProgram(_program);
    if (_ripple)
    {
        glDrawElements(GL_TRIANGLE_STRIP, [_ripple getIndexCount], GL_UNSIGNED_SHORT, 0);
    }

    glUseProgram(_program1);
    glClearColor(1.0, 1.0, 1.0, 1.0);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    // Set View Matrices
    [self updateViewMatrices];
    glUniformMatrix4fv(_uniforms.uProjectionMatrix, 1, 0, _projectionMatrix1.m);
    glUniformMatrix4fv(_uniforms.uModelViewMatrix, 1, 0, _modelViewMatrix1.m);
    glUniformMatrix3fv(_uniforms.uNormalMatrix, 1, 0, _normalMatrix1.m);
    // Attach Texture
    glUniform1i(_uniforms.uTexture, 0);
    // Set View Mode
    glUniform1i(_uniforms.uMode, self.viewMode.selectedSegmentIndex);
    // Enable Attributes
    glEnableVertexAttribArray(_attributes.aVertex);
    glEnableVertexAttribArray(_attributes.aNormal);
    glEnableVertexAttribArray(_attributes.aTexture);
    // Load OBJ Data
    glVertexAttribPointer(_attributes.aVertex, 3, GL_FLOAT, GL_FALSE, 0, cubeOBJVerts);
    glVertexAttribPointer(_attributes.aNormal, 3, GL_FLOAT, GL_FALSE, 0, cubeOBJNormals);
    glVertexAttribPointer(_attributes.aTexture, 2, GL_FLOAT, GL_FALSE, 0, cubeOBJTexCoords);
    // Load MTL Data
    for (int i = 0; i < cubeMTLNumMaterials; i++)
    {
        glUniform3f(_uniforms.uAmbient, cubeMTLAmbient[i][0], cubeMTLAmbient[i][1], cubeMTLAmbient[i][2]);
        glUniform3f(_uniforms.uDiffuse, cubeMTLDiffuse[i][0], cubeMTLDiffuse[i][1], cubeMTLDiffuse[i][2]);
        glUniform3f(_uniforms.uSpecular, cubeMTLSpecular[i][0], cubeMTLSpecular[i][1], cubeMTLSpecular[i][2]);
        glUniform1f(_uniforms.uExponent, cubeMTLExponent[i]);
        // Draw scene by material group
        glDrawArrays(GL_TRIANGLES, cubeMTLFirst[i], cubeMTLCount[i]);
    }
    // Disable Attributes
    glDisableVertexAttribArray(_attributes.aVertex);
    glDisableVertexAttribArray(_attributes.aNormal);
    glDisableVertexAttribArray(_attributes.aTexture);
}
This causes a crash, throwing the error gpus_ReturnGuiltyForHardwareRestart.
The solution I found to my problem is resetting everything between the use of the two programs. Now my glkView:drawInRect: looks like this:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
    glClear(GL_COLOR_BUFFER_BIT);

    glUseProgram(_program);
    if (_ripple)
    {
        glDrawElements(GL_TRIANGLE_STRIP, [_ripple getIndexCount], GL_UNSIGNED_SHORT, 0);
        [self resetProgrameOne];
    }

    glUseProgram(_program1);
    glClear(GL_DEPTH_BUFFER_BIT);
    // Set View Matrices
    [self updateViewMatrices];
    glUniformMatrix4fv(_uniforms.uProjectionMatrix, 1, 0, _projectionMatrix1.m);
    glUniformMatrix4fv(_uniforms.uModelViewMatrix, 1, 0, _modelViewMatrix1.m);
    glUniformMatrix3fv(_uniforms.uNormalMatrix, 1, 0, _normalMatrix1.m);
    // Attach Texture
    glUniform1i(_uniforms.uTexture, 0);
    // Set View Mode
    glUniform1i(_uniforms.uMode, 1);
    // Enable Attributes
    glEnableVertexAttribArray(_attributes.aVertex);
    glEnableVertexAttribArray(_attributes.aNormal);
    glEnableVertexAttribArray(_attributes.aTexture);
    // Load OBJ Data
    glVertexAttribPointer(_attributes.aVertex, 3, GL_FLOAT, GL_FALSE, 0, table1OBJVerts);
    glVertexAttribPointer(_attributes.aNormal, 3, GL_FLOAT, GL_FALSE, 0, table1OBJNormals);
    glVertexAttribPointer(_attributes.aTexture, 2, GL_FLOAT, GL_FALSE, 0, table1OBJTexCoords);
    // Load MTL Data
    for (int i = 0; i < table1MTLNumMaterials; i++)
    {
        glUniform3f(_uniforms.uAmbient, table1MTLAmbient[i][0], table1MTLAmbient[i][1], table1MTLAmbient[i][2]);
        glUniform3f(_uniforms.uDiffuse, table1MTLDiffuse[i][0], table1MTLDiffuse[i][1], table1MTLDiffuse[i][2]);
        glUniform3f(_uniforms.uSpecular, table1MTLSpecular[i][0], table1MTLSpecular[i][1], table1MTLSpecular[i][2]);
        glUniform1f(_uniforms.uExponent, table1MTLExponent[i]);
        // Draw scene by material group
        glDrawArrays(GL_TRIANGLES, table1MTLFirst[i], table1MTLCount[i]);
    }
    // Disable Attributes
    glDisableVertexAttribArray(_attributes.aVertex);
    glDisableVertexAttribArray(_attributes.aNormal);
    glDisableVertexAttribArray(_attributes.aTexture);
}
and the resetProgrameOne method resets all the necessary state by deleting buffers and disabling the vertex attribute arrays with glDisableVertexAttribArray.
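The post doesn't show resetProgrameOne itself, but a minimal sketch of such a reset might look like this (the attribute indices are assumptions based on Apple's GLCameraRipple sample):

- (void)resetProgrameOne
{
    // Unbind the ripple program's buffers so the second program starts clean.
    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    // Disable the ripple program's vertex attributes; attributes left enabled
    // with stale pointers are a common cause of GPU resets like
    // gpus_ReturnGuiltyForHardwareRestart.
    glDisableVertexAttribArray(0); // ATTRIB_VERTEX in the ripple sample
    glDisableVertexAttribArray(1); // ATTRIB_TEXCOORD in the ripple sample
}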

iPad texture loading differences (32-bit vs. 64-bit)

I am working on a drawing application and I am noticing significant differences in textures loaded on a 32-bit iPad vs. a 64-bit iPad.
Here is the texture drawn on a 32-bit iPad:
Here is the texture drawn on a 64-bit iPad:
The 64-bit result is what I want, but it seems like it may be losing some data?
I create a default brush texture with this code:
UIGraphicsBeginImageContext(CGSizeMake(64, 64));
CGContextRef defBrushTextureContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(defBrushTextureContext);
size_t num_locations = 3;
CGFloat locations[3] = { 0.0, 0.8, 1.0 };
CGFloat components[12] = { 1.0,1.0,1.0, 1.0,
1.0,1.0,1.0, 1.0,
1.0,1.0,1.0, 0.0 };
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
CGGradientRef myGradient = CGGradientCreateWithColorComponents (myColorspace, components, locations, num_locations);
CGPoint myCentrePoint = CGPointMake(32, 32);
float myRadius = 20;
CGGradientDrawingOptions options = kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation;
CGContextDrawRadialGradient (UIGraphicsGetCurrentContext(), myGradient, myCentrePoint,
0, myCentrePoint, myRadius,
options);
CFRelease(myGradient);
CFRelease(myColorspace);
UIGraphicsPopContext();
[self setBrushTexture:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
And then actually set the brush texture like this:
-(void) setBrushTexture:(UIImage*)brushImage{
    // save our current texture.
    currentTexture = brushImage;
    // first, delete the old texture if needed
    if (brushTexture){
        glDeleteTextures(1, &brushTexture);
        brushTexture = 0;
    }
    // fetch the cgimage for us to draw into a texture
    CGImageRef brushCGImage = brushImage.CGImage;
    // Make sure the image exists
    if(brushCGImage) {
        // Get the width and height of the image
        GLint width = CGImageGetWidth(brushCGImage);
        GLint height = CGImageGetHeight(brushCGImage);
        // Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
        // you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
        // Allocate memory needed for the bitmap context
        GLubyte* brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
        // Use the bitmap creation function provided by the Core Graphics framework.
        CGContextRef brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushCGImage), kCGImageAlphaPremultipliedLast);
        // After you create the context, you can draw the image to the context.
        CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushCGImage);
        // You don't need the context at this point, so you need to release it to avoid memory leaks.
        CGContextRelease(brushContext);
        // Use OpenGL ES to generate a name for the texture.
        glGenTextures(1, &brushTexture);
        // Bind the texture name.
        glBindTexture(GL_TEXTURE_2D, brushTexture);
        // Set the texture parameters to use a minifying filter and a linear filter (weighted average)
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        // Specify a 2D texture image, providing a pointer to the image data in memory
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
        // Release the image data; it's no longer needed
        free(brushData);
    }
}
Update:
I've updated CGFloats to be GLfloats with no success. Maybe there is an issue with this rendering code?
if(frameBuffer){
    // draw the stroke element
    [self prepOpenGLStateForFBO:frameBuffer];
    [self prepOpenGLBlendModeForColor:element.color];
    CheckGLError();
}
// find our screen scale so that we can convert from
// points to pixels
GLfloat scale = self.contentScaleFactor;
// fetch the vertex data from the element
struct Vertex* vertexBuffer = [element generatedVertexArrayWithPreviousElement:previousElement forScale:scale];
glLineWidth(2);
// if the element has any data, then draw it
if(vertexBuffer){
    glVertexPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Position[0]);
    glColorPointer(4, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Color[0]);
    glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Texture[0]);
    glDrawArrays(GL_TRIANGLES, 0, (GLint)[element numberOfSteps] * (GLint)[element numberOfVerticesPerStep]);
    CheckGLError();
}
if(frameBuffer){
    [self unprepOpenGLState];
}
The vertex struct is the following:
struct Vertex{
    GLfloat Position[2]; // x,y position
    GLfloat Color[4];    // rgba color
    GLfloat Texture[2];  // x,y texture coord
};
Update:
The issue does not actually appear to be 32-bit vs. 64-bit related, but rather something different about the A7 GPU and GL drivers. I found this out by running both a 32-bit build and a 64-bit build on the 64-bit iPad; the textures ended up looking exactly the same in both builds of the app.
I would like you to check two things:
1. Check your alpha blending logic (or options) in OpenGL.
2. Check your interpolation logic, which should be proportional to the velocity of the drag.
It seems you are missing the second one, or it isn't effective, and it is required for a drawing app.
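For the second point, here is a hypothetical sketch of drag-velocity interpolation (previousTouchPoint, brushSpacing, and stampBrushAtPoint: are illustrative names, not from the question):

// Stamp extra brush points between successive touch samples so that
// fast drags don't leave gaps in the stroke.
CGPoint start = previousTouchPoint; // last touch sample
CGPoint end = currentTouchPoint;    // current touch sample
CGFloat distance = hypot(end.x - start.x, end.y - start.y);
int steps = MAX(1, (int)ceil(distance / brushSpacing));
for (int i = 0; i <= steps; i++) {
    CGFloat t = (CGFloat)i / steps;
    CGPoint p = CGPointMake(start.x + t * (end.x - start.x),
                            start.y + t * (end.y - start.y));
    [self stampBrushAtPoint:p]; // hypothetical helper that draws one stamp
}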
I don't think the problem is in the texture, but in the frame buffer to which you composite the line elements.
Your code fragments look like you draw segment by segment, so there are several overlapping segments drawn on top of each other. If the bit depth of the frame buffer is low, there will be artifacts, especially in the lighter regions of the blended areas.
You can check the frame buffer using Xcode's OpenGL debugger. Activate it by running your code on the device and clicking the little "Capture OpenGL ES Frame" button.
Select a glBindFramebuffer command in the Debug Navigator and look at the frame buffer description in the console area.
The interesting part is the GL_FRAMEBUFFER_INTERNAL_FORMAT.
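If that format turns out to be a low-precision one such as GL_RGB565, a sketch of a possible fix is to allocate the composite renderbuffer with 8 bits per channel (colorRenderbuffer, width, and height are illustrative names; GL_RGBA8_OES comes from the widely supported OES_rgb8_rgba8 extension):

// Allocate the color renderbuffer with 8 bits per channel so repeated
// blending of translucent segments has more precision to work with.
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, width, height);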
In my opinion, the problem is in the blending mode you use when composing the different image passes. I assume that you upload the texture for display only, and keep an in-memory image where you composite the different drawing operations, or that you read back the image content using glReadPixels?
Basically, your second image appears like a straight-alpha image drawn as a premultiplied-alpha image.
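Illustratively, the blend function has to match how the alpha is stored; note that the brush context above uses kCGImageAlphaPremultipliedLast, i.e. premultiplied data:

// For premultiplied-alpha sources the color is already scaled by alpha:
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// For straight-alpha sources the scaling happens in the blend stage:
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);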
To be sure that it isn't a texture problem, save the UIImage to a file before uploading it to the texture, and check that the image is actually correct.
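A quick way to make that check on iOS (a debugging sketch; the file name is illustrative, and currentTexture is the UIImage kept by setBrushTexture: above):

// Write the brush image to disk so it can be inspected before upload.
NSData *pngData = UIImagePNGRepresentation(currentTexture);
NSString *path = [NSTemporaryDirectory()
    stringByAppendingPathComponent:@"brush_debug.png"];
[pngData writeToFile:path atomically:YES];
NSLog(@"Saved brush texture to %@", path);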

Particle Engine much, much slower on device than on Simulator

I've implemented a particle engine for an iPad game. On the iPad Simulator I get a very good framerate with >500 particles (way more than I need). On an actual iPad, however, I get completely different results: with just 10 particles (and I need a few more than that) I only get a very poor framerate...
As a basis I've taken this tutorial to implement my Particle Emitter class: http://www.71squared.com/en/article/806/iphone-game-programming-tutorial-8-particle-emitter
(uses OpenGL ES 1)
Because I use OpenGL ES 2.0, I wrote my own render method:
- (void) renderParticles:(RenderMode)renderMode ofParticleEmitter:(ParticleEmitter*)particleEmitter xOffset:(int)xoffset yOffset:(int)yoffset
{
    PointSprite *vertices = [particleEmitter getVertices];
    for (int p = 0; p < particleEmitter.particleCount; p++) {
        CC3GLMatrix *modelView = [CC3GLMatrix matrix];
        // Translate the Modelviewmatrix
        [modelView populateFromTranslation:CC3VectorMake(_cameraX, _cameraY, -5.0)];
        [modelView translateByX:vertices[p].x + xoffset];
        [modelView translateByY:vertices[p].y + yoffset];
        [modelView translateByZ:101.0];
        [modelView scaleByX:2.0];
        [modelView scaleByY:2.0];
        glUniformMatrix4fv(_modelViewUniformT, 1, 0, modelView.glMatrix);
        glBindTexture(GL_TEXTURE_2D, [particleEmitter getTexture]);
        // Create and Bind a rectangular VBO
        [self calcCharacterVBOwithCols:1 rows:1 currentCol:1 currentRow:1];
        glVertexAttribPointer(_positionSlotT, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) 0);
        glEnableVertexAttribArray(_positionSlotT);
        // Fragment Shader value
        float opacity = 1.0;
        glUniform1f(_opacity, opacity);
        // Normal render, add Texture coordinates
        // Activate Texturing Pipeline and Bind Texture
        glVertexAttribPointer(_texCoordSlot, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), (GLvoid*) (sizeof(float) * 3));
        glEnableVertexAttribArray(_texCoordSlot);
        glDrawElements(GL_TRIANGLES, sizeof(IndicesLayer)/sizeof(IndicesLayer[0]), GL_UNSIGNED_BYTE, 0);
        glDisableVertexAttribArray(_texCoordSlot);
        glDisableVertexAttribArray(_positionSlotT);
        [self destroyCharacterVBO];
    }
}
Did I miss some essential point on particles? What can I do better to get a better framerate on the device?
The problem turned out to be binding the particle texture again and again for every single particle, even though all particles use the same texture.
So by binding the texture
glBindTexture(GL_TEXTURE_2D, [particleEmitter getTexture]);
once, before looping over all the particles, I get a much faster frame rate that is comparable to the Simulator.
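In other words, a sketch of the restructured render loop:

// Bind the shared particle texture once per frame...
glBindTexture(GL_TEXTURE_2D, [particleEmitter getTexture]);
for (int p = 0; p < particleEmitter.particleCount; p++) {
    // ...then do the per-particle matrix setup and draw calls exactly as in
    // the render method above, minus the per-particle glBindTexture call.
}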

OpenGL artifacts - triangles on the back overlapping the ones on the front on old iOS device

I am rendering my scene with the code below:
struct vertex
{
    float x, y, z, nx, ny, nz;
};

bool CShell::UpdateScene()
{
    glEnable(GL_DEPTH_TEST);
    glClearColor(0.3f, 0.3f, 0.4f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Set the OpenGL projection matrix
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    const float near = 0.1f;
    const float far = 1000.0f;
    float top = near * tanf(fieldOfView * SIMD_PI / 180.0f);
    float bottom = -top;
    float left = bottom * aspectRatio;
    float right = top * aspectRatio;
    glFrustumf(left, right, bottom, top, near, far);

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();

    return true;
}
bool CShell::RenderScene()
{
    glEnable(GL_DEPTH_TEST);
    glBindBuffer(GL_ARRAY_BUFFER, vertsVBO);
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, elementSize, 0);
    glNormalPointer(GL_FLOAT, elementSize, (const GLvoid*) normalOffset);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, indicesVBO);

    glEnable(GL_LIGHTING);
    lightPosition[0] = (-gravity.x()+0.0)*lightHeight;
    lightPosition[1] = (-gravity.y()+0.0)*lightHeight;
    lightPosition[2] = (-gravity.z()+0.5)*lightHeight;
    glLightfv(GL_LIGHT0, GL_POSITION, lightPosition);

    float worldMat[16];
    /// draw donuts
    for (int i = 0; i < numDonuts; i++)
    {
        sBoxBodies[i]->getCenterOfMassTransform().getOpenGLMatrix(worldMat);
        glPushMatrix();
        glMultMatrixf(worldMat);
        glVertexPointer(3, GL_FLOAT, elementSize, (const GLvoid*)(char*)sizeof(vertex));
        glNormalPointer(GL_FLOAT, elementSize, (const GLvoid*)(char*)(sizeof(vertex)+normalOffset));
        glDrawElements(GL_TRIANGLES, numberOfIndices, GL_UNSIGNED_SHORT, (const GLvoid*)(char*)sizeof(GLushort));
        glPopMatrix();
    }

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
    glDisable(GL_LIGHTING);
    return true;
}
My project uses Oolong Engine
These are two screenshots, iPodTouch 4G (iOS 6.0)
and iPodTouch 2G (iOS 4.2.1)
What could be causing those strange artifacts that appear in the latter screenshot?
It appears as if the triangles in the back are overlapping the ones in the front.
It only occurs some of the time, though, and the artifacts are jerky. It's as if there is "z-fighting", but the triangles in the back have z values below those of the triangles in the front.
Here is an image of the z arrangement of the vertices and normals.
The blue arrows are normals shared by the surrounding faces, and the triangle with red lines is a representation of what could be causing those artifacts.
It's as if there is "z-fighting", but the triangles in the back have z values below those of the triangles in the front.
It doesn't matter so much that one has a z value less than the other; you get z-fighting when your objects are too close together and you don't have enough z resolution.
The problem here, I guess, is that you set your projection range too large, from 0.1 to 1000. The greater the ratio between these numbers, the less z resolution you will get.
I recommend trying a near/far of 0.1/100, or 1.0/1000, as long as that works with your application. It should help your z-fighting issue.
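Applied to the projection setup in UpdateScene, that change is just the following sketch (using the 1.0/1000 option from above):

const float near = 1.0f;   // was 0.1f; raising the near plane shrinks the
const float far = 1000.0f; // far/near ratio and recovers depth precision
float top = near * tanf(fieldOfView * SIMD_PI / 180.0f);
float bottom = -top;
float left = bottom * aspectRatio;
float right = top * aspectRatio;
glFrustumf(left, right, bottom, top, near, far);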

Using OpenGL ES 2.0 inside CCRenderTexture

I am trying to follow a tutorial on Dynamic Textures in iOS by Ray Wenderlich
http://www.raywenderlich.com/3857/how-to-create-dynamic-textures-with-ccrendertexture
but using Cocos2D 2.0 and OpenGL ES 2.0 instead of 1.1. The tutorial begins by drawing a coloured square to the screen with a shadow gradient applied to it, but I cannot get the gradient to render onto the coloured square. This part of the tutorial is where OpenGL ES code is sent to the CCRenderTexture, so I figure I must be setting up my OpenGL ES 2.0 code wrong (I have very little experience with OpenGL / OpenGL ES). The OpenGL ES 1.1 code is:
glDisable(GL_TEXTURE_2D);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
float gradientAlpha = 0.7;
CGPoint vertices[4];
ccColor4F colors[4];
int nVertices = 0;
vertices[nVertices] = CGPointMake(0, 0);
colors[nVertices++] = (ccColor4F){0, 0, 0, 0 };
vertices[nVertices] = CGPointMake(textureSize, 0);
colors[nVertices++] = (ccColor4F){0, 0, 0, 0};
vertices[nVertices] = CGPointMake(0, textureSize);
colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha};
vertices[nVertices] = CGPointMake(textureSize, textureSize);
colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha};
glVertexPointer(2, GL_FLOAT, 0, vertices);
glColorPointer(4, GL_FLOAT, 0, colors);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)nVertices);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
which goes between the CCRenderTexture begin and end methods (full code can be found at the above link). My Cocos2D 2.0 / OpenGL ES 2.0 attempt is
float gradientAlpha = 0.7;
CGPoint vertices[4];
ccColor4F colors[4];
int nVertices = 0;
vertices[nVertices] = CGPointMake(0, 0);
colors[nVertices++] = (ccColor4F){0, 0, 0, 0 };
vertices[nVertices] = CGPointMake(textureSize, 0);
colors[nVertices++] = (ccColor4F){0, 0, 0, 0};
vertices[nVertices] = CGPointMake(0, textureSize);
colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha};
vertices[nVertices] = CGPointMake(textureSize, textureSize);
colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha};
// Setup OpenGl ES shader programs
CCGLProgram *positionColourProgram = [[CCShaderCache sharedShaderCache] programForKey:kCCShader_PositionColor];
[rt setShaderProgram:positionColourProgram];
ccGLEnableVertexAttribs(kCCVertexAttribFlag_Position | kCCVertexAttribFlag_Color);
glVertexAttribPointer(kCCVertexAttrib_Position, 2, GL_FLOAT, GL_FALSE, 0, vertices);
glVertexAttribPointer(kCCVertexAttrib_Color, 4, GL_FLOAT, GL_FALSE, 0, colors);
glDrawArrays(GL_TRIANGLE_STRIP, 0, (GLsizei)nVertices);
where rt is the CCRenderTexture object. There are no errors in the console but the image on the screen is a solid colour square with no gradient. Do I need to use an OpenGL blending function perhaps? Any help would be much appreciated. Thanks in advance.
I have figured out the changes needed to make it work, and I posted my comment in the tutorial's forum: http://www.raywenderlich.com/forums//viewtopic.php?f=20&t=512&start=40
Hope it is not too late for you.
To save you the time of looking through the forum to find my posting, here is what I posted there:
I have posted my fix at:
http://www.wfu.edu/~ylwong/download/cocos2d-2.0-texture/
The HelloWorldLayer.mm is the final file incorporating all the changes, so you do not have to type them in. The pdf file marks up the changes in case you want to see what they are.
Basically, in addition to replacing the statements that are not supported in OpenGL ES 2.0, I had to add code to set up the vertex and fragment shaders. Also, instead of using the range 0 to textureSize in the vertex arrays, I had to use the range -1 to 1, which means that in setting up the vertex arrays the width is now 2: 0 becomes -1, and textureSize becomes 1. A sketch of that remapping follows.
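Concretely, the vertex setup from the question changes along these lines (an illustrative sketch of the remapping just described):

// Clip space runs from -1 to 1, so 0..textureSize maps to -1..1.
vertices[nVertices] = CGPointMake(-1, -1); // was (0, 0)
colors[nVertices++] = (ccColor4F){0, 0, 0, 0};
vertices[nVertices] = CGPointMake( 1, -1); // was (textureSize, 0)
colors[nVertices++] = (ccColor4F){0, 0, 0, 0};
vertices[nVertices] = CGPointMake(-1,  1); // was (0, textureSize)
colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha};
vertices[nVertices] = CGPointMake( 1,  1); // was (textureSize, textureSize)
colors[nVertices++] = (ccColor4F){0, 0, 0, gradientAlpha};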
To set up the shaders for that tutorial, you can use the ones that come with Cocos2D or write custom but simple shaders. I have included both methods to choose from.
Hope this helps!
