Overlapping 2 transparent Texture2Ds in OpenGL ES - ipad

I'm trying to make a 2D game for the iPad with OpenGL. I'm new to OpenGL in general so this blending stuff is new.
My drawing code looks like this:
static CGFloat r=0;
r+=2.5;
r=remainder(r, 360);
glLoadIdentity();
//you can ignore the rotating and scaling
glRotatef(90, 0,0, -1);
glScalef(1, -1, 1);
glTranslatef(-1024, -768, 0);
glClearColor(0.3,0.8,1, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glEnable (GL_BLEND);
glBlendFunc (GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
[texture drawInRect:CGRectMake(512-54, fabs(sin(((r+45)/180)*M_PI)*500), 108, 108)]; // fabs, not abs: abs() truncates the float to an int
[texture drawInRect:CGRectMake(512-54, fabs(sin((r/180)*M_PI)*500), 108, 108)];
("texture" is a Texture2D that has a transparent background)
All I need to know is how to keep the blue box around one texture from covering up the other one.

Sounds like you just need to open the texture image in your favourite image editor and set the blue area to 0% opacity (i.e. 0) in the alpha channel. The SRC_ALPHA part of GL_ONE_MINUS_SRC_ALPHA refers to the alpha value of the source texture.
Chances are you're using 32-bit colour, in which case you'll have four channels, 8 bits for Red, Green, Blue and Alpha.
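To make the arithmetic concrete, one 8-bit channel of the glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) blend can be modeled in plain C. This is a sketch, not GL code, and blend_premul is a made-up helper name; it assumes the source texture is premultiplied by its alpha, which is what this blend func expects:

```c
#include <assert.h>

/* One 8-bit channel of the GL_ONE / GL_ONE_MINUS_SRC_ALPHA blend,
 * assuming the source texture is premultiplied by its alpha.
 * src, src_a and dst are all in 0..255. */
static unsigned char blend_premul(unsigned int src, unsigned int src_a,
                                  unsigned int dst)
{
    /* out = src + dst * (1 - src_a), with rounding */
    unsigned int out = src + (dst * (255u - src_a) + 127u) / 255u;
    return out > 255u ? 255u : (unsigned char)out;
}
```

A texel with alpha 0 (and therefore src == 0 after premultiplication) leaves the destination untouched, which is exactly why the blue box must be 0 in the alpha channel to stop covering the other texture.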

Related

Trails effect, clearing a frame buffer with a transparent quad

I want to get a trails effect. I am drawing particles to a frame buffer which is never cleared (it accumulates draw calls). Fading out is done by drawing a black quad with a small alpha, for example 0.0, 0.0, 0.0, 0.1. It's a two-step process, repeated per frame:
- drawing a black quad
- drawing particles at new positions
This all works nicely, and the moving particles produce long trails, EXCEPT the black quad never clears the FBO all the way down to zero. Faint trails remain forever (e.g. the buffer's RGBA settles at 4,4,4,255).
I assume the problem starts when the blending function multiplies the FBO's small 8-bit RGBA values (the destination color) by, for example, (1.0 - 0.1) = 0.9, and rounding prevents any further reduction: 4 * 0.9 = 3.6, which rounds back up to 4, forever.
Is my method (drawing a black quad) inherently useless for trails? I cannot find a blend function that could help, since all of them multiply the DST color by some value, which must be very small to produce long trails.
The trails are drawn with this code:
GLint drawableFBO; // GLint, since glGetIntegerv takes a GLint*
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &drawableFBO);
glBindFramebuffer(GL_FRAMEBUFFER, FBO); /// has an attached texture glFramebufferTexture2D -> FBOTextureId
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(fboClearShader);
glUniform4f(fboClearShader.uniforms.color, 0.0, 0.0, 0.0, 0.1);
glUniformMatrix4fv(fboClearShader.uniforms.modelViewProjectionMatrix, 1, 0, mtx.m);
glBindVertexArray(fboClearShaderBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glUseProgram(particlesShader);
glUniformMatrix4fv(shader.uniforms.modelViewProjectionMatrix, 1, 0, mtx.m);
glUniform1f(shader.uniforms.globalAlpha, 0.9);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, particleTextureId);
glBindVertexArray(particlesBuffer);
glDrawArrays(GL_TRIANGLES, 0, 1000*6);
/// back to drawable buffer
glBindFramebuffer(GL_FRAMEBUFFER, drawableFBO);
glUseProgram(fullScreenShader);
glBindVertexArray(screenQuad);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, FBOTextureId);
glDrawArrays(GL_TRIANGLES, 0, 6);
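The suspected rounding stall is easy to reproduce in plain C. The snippet below is only a model of the 8-bit destination-side arithmetic that GL_ONE_MINUS_SRC_ALPHA performs per frame, not GL code, and fade_multiply is a made-up name:

```c
#include <assert.h>

/* Model of the per-frame multiplicative fade in an 8-bit buffer:
 * dst' = round(dst * (1 - fade_alpha)), which is the destination term
 * glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA) computes when a black
 * quad with alpha = fade_alpha is drawn over the FBO. */
static unsigned char fade_multiply(unsigned char dst, float fade_alpha)
{
    return (unsigned char)(dst * (1.0f - fade_alpha) + 0.5f);
}
```

With fade_alpha = 0.1, a destination value of 4 maps to round(3.6) = 4 and never decreases again, which is exactly the stuck faint trail described above.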
Blending is defined not only by the blend function (glBlendFunc) but also by the blend equation (glBlendEquation).
By default, the source and destination values are summed after they are processed by the blend function.
Use a blend equation that subtracts a tiny value from the destination buffer, so the destination color decreases slightly each frame and finally becomes 0.0.
The result of the blend equation is clamped to the range [0, 1].
e.g.
dest_color = dest_color - RGB(0.01)
The blend equation which subtracts the source color from the destination color is GL_FUNC_REVERSE_SUBTRACT:
float dec = 0.01f; // should be at least 1.0/256.0
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glBlendFunc(GL_ONE, GL_ONE);
glUseProgram(fboClearShader);
glUniform4f(fboClearShader.uniforms.color, dec, dec, dec, 0.0);
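As a plain-C model of the same fixed-point arithmetic (fade_subtract is a made-up name, not a GL call), the reverse subtract reaches exact zero, which the multiplicative fade cannot:

```c
#include <assert.h>

/* GL_FUNC_REVERSE_SUBTRACT with glBlendFunc(GL_ONE, GL_ONE) in an
 * 8-bit buffer: dst' = clamp(dst - src, 0, 255). A constant source
 * value of 0.01 quantizes to round(0.01 * 255) = 3 per frame. */
static unsigned char fade_subtract(unsigned char dst, unsigned char src)
{
    return dst > src ? (unsigned char)(dst - src) : 0;
}
```

Even a fully saturated 8-bit pixel (255) reaches exact zero after 85 frames of subtracting 3, so no faint trails persist.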

Achieving a persistence effect in GLKit view

I have a GLKit view set up to draw a solid shape, a line and an array of points which all change every frame. The basics of my drawInRect method are:
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
}
This works fine; each array contains around 2000 points, but my iPad seems to have no problem rendering it all at 60fps.
The issue now is that I would like the lines to fade away slowly over time, instead of disappearing with the next frame, making a persistence or phosphor-like effect. The solid shape and the points must not linger, only the line.
I've tried the brute-force method (as used in Apple's example project aurioTouch): storing the data from the last 100 frames and drawing all 100 lines every frame, but this is too slow. My iPad can't render more than about 10fps with this method.
So my question is: can I achieve this more efficiently using some kind of frame or render buffer which accumulates the color of previous frames? Since I'm using GLKit, I haven't had to deal directly with these things before, and so don't know much about them. I've read about accumulation buffers, which seem to do what I want, but I've heard that they are very slow and anyway I can't tell whether they even exist in OpenGL ES 3, let alone how to use them.
I'm imagining something like the following (after setting up some kind of storage buffer):
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
glClear(...);
glBufferData(...);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw contents of storage buffer
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// multiply the alpha value of each pixel in the storage buffer by 0.9 to fade
// draw line again, this time into the storage buffer
// draw points
glDrawArrays(GL_POINTS, ...);
}
Is this possible? What are the commands I need to use (in particular, to combine the contents of the storage buffer and change its alpha)? And is this likely to actually be more efficient than the brute-force method?
I ended up achieving the desired result by rendering to a texture, as described for example here. The basic idea is to set up a custom framebuffer and attach a texture to it. I then render the line that I want to persist into this framebuffer (without clearing it) and render the whole framebuffer as a texture into the default framebuffer (which is cleared every frame). Instead of clearing the custom framebuffer, I render a slightly opaque quad over the whole screen to make the previous contents fade out a little every frame.
The relevant code is below; setting up the framebuffer and persistence texture is done in the init method:
// vertex data for fullscreen textured quad (x, y, texX, texY)
GLfloat persistVertexData[16] = {-1.0, -1.0, 0.0, 0.0,
-1.0, 1.0, 0.0, 1.0,
1.0, -1.0, 1.0, 0.0,
1.0, 1.0, 1.0, 1.0};
// setup texture vertex buffer
glGenBuffers(1, &persistVertexBuffer);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glBufferData(GL_ARRAY_BUFFER, sizeof(persistVertexData), persistVertexData, GL_STATIC_DRAW);
// create texture for persistence data and bind
glGenTextures(1, &persistTexture);
glBindTexture(GL_TEXTURE_2D, persistTexture);
// provide an empty image
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2048, 1536, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
// set texture parameters
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// create frame buffer for persistence data
glGenFramebuffers(1, &persistFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// attach the texture as the color attachment
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, persistTexture, 0);
// check for errors
NSAssert(glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE, @"Error: persistence framebuffer incomplete!");
// initialize default frame buffer pointer
defaultFrameBuffer = -1;
and in the glkView:drawInRect: method:
// get default frame buffer id
if (defaultFrameBuffer == -1)
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &defaultFrameBuffer);
// clear screen
glClear(GL_COLOR_BUFFER_BIT);
// DRAW PERSISTENCE
// bind persistence framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, persistFrameBuffer);
// render full screen quad to fade
glEnableVertexAttribArray(...);
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, 0.0, 0.0, 0.0, 0.01);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// add most recent line
glBindBuffer(GL_ARRAY_BUFFER, dataVertexBuffer);
glVertexAttribPointer(...);
glUniform4f(colorU, color[0], color[1], color[2], 0.8*color[3]);
glDrawArrays(...);
// return to normal framebuffer
glBindFramebuffer(GL_FRAMEBUFFER, defaultFrameBuffer);
// switch to texture shader
glUseProgram(textureProgram);
// bind texture
glBindTexture(GL_TEXTURE_2D, persistTexture);
glUniform1i(textureTextureU, 0);
// set texture vertex attributes
glBindBuffer(GL_ARRAY_BUFFER, persistVertexBuffer);
glEnableVertexAttribArray(texturePositionA);
glEnableVertexAttribArray(textureTexCoordA);
glVertexAttribPointer(self.shaderBridge.texturePositionA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 0);
glVertexAttribPointer(self.shaderBridge.textureTexCoordA, 2, GL_FLOAT, GL_FALSE, 4*sizeof(GLfloat), 2*sizeof(GLfloat));
// draw fullscreen quad with texture
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// DRAW NORMAL FRAME
glUseProgram(normalProgram);
glEnableVertexAttribArray(...);
glVertexAttribPointer(...);
// draw solid shape
glDrawArrays(GL_TRIANGLE_STRIP, ...);
// draw line
glDrawArrays(GL_LINE_STRIP, ...);
// draw points
glDrawArrays(GL_POINTS, ...);
The texture shaders are very simple: the vertex shader just passes the texture coordinate to the fragment shader:
attribute vec4 aPosition;
attribute vec2 aTexCoord;
varying vec2 vTexCoord;
void main(void)
{
gl_Position = aPosition;
vTexCoord = aTexCoord;
}
and the fragment shader reads the fragment color from the texture:
uniform highp sampler2D uTexture;
varying vec2 vTexCoord;
void main(void)
{
gl_FragColor = texture2D(uTexture, vTexCoord);
}
Although this works, it doesn't seem very efficient, causing the renderer utilization to rise to close to 100%. It only seems better than the brute force approach when the number of lines drawn each frame exceeds 100 or so. If anyone has any suggestions on how to improve this code, I would be very grateful!

iPad texture loading differences (32-bit vs. 64-bit)

I am working on a drawing application and I am noticing significant differences in textures loaded on a 32-bit iPad vs. a 64-bit iPad.
Here is the texture drawn on a 32-bit iPad:
Here is the texture drawn on a 64-bit iPad:
The 64-bit is what I desire, but it seems like maybe it is losing some data?
I create a default brush texture with this code:
UIGraphicsBeginImageContext(CGSizeMake(64, 64));
CGContextRef defBrushTextureContext = UIGraphicsGetCurrentContext();
UIGraphicsPushContext(defBrushTextureContext);
size_t num_locations = 3;
CGFloat locations[3] = { 0.0, 0.8, 1.0 };
CGFloat components[12] = { 1.0,1.0,1.0, 1.0,
1.0,1.0,1.0, 1.0,
1.0,1.0,1.0, 0.0 };
CGColorSpaceRef myColorspace = CGColorSpaceCreateDeviceRGB();
CGGradientRef myGradient = CGGradientCreateWithColorComponents (myColorspace, components, locations, num_locations);
CGPoint myCentrePoint = CGPointMake(32, 32);
float myRadius = 20;
CGGradientDrawingOptions options = kCGGradientDrawsBeforeStartLocation | kCGGradientDrawsAfterEndLocation;
CGContextDrawRadialGradient (UIGraphicsGetCurrentContext(), myGradient, myCentrePoint,
0, myCentrePoint, myRadius,
options);
CFRelease(myGradient);
CFRelease(myColorspace);
UIGraphicsPopContext();
[self setBrushTexture:UIGraphicsGetImageFromCurrentImageContext()];
UIGraphicsEndImageContext();
And then actually set the brush texture like this:
-(void) setBrushTexture:(UIImage*)brushImage{
// save our current texture.
currentTexture = brushImage;
// first, delete the old texture if needed
if (brushTexture){
glDeleteTextures(1, &brushTexture);
brushTexture = 0;
}
// fetch the cgimage for us to draw into a texture
CGImageRef brushCGImage = brushImage.CGImage;
// Make sure the image exists
if(brushCGImage) {
// Get the width and height of the image
GLint width = CGImageGetWidth(brushCGImage);
GLint height = CGImageGetHeight(brushCGImage);
// Texture dimensions must be a power of 2. If you write an application that allows users to supply an image,
// you'll want to add code that checks the dimensions and takes appropriate action if they are not a power of 2.
// Allocate memory needed for the bitmap context
GLubyte* brushData = (GLubyte *) calloc(width * height * 4, sizeof(GLubyte));
// Use the bitmap context creation function provided by the Core Graphics framework.
CGContextRef brushContext = CGBitmapContextCreate(brushData, width, height, 8, width * 4, CGImageGetColorSpace(brushCGImage), kCGImageAlphaPremultipliedLast);
// After you create the context, you can draw the image to the context.
CGContextDrawImage(brushContext, CGRectMake(0.0, 0.0, (CGFloat)width, (CGFloat)height), brushCGImage);
// You don't need the context at this point, so you need to release it to avoid memory leaks.
CGContextRelease(brushContext);
// Use OpenGL ES to generate a name for the texture.
glGenTextures(1, &brushTexture);
// Bind the texture name.
glBindTexture(GL_TEXTURE_2D, brushTexture);
// Set the texture parameters to use a linear minification filter (weighted average)
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
// Specify a 2D texture image, providing a pointer to the image data in memory
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, brushData);
// Release the image data; it's no longer needed
free(brushData);
}
}
Update:
I've updated CGFloats to be GLfloats with no success. Maybe there is an issue with this rendering code?
if(frameBuffer){
// draw the stroke element
[self prepOpenGLStateForFBO:frameBuffer];
[self prepOpenGLBlendModeForColor:element.color];
CheckGLError();
}
// find our screen scale so that we can convert from
// points to pixels
GLfloat scale = self.contentScaleFactor;
// fetch the vertex data from the element
struct Vertex* vertexBuffer = [element generatedVertexArrayWithPreviousElement:previousElement forScale:scale];
glLineWidth(2);
// if the element has any data, then draw it
if(vertexBuffer){
glVertexPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Position[0]);
glColorPointer(4, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Color[0]);
glTexCoordPointer(2, GL_FLOAT, sizeof(struct Vertex), &vertexBuffer[0].Texture[0]);
glDrawArrays(GL_TRIANGLES, 0, (GLint)[element numberOfSteps] * (GLint)[element numberOfVerticesPerStep]);
CheckGLError();
}
if(frameBuffer){
[self unprepOpenGLState];
}
The vertex struct is the following:
struct Vertex{
GLfloat Position[2]; // x,y position
GLfloat Color [4]; // rgba color
GLfloat Texture[2]; // x,y texture coord
};
Update:
The issue does not actually appear to be 32-bit, 64-bit based, but rather something different about the A7 GPU and GL drivers. I found this out by running a 32-bit build and 64-bit build on the 64-bit iPad. The textures ended up looking exactly the same on both builds of the app.
I would like you to check two things:
Check your alpha blending logic (or options) in OpenGL.
Check your interpolation logic, which should be proportional to the velocity of dragging.
It seems the second one is missing or ineffective, and a drawing app requires it.
I don't think the problem is in the texture but in the frame buffer to which you composite the line elements.
Your code fragments look like you draw segments by segment, so there are several overlapping segments drawn on top of each other. If the depth of the frame buffer is low there will be artifacts, especially in the lighter regions of the blended areas.
You can check the frame buffer using Xcode's OpenGL ES debugger. Activate it by running your code on the device and clicking the little "Capture OpenGL ES Frame" button.
Select a "glBindFramebuffer" command in the "Debug Navigator" and look at the frame buffer description in the console area.
The interesting part is the GL_FRAMEBUFFER_INTERNAL_FORMAT.
In my opinion, the problem is in the blending mode you use when composing the different image passes. I assume that you upload the texture for display only, and keep an in-memory image where you composite the different drawing operations, or read back the image content using glReadPixels?
Basically, your second image appears like a straight-alpha image drawn as a pre-multiplied-alpha image.
To be sure that it isn't a texture problem, save the image to a file before uploading it to the texture, and check that it is actually correct.
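If a straight/premultiplied mismatch is indeed the cause, converting the texel data before upload should make both devices agree. A minimal sketch of that conversion for one channel (premultiply is a made-up helper name, and this assumes 8-bit straight-alpha input):

```c
#include <assert.h>

/* Convert one straight-alpha channel value to premultiplied alpha,
 * the form a GL_ONE / GL_ONE_MINUS_SRC_ALPHA blend expects.
 * channel and alpha are in 0..255. */
static unsigned char premultiply(unsigned int channel, unsigned int alpha)
{
    return (unsigned char)((channel * alpha + 127u) / 255u);
}
```

If the artifacts disappear after forcing one representation, the difference between the two GPUs was in how they treated the alpha, not in the texture data itself.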

Render Large Texture To Smaller Renderbuffer

I have a render buffer that is 852x640 and a texture that is 1280x720. When I render the texture, it is getting cropped, not just stretched. I know the aspect ratio needs correcting, but how can I get it so that the full texture displays in the render buffer?
//-------------------------------------
glGenFramebuffers(1, &frameBufferHandle);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glGenRenderbuffers(1, &renderBufferHandle);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &renderBufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &renderBufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBufferHandle);
//-------------------------------------
static const GLfloat squareVertices[] = {
-1.0f, 1.0f,
1.0f, 1.0f,
-1.0f, -1.0f,
1.0f, -1.0f
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t frameHeight = CVPixelBufferGetHeight(pixelBuffer);
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
if (!texture || err) {
NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glViewport(0, 0, renderBufferWidth, renderBufferHeight); // setting this to 1280x720 fixes the aspect ratio but still crops
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glUseProgram(shaderPrograms[PASSTHROUGH]);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, horizontalFlipTextureCoordinates);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext presentRenderbuffer:GL_RENDERBUFFER];
EDIT
I'm still running into issues. I've included more source. Basically, I need the entire raw input texture to display in wide screen while also writing the raw texture to disk.
When rendering to a smaller texture, things are automatically scaled; is this not the case with a renderbuffer?
I guess I could make another passthrough to a smaller texture, but that would slow things down.
First of all, keep glViewport(0, 0, renderBufferWidth, renderBufferHeight); at 852x640.
The problem is in your squareVertices: it looks like it holds coordinates that represent the texture size. You need to set it to match the renderbuffer size.
The idea is that the texture is mapped onto your squareVertices rect, so you can render a texture of any size onto a rect of any size; the texture image will be scaled to fit the rect.
[Update: square vertices]
In your case it should be:
{
0.0f, (float)renderBufferHeight/frameHeight,
(float)renderBufferWidth/frameWidth, (float)renderBufferHeight/frameHeight,
0.0f, 0.0f,
(float)renderBufferWidth/frameWidth, 0.0f,
};
But this is not a good solution in general. In theory, the rectangle's size on screen is determined by the vertex positions and the transformation matrix: each vertex is multiplied by the matrix before rendering. It looks like you don't set an OpenGL projection matrix; with a correct orthographic projection, your vertices would have pixel-equivalent positions.
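If the goal is simply to show the whole 1280x720 texture inside the 852x640 buffer without cropping, the quad extents in normalized device coordinates can be computed from the two aspect ratios. A sketch (aspect_fit is a made-up name; it letterboxes rather than stretches):

```c
#include <assert.h>

/* Compute half-extents (in normalized device coordinates, so 1.0 means
 * the full buffer) of a quad that shows an entire frameW x frameH
 * texture inside a bufW x bufH target, preserving aspect ratio. */
static void aspect_fit(int frameW, int frameH, int bufW, int bufH,
                       float *outX, float *outY)
{
    float frameAspect = (float)frameW / (float)frameH;
    float bufAspect   = (float)bufW / (float)bufH;
    if (frameAspect > bufAspect) {
        *outX = 1.0f;                       /* texture is wider: bars top/bottom */
        *outY = bufAspect / frameAspect;
    } else {
        *outX = frameAspect / bufAspect;    /* texture is taller: bars left/right */
        *outY = 1.0f;
    }
}
```

For 1280x720 into 852x640 this yields x = 1.0 and y ≈ 0.749, so the squareVertices become (±1.0, ±0.749) and the whole frame is visible, letterboxed vertically.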
Being new to OpenGL myself, I remember that textures to be mapped should have power-of-two dimensions,
e.g. an image resolution of 256x256 or 512x512.
You can then SCALE the image using the
glScalef(x, y, z); function as per your requirements:
get the height and width accordingly and pass them to your glScalef call.
Try this; I hope it works.
Try these functions. My answer can be validated against the FBO info at songho.ca.
glGenFramebuffers()
void glGenFramebuffers(GLsizei n, GLuint* ids)
n is the number of framebuffers to create; ids is a pointer to a GLuint variable or array to store the generated IDs. It returns the IDs of unused framebuffer objects. ID 0 means the default framebuffer, which is the window-system-provided framebuffer.
An FBO may be deleted by calling glDeleteFramebuffers(GLsizei n, const GLuint* ids) when it is no longer used.
glBindFramebuffer()
Once an FBO is created, it has to be bound before using it:
void glBindFramebuffer(GLenum target, GLuint id)
The first parameter, target, should be GL_FRAMEBUFFER; the second parameter is the ID of a framebuffer object.
Once an FBO is bound, all OpenGL operations affect the currently bound framebuffer object. The ID 0 is reserved for the default window-system-provided framebuffer; therefore, to unbind the current FBO, bind ID 0 with glBindFramebuffer().
Try using those, or at least visit the link, which could help you a lot. Sorry, I'm not experienced in OpenGL, but I wanted to contribute the link and explain the two functions. I think you can use the info to write your code.
Oh boy, so the answer is that this was working all along ;) It turns out the high resolution preset mode on the iPhone 4 actually covers less area than the medium resolution preset. This threw me for a loop until Brigadir suggested what I should have done first all along: check the GPU snapshots.
I figured out the aspect ratio issue too by hacking the appropriate code in the GPUImage framework. https://github.com/bradLarson/GPUImage

How do I draw thousands of squares with glkit, opengl es2?

I'm trying to draw up to 200,000 squares on the screen, or a lot of squares basically. I believe I'm just making way too many draw calls, and it's crippling the app's performance. The squares only update when I press a button, so I don't necessarily have to update this every frame.
Here's the code i have now:
- (void)glkViewControllerUpdate:(GLKViewController *)controller
{
//static float transY = 0.0f;
//float y = sinf(transY)/2.0f;
//transY += 0.175f;
GLKMatrix4 modelview = GLKMatrix4MakeTranslation(0, 0, -5.f);
effect.transform.modelviewMatrix = modelview;
//GLfloat ratio = self.view.bounds.size.width/self.view.bounds.size.height;
GLKMatrix4 projection = GLKMatrix4MakeOrtho(0, 768, 1024, 0, 0.1f, 20.0f);
effect.transform.projectionMatrix = projection;
_isOpenGLViewReady = YES;
}
- (void)glkView:(GLKView *)view drawInRect:(CGRect)rect
{
if(_model.updateView && _isOpenGLViewReady)
{
glClear(GL_COLOR_BUFFER_BIT);
[effect prepareToDraw];
int pixelSize = _model.pixelSize;
if(!_model.isReady)
return;
//NSLog(@"UPDATING: %d, %d", _model.rows, _model.columns);
for(int i = 0; i < _model.rows; i++)
{
for(int ii = 0; ii < _model.columns; ii++)
{
ColorModel *color = [_model getColorAtRow:i andColumn:ii];
CGRect rect = CGRectMake(ii * pixelSize, i*pixelSize, pixelSize, pixelSize);
//[self drawRectWithRect:rect withColor:c];
GLubyte squareColors[] = {
color.red, color.green, color.blue, 255,
color.red, color.green, color.blue, 255,
color.red, color.green, color.blue, 255,
color.red, color.green, color.blue, 255
};
// NSLog(@"Drawing color with red: %d", color.red);
int xVal = rect.origin.x;
int yVal = rect.origin.y;
int width = rect.size.width;
int height = rect.size.height;
GLfloat squareVertices[] = {
xVal, yVal, 1,
xVal + width, yVal, 1,
xVal, yVal + height, 1,
xVal + width, yVal + height, 1
};
glEnableVertexAttribArray(GLKVertexAttribPosition);
glEnableVertexAttribArray(GLKVertexAttribColor);
glVertexAttribPointer(GLKVertexAttribPosition, 3, GL_FLOAT, GL_FALSE, 0, squareVertices);
glVertexAttribPointer(GLKVertexAttribColor, 4, GL_UNSIGNED_BYTE, GL_TRUE, 0, squareColors);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(GLKVertexAttribPosition);
glDisableVertexAttribArray(GLKVertexAttribColor);
}
}
_model.updateView = YES;
}
First, do you really need to draw 200,000 squares? Your viewport only has 786,432 pixels total. You might be able to reduce the number of drawn objects without significantly impacting the overall quality of your scene.
That said, if these are smaller squares, you could draw them as points with a pixel size large enough to cover your square's area. That would require setting gl_PointSize in your vertex shader to the appropriate pixel width. You could then generate your coordinates and send them all to be drawn at once as GL_POINTS. That should remove the overhead of the extra geometry of the triangles and the individual draw calls you are using here.
Even if you don't use points, it's still a good idea to calculate all of the triangle geometry you need first, then send all that in a single draw call. This will significantly reduce your OpenGL ES API call overhead.
One other thing you could look into would be to use vertex buffer objects to store this geometry. If the geometry is static, you can avoid sending it on each drawn frame, or only update a part of it that has changed. Even if you just change out the data each frame, I believe using a VBO for dynamic geometry has performance advantages on the modern iOS devices.
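For example, the inner loop above could write vertices into one big array instead of issuing a draw call per square. A sketch of the geometry-building step only (append_square is a made-up name; the GL upload and draw calls are omitted):

```c
#include <assert.h>

/* Append one pixelSize x pixelSize square at grid cell (row, col) as
 * two triangles (6 vertices, x/y per vertex) into a shared buffer, so
 * the whole grid can be drawn with a single
 * glDrawArrays(GL_TRIANGLES, 0, rows * cols * 6) call. */
static void append_square(float *verts, int index, int row, int col, float px)
{
    float x = col * px, y = row * px;
    float quad[12] = {
        x,      y,      x + px, y,      x,      y + px,  /* triangle 1 */
        x + px, y,      x + px, y + px, x,      y + px   /* triangle 2 */
    };
    for (int i = 0; i < 12; i++)
        verts[index * 12 + i] = quad[i];
}
```

Since the squares only change on a button press, the array (and a color array built the same way) can be filled once, uploaded to a VBO, and then each frame costs a single draw call instead of rows * cols of them.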
Can you not try to optimize it somehow? I'm not terribly familiar with graphics programming, but I'd imagine that if you are drawing 200,000 squares, the chances that all of them are actually visible seem unlikely. Could you not add some sort of isVisible flag to your square class that determines whether the square you want to draw is actually visible? Then the obvious next step is to modify your draw function so that if the square isn't visible, you don't draw it.
Or are you asking for someone to try to improve the current code you have, because if your performance is as bad as you say, I don't think making small changes to the above code will solve your problem. You'll have to rethink how you're doing your drawing.
It looks like what your code is actually trying to do is take a _model.rows × _model.columns 2D image and draw it upscaled by _model.pixelSize. If -[ColorModel getColorAtRow:andColumn:] is retrieving 3 bytes at a time from an array of color values, then you may want to consider uploading that array of color values into an OpenGL texture as GL_RGB/GL_UNSIGNED_BYTE data and letting the GPU scale up all of your pixels at once.
Alternatively, if scaling up the contents of your ColorModel is the only reason you're using OpenGL ES and GLKit, you might be better off wrapping your color values into a CGImage and letting UIKit and Core Animation do the drawing for you. How often do the color values in the ColorModel get updated?
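The texture-upload idea can be sketched as a packing step (pack_rgb and the separate channel arrays are made-up names standing in for whatever ColorModel stores): build a tightly packed RGB byte array, hand it to glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, cols, rows, 0, GL_RGB, GL_UNSIGNED_BYTE, data), and draw one textured quad scaled by pixelSize.

```c
#include <assert.h>

/* Pack a rows x cols grid of per-cell colors into the tightly packed
 * RGB byte layout glTexImage2D expects for GL_RGB / GL_UNSIGNED_BYTE. */
static void pack_rgb(unsigned char *dst, const unsigned char *r,
                     const unsigned char *g, const unsigned char *b,
                     int rows, int cols)
{
    for (int i = 0; i < rows * cols; i++) {
        dst[3 * i]     = r[i];
        dst[3 * i + 1] = g[i];
        dst[3 * i + 2] = b[i];
    }
}
```

Note that tightly packed RGB rows also need glPixelStorei(GL_UNPACK_ALIGNMENT, 1) before the upload unless each row's byte count happens to be a multiple of 4.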

Resources