directx - texture render result is incorrect

case 1:
I create the texture by
D3DXCreateTexture(device, width, height, 0, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture)
and update the texture with white color.
D3DLOCKED_RECT lr;
HRESULT hr = texture->LockRect(0, &lr, NULL, 0);
ConvertRGB2RGBA(width, height, pixels, stride, (unsigned char*)(lr.pBits), lr.Pitch);
texture->UnlockRect(0);
the rendered result is incorrect (not the pure white I expect).
What I want is pure white on the surface.
The z value of all the vertices is 0.0f.
case 2:
If I create the texture by
D3DXCreateTextureFromFile(device, "e:\\test.bmp", &texture);
and do not update the texture, it renders absolutely correctly.
case 3:
If I create the texture from file as in case 2, and update the texture as in case 1, the result is incorrect: faint traces of the test.bmp content remain.
conclusion:
There must be something wrong with updating the texture. What's wrong???
SOLVED!!! Change the levels param to 1, then it works.
D3DXCreateTexture(device, width, height, 1, D3DUSAGE_DYNAMIC, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT, &texture)

Congratulations. When the mipmap levels argument of D3DXCreateTexture is set to 0, a fully mipmapped texture is created. If you want to use mipmaps, your function ConvertRGB2RGBA should fill not only the top-level surface but also the lower mip levels.
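For illustration, a minimal sketch (not from the original post) of filling every mip level when the texture is created with levels = 0; it reuses the question's ConvertRGB2RGBA, pixels and stride, and in a real implementation the source pixels would have to be downsampled to each level's size:
DWORD levelCount = texture->GetLevelCount();
UINT levelWidth = width;
UINT levelHeight = height;
for (DWORD level = 0; level < levelCount; ++level)
{
    D3DLOCKED_RECT lr;
    if (SUCCEEDED(texture->LockRect(level, &lr, NULL, 0)))
    {
        // NOTE: the source data would also need to be resized to levelWidth x levelHeight here.
        ConvertRGB2RGBA(levelWidth, levelHeight, pixels, stride, (unsigned char*)lr.pBits, lr.Pitch);
        texture->UnlockRect(level);
    }
    if (levelWidth > 1) levelWidth /= 2;
    if (levelHeight > 1) levelHeight /= 2;
}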


Related

Trails effect, clearing a frame buffer with a transparent quad

I want to get a trails effect. I am drawing particles to a frame buffer which is never cleared (it accumulates draw calls). Fading out is done by drawing a black quad with a small alpha, for example 0.0, 0.0, 0.0, 0.1. A two-step process, repeated per frame:
- drawing a black quad
- drawing particles at new positions
All works nicely; the moving particles produce long trails, EXCEPT the black quad does not clear the FBO down to perfect zero. Faint trails remain forever (e.g. the buffer's RGBA = 4,4,4,255).
I assume the problem starts when the blending function multiplies the small values of the FBO's 8-bit RGBA (destination color) by, for example, (1.0 - 0.1) = 0.9, and rounding prevents any further reduction. For example 4 * 0.9 = 3.6 -> rounded back to 4, forever.
Is my method (drawing a black quad) inherently useless for trails? I cannot find a blend function that could help, since all of them multiply the DST color by some value, which must be very small to produce long trails.
The trails are drawn using this code:
GLint drawableFBO; // glGetIntegerv writes into a GLint
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &drawableFBO);
glBindFramebuffer(GL_FRAMEBUFFER, FBO); /// has an attached texture glFramebufferTexture2D -> FBOTextureId
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
glUseProgram(fboClearShader);
glUniform4f(fboClearShader.uniforms.color, 0.0, 0.0, 0.0, 0.1);
glUniformMatrix4fv(fboClearShader.uniforms.modelViewProjectionMatrix, 1, 0, mtx.m);
glBindVertexArray(fboClearShaderBuffer);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glUseProgram(particlesShader);
glUniformMatrix4fv(shader.uniforms.modelViewProjectionMatrix, 1, 0, mtx.m);
glUniform1f(shader.uniforms.globalAlpha, 0.9);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, particleTextureId);
glBindVertexArray(particlesBuffer);
glDrawArrays(GL_TRIANGLES, 0, 1000*6);
/// back to drawable buffer
glBindFramebuffer(GL_FRAMEBUFFER, drawableFBO);
glUseProgram(fullScreenShader);
glBindVertexArray(screenQuad);
glBlendFunc(GL_ONE, GL_ONE);
glBindTexture(GL_TEXTURE_2D, FBOTextureId);
glDrawArrays(GL_TRIANGLES, 0, 6);
Blending is not only defined by the blend function glBlendFunc, it is also defined by the blend equation glBlendEquation.
By default the source value and the destination value are summed up after they are processed by the blend function.
Use a blend equation which subtracts a tiny value from the destination buffer, so the destination color is slightly decreased in each frame and finally becomes 0.0.
The results of the blend equation are clamped to the range [0, 1].
e.g.
dest_color = dest_color - RGB(0.01)
The blend equation which subtracts the source color from the destination color is GL_FUNC_REVERSE_SUBTRACT:
float dec = 0.01f; // should be at least 1.0/256.0
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT);
glBlendFunc(GL_ONE, GL_ONE);
glUseProgram(fboClearShader);
glUniform4f(fboClearShader.uniforms.color, dec, dec, dec, 0.0);
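Since the blend equation is global state, the particle pass would otherwise keep subtracting as well. A minimal sketch of one frame, assuming the same shaders and buffers as in the question, that restores the default additive equation before drawing the particles:
// fade pass: subtract a small constant from the accumulation buffer
glEnable(GL_BLEND);
glBlendEquation(GL_FUNC_REVERSE_SUBTRACT); // result = destination - source
glBlendFunc(GL_ONE, GL_ONE);
glUseProgram(fboClearShader);
glUniform4f(fboClearShader.uniforms.color, dec, dec, dec, 0.0);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// particle pass: switch back to the default additive equation
glBlendEquation(GL_FUNC_ADD);
glBlendFunc(GL_ONE, GL_ONE);
glUseProgram(particlesShader);
glDrawArrays(GL_TRIANGLES, 0, 1000*6);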

What's the effect of geometry on the final texture output in WebGL?

Updated with more explanation around my confusion
(This is how a non-graphics developer imagines the rendering process!)
I specify a 2x2 square to be drawn in by way of two triangles. I'm not going to talk about the triangles anymore; the square is a lot better. Let's say the square gets drawn in one piece.
I have not specified any units for my drawing. The only places in my code where I do something like that are: the canvas size (set to 1x1 in my case) and the viewport (I always set this to the dimensions of my output texture).
Then I call draw().
What happens is this: regardless of the size of my texture (be it 1x1 or 10000x10000), all my texels are filled with the data (color) that I returned from my frag shader. This works perfectly each time.
So now I'm trying to explain this to myself:
The GPU is only concerned with coloring the pixels.
Pixel is the smallest unit that the GPU deals with (colors).
Depending on how many pixels my 2x2 square is mapped to, I should be running into one of the following 3 cases:
The number of pixels (to be colored) and my output texture dims match one to one: In this ideal case, for each pixel, there would be one value assigned to my output texture. Very clear to me.
The number of pixels is fewer than my output texture dims. In this case, I should expect some of the output texels to have the exact same value (the color of the pixel they fall under). For instance, if the GPU ends up drawing 16x16 pixels and my texture is 64x64, then I'll have blocks of 4x4 texels which get the same value. I have not observed such a case regardless of the size of my texture, which means there is never a case where we end up with fewer pixels (really hard to imagine -- let's keep going).
The number of pixels ends up being more than the number of texels. In this case, the GPU should decide which value to assign to my texel. Would it average out the pixel colors? If the GPU is coloring 64x64 pixels and my output texture is 16x16, then I should expect each texel to get an average color of the 4x4 pixels it covers. Anyway, in this case my texture should be completely filled with values I didn't intend specifically (like averaged-out ones); however this has not been the case.
I didn't even talk about how many times my frag shader gets called because it didn't matter. The results would be deterministic anyway.
So, considering that I have never run into the 2nd or 3rd case, where the values in my texels are not what I expected, the only conclusion I can come up with is that the whole assumption of the GPU trying to render pixels is actually wrong. When I assign an output texture to it (which is supposed to stretch over my 2x2 square all the time), the GPU will happily oblige and call my frag shader for each texel. Somewhere along the line the pixels get colored too.
But the above lunatic explanation also fails to answer why I end up with no values or incorrect values in my texels if I stretch my geometry to 1x1 or 4x4 instead of 2x2.
Hopefully the above fantastic narration of the GPU coloring process has given you clues as to where I'm getting this wrong.
Original Post:
We're using WebGL for general computation. As such we create a rectangle and draw 2 triangles in it. Ultimately what we want is the data inside the texture mapped to this geometry.
What I don't understand is if I change the rectangle from (-1,-1):(1,1) to say (-0.5,-0.5):(0.5,0.5) suddenly data is dropped from the texture bound to the framebuffer.
I'd appreciate it if someone could help me understand the correlations. The only places where the real dimensions of the output texture come into play are the calls to viewport() and readPixels().
Below are relevant pieces of code for you to see what I'm doing:
... // canvas is created with size: 1x1
... // context attributes passed to canvas.getContext()
contextAttributes = {
alpha: false,
depth: false,
antialias: false,
stencil: false,
preserveDrawingBuffer: false,
premultipliedAlpha: false,
failIfMajorPerformanceCaveat: true
};
... // default geometry
// Sets of x,y,z (for rectangle) and s,t coordinates (for texture)
return new Float32Array([
-1.0, 1.0, 0.0, 0.0, 1.0, // upper left
-1.0, -1.0, 0.0, 0.0, 0.0, // lower left
1.0, 1.0, 0.0, 1.0, 1.0, // upper right
1.0, -1.0, 0.0, 1.0, 0.0 // lower right
]);
...
const geometry = this.createDefaultGeometry();
gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
gl.bufferData(gl.ARRAY_BUFFER, geometry, gl.STATIC_DRAW);
... // binding to the vertex shader attribs
gl.vertexAttribPointer(positionHandle, 3, gl.FLOAT, false, 20, 0);
gl.vertexAttribPointer(textureCoordHandle, 2, gl.FLOAT, false, 20, 12);
gl.enableVertexAttribArray(positionHandle);
gl.enableVertexAttribArray(textureCoordHandle);
... // setting up framebuffer; I set the viewport to output texture dimensions (I think this is absolutely needed but not sure)
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.bindFramebuffer(gl.FRAMEBUFFER, this.framebuffer);
gl.framebufferTexture2D(
gl.FRAMEBUFFER, // The target is always a FRAMEBUFFER.
gl.COLOR_ATTACHMENT0, // We are providing the color buffer.
gl.TEXTURE_2D, // This is a 2D image texture.
texture, // The texture.
0); // 0, we aren't using MIPMAPs
gl.viewport(0, 0, width, height);
... // reading from output texture
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.framebufferTexture2D(
gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, texture,
0);
gl.readPixels(0, 0, width, height, gl.RED, gl.FLOAT, buffer);
new answer
I'm just saying the same thing yet again (3rd time?)
Copied from below
WebGL is destination based. That means it's going to iterate over the pixels of the line/point/triangle it's drawing and for each pixel call the fragment shader and ask "what value should I store here?"
It's destination based. It's going to draw each pixel exactly once. For that pixel it's going to ask "what color should I make this?"
destination based loop
for (let i = start; i < end; ++i) {
fragmentShaderFunction(); // must set gl_FragColor
destinationTextureOrCanvas[i] = gl_FragColor;
}
You can see in the loop above there is no setting of any random destination, and no setting of any part of the destination twice. It just runs from start to end and, exactly once for each pixel in the destination between start and end, asks what color it should make that pixel.
How do you set start and end? Again, to make it simple let's assume a 200x1 texture so we can ignore Y. It works like this:
vertexShaderFunction(); // must set gl_Position
const start = clipspaceToArrayspaceViaViewport(viewport, gl_Position.x);
vertexShaderFunction(); // must set gl_Position
const end = clipspaceToArrayspaceViaViewport(viewport, gl_Position.x);
for (let i = start; i < end; ++i) {
fragmentShaderFunction(); // must set gl_FragColor
texture[i] = gl_FragColor;
}
see below for clipspaceToArrayspaceViaViewport
What is viewport? viewport is what you set when you called `gl.viewport(x, y, width, height)`
So, set gl_Position.x to -1 and +1, viewport.x to 0 and viewport.width to 200 (the width of the texture); then start will be 0 and end will be 200.
Set gl_Position.x to .25 and .75, viewport.x to 0 and viewport.width to 200 (the width of the texture); then start will be 125 and end will be 175.
I honestly feel like this answer is leading you down the wrong path. It's not remotely this complicated. You don't have to understand any of this to use WebGL IMO.
The simple answer is
You set gl.viewport to the sub rectangle you want to affect in your destination (canvas or texture it doesn't matter)
You make a vertex shader that somehow sets gl_Position to clip space coordinates (they go from -1 to +1) across the texture
Those clip space coordinates get converted to the viewport space. It's basic math to map one range to another range, but it's mostly not important. It seems intuitive that -1 will draw to the viewport.x pixel and +1 will draw to the viewport.x + viewport.width - 1 pixel. That's what "maps from clip space to the viewport settings" means.
It's most common for the viewport settings to be (x = 0, y = 0, width = width of destination texture or canvas, height = height of destination texture or canvas)
So that just leaves what you set gl_Position to. Those values are in clip space just like it explains in this article.
You can make it simple, if you want, by converting from pixel space to clip space just like it explains in this article:
zeroToOne = someValueInPixels / destinationDimensions;
zeroToTwo = zeroToOne * 2.0;
clipspace = zeroToTwo - 1.0;
gl_Position = clipspace;
If you continue the articles they'll also show adding a value (translation) and multiplying by a value (scale)
Using just those 2 things and a unit square (0 to 1) you can choose any rectangle on the screen. Want to affect pixels 123 to 127? That's 5 units, so scale = 5, translation = 123. Then apply the math above to convert from pixels to clip space and you'll get the rectangle you want.
If you continue further through those articles you'll eventually get to the point where that math is done with matrices, but you can do that math however you want. It's like asking "how do I compute the value 3". Well, 1 + 1 + 1, or 3 + 0, or 9 / 3, or (100 - 70) / 10, or (7^2 - 19) / 10, or ????
I can't tell you how to set gl_Position. I can only tell you make up whatever math you want and set it to *clip space* and then give an example of converting from pixels to clipspace (see above) as just one example of some possible math.
old answer
I get that this might not be clear; I don't know how else to help. WebGL draws lines, points, or triangles to a 2D array. That 2D array is either the canvas, a texture (as a framebuffer attachment) or a renderbuffer (as a framebuffer attachment).
The size of the area is defined by the size of the canvas, texture, renderbuffer.
You write a vertex shader. When you call gl.drawArrays(primitiveType, offset, count) you're telling WebGL to call your vertex shader count times. Assuming primitiveType is gl.TRIANGLES then for every 3 vertices generated by your vertex shader WebGL will draw a triangle. You specify that triangle by setting gl_Position in clip space.
Assuming gl_Position.w is 1, clip space goes from -1 to +1 in X and Y across the destination canvas/texture/renderbuffer. (gl_Position.x and gl_Position.y are divided by gl_Position.w, which is not really important for your case.)
To convert back to actually pixels your X and Y are converted based on the settings of gl.viewport. Let's just do X
pixelX = ((clipspace.x / clipspace.w) * .5 + .5) * viewport.width + viewport.x
WebGL is destination based. That means it's going to iterate over the pixels of the line/point/triangle it's drawing and for each pixel call the fragment shader and ask "what value should I store here?"
Let's translate that to JavaScript in 1D. Let's assume you have a 1D array:
const dst = new Array(100);
Let's make a function that takes a start and end and sets values between
function setRange(dst, start, end, value) {
for (let i = start; i < end; ++i) {
dst[i] = value;
}
}
You can fill the entire 100 element array with 123
const dst = new Array(100);
setRange(dst, 0, 100, 123);
To set the last half of the array to 456
const dst = new Array(100);
setRange(dst, 50, 100, 456);
Let's change that to use clip space like coordinates
function setClipspaceRange(dst, clipStart, clipEnd, value) {
const start = clipspaceToArrayspace(dst, clipStart);
const end = clipspaceToArrayspace(dst, clipEnd);
for (let i = start; i < end; ++i) {
dst[i] = value;
}
}
function clipspaceToArrayspace(array, clipspaceValue) {
// convert clipspace value (-1 to +1) to (0 to 1)
const zeroToOne = clipspaceValue * .5 + .5;
// convert zeroToOne value to array space
return Math.floor(zeroToOne * array.length);
}
This function now works just like the previous one except it takes clip space values instead of array indices.
// fill entire array with 123
const dst = new Array(100);
setClipspaceRange(dst, -1, +1, 123);
Set the last half of the array to 456
setClipspaceRange(dst, 0, +1, 456);
Now abstract one more time. Instead of using the array's length use a setting
// viewport looks like `{ x: number, width: number} `
function setClipspaceRangeViaViewport(dst, viewport, clipStart, clipEnd, value) {
const start = clipspaceToArrayspaceViaViewport(viewport, clipStart);
const end = clipspaceToArrayspaceViaViewport(viewport, clipEnd);
for (let i = start; i < end; ++i) {
dst[i] = value;
}
}
function clipspaceToArrayspaceViaViewport(viewport, clipspaceValue) {
// convert clipspace value (-1 to +1) to (0 to 1)
const zeroToOne = clipspaceValue * .5 + .5;
// convert zeroToOne value to array space
return Math.floor(zeroToOne * viewport.width) + viewport.x;
}
Now to fill the entire array with 123
const dst = new Array(100);
const viewport = { x: 0, width: 100 };
setClipspaceRangeViaViewport(dst, viewport, -1, 1, 123);
To set the last half of the array to 456 there are now 2 ways. Way one is just like the previous, using 0 to +1:
setClipspaceRangeViaViewport(dst, viewport, 0, 1, 456);
You can also set the viewport to start half way through the array
const halfViewport = { x: 50, width: 50 };
setClipspaceRangeViaViewport(dst, halfViewport, -1, +1, 456);
I don't know if that was helpful or not.
The only other thing to add is that instead of a fixed value, you can replace it with a function that gets called every iteration to supply the value:
function setClipspaceRangeViaViewport(dst, viewport, clipStart, clipEnd, fragmentShaderFunction) {
const start = clipspaceToArrayspaceViaViewport(viewport, clipStart);
const end = clipspaceToArrayspaceViaViewport(viewport, clipEnd);
for (let i = start; i < end; ++i) {
dst[i] = fragmentShaderFunction();
}
}
Note this is the exact same thing that is said in this article and clarified somewhat in this article.

glTexImage2D reads beyond bounds of buffer (iOS)

In the following simple code, I load 1-channel data into a texture. I use glTexImage2D() with GL_LUMINANCE (which is a 1-channel format) and GL_UNSIGNED_BYTE, so it should take one byte per pixel. I allocate a buffer with size equal to the number of pixels (2 x 2) which represents the input pixel data (the values of the pixels don't matter for our purposes).
When you run the following code with Address Sanitizer enabled, it detects a heap buffer overflow in the call to glTexImage2D(), saying that it tried to read beyond the bounds of the heap-allocated buffer:
#import <OpenGLES/ES2/gl.h>
//...
EAGLContext* context = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
[EAGLContext setCurrentContext:context];
GLsizei width = 2, height = 2;
void *data = malloc(width * height); // contents don't matter for now
glTexImage2D(GL_TEXTURE_2D,
0,
GL_LUMINANCE,
width,
height,
0,
GL_LUMINANCE,
GL_UNSIGNED_BYTE,
data);
This is 100% reproducible and happens on both iOS simulator and device. Only if you increase the size of the buffer to 6 will it not overflow (2 bigger than the expected size of 4).
Sizes of 1x1 and 4x4 don't seem to have this problem, but 2x2 and 3x3 do. It seems kind of arbitrary.
What is wrong?
I have solved it thanks to @genpfault's comment.
I need to set the unpack alignment to 1:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
Specifically, the unpack alignment determines the alignment for the start of each row. The default value is 4. Since my rows don't have any special alignment, and there are no gaps between row bytes, the alignment should be 1.
The first row will always be aligned because malloc allocates 16-aligned buffers. But the second and subsequent rows were misaligned with the default alignment of 4 unless the row length was a multiple of 4 (this explains why 2x2 and 3x3 don't work, but 4x4 does). 1x1 happens to work because it has no second row.
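For reference, a minimal sketch of the corrected upload; the byte count in the comment follows the GL row-alignment rule and is consistent with the observation above that a 6-byte buffer was just large enough:
GLsizei width = 2, height = 2;
void *data = malloc(width * height); // 1 byte per pixel for GL_LUMINANCE / GL_UNSIGNED_BYTE
// With the default GL_UNPACK_ALIGNMENT of 4, every row except the last is assumed to be
// padded to a multiple of 4 bytes, so GL reads (height - 1) * 4 + width = 6 bytes here.
// Tightly packed rows need an alignment of 1:
glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE, width, height, 0,
             GL_LUMINANCE, GL_UNSIGNED_BYTE, data);
free(data);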

Render Large Texture To Smaller Renderbuffer

I have a render buffer that is 852x640 and a texture that is 1280x720. When I render the texture, it is getting cropped, not just stretched. I know the aspect ratio needs correcting, but how can I get it so that the full texture displays in the render buffer?
//-------------------------------------
glGenFramebuffers(1, &frameBufferHandle);
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glGenRenderbuffers(1, &renderBufferHandle);
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext renderbufferStorage:GL_RENDERBUFFER fromDrawable:(CAEAGLLayer *)self.layer];
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_WIDTH, &renderBufferWidth);
glGetRenderbufferParameteriv(GL_RENDERBUFFER, GL_RENDERBUFFER_HEIGHT, &renderBufferHeight);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, renderBufferHandle);
//-------------------------------------
static const GLfloat squareVertices[] = {
-1.0f, 1.0f,
1.0f, 1.0f,
-1.0f, -1.0f,
1.0f, -1.0f
};
static const GLfloat horizontalFlipTextureCoordinates[] = {
0.0f, 1.0f,
1.0f, 1.0f,
0.0f, 0.0f,
1.0f, 0.0f,
};
size_t frameWidth = CVPixelBufferGetWidth(pixelBuffer);
size_t frameHeight = CVPixelBufferGetHeight(pixelBuffer);
CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
frameWidth,
frameHeight,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
if (!texture || err) {
NSLog(@"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
return;
}
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glViewport(0, 0, renderBufferWidth, renderBufferHeight); // setting this to 1280x720 fixes the aspect ratio but still crops
glBindFramebuffer(GL_FRAMEBUFFER, frameBufferHandle);
glUseProgram(shaderPrograms[PASSTHROUGH]);
// Update attribute values.
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, squareVertices);
glEnableVertexAttribArray(ATTRIB_VERTEX);
glVertexAttribPointer(ATTRIB_TEXTUREPOSITON, 2, GL_FLOAT, 0, 0, horizontalFlipTextureCoordinates);
glEnableVertexAttribArray(ATTRIB_TEXTUREPOSITON);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
// Present
glBindRenderbuffer(GL_RENDERBUFFER, renderBufferHandle);
[oglContext presentRenderbuffer:GL_RENDERBUFFER];
EDIT
I'm still running into issues. I've included more source. Basically, I need the entire raw input texture to display in wide screen while also writing the raw texture to disk.
When rendering to a smaller texture, things are automatically scaled; is this not the case with a renderbuffer?
I guess I could make another passthrough to a smaller texture, but that would slow things down.
First of all, keep glViewport(0, 0, renderBufferWidth, renderBufferHeight); with 852x640.
The problem is in your squareVertices - it looks like it holds coordinates that represent the texture size. You need to set it according to the renderbuffer size.
The idea is that the texture is mapped onto your squareVertices rect. So you can render a texture of any size mapped onto a rect of any size - the texture image will be scaled to fit the rect.
[Update: square vertices]
In your case it should be:
{
0.0f, (float)renderBufferHeight/frameHeight,
(float)renderBufferWidth/frameWidth, (float)renderBufferHeight/frameHeight,
0.0f, 0.0f,
(float)renderBufferWidth/frameWidth, 0.0f,
};
But this is not a good solution in general. In theory, the rectangle's size on screen is determined by the vertex positions and the transformation matrix; each vertex is multiplied by the matrix before rendering. It looks like you don't set an OpenGL projection matrix. With a correct orthographic projection your vertices could use pixel-equivalent positions.
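If the goal is simply to show the whole texture without hard-coding ratios, here is a minimal aspect-fit (letterbox) sketch; the helper variables (texAspect, bufAspect, scaleX, scaleY, aspectFitVertices) are hypothetical, while frameWidth/frameHeight, renderBufferWidth/renderBufferHeight and ATTRIB_VERTEX come from the question's code:
float texAspect = (float)frameWidth / (float)frameHeight;               // e.g. 1280/720
float bufAspect = (float)renderBufferWidth / (float)renderBufferHeight; // e.g. 852/640
float scaleX = 1.0f, scaleY = 1.0f;
if (texAspect > bufAspect)
    scaleY = bufAspect / texAspect; // texture is wider: shrink vertically (letterbox)
else
    scaleX = texAspect / bufAspect; // texture is taller: shrink horizontally (pillarbox)
const GLfloat aspectFitVertices[] = {
    -scaleX,  scaleY,
     scaleX,  scaleY,
    -scaleX, -scaleY,
     scaleX, -scaleY,
};
glVertexAttribPointer(ATTRIB_VERTEX, 2, GL_FLOAT, 0, 0, aspectFitVertices);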
Being new to OpenGL, I remember that a texture to be mapped should have power-of-two dimensions,
e.g. an image resolution of 256x256 or 512x512.
You can then SCALE the image using the
gl.glScalef(x,y,z); function accordingly, as per your requirements.
Get the height and width accordingly and put these in your glScalef call.
Try this, I hope it works.
Try these functions. My answer can be validated from the info at songho.ca.
glGenFramebuffers()
void glGenFramebuffers(GLsizei n, GLuint* ids)
n: the number of framebuffers to create.
ids: a pointer to a GLuint variable or an array to store the returned IDs. It returns the IDs of unused framebuffer objects. ID 0 means the default framebuffer, which is the window-system-provided framebuffer.
An FBO may be deleted by calling glDeleteFramebuffers() when it is not used anymore:
void glDeleteFramebuffers(GLsizei n, const GLuint* ids)
glBindFramebuffer()
Once an FBO is created, it has to be bound before using it:
void glBindFramebuffer(GLenum target, GLuint id)
The first parameter is the target, which should be GL_FRAMEBUFFER.
The second parameter is the ID of a framebuffer object.
Once an FBO is bound, all OpenGL operations affect the currently bound framebuffer object. The object ID 0 is reserved for the default window-system-provided framebuffer. Therefore, in order to unbind the current framebuffer (FBO), use ID 0 in glBindFramebuffer().
Try using those, or at least visit the link, which could help you a lot. Sorry, I'm not experienced in OpenGL, but I wanted to contribute the link and explain the 2 functions. I think you can use the info to write your code.
Oh boy, so the answer is that this was working all along ;) It turns out the high resolution preset mode on the iPhone 4 actually covers less area than the medium resolution preset. This threw me for a loop until Brigadir suggested what I should have done first all along: check the GPU snapshots.
I figured out the aspect ratio issue too by hacking the appropriate code in the GPUImage framework. https://github.com/bradLarson/GPUImage

Overlapping 2 transparent Texture2Ds in OpenGL ES

I'm trying to make a 2D game for the iPad with OpenGL. I'm new to OpenGL in general so this blending stuff is new.
My drawing code looks like this:
static CGFloat r=0;
r+=2.5;
r=remainder(r, 360);
glLoadIdentity();
//you can ignore the rotating and scaling
glRotatef(90, 0,0, -1);
glScalef(1, -1, 1);
glTranslatef(-1024, -768, 0);
glClearColor(0.3,0.8,1, 1.0);
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
glEnable(GL_TEXTURE_2D);
glEnable (GL_BLEND);
glBlendFunc (GL_ONE,GL_ONE_MINUS_SRC_ALPHA);
[texture drawInRect:CGRectMake(512-54, abs(sin(((r+45)/180)*3.14)*500), 108, 108)];
[texture drawInRect:CGRectMake(512-54, abs(sin((r/180)*3.14)*500), 108, 108)];
("texture" is a Texture2D that has a transparent background)
All I need to know how to do is make it so that the blue box around one texture doesn't cover up the other one.
Sounds like you just need to open up the texture image in your favourite image editor and set the blue area to be 0% opaque (i.e. 0) in the alpha channel. The SRC_ALPHA part of GL_ONE_MINUS_SRC_ALPHA means the alpha value in the source texture.
Chances are you're using 32-bit colour, in which case you'll have four channels, 8 bits for Red, Green, Blue and Alpha.
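To make the dependence on the texture's alpha channel explicit, here is a short sketch of the two common blend setups; whether Texture2D delivers premultiplied alpha depends on how the image was loaded, so treat the choice below as an assumption to verify:
glEnable(GL_BLEND);
// if the texture's RGB is premultiplied by its alpha (common for images decoded via Core Graphics):
glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA);
// if the texture uses straight (non-premultiplied) alpha, use this instead:
// glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);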
