Read pixels from off-screen OpenGL pixel buffer in iOS (OpenGL ES) - ios

I want to read pixels from an off-screen (not backed by a CAEAGLLayer) Framebuffer. My code to create the buffer looks like:
glGenFramebuffersOES(1, &_storeFramebuffer);
glGenRenderbuffersOES(1, &_storeRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _storeFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _storeRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, _storeRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, w, h);
I read raw pixels with:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _storeFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _storeRenderbuffer);
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
Rendering to this buffer works, and I can copy from it to the screen. But I can't get the raw pixels out: glReadPixels always returns zeros, and glReadBuffer doesn't seem to exist in OpenGL ES. Reading from the on-screen framebuffer with glReadPixels works fine. Any ideas?

Solved. RGBA to BGRA conversion is not supported by glReadPixels on iOS.
Changing
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
to
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height, GL_RGBA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
solves the problem. glGetError is my new friend ;)
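For reference, a minimal sketch of the working read-back with the glGetError check that exposed the problem (names such as _storeFramebuffer, _videoDimensions and outPixelBuffer come from the question; the pixel buffer is assumed to be locked around the copy and to have a row stride of exactly width * 4 bytes):
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _storeFramebuffer);
CVPixelBufferLockBaseAddress(outPixelBuffer, 0);
// GL_RGBA/GL_UNSIGNED_BYTE is always accepted; the BGRA combination was rejected here.
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height,
             GL_RGBA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
GLenum readError = glGetError();
if (readError != GL_NO_ERROR) {
    NSLog(@"glReadPixels failed: 0x%04x", readError);
}
CVPixelBufferUnlockBaseAddress(outPixelBuffer, 0);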

Related

How to view a renderbuffer of GLuints on the screen?

To get a sort of index of the elements drawn on the screen, I've created a framebuffer that draws the objects as solid colors into an attachment of type GL_R32UI.
The framebuffer I created has two renderbuffers attached, one for color and one for depth. Here is a schematic of how it was created, using Python:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
rbo = glGenRenderbuffers(2) # GL_DEPTH_COMPONENT16 and GL_COLOR_ATTACHMENT0
glBindRenderbuffer(GL_RENDERBUFFER, rbo[0])
glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT16, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, rbo[0])
glBindRenderbuffer(GL_RENDERBUFFER, rbo[1])
glRenderbufferStorage(GL_RENDERBUFFER, GL_R32UI, width, height)
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, rbo[1])
glBindRenderbuffer(GL_RENDERBUFFER, 0)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
I read the indices with glReadPixels like this:
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glReadPixels(x, y, threshold, threshold, GL_RED_INTEGER, GL_UNSIGNED_INT, r_data)
glBindFramebuffer(GL_FRAMEBUFFER, 0)
The code works perfectly; I have no problem with that.
But for debugging, I'd like to see the indices on the screen.
With the data obtained below, how could I see the result of drawing the indices (unsigned int) on the screen?
active_fbo = glGetIntegerv(GL_FRAMEBUFFER_BINDING)
my_indices_fbo = my_fbo
my_rbo_depth = rbo[0]
my_rbo_color = rbo[1]
## how mix my_rbo_color and cur_fbo??? ##
glBindFramebuffer(gl.GL_FRAMEBUFFER, active_fbo)
glBlitFramebuffer transfers a rectangle of pixel values from one region of a read framebuffer to another region of a draw framebuffer:
glBindFramebuffer( GL_READ_FRAMEBUFFER, my_fbo );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, active_fbo );
glBlitFramebuffer( 0, 0, width, height, 0, 0, width, height, GL_COLOR_BUFFER_BIT, GL_NEAREST );
Note that you have to be careful here: a GL_INVALID_OPERATION error will occur if the read buffer contains unsigned integer values and any draw buffer does not. Since the internal format of the framebuffer's color attachment is GL_R32UI, while the internal format of the drawing buffer is usually something like GL_RGBA8, this may not work, or it may not do what you expect.
But you can create a framebuffer with a texture attached to its color plane and use the texture as input to a post pass, where you draw a quad over the whole canvas.
First you have to create the texture with the same size as the framebuffer:
ColorMap0 = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, ColorMap0);
glTexImage2D(GL_TEXTURE_2D, 0, GL_R32UI, width, height, 0, GL_RED_INTEGER, GL_UNSIGNED_INT, 0);
You have to attach the texture to the frame buffer:
my_fbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, my_fbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ColorMap0, 0);
When you have drawn the scene, you have to unbind the framebuffer:
glBindFramebuffer(GL_FRAMEBUFFER, 0)
Now you can use the texture as input for a final pass. Simply bind the texture, enable 2D texturing, and draw a quad over the whole canvas. The quad should range from (-1, -1) to (1, 1), with texture coordinates ranging from (0, 0) to (1, 1). Of course you can use a shader with a texture sampler uniform in the fragment shader for that; you can read the texel from the texture and write it to the fragment any way you want.
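For the shader variant, a fragment shader along these lines would make the integer indices visible. This is only a sketch: the #version 330 context, the sampler name indexMap, and the mapping of each index to a gray level are assumptions, not part of the original answer.
static const char *debug_fragment_src =
    "#version 330 core\n"
    "uniform usampler2D indexMap;   // the GL_R32UI color attachment\n"
    "in vec2 uv;\n"
    "out vec4 fragColor;\n"
    "void main() {\n"
    "    uint id = texture(indexMap, uv).r;\n"
    "    // Wrap the index into 0..255 and show it as a gray level.\n"
    "    fragColor = vec4(vec3(float(id % 256u) / 255.0), 1.0);\n"
    "}\n";
Note that integer textures have to be sampled with GL_NEAREST filtering; with a linear filter the texture is incomplete and the sampler returns nothing useful.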
Extension to the answer
If performance is not important, you can convert the buffer on the CPU and draw it to the canvas after reading the framebuffer with glReadPixels. For that you can leave your code as it is, but you have to convert the buffer to a format appropriate for the drawing buffer; I suggest the internal format GL_RGBA8 or GL_RGB8. Then create a new texture with the converted buffer data.
debugTexturePlane = ...;
debugTexture = glGenTextures(1);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, debugTexturePlane);
From here you have two possibilities.
Either you create a new framebuffer and attach the texture to its color plane:
debugFbo = glGenFramebuffers(1)
glBindFramebuffer(GL_FRAMEBUFFER, debugFbo)
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, debugTexture, 0);
and use glBlitFramebuffer as described above to copy from the debug framebuffer to the drawing buffer.
This should not be a problem, because the internal formats of the two buffers are now compatible.
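Concretely, with the names used above, that copy could look like this (a sketch, assuming the framebuffer stored in active_fbo is the drawing target):
glBindFramebuffer(GL_READ_FRAMEBUFFER, debugFbo);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, active_fbo);
// Both color planes now hold normalized 8-bit data, so the blit is legal.
glBlitFramebuffer(0, 0, width, height, 0, 0, width, height,
                  GL_COLOR_BUFFER_BIT, GL_NEAREST);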
Or you draw a textured quad over the whole viewport. The code may look like this (old school):
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, debugTexture);
glBegin(GL_QUADS);
glTexCoord2f(0.0, 0.0); glVertex2f(-1.0, -1.0);
glTexCoord2f(0.0, 1.0); glVertex2f(-1.0, 1.0);
glTexCoord2f(1.0, 1.0); glVertex2f( 1.0, 1.0);
glTexCoord2f(1.0, 0.0); glVertex2f( 1.0, -1.0);
glEnd();

glTexSubImage2D 1282 - invalid operation in GL ES 3.1

I am trying to use:
layout (binding = 0, rgba8ui) readonly uniform uimage2D input;
in a compute shader. In order to bind a texture to this I am using:
glBindImageTexture(0, texture_name, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8);
and it seems that in order for this bind to work the texture has to be immutable, so I've switched from:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
to:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8UI, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
But this generates "Invalid operation" (specifically, the glTexSubImage2D() call generates it). Looking at the documentation, I discovered that this call may raise 1282 for the following reasons:
GL_INVALID_OPERATION is generated if the texture array has not been defined by a previous glTexImage2D or glCopyTexImage2D operation whose internalformat matches the format of glTexSubImage2D.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_5_6_5 and format is not GL_RGB.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_4_4_4_4 or GL_UNSIGNED_SHORT_5_5_5_1 and format is not GL_RGBA
but none of these applies in my case.
The first of them might seem to be the problem (considering I am using glTexStorage2D(), not glTexImage2D()), but it is not, because in the case of a float texture the same mechanism works:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
instead of:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, pixels);
This is probably irrelevant, but both methods work well on PC.
Any suggestions on why this is happening?
The internalFormat you use in glTexImage2D and glBindImageTexture should be the same and be compatible with your sampler. For a uimage2D, try using GL_RGBA8UI everywhere.
Also, for transfers to GL_RGBA8UI (and other integer formats) you need to use GL_RGBA_INTEGER as format.
glBindImageTexture(0, texture_name, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8UI);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels);
Using the format GL_RGBA_INTEGER should also make the glTexSubImage2D variant work.
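Put together, a sketch of the immutable-storage path with those corrections applied (texture_name, width, height and pixels are the variables from the question):
// Allocate immutable storage with the integer internal format...
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8UI, width, height);
// ...upload with the matching *_INTEGER pixel-transfer format...
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels);
// ...and bind it to image unit 0 with the same internal format as the storage.
glBindImageTexture(0, texture_name, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8UI);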

Migrating from glReadPixels to CVOpenGLESTextureCache

Currently, I use glReadPixels in an iPad app to save the contents of an OpenGL texture attached to a framebuffer, which is terribly slow. The texture has a size of 1024x768, and I plan on supporting Retina displays at 2048x1536. The retrieved data is saved to a file.
After reading several sources, using CVOpenGLESTextureCache seems to be the only faster alternative. However, I could not find any guide or documentation to use as a good starting point.
How do I rewrite my code so it uses CVOpenGLESTextureCache? What parts of the code need to be rewritten? Using third-party libraries is not a preferred option unless there is already documentation on how to do this.
Code follows below:
//Generate a framebuffer for drawing to the texture
glGenFramebuffers(1, &textureFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);
//Create the texture itself
glGenTextures(1, &drawingTexture);
glBindTexture(GL_TEXTURE_2D, drawingTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F_EXT, pixelWidth, pixelHeight, 0, GL_RGBA32F_EXT, GL_UNSIGNED_BYTE, NULL);
//When drawing to or reading the texture, change the active buffer like that:
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureId, 0);
//When the data of the texture needs to be retrieved, use glReadPixels:
GLubyte *buffer = (GLubyte *) malloc(pixelWidth * pixelHeight * 4);
glReadPixels(0, 0, pixelWidth, pixelHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)buffer);
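One common shape of the texture-cache approach, sketched here with assumed names (eaglContext, pixelWidth, pixelHeight) and without error checking: instead of reading pixels back, render into a texture whose storage is a CVPixelBuffer, then read the result from the buffer's base address.
// Requires the CoreVideo framework (CVOpenGLESTextureCache.h), iOS 5 or later.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);
// The pixel buffer must be IOSurface-backed for the cache to accept it.
CFDictionaryRef empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFMutableDictionaryRef attrs = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
    &kCFTypeDictionaryKeyCallBacks, &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs, kCVPixelBufferIOSurfacePropertiesKey, empty);
CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, pixelWidth, pixelHeight,
                    kCVPixelFormatType_32BGRA, attrs, &renderTarget);
// Wrap the pixel buffer in an OpenGL ES texture.
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache,
    renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA, pixelWidth, pixelHeight,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);
// Attach that texture to the framebuffer instead of drawingTexture.
glBindTexture(GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);
// After drawing, the pixels are reachable without glReadPixels.
glFinish();
CVPixelBufferLockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
const uint8_t *pixels = CVPixelBufferGetBaseAddress(renderTarget);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(renderTarget);
// ... write pixels (BGRA, bytesPerRow-strided) to the file here ...
CVPixelBufferUnlockBaseAddress(renderTarget, kCVPixelBufferLock_ReadOnly);
CVOpenGLESTextureCacheCreateTextureFromImage ties the GL texture to the IOSurface behind the pixel buffer, so the glFinish plus lock replaces the glReadPixels call; note that the stride (CVPixelBufferGetBytesPerRow) may be larger than pixelWidth * 4 and has to be respected when writing the file.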

Frame capture in Xcode fails

I am using Xcode 4.5.1 and testing on an iPhone 5 with iOS 6.
I was using the frame capture function without problem, but suddenly it stopped working.
When I press the frame capture button, it seems the frame is captured, and the phone switches to a blank screen, only to suddenly switch back to the application screen, and the application keeps running. I can still debug and pause the application, but there's no way to get the frame capture. I don't see any errors in the console either.
The reason it stopped working is this piece of code. The code is supposed to render something to a render texture, but the render texture seems to be blank. I wanted to use the frame capture function to find out what's wrong, but the code itself won't let me capture... :(
Any idea why?
// ------------- init function -----------------
// Create the framebuffer and bind it
glGenFramebuffers(1, &g_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
//Create the destination texture, and attach it to the framebuffer’s color attachment point.
glGenTextures(1, &g_texture);
glBindTexture(GL_TEXTURE_2D, g_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_texture, 0);
//Test the framebuffer for completeness
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
if(status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"failed to make complete framebuffer object %x", status);
} else {
NSLog(#"SkyPlugin initialized");
}
// ----------------- on update ------------------
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &oldFBO);
glGetIntegerv(GL_VIEWPORT, oldViewPort);
// set the framebuffer and clear
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
glViewport(0, 0, 32, 32);
//glClearColor(0.9f, 0.1f, 0.1f, 1.0f);
glDisable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT);
// Set shader
glUseProgram(m_program);
// do some glEnableVertexAttribArray
// ...
// texture setting
glActiveTexture(GL_TEXTURE0);
glUniform1i(m_uniform, 0);
ResourceManager* resourceManager = ResourceManager::GetInstance();
glBindTexture(GL_TEXTURE_2D, m_texture[0]);
// ----------- Draw -----------
// Draws a full-screen quad to copy textures
static const vertexDataUV quad[] = {
{/*v:*/{-1.f,-1.f,0}, /*t:*/{0,0}},
{/*v:*/{-1.f,1,0}, /*t:*/{0,1}},
{/*v:*/{1,-1.f,0}, /*t:*/{1,0}},
{/*v:*/{1,1,0}, /*t:*/{1,1}}
};
static const GLubyte indeces[] = {0,2,1,3};
glVertexAttribPointer(m_posAttrib, 3, GL_FLOAT, 0, sizeof(vertexDataUV), &quad[0].vertex);
glVertexAttribPointer(m_texCoordAttrib, 2, GL_FLOAT, 0, sizeof(vertexDataUV), &quad[0].uv);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, indeces);
// ------------ End
// go back to the main framebuffer!
glBindFramebuffer(GL_FRAMEBUFFER, oldFBO);
glViewport(oldViewPort[0], oldViewPort[1], oldViewPort[2], oldViewPort[3]);
glEnable(GL_DEPTH_TEST);
//glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
Edit: (2012/October/28)
I found out why the above code was not working: I forgot to bind a renderbuffer! The code below works, but the frame capture still fails when this code is active...
On init,
// Create the renderbuffer and bind it
glGenRenderbuffers(1, &g_renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, g_renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, w, h);
// Create the framebuffer and bind it
glGenFramebuffers(1, &g_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, g_renderbuffer);
//Create the destination texture, and attach it to the framebuffer’s color attachment point.
glGenTextures(1, &g_texture);
glBindTexture(GL_TEXTURE_2D, g_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_texture, 0);
On update,
glGetIntegerv(GL_RENDERBUFFER_BINDING, &oldRBO);
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &oldFBO);
glGetIntegerv(GL_VIEWPORT, oldViewPort);
// set the framebuffer and clear
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, g_renderbuffer);
glViewport(0, 0, 32, 32);
// ... draw stuff ...
End of update,
// go back to the main framebuffer!
glBindFramebuffer(GL_FRAMEBUFFER, oldFBO);
glBindRenderbuffer(GL_RENDERBUFFER, oldRBO);
It seems it was a bug in Xcode.
The latest version, Xcode 4.5.2, lets me capture the frame :)
After I capture the frame, I get an error in this part of the code:
// ... draw stuff ...
glActiveTexture(GL_TEXTURE0);
glUniform1i(MY_TEXTURE, 0);
On the glUniform1i call I get this error: "The specified operation is invalid for the current OpenGL state".
I have no idea why I get this error (the rendering is working), but I suspect it may be the reason why I wasn't able to capture a frame in the previous version of Xcode...
I've seen very similar behavior, and while it was a bit erratic, it did seem to be related to memory usage. Usually when it failed, the replay application seemed to be running out of memory (I'd see a memory warning in the console when it failed).
Switching to a device with more memory (an iPad 4 instead of an iPad 3) fixed it, but I was also occasionally able to work around it by reducing the amount of texture memory used; skipping the top mipmap level for all my textures would usually free up enough.

glTexSubImage2D -> GL_INVALID_OPERATION

Why is glTexSubImage2D() suddenly causing GL_INVALID_OPERATION?
I'm trying to upgrade my hopelessly outdated augmented reality app from iOS 4.x to iOS 5.x, but I'm having difficulties. I run iOS 5.0. Last week I ran iOS 4.3. My device is an iPhone 4.
Here is a snippet from my captureOutput:didOutputSampleBuffer:fromConnection: code
uint8_t *baseAddress = /* pointer to camera buffer */
GLuint texture = /* the texture name */
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 480, 360, GL_BGRA, GL_UNSIGNED_BYTE, baseAddress);
/* now glGetError(); -> returns 0x0502 GL_INVALID_OPERATION on iOS5.0, works fine on iOS4.x */
Here is a snippet from my setup code
GLuint texture = /* the texture name */
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
For simplicity I have inserted hardcoded values here. In my actual code I obtain these values with CVPixelBufferGetWidth/Height/BaseAddress. The EAGLContext is initialized with kEAGLRenderingAPIOpenGLES2.
Ah... I fixed it immediately after posting this question. I had to change GL_RGBA into GL_BGRA:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
Hope it helps someone.
By the way, if you want to write AR apps, consider using CVOpenGLESTextureCache instead of glTexSubImage2D. It's supposed to be faster.
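The camera-frame version of that idea looks roughly like this. It is only a sketch with assumed names: videoTextureCache is created once with CVOpenGLESTextureCacheCreate against the EAGLContext, sampleBuffer is the one delivered to captureOutput:didOutputSampleBuffer:fromConnection:, and the capture session is assumed to deliver kCVPixelFormatType_32BGRA frames.
CVImageBufferRef cameraFrame = CMSampleBufferGetImageBuffer(sampleBuffer);
size_t frameWidth  = CVPixelBufferGetWidth(cameraFrame);
size_t frameHeight = CVPixelBufferGetHeight(cameraFrame);
// Map the camera buffer straight into a GL texture; no glTexSubImage2D copy.
CVOpenGLESTextureRef cameraTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, videoTextureCache,
    cameraFrame, NULL, GL_TEXTURE_2D, GL_RGBA, (GLsizei)frameWidth, (GLsizei)frameHeight,
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &cameraTexture);
// Bind and use it exactly like the texture previously filled with glTexSubImage2D.
glBindTexture(CVOpenGLESTextureGetTarget(cameraTexture), CVOpenGLESTextureGetName(cameraTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
// ... draw with it ...
// Release the texture and flush the cache before the next frame arrives.
CFRelease(cameraTexture);
CVOpenGLESTextureCacheFlush(videoTextureCache, 0);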
