Migrating from glReadPixels to CVOpenGLESTextureCache - ios

Currently, I use glReadPixels in an iPad app to save the contents of an OpenGL texture attached to a framebuffer, which is terribly slow. The texture is 1024x768, and I plan to support the Retina display at 2048x1536. The retrieved data is saved to a file.
After reading several sources, CVOpenGLESTextureCache seems to be the only faster alternative. However, I could not find any guide or documentation to use as a starting point.
How do I rewrite my code so it uses CVOpenGLESTextureCache? Which parts of the code need to be rewritten? Using third-party libraries is not a preferred option unless there is already documentation on how to do this.
My code follows:
//Generate a framebuffer for drawing to the texture
glGenFramebuffers(1, &textureFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);
//Create the texture itself
glGenTextures(1, &drawingTexture);
glBindTexture(GL_TEXTURE_2D, drawingTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixelWidth, pixelHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // ES 2.0: internalformat must match format, and GL_RGBA32F_EXT is not a valid format/type combination here
//When drawing to or reading the texture, change the active buffer like that:
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, drawingTexture, 0);
//When the data of the texture needs to be retrieved, use glReadPixels:
GLubyte *buffer = (GLubyte *) malloc(pixelWidth * pixelHeight * 4);
glReadPixels(0, 0, pixelWidth, pixelHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)buffer);
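For reference, a rough sketch of the texture-cache approach (this is an outline, not drop-in code: error handling is omitted, and `eaglContext`, `pixelWidth`, `pixelHeight`, and the file-writing step are assumed from your existing code). The idea is to back the render texture with a CVPixelBuffer, so the rendered pixels can be read directly from memory instead of through glReadPixels:

```objc
// Create the texture cache once (eaglContext is your EAGLContext).
CVOpenGLESTextureCacheRef textureCache;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Create an IOSurface-backed BGRA pixel buffer that can back a GL texture.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer;
CVPixelBufferCreate(kCFAllocatorDefault, pixelWidth, pixelHeight,
                    kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)attrs, &pixelBuffer);

// Wrap the pixel buffer in a texture and attach it to the framebuffer.
CVOpenGLESTextureRef texture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer,
                                             NULL, GL_TEXTURE_2D, GL_RGBA,
                                             pixelWidth, pixelHeight,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
glBindTexture(CVOpenGLESTextureGetTarget(texture), CVOpenGLESTextureGetName(texture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(texture), 0);

// ... draw ...

// Instead of glReadPixels: finish rendering, then read the buffer's memory.
glFinish();
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *pixels = CVPixelBufferGetBaseAddress(pixelBuffer);
// write `pixels` to your file; note rows may be padded,
// so use CVPixelBufferGetBytesPerRow() rather than width * 4
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
```

glFinish() here is heavy-handed but makes the data dependency explicit; in practice you would double-buffer the pixel buffers so the GPU can keep working while you write the previous frame out.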

Related

Create MipMap image manually (iOS OpenGL)

Currently I'm loading a texture with this code:
GLKTextureInfo *t = [GLKTextureLoader textureWithContentsOfFile:path options:@{GLKTextureLoaderGenerateMipmaps: [NSNumber numberWithBool:YES]} error:&error];
But the result is not that good when the image is scaled down (jagged edges).
Can I create my own mipmaps using image software like Adobe Illustrator? If so, what is the rule for doing that?
And how do I load those images in code?
Thanks!
-- Edited --
Thanks for the answer. I got it working using:
GLuint texName;
glGenTextures(1, &texName);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// load image data here
...
// set up mipmap
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData0);
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA, 128,128, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData1);
...
glTexImage2D(GL_TEXTURE_2D, 8, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData8);
Yes, you can manually make the mipmaps and upload them yourself. If you're using Illustrator, presumably it has some method to output an image at a particular resolution. I'm not that familiar with Illustrator, so I don't know how that part works.
Once you have the various resolutions, you can upload them as levels of the same texture. Use glTexImage2D() to upload the original image, then upload the additional mipmap levels with further glTexImage2D() calls, setting the level parameter to the appropriate values. For example:
glTexImage2D (GL_TEXTURE_2D, level, etc...);
where level is the mipmap level for this particular image. Note that you will probably have to set the various texture parameters appropriately for mipmaps, such as:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, <whatever the max is here>);
(Caveat: those two parameters are desktop GL; OpenGL ES 2.0 has no GL_TEXTURE_BASE_LEVEL, and GL_TEXTURE_MAX_LEVEL only via the APPLE_texture_max_level extension. If you upload a complete chain down to 1x1, you don't need them.)
See the section on mipmaps on this page for details.
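One detail worth spelling out: a "complete" mipmap chain must go all the way down to 1x1, and the number of levels is floor(log2(max(w, h))) + 1. For the 256x256 example above that is 9 levels (0 through 8, which is why the last upload is imageData8 at 1x1). A small helper to compute the count (function name is mine):

```c
#include <assert.h>

/* Number of levels in a complete mipmap chain for a w x h texture:
 * floor(log2(max(w, h))) + 1. */
static int mip_level_count(int w, int h) {
    int levels = 1;
    int m = (w > h) ? w : h;
    while (m > 1) {
        m /= 2;     /* each level halves the larger dimension */
        levels++;
    }
    return levels;
}
```

So a 1024x768 texture needs 11 levels; if you hand-author mipmaps in Illustrator, you need an export at every one of those sizes (non-square textures halve each dimension independently, clamping at 1).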

Depth texture as color attachment

I'd like to attach a depth texture as a color attachment to a framebuffer. (I'm on iOS, and GL_OES_depth_texture is supported.)
So I setup a texture like this:
glGenTextures(1, &TextureName);
glBindTexture(GL_TEXTURE_2D, TextureName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, ImageSize.Width, ImageSize.Height, 0, GL_DEPTH_COMPONENT, GL_UNSIGNED_SHORT, 0);
glGenFramebuffers(1, &ColorFrameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, ColorFrameBuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, TextureName, 0);
But now if I check the framebuffer status, I get GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT.
What am I doing wrong here?
I also tried some combinations with GL_DEPTH_COMPONENT16, GL_DEPTH_COMPONENT24_OES, and GL_DEPTH_COMPONENT32_OES, but none of these worked (GL_OES_depth24 is also supported).
You can't. Textures with depth internal formats can only be attached to depth attachments. Textures with color internal formats can only be attached to color attachments.
As the previous answer mentioned, you cannot attach a texture with a depth format as a color surface. Looking at your comment, what you're really after is rendering to a one-channel float format.
You could look at http://www.khronos.org/registry/gles/extensions/OES/OES_texture_float.txt, which allows you to create textures with float formats.
You can then initialize the texture as an alpha map, which has only one channel:
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, ImageSize.Width, ImageSize.Height, 0, GL_ALPHA, GL_FLOAT, 0);
This may or may not work depending on which extensions your device supports.
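A related practical note: whichever format you try, check for the extension first. glGetString(GL_EXTENSIONS) returns one space-separated string, and a naive strstr() can match a prefix of a longer name (e.g. finding GL_OES_texture_float inside GL_OES_texture_float_linear). A token-exact check might look like this (helper name is mine; at runtime you would pass the result of glGetString(GL_EXTENSIONS) as the first argument):

```c
#include <string.h>

/* Returns 1 if `name` appears as a whole space-delimited token in the
 * space-separated extension list `extlist`, 0 otherwise. */
static int has_extension(const char *extlist, const char *name) {
    size_t len = strlen(name);
    const char *p = extlist;
    while ((p = strstr(p, name)) != NULL) {
        int starts_ok = (p == extlist) || (p[-1] == ' ');  /* token start */
        int ends_ok = (p[len] == '\0' || p[len] == ' ');   /* token end   */
        if (starts_ok && ends_ok)
            return 1;
        p += len;  /* partial match; keep scanning */
    }
    return 0;
}
```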

Frame capture in Xcode fails

I am using Xcode 4.5.1 and testing on an iPhone 5 with iOS 6.
I was using the frame capture function without problems, but it suddenly stopped working.
When I press the frame capture button, the frame seems to be captured: the phone switches to a blank screen, only to suddenly switch back to the application screen, and the application keeps running. I can still debug and pause the application, but there's no way to get the frame capture. I don't see any errors in the console either.
The reason it stopped working is this piece of code. It is supposed to render something to a render texture, but the render texture seems blank. I wanted to use the frame capture function to find out what's wrong, but the code itself won't let me capture... :(
Any idea why?
// ------------- init function -----------------
// Create the framebuffer and bind it
glGenFramebuffers(1, &g_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
//Create the destination texture, and attach it to the framebuffer’s color attachment point.
glGenTextures(1, &g_texture);
glBindTexture(GL_TEXTURE_2D, g_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_texture, 0);
//Test the framebuffer for completeness
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
if (status != GL_FRAMEBUFFER_COMPLETE) {
    NSLog(@"failed to make complete framebuffer object %x", status);
} else {
    NSLog(@"SkyPlugin initialized");
}
// ----------------- on update ------------------
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &oldFBO);
glGetIntegerv(GL_VIEWPORT, oldViewPort);
// set the framebuffer and clear
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
glViewport(0, 0, 32, 32);
//glClearColor(0.9f, 0.1f, 0.1f, 1.0f);
glDisable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT);
// Set shader
glUseProgram(m_program);
// do some glEnableVertexAttribArray
// ...
// texture setting
glActiveTexture(GL_TEXTURE0);
glUniform1i(m_uniform, 0);
ResourceManager* resourceManager = ResourceManager::GetInstance();
glBindTexture(GL_TEXTURE_2D, m_texture[0]);
// ----------- Draw -----------
// Draws a full-screen quad to copy textures
static const vertexDataUV quad[] = {
{/*v:*/{-1.f,-1.f,0}, /*t:*/{0,0}},
{/*v:*/{-1.f,1,0}, /*t:*/{0,1}},
{/*v:*/{1,-1.f,0}, /*t:*/{1,0}},
{/*v:*/{1,1,0}, /*t:*/{1,1}}
};
static const GLubyte indices[] = {0,2,1,3};
glVertexAttribPointer(m_posAttrib, 3, GL_FLOAT, GL_FALSE, sizeof(vertexDataUV), &quad[0].vertex);
glVertexAttribPointer(m_texCoordAttrib, 2, GL_FLOAT, GL_FALSE, sizeof(vertexDataUV), &quad[0].uv);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, indices);
// ------------ End
// go back to the main framebuffer!
glBindFramebuffer(GL_FRAMEBUFFER, oldFBO);
glViewport(oldViewPort[0], oldViewPort[1], oldViewPort[2], oldViewPort[3]);
glEnable(GL_DEPTH_TEST);
//glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
Edit: (2012/October/28)
I found out why the above code was not working: I forgot to bind a render buffer! The code below works, but frame capture still fails while this code is active...
On init,
// Create the renderbuffer and bind it
glGenRenderbuffers(1, &g_renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, g_renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, w, h);
// Create the framebuffer and bind it
glGenFramebuffers(1, &g_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, g_renderbuffer);
//Create the destination texture, and attach it to the framebuffer’s color attachment point.
glGenTextures(1, &g_texture);
glBindTexture(GL_TEXTURE_2D, g_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_texture, 0);
On update,
glGetIntegerv(GL_RENDERBUFFER_BINDING, &oldRBO);
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &oldFBO);
glGetIntegerv(GL_VIEWPORT, oldViewPort);
// set the framebuffer and clear
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, g_renderbuffer);
glViewport(0, 0, 32, 32);
// ... draw stuff ...
End of update,
// go back to the main framebuffer!
glBindFramebuffer(GL_FRAMEBUFFER, oldFBO);
glBindRenderbuffer(GL_RENDERBUFFER, oldRBO);
It seems it was a bug in Xcode.
The latest version, Xcode 4.5.2, lets me capture the frame :)
After I capture the frame, I get an error in this part of the code:
// ... draw stuff ...
glActiveTexture(GL_TEXTURE0);
glUniform1i(MY_TEXTURE, 0);
On the glUniform1i I get this error: "The specified operation is invalid for the current OpenGL state".
No idea why I get this error (the rendering is working), but I suspect it may be the reason why I wasn't able to capture a frame in the previous version of Xcode...
I've seen very similar behavior, and while it was a bit erratic, it did seem to be related to memory usage. Usually when it failed, the replay application seemed to be running out of memory (I'd see a memory warning in the console when it failed).
Switching to a device with more memory fixed it (iPad 4 from iPad 3), but I also was able to occasionally work around it by reducing the amount of texture memory used -- skipping setting the top mipmap for all my textures would usually free up enough.
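The "skip the top mipmap" trick works because level 0 dominates a texture's footprint: it holds roughly three quarters of the bytes in a full chain. A quick back-of-the-envelope calculation for RGBA8 textures (helper name is mine):

```c
#include <assert.h>
#include <stddef.h>

/* Total bytes of an RGBA8 mipmap chain for a square power-of-two
 * texture whose top level is `size` x `size` pixels. */
static size_t rgba8_chain_bytes(size_t size) {
    size_t total = 0;
    for (;;) {
        total += size * size * 4;  /* 4 bytes per RGBA8 pixel */
        if (size == 1) break;
        size /= 2;
    }
    return total;
}
```

For a 1024x1024 texture the full chain is 5,592,404 bytes, while a chain starting at 512x512 is 1,398,100 bytes, so dropping level 0 saves roughly 75% per texture.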

Multisampled rendering to texture

I am working with the following architecture:
OpenGL ES 2 on iOS
Two EAGL contexts with the same ShareGroup
Two threads (server, client = main thread); the server renders stuff to textures, the client displays the textures using simple textured quads.
Additional detail to the server thread (working code)
An fbo is created during initialization:
void init(void) {
glGenFramebuffers(1, &fbo);
}
The render loop of the server looks roughly like this:
GLuint loop(void) {
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0,0,width,height);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Framebuffer completeness check omitted
glClear(GL_COLOR_BUFFER_BIT);
// actual drawing code omitted
// the drawing code bound other textures, so..
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, GL_NONE);
glFlush();
return tex;
}
All this works fine so far.
New (buggy) code
Now I want to add multisampling to the server thread using the GL_APPLE_framebuffer_multisample extension, and I modified the initialization code like this:
void init(void) {
glGenFramebuffer(1, &resolve_fbo);
glGenFramebuffers(1, &sample_fbo);
glBindFramebuffer(GL_FRAMEBUFFER, sample_fbo);
glGenRenderbuffers(1, &sample_colorRenderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, sample_colorRenderbuffer);
glRenderbufferStorageMultisampleAPPLE(GL_RENDERBUFFER, 4, GL_RGBA8_OES, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, sample_colorRenderbuffer);
// Framebuffer completeness check (sample_fbo) omitted
glBindRenderbuffer(GL_RENDERBUFFER, GL_NONE);
glBindFramebuffer(GL_FRAMEBUFFER, GL_NONE);
}
The main loop has been changed to:
GLuint loop(void) {
glBindFramebuffer(GL_FRAMEBUFFER, sample_fbo);
glViewport(0,0,width,height);
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB, width, height, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
glClear(GL_COLOR_BUFFER_BIT);
// actual drawing code omitted
glBindFramebuffer(GL_FRAMEBUFFER, resolve_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Framebuffer completeness check (resolve_fbo) omitted
// resolve multisampling
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolve_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, sample_fbo);
glResolveMultisampleFramebufferAPPLE();
// the drawing code bound other textures, so..
glBindTexture(GL_TEXTURE_2D, tex);
glGenerateMipmap(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, GL_NONE);
glFlush();
return tex;
}
What I see now is that a texture contains data from multiple loop() calls, blended together. I guess I'm either missing an 'unbind' of some sort, or probably a glFinish() call (I previously had such a problem at a different point: I set texture data with glTexImage2D() and used it right afterwards, which required a glFinish() call to force the texture to be updated).
However, inserting a glFinish() after the drawing code didn't change anything here...
Oh, never mind, such a stupid mistake. I omitted the detail that the loop() method actually contains a for loop and renders multiple textures; the mistake was that I bound the sample FBO only before this loop, so after the first run the resolve FBO was still bound...
Moving the FBO binding inside the loop fixed the problem.
Anyway, thanks to all the readers, and sorry for wasting your time :)
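For readers hitting the same symptom, the fix boils down to re-binding the sample FBO at the top of every iteration. Sketched against the code above (details elided):

```c
// Inside the for loop, once per rendered texture:
glBindFramebuffer(GL_FRAMEBUFFER, sample_fbo);   // must happen each iteration
glViewport(0, 0, width, height);
// ... create tex, clear, draw into the multisampled renderbuffer ...

// Attach tex to the resolve FBO and resolve the samples into it.
glBindFramebuffer(GL_FRAMEBUFFER, resolve_fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
glBindFramebuffer(GL_DRAW_FRAMEBUFFER_APPLE, resolve_fbo);
glBindFramebuffer(GL_READ_FRAMEBUFFER_APPLE, sample_fbo);
glResolveMultisampleFramebufferAPPLE();
```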

glTexSubImage2D -> GL_INVALID_OPERATION

Why is glTexSubImage2D() suddenly causing GL_INVALID_OPERATION?
I'm trying to upgrade my hopelessly outdated augmented reality app from iOS 4.x to iOS 5.x, but I'm having difficulties. I now run iOS 5.0; last week I ran iOS 4.3. My device is an iPhone 4.
Here is a snippet from my captureOutput:didOutputSampleBuffer:fromConnection: code
uint8_t *baseAddress = /* pointer to camera buffer */
GLuint texture = /* the texture name */
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 480, 360, GL_BGRA, GL_UNSIGNED_BYTE, baseAddress);
/* now glGetError(); -> returns 0x0502 GL_INVALID_OPERATION on iOS5.0, works fine on iOS4.x */
Here is a snippet from my setup code
GLuint texture = /* the texture name */
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MIN_FILTER,GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D,GL_TEXTURE_MAG_FILTER,GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
For simplicity I have inserted hardcoded values here. In my actual code I obtain these values with CVPixelBufferGetWidth/Height/BaseAddress. The EAGLContext is initialized with kEAGLRenderingAPIOpenGLES2.
Ah.. I fixed it immediately after posting this question. I had to change the format argument from GL_RGBA to GL_BGRA:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
Hope it helps someone.
BTW, if you want to write AR apps, consider using CVOpenGLESTextureCache instead of glTexSubImage2D. It's supposed to be faster.
