glTexSubImage2D -> GL_INVALID_OPERATION - iOS

Why is glTexSubImage2D() suddenly causing GL_INVALID_OPERATION?
I'm trying to upgrade my hopelessly outdated augmented reality app from iOS 4.x to iOS 5.x, but I'm having difficulties. The device is an iPhone 4; it runs iOS 5.0 now, and last week it ran iOS 4.3.
Here is a snippet from my captureOutput:didOutputSampleBuffer:fromConnection: code:
uint8_t *baseAddress = /* pointer to camera buffer */
GLuint texture = /* the texture name */
glBindTexture(GL_TEXTURE_2D, texture);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 480, 360, GL_BGRA, GL_UNSIGNED_BYTE, baseAddress);
/* now glGetError(); -> returns 0x0502 GL_INVALID_OPERATION on iOS5.0, works fine on iOS4.x */
Here is a snippet from my setup code:
GLuint texture; /* will hold the texture name */
glGenTextures(1, &texture);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
For simplicity I have hardcoded the values here; in my actual code I obtain them with CVPixelBufferGetWidth/Height/BaseAddress. The EAGLContext is initialized with kEAGLRenderingAPIOpenGLES2.
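A minimal sketch of that part of the callback, assuming the sampleBuffer argument from the delegate method:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0); // the base address is only valid while locked
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
GLsizei width = (GLsizei)CVPixelBufferGetWidth(imageBuffer);
GLsizei height = (GLsizei)CVPixelBufferGetHeight(imageBuffer);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, baseAddress);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);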

Ah, I fixed it immediately after posting this question: I had to change the format argument of glTexImage2D from GL_RGBA to GL_BGRA.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 512, 512, 0, GL_BGRA, GL_UNSIGNED_BYTE, NULL);
Hope it helps someone.
BTW, if you want to write AR apps, consider using CVOpenGLESTextureCache instead of glTexSubImage2D. It's supposed to be faster.
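A hedged sketch of what that path looks like, assuming a CVOpenGLESTextureCacheRef named textureCache was created up front with CVOpenGLESTextureCacheCreate for the same EAGLContext:
CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVOpenGLESTextureRef cameraTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, pixelBuffer, NULL,
    GL_TEXTURE_2D, GL_RGBA,
    (GLsizei)CVPixelBufferGetWidth(pixelBuffer), (GLsizei)CVPixelBufferGetHeight(pixelBuffer),
    GL_BGRA, GL_UNSIGNED_BYTE, 0, &cameraTexture);
glBindTexture(CVOpenGLESTextureGetTarget(cameraTexture), CVOpenGLESTextureGetName(cameraTexture));
// draw with the texture, then release it and flush the cache once per frame
CFRelease(cameraTexture);
CVOpenGLESTextureCacheFlush(textureCache, 0);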

Related

glTexSubImage2D 1282 - invalid operation in GL ES 3.1

I am trying to use:
layout (binding = 0, rgba8ui) readonly uniform uimage2D inputImage;
in a compute shader. To bind a texture to this I am using:
glBindImageTexture(0, texture_name, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8);
and it seems that, for this bind to work, the texture has to be immutable, so I've switched from:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
to:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8UI, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
But this generates "invalid operation" (specifically, the glTexSubImage2D() call generates it). Looking in the documentation, I discovered that this call may cause 1282 for the following reasons:
GL_INVALID_OPERATION is generated if the texture array has not been defined by a previous glTexImage2D or glCopyTexImage2D operation whose internalformat matches the format of glTexSubImage2D.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_5_6_5 and format is not GL_RGB.
GL_INVALID_OPERATION is generated if type is GL_UNSIGNED_SHORT_4_4_4_4 or GL_UNSIGNED_SHORT_5_5_5_1 and format is not GL_RGBA.
but none of these applies to my case.
The first might seem to be the problem (considering I am using glTexStorage2D(), not glTexImage2D()), but it is not, because the same mechanism works for a float texture:
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA32F, width, height);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA, GL_FLOAT, pixels);
instead of:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, pixels);
This is probably irrelevant, but both methods work well on PC.
Any suggestions on why this is happening?
The internalFormat you use in glTexImage2D and glBindImageTexture should be the same and be compatible with your sampler. For a uimage2D, try using GL_RGBA8UI everywhere.
Also, for transfers to GL_RGBA8UI (and other integer formats) you need to use GL_RGBA_INTEGER as format.
glBindImageTexture(0, texture_name, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8UI);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8UI, width, height, 0, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels);
Using the format GL_RGBA_INTEGER should also make the glTexSubImage2D variant work.
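Putting both points together, a hedged sketch of the corrected setup (variable names are ours):
GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8UI, width, height); // immutable integer texture
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height, GL_RGBA_INTEGER, GL_UNSIGNED_BYTE, pixels); // integer formats upload via *_INTEGER
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_READ_ONLY, GL_RGBA8UI); // matches the rgba8ui layout qualifier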

Migrating from glReadPixels to CVOpenGLESTextureCache

Currently, I use glReadPixels in an iPad app to save the contents of an OpenGL texture attached to a framebuffer, which is terribly slow. The texture has a size of 1024x768, and I plan on supporting Retina displays at 2048x1536. The retrieved data is saved to a file.
From what I've read in several sources, CVOpenGLESTextureCache seems to be the only faster alternative. However, I could not find any guide or documentation to use as a good starting point.
How do I rewrite my code so it uses CVOpenGLESTextureCache? What parts of the code need to be rewritten? Using third-party libraries is not a preferred option unless there is already documentation on how to do this.
Code follows below:
//Generate a framebuffer for drawing to the texture
glGenFramebuffers(1, &textureFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);
//Create the texture itself
glGenTextures(1, &drawingTexture);
glBindTexture(GL_TEXTURE_2D, drawingTexture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, pixelWidth, pixelHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL); // in ES 2.0 internalformat must match format; GL_RGBA32F_EXT is not valid here
//When drawing to or reading the texture, change the active buffer like that:
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, drawingTexture, 0);
//When the data of the texture needs to be retrieved, use glReadPixels:
GLubyte *buffer = (GLubyte *) malloc(pixelWidth * pixelHeight * 4);
glReadPixels(0, 0, pixelWidth, pixelHeight, GL_RGBA, GL_UNSIGNED_BYTE, (GLvoid *)buffer);
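No answer is recorded here, but the approach usually described is to render into a texture backed by an IOSurface-backed CVPixelBuffer through the texture cache; the rendered pixels can then be read by locking the buffer, with no glReadPixels call at all. A rough sketch under those assumptions (our variable names, 8-bit BGRA target, error checking omitted):
// One-time setup: cache + pixel-buffer-backed texture
CVOpenGLESTextureCacheRef cache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &cache);
CFDictionaryRef attrs = (__bridge CFDictionaryRef) @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef pixelBuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, pixelWidth, pixelHeight, kCVPixelFormatType_32BGRA, attrs, &pixelBuffer);
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, cache, pixelBuffer, NULL,
    GL_TEXTURE_2D, GL_RGBA, pixelWidth, pixelHeight, GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);
// Attach the cache-backed texture to the framebuffer instead of drawingTexture
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
// After drawing: wait for the GPU, then read the bytes directly
glFinish();
CVPixelBufferLockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);
void *pixels = CVPixelBufferGetBaseAddress(pixelBuffer);
// ... save `pixels` (BGRA, pixelWidth x pixelHeight) to the file ...
CVPixelBufferUnlockBaseAddress(pixelBuffer, kCVPixelBufferLock_ReadOnly);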

Create MipMap image manually (iOS OpenGL)

Currently I'm loading a texture with this code :
GLKTextureInfo *t = [GLKTextureLoader textureWithContentsOfFile:path options:@{GLKTextureLoaderGenerateMipmaps: @YES} error:&error];
But the result is not that good when I scale down the image (jagged edges).
Can I create my own mipmaps using image software like Adobe Illustrator? If so, what are the rules for doing that?
And how do I load these images in code?
Thanks!
-- Edited --
Thanks for the answer, I got it working using:
GLuint texName;
glGenTextures(1, &texName);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, texName);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
// load image data here
...
// set up mipmap
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData0);
glTexImage2D(GL_TEXTURE_2D, 1, GL_RGBA, 128, 128, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData1);
...
glTexImage2D(GL_TEXTURE_2D, 8, GL_RGBA, 1, 1, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData8);
Yes, you can manually make the mipmaps and upload them yourself. If you're using Illustrator, presumably it has some method to output an image at a particular resolution. I'm not that familiar with Illustrator, so I don't know how that part works.
Once you have the various resolutions, you can upload them as additional levels of the same texture. Use glTexImage2D() to upload the original image, then upload each additional mipmap level with glTexImage2D() as well, setting the level parameter to the appropriate value. For example:
glTexImage2D (GL_TEXTURE_2D, level, etc...);
where level is the mipmap level for this particular image. Note that you will probably have to set the various texture parameters appropriately for mipmaps, such as:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAX_LEVEL, <whatever the max is here>);
See the section on mipmaps on this page for details.
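(Note that GL_TEXTURE_BASE_LEVEL and GL_TEXTURE_MAX_LEVEL only exist from OpenGL ES 3.0 on; under ES 2.0 the chain simply has to be complete, all the way down to 1x1.) The upload from the edit above can be written compactly as a loop; here imageData is assumed to be an array where imageData[i] holds a tightly packed RGBA image of (256 >> i) by (256 >> i) pixels:
for (GLint level = 0; level <= 8; level++) {
    GLsizei size = 256 >> level; // 256, 128, ..., 1
    glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA, size, size, 0, GL_RGBA, GL_UNSIGNED_BYTE, imageData[level]);
}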

Frame capture in Xcode fails

I am using Xcode 4.5.1 and testing on an iPhone 5 with iOS 6.
I was using the frame capture function without problem, but suddenly it stopped working.
When I press the frame capture button, it seems the frame is captured, and the phone switches to a blank screen, only to suddenly switch back to the application screen, and the application keeps running. I can still debug and pause the application, but there's no way to get the frame capture. I don't see any errors in the console either.
The reason it stopped working is this piece of code. The code is supposed to render something to a render texture, but the render texture seems blank. I wanted to use the frame capture function to find out what's wrong, but the code itself won't let me capture... :(
Any idea why?
// ------------- init function -----------------
// Create the framebuffer and bind it
glGenFramebuffers(1, &g_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
//Create the destination texture, and attach it to the framebuffer’s color attachment point.
glGenTextures(1, &g_texture);
glBindTexture(GL_TEXTURE_2D, g_texture);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_texture, 0);
//Test the framebuffer for completeness
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER) ;
if(status != GL_FRAMEBUFFER_COMPLETE) {
NSLog(#"failed to make complete framebuffer object %x", status);
} else {
NSLog(#"SkyPlugin initialized");
}
// ----------------- on update ------------------
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &oldFBO);
glGetIntegerv(GL_VIEWPORT, oldViewPort);
// set the framebuffer and clear
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
glViewport(0, 0, 32, 32);
//glClearColor(0.9f, 0.1f, 0.1f, 1.0f);
glDisable(GL_DEPTH_TEST);
glClear(GL_COLOR_BUFFER_BIT);
// Set shader
glUseProgram(m_program);
// do some glEnableVertexAttribArray
// ...
// texture setting
glActiveTexture(GL_TEXTURE0);
glUniform1i(m_uniform, 0);
ResourceManager* resourceManager = ResourceManager::GetInstance();
glBindTexture(GL_TEXTURE_2D, m_texture[0]);
// ----------- Draw -----------
// Draws a full-screen quad to copy textures
static const vertexDataUV quad[] = {
{/*v:*/{-1.f,-1.f,0}, /*t:*/{0,0}},
{/*v:*/{-1.f,1,0}, /*t:*/{0,1}},
{/*v:*/{1,-1.f,0}, /*t:*/{1,0}},
{/*v:*/{1,1,0}, /*t:*/{1,1}}
};
static const GLubyte indices[] = {0,2,1,3};
glVertexAttribPointer(m_posAttrib, 3, GL_FLOAT, GL_FALSE, sizeof(vertexDataUV), &quad[0].vertex);
glVertexAttribPointer(m_texCoordAttrib, 2, GL_FLOAT, GL_FALSE, sizeof(vertexDataUV), &quad[0].uv);
glDrawElements(GL_TRIANGLE_STRIP, 4, GL_UNSIGNED_BYTE, indices);
// ------------ End
// go back to the main framebuffer!
glBindFramebuffer(GL_FRAMEBUFFER, oldFBO);
glViewport(oldViewPort[0], oldViewPort[1], oldViewPort[2], oldViewPort[3]);
glEnable(GL_DEPTH_TEST);
//glClearColor(0.1f, 0.1f, 0.1f, 1.0f);
Edit (2012/October/28):
I found out why the above code was not working. I forgot to bind a render buffer! The code below works, but still the frame capture fails when this code is active...
On init,
// Create the renderbuffer and bind it
glGenRenderbuffers(1, &g_renderbuffer);
glBindRenderbuffer(GL_RENDERBUFFER, g_renderbuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8_OES, w, h);
// Create the framebuffer and bind it
glGenFramebuffers(1, &g_framebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
glFramebufferRenderbuffer(GL_FRAMEBUFFER,
GL_COLOR_ATTACHMENT0,
GL_RENDERBUFFER, g_renderbuffer);
//Create the destination texture, and attach it to the framebuffer’s color attachment point.
glGenTextures(1, &g_texture);
glBindTexture(GL_TEXTURE_2D, g_texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, g_texture, 0);
On update,
glGetIntegerv(GL_RENDERBUFFER_BINDING, &oldRBO);
glGetIntegerv(GL_FRAMEBUFFER_BINDING, &oldFBO);
glGetIntegerv(GL_VIEWPORT, oldViewPort);
// set the framebuffer and clear
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, g_framebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, g_renderbuffer);
glViewport(0, 0, 32, 32);
// ... draw stuff ...
End of update,
// go back to the main framebuffer!
glBindFramebuffer(GL_FRAMEBUFFER, oldFBO);
glBindRenderbuffer(GL_RENDERBUFFER, oldRBO);
It seems it was a bug in Xcode.
The latest version, Xcode 4.5.2, lets me capture the frame :)
After I capture the frame, I get an error in this part of the code:
// ... draw stuff ...
glActiveTexture(GL_TEXTURE0);
glUniform1i(MY_TEXTURE, 0);
On the glUniform1i call I get this error: "The specified operation is invalid for the current OpenGL state".
No idea why I get this error (the rendering works), but I suspect it may be the reason why I wasn't able to capture a frame in the previous version of Xcode...
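For context: glUniform1i generates exactly this error when no program is current, or when its first argument is not a valid uniform location in the current program, so passing a constant like MY_TEXTURE there is suspect. A sketch of the usual pattern (u_texture is a placeholder uniform name):
glUseProgram(m_program); // a program must be current when setting uniforms
GLint samplerLoc = glGetUniformLocation(m_program, "u_texture");
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, m_texture[0]);
glUniform1i(samplerLoc, 0); // 0 = texture unit GL_TEXTURE0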
I've seen very similar behavior, and while it was a bit erratic, it did seem to be related to memory usage. Usually when it failed, the replay application seemed to be running out of memory (I'd see a memory warning in the console when it failed).
Switching to a device with more memory fixed it (iPad 4 from iPad 3), but I also was able to occasionally work around it by reducing the amount of texture memory used -- skipping setting the top mipmap for all my textures would usually free up enough.

Read pixels from off-screen OpenGL pixel buffer in iOS (OpenGL ES)

I want to read pixels from an off-screen (not backed by a CAEAGLLayer) Framebuffer. My code to create the buffer looks like:
glGenFramebuffersOES(1, &_storeFramebuffer);
glGenRenderbuffersOES(1, &_storeRenderbuffer);
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _storeFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _storeRenderbuffer);
glFramebufferRenderbufferOES(GL_FRAMEBUFFER_OES, GL_COLOR_ATTACHMENT0_OES, GL_RENDERBUFFER_OES, _storeRenderbuffer);
glRenderbufferStorageOES(GL_RENDERBUFFER_OES, GL_RGBA8_OES, w, h);
I try to read the raw pixels with:
glBindFramebufferOES(GL_FRAMEBUFFER_OES, _storeFramebuffer);
glBindRenderbufferOES(GL_RENDERBUFFER_OES, _storeRenderbuffer);
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
The setup works well: I can render to this buffer and copy from it to the screen. But I can't get the raw pixels out: glReadPixels always returns zeros, and glReadBuffer doesn't seem to exist on iOS. I can read from the on-screen framebuffer with glReadPixels without problems. Any ideas?
Solved. RGBA to BGRA conversion is not supported by glReadPixels on iOS.
Changing
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height, GL_BGRA_EXT, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
to
glReadPixels(0, 0, _videoDimensions.width, _videoDimensions.height, GL_RGBA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(outPixelBuffer));
solves the problem. glGetError is my new friend ;)
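A small sketch of the kind of helper that makes glGetError a friend (the function name is ours):
static void LogGLErrors(const char *where)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR) // drain every pending error
        NSLog(@"GL error 0x%04x at %s", err, where);
}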
