CVOpenGLESTextureCache vs glTexSubImage2D on iOS

My app uses OpenGL ES to render a texture full screen and updates parts of it at regular intervals. So far, I've been using glTexImage2D to push the initial texture and then updating the dirty regions with glTexSubImage2D, using single buffering. This works well.
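For reference, the update path I'm describing boils down to this (a minimal sketch; bitmap and scratch are assumed to be uint8_t pointers I own, and dirtyX/dirtyY/dirtyW/dirtyH describe the dirty rectangle):

// Initial upload, once. BGRA source data relies on the
// GL_APPLE_texture_format_BGRA8888 extension, as in the code below.
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width_, height_, 0,
             GL_BGRA, GL_UNSIGNED_BYTE, bitmap);

// Per update: push only the dirty rectangle. ES 2.0 has no
// GL_UNPACK_ROW_LENGTH, so the dirty rows are first copied into a
// tightly packed scratch buffer.
for (int row = 0; row < dirtyH; row++)
    memcpy(scratch + row * dirtyW * 4,
           bitmap + ((dirtyY + row) * width_ + dirtyX) * 4,
           dirtyW * 4);
glTexSubImage2D(GL_TEXTURE_2D, 0, dirtyX, dirtyY, dirtyW, dirtyH,
                GL_BGRA, GL_UNSIGNED_BYTE, scratch);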
I've seen that there might be another way to achieve the same thing using CVOpenGLESTextureCache. The textures held in the texture cache reference a CVPixelBuffer, and I'd like to know whether I can mutate these cached textures. I tried recreating a CVOpenGLESTexture for each update, but that drops my frame rate dramatically (not surprising, after all, since I'm not specifying the dirty region anywhere). Maybe I've totally misunderstood the use case for this texture cache.
Can someone provide some guidance?
UPDATE: Here is the code I'm using. The first update works fine. The subsequent updates don't (nothing happens). Between each update I modify the raw bitmap.
if (firstUpdate) {
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, ctx, NULL, &texCache);

    // Wrap the existing bitmap in a pixel buffer (no copy is made).
    CVPixelBufferRef pixelBuffer;
    CVPixelBufferCreateWithBytes(NULL, width_, height_, kCVPixelFormatType_32BGRA,
                                 bitmap, width_ * 4, NULL, 0, NULL, &pixelBuffer);

    CVPixelBufferLockBaseAddress(pixelBuffer, 0);
    CVOpenGLESTextureRef texture = NULL;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, texCache, pixelBuffer, NULL,
                                                 GL_TEXTURE_2D, GL_RGBA, width_, height_,
                                                 GL_BGRA, GL_UNSIGNED_BYTE, 0, &texture);
    texture_[0] = CVOpenGLESTextureGetName(texture);
    CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
}
CVOpenGLESTextureCacheFlush(texCache, 0);
if (firstUpdate) {
    glBindTexture(GL_TEXTURE_2D, texture_[0]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
}
glEnableClientState(GL_VERTEX_ARRAY);
glEnableClientState(GL_TEXTURE_COORD_ARRAY);
if (firstUpdate) {
    static const float textureVertices[] = {
        -1.0, -1.0,
         1.0, -1.0,
        -1.0,  1.0,
         1.0,  1.0
    };
    static const float textureCoords[] = {
        0.0, 0.0,
        1.0, 0.0,
        0.0, 1.0,
        1.0, 1.0
    };
    glVertexPointer(2, GL_FLOAT, 0, &textureVertices[0]);
    glTexCoordPointer(2, GL_FLOAT, 0, textureCoords);
}
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableClientState(GL_VERTEX_ARRAY);
glDisableClientState(GL_TEXTURE_COORD_ARRAY);
firstUpdate = false;

I have been doing quite a bit of hacking with these texture APIs, and I was finally able to produce a working example of writing to a texture through memory using the texture cache API. These APIs work on an iOS device but not in the simulator, so a special workaround was needed (basically just calling glTexSubImage2D() explicitly in the simulator). The code double-buffers the textures, with the loading done in another thread, to avoid updating a texture while rendering is going on. The full source code and timing results are at opengl_write_texture_cache. The linked Xcode project decodes from PNGs, so performance on older iPhone hardware is a little poor as a result. But the code is free to do whatever you want with, so it should not be hard to adapt it to some other pixel source. To write only a dirty region, write only to that portion of the memory buffer in the background thread.
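In outline, the per-frame update ends up looking something like this (a condensed sketch rather than the project's actual code; texCache, pixelBuffer, textureName, width and height stand in for your own state):

#include <TargetConditionals.h>

// Background thread: write the new pixels (or just the dirty rows)
// into the pixel buffer's memory, then publish the result to GL.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
// ... write the dirty region into 'base' here ...

#if TARGET_IPHONE_SIMULATOR
// The texture cache does not update textures in the simulator,
// so push the bytes explicitly.
glBindTexture(GL_TEXTURE_2D, textureName);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, width, height,
                GL_BGRA, GL_UNSIGNED_BYTE, base);
#else
// On the device, the texture created with
// CVOpenGLESTextureCacheCreateTextureFromImage() aliases the pixel
// buffer's memory, so flushing the cache is enough.
CVOpenGLESTextureCacheFlush(texCache, 0);
#endif

CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);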

Related

What might cause a GL_INVALID_OPERATION error on a call to EAGLContext presentRenderbuffer?

Any ideas what might cause EAGLContext presentRenderbuffer to result in a GL_INVALID_OPERATION error, even though the call returns YES and the buffer seems to have the correct content?
Some details:
I am using a 3rd-party library to render a texture to an off-screen framebuffer. Because the third-party renders are slow, this is done asynchronously on a background thread, using a separate EAGLContext that shares a sharegroup with the main thread's rendering context, so that the final texture can be picked up from there for use in on-screen rendering.
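The background context is created against the main context's sharegroup in the usual way (sketch; mainContext is the on-screen context):

// Worker context for the background thread; sharing the sharegroup makes
// textures created here visible to the main thread's context.
EAGLContext *workerContext =
    [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2
                          sharegroup:mainContext.sharegroup];
[EAGLContext setCurrentContext:workerContext]; // called on the background thread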
There is a single framebuffer that is set up once and re-used:
GLuint _frameBuffer = 0;
GLuint _stencilRenderBuffer = 0;
glGenFramebuffers(1, &_frameBuffer);
glBindFramebuffer(GL_FRAMEBUFFER, _frameBuffer);
glGenRenderbuffers(1, &_stencilRenderBuffer);
glBindRenderbuffer(GL_RENDERBUFFER, _stencilRenderBuffer);
glRenderbufferStorage(GL_RENDERBUFFER, GL_STENCIL_INDEX8, width, height);
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_STENCIL_ATTACHMENT, GL_RENDERBUFFER, _stencilRenderBuffer);
And then, for each new texture rendered by the 3rd party library, a new texture is created to receive the final rendered texture:
GLuint _texture = 0;
glGenTextures(1, &_texture);
glBindTexture(GL_TEXTURE_2D, _texture);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, _texture, 0);
GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
if (status != GL_FRAMEBUFFER_COMPLETE) {
    // handle it
}
glClearColor(1, 1, 1, 1);
glClear(GL_COLOR_BUFFER_BIT | GL_STENCIL_BUFFER_BIT);
// {CALL TO 3rd PARTY LIBRARY IS MADE HERE}
glFinish(); // ensure third-party library is finished
BOOL result = [context presentRenderbuffer:GL_RENDERBUFFER]; // Xcode breaks on this line
This has been working seemingly fine; I only discovered the error after enabling an "OpenGL ES Error" breakpoint in Xcode earlier today to hunt down a different error. Xcode breaks on the last line shown with GL_INVALID_OPERATION, and a subsequent call to glGetError returns 1282. There are no GL errors showing before the call to presentRenderbuffer (I've put in glGetError calls, but have omitted them here for brevity).
The curious thing is that the presentRenderbuffer call returns YES (success) even though the breakpoint shows an invalid operation error on the call. Every new texture is presented, and used downstream to render to screen.
Still, I would like to know what is causing this, and to fix it, so that it does not cause other errors down the line.

"invalid framebuffer operation" on glClear - using sRGB in OpenGL ES3

Using OpenGL ES 3, running on an iPhone 5s (hardware, not the simulator) with Xcode 7.3, I receive an "invalid framebuffer operation" when doing a glClear.
The texture in question is a "final" texture for my GBuffer, much like in this tutorial: http://ogldev.atspace.co.uk/www/tutorial37/tutorial37.html.
The key differences are that I'm requesting an sRGB texture and that I use GL_COLOR_ATTACHMENT3 (instead of 4), due to ES3 limitations.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
// glTexParameteri ...
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3, GL_TEXTURE_2D, m_finalTexture, 0);
GLenum Status = glCheckFramebufferStatus(GL_FRAMEBUFFER); // No errors here
Now when I try to clear it, I get an "invalid framebuffer operation"
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, m_fbo);
// Element at index i needs to match GL_COLOR_ATTACHMENTi on GL ES3!
GLenum drawbuf[4] = { GL_NONE, GL_NONE, GL_NONE, GL_COLOR_ATTACHMENT3 };
glDrawBuffers(sizeof(drawbuf)/sizeof(drawbuf[0]), drawbuf);
GLCheckError(); // no errors
glClear(GL_COLOR_BUFFER_BIT);
GLCheckError(); // => glGetError returns 0x0506, GL_INVALID_FRAMEBUFFER_OPERATION
Now if instead I initialise the texture like this (so without sRGB), OpenGL doesn't give an error on the clear:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB8, WindowWidth, WindowHeight, 0, GL_RGB, GL_UNSIGNED_BYTE, NULL);
Now, as I understand it, sRGB is supported in OpenGL ES3... so why does glClear fail?
Any ideas anyone?
GL_SRGB8 is not a color-renderable format in ES 3.0. In the spec document:
In the "Required Texture Formats" section starting on page 128, SRGB8 is listed under "Texture-only color formats".
In table 3.13, starting on page 130, SRGB8 does not have a checkmark in the "Color-renderable" column.
This also matches the EXT_srgb extension specification, under "Issues":
Do we require SRGB8_EXT be supported for RenderbufferStorage?
No. Some hardware would need to pad this out to RGBA and instead of adding that unknown for application developers we will simply not support that format in this extension.
glCheckFramebufferStatus() should return GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT in this case. If it's not doing that, that looks like a bug in the OpenGL implementation.
The closest alternative that is color-renderable is GL_SRGB8_ALPHA8.
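So the minimal fix is to allocate the attachment with the color-renderable sRGB format instead (a sketch based on the code above):

// GL_SRGB8_ALPHA8 is color-renderable in ES 3.0; GL_SRGB8 is texture-only.
glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, WindowWidth, WindowHeight,
             0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT3,
                       GL_TEXTURE_2D, m_finalTexture, 0);

The extra alpha channel is the cost of getting a renderable sRGB format.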
try this
#define GL_COLOR_BUFFER_BIT 0x00004000
glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

How to emulate an accumulation buffer in OpenGL ES 2.0 (Trailing Particles Effect)

So I have been trying to create a trailing particle effect (seen here) with OpenGL ES 2.0. Unfortunately, the OpenGL feature that makes this easy (the accumulation buffer) is not available in OpenGL ES, so it is necessary to go the LONG way.
This topic described a possible method for doing such a thing. However, I am quite confused about how to store things inside a buffer and how to combine buffers. So my thought was to do the following:
1. Draw the current frame into a texture, using a buffer that writes to a texture.
2. Draw the previous frames (but faded) into another buffer.
3. Put step 1 on top of step 2, and display that.
4. Save whatever is displayed for use next frame.
My understanding so far is that buffers store pixel data in the same way textures do, just that buffers can be drawn into more easily using shaders.
So the idea would probably be to render to a buffer and THEN move it into a texture.
One theory for doing this that I found is this
In retrospect, you should create two FBOs (each with its own texture);
using the default framebuffer isn't reliable (the contents aren't
guaranteed to be preserved between frames).
After binding the first FBO, clear it then render the scene normally.
Once the scene has been rendered, use the texture as a source and
render it to the second FBO with blending (the second FBO is never
cleared). This will result in the second FBO containing a mix of the
new scene and what was there before. Finally, the second FBO should be
rendered directly to the window (this can be done by rendering a
textured quad, similarly to the previous operation, or by using
glBlitFramebuffer).
Essentially, the first FBO takes the place of the default framebuffer
while the second FBO takes the place of the accumulation buffer.
In summary:
Initialisation:
For each FBO:
- glGenTextures
- glBindTexture
- glTexImage2D
- glBindFramebuffer
- glFramebufferTexture2D
Each frame:
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo1)
glClear
glDraw* // scene
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo2)
glBindTexture(tex1)
glEnable(GL_BLEND)
glBlendFunc
glDraw* // full-screen quad
glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0)
glBindFramebuffer(GL_READ_FRAMEBUFFER, fbo2)
glBlitFramebuffer
Unfortunately it didn't have quite enough code (especially the initialisation) to get me started.
But I have tried, and so far all I have gotten is a disappointing blank screen. I don't really know what I am doing, so this code is probably quite wrong:
var fbo1: GLuint = 0
var fbo2: GLuint = 0
var tex1: GLuint = 0

func Init()
{
    //...Loading shaders, OpenGL etc.

    //FBO 1
    glGenFramebuffers(1, &fbo1)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo1)
    //Create texture for shader output
    glGenTextures(1, &tex1)
    glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB, width, height, 0, GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE), nil)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex1, 0)

    //FBO 2
    glGenFramebuffers(1, &fbo2)
    glBindFramebuffer(GLenum(GL_FRAMEBUFFER), fbo2)
    //Create texture for shader output
    glGenTextures(1, &tex1)
    glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
    glTexImage2D(GLenum(GL_TEXTURE_2D), 0, GL_RGB, width, height, 0, GLenum(GL_RGB), GLenum(GL_UNSIGNED_BYTE), nil)
    glFramebufferTexture2D(GLenum(GL_FRAMEBUFFER), GLenum(GL_COLOR_ATTACHMENT0), GLenum(GL_TEXTURE_2D), tex1, 0)
}

func drawFullScreenTex()
{
    glUseProgram(texShader)
    let rect: [GLint] = [0, 0, GLint(width), GLint(height)]
    glBindTexture(GLenum(GL_TEXTURE_2D), tex1)
    //Texture is already bound
    glTexParameteriv(GLenum(GL_TEXTURE_2D), GLenum(GL_TEXTURE_CROP_RECT_OES), rect)
    glDrawTexiOES(0, 0, 0, width, height)
}
func draw()
{
    //Prep
    glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), fbo1)
    glClearColor(0, 0.1, 0, 1.0)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT))

    //1
    glUseProgram(pointShader)
    passTheStuff() //Just passes in uniforms
    drawParticles(glGetUniformLocation(pointShader, "color"), size_loc: glGetUniformLocation(pointShader, "pointSize")) //Draws particles

    //2
    glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), fbo2)
    drawFullScreenTex()

    //3
    glBindFramebuffer(GLenum(GL_DRAW_FRAMEBUFFER), 0)
    glBindFramebuffer(GLenum(GL_READ_FRAMEBUFFER), fbo2)
    glBlitFramebuffer(0, 0, width, height, 0, 0, width, height, GLbitfield(GL_COLOR_BUFFER_BIT), GLenum(GL_NEAREST))
}
BTW here are some sources I found useful.
Site 1
Site 2
Site 3
Site 4
My main question is: could someone please write out the code for this? I think I understand the theory involved, but I have spent so much time trying in vain to apply it.
If you want a place to start, I have the Xcode project that draws dots (and has a blue one that moves across the screen periodically) here; the code that isn't working is in there as well.
Note: if you are going to write code, you can use any language (C++, Java, Swift, Objective-C); it will be perfectly fine, as long as it is for OpenGL ES.
You call glGenTextures(1, &tex1) twice with the same variable tex1, which overwrites the variable. When you later call glBindTexture(GLenum(GL_TEXTURE_2D), tex1), it does not bind the texture corresponding to fbo1, but rather that of fbo2. You need a different texture for every FBO.
As for a reference, below is a sample from a working program of mine which uses multiple FBOs and renders to texture.
GLuint fbo[n];
GLuint tex[n];

init() {
    glGenFramebuffers(n, fbo);
    glGenTextures(n, tex);
    for (int i = 0; i < n; ++i) {
        glBindFramebuffer(GL_FRAMEBUFFER, fbo[i]);
        glBindTexture(GL_TEXTURE_2D, tex[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glFramebufferTexture2D(GL_DRAW_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex[i], 0);
    }
}

render() {
    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[0]);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    // Draw scene into buffer 0

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, fbo[1]);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, tex[0]);
    // Draw full-screen quad textured with tex[0]
    ...

    glBindFramebuffer(GL_DRAW_FRAMEBUFFER, 0);
    glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);
    glBindTexture(GL_TEXTURE_2D, tex[n - 1]);
    // Draw to screen
    return;
}
A few notes. In order to get it to work, I had to add these texture parameters:
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
This is because on my system they defaulted to GL_NEAREST_MIPMAP_LINEAR, which did not work for the FBO texture, since no mipmaps were generated. Set these to anything you like.
Also, make sure you have textures enabled with
glEnable(GL_TEXTURE_2D)
I hope this will help.
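For the trailing effect itself, the blend step from the quoted summary can be done with constant-color blending, which plain ES 2.0 supports (a sketch; fadeAlpha is whatever per-frame decay you want, and the quad uses a plain textured shader):

// Blend the freshly rendered scene (tex[0]) over the accumulation FBO.
// Old contents decay toward the new frame; smaller fadeAlpha = longer trails.
glBindFramebuffer(GL_FRAMEBUFFER, fbo[1]);
glEnable(GL_BLEND);
glBlendColor(0.0f, 0.0f, 0.0f, fadeAlpha);
glBlendFunc(GL_CONSTANT_ALPHA, GL_ONE_MINUS_CONSTANT_ALPHA);
glBindTexture(GL_TEXTURE_2D, tex[0]);
// ... draw the full-screen quad here ...
glDisable(GL_BLEND);

Note that clearing fbo[1] every frame would defeat the accumulation; with this blend in place, the glClear of fbo[1] in the sample above should be dropped, matching the quoted advice that the second FBO is never cleared.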

OpenGL ES to video in iOS (rendering to a texture with iOS 5 texture cache)

Do you know Apple's sample code with the CameraRipple effect? Well, I'm trying to record the camera output to a file after OpenGL has done all the cool water effects.
I've done it with glReadPixels, where I read all the pixels into a void * buffer, create a CVPixelBufferRef and append it to the AVAssetWriterInputPixelBufferAdaptor, but it's too slow, because glReadPixels takes tons of time. I found out that using an FBO and the texture cache you can do the same thing, but faster. Here is my code in the drawInRect method that Apple uses:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}
CFDictionaryRef empty; // empty value for attr value.
CFMutableDictionaryRef attrs2;
// our empty IOSurface properties dictionary
empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                           &kCFTypeDictionaryKeyCallBacks,
                           &kCFTypeDictionaryValueCallBacks);
attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                   &kCFTypeDictionaryKeyCallBacks,
                                   &kCFTypeDictionaryValueCallBacks);
CFDictionarySetValue(attrs2, kCVPixelBufferIOSurfacePropertiesKey, empty);

//CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
CVPixelBufferRef pixiel_bufer4e = NULL;
CVPixelBufferCreate(kCFAllocatorDefault,
                    (int)_screenWidth,
                    (int)_screenHeight,
                    kCVPixelFormatType_32BGRA,
                    attrs2,
                    &pixiel_bufer4e);
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                             coreVideoTextureCashe, pixiel_bufer4e,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)_screenWidth,
                                             (int)_screenHeight,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);
CFRelease(attrs2);
CFRelease(empty);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);
if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
    float result = currentTime.value;
    NSLog(@"\n\nreading data, and the current time is: %f\n\n", result);
    currentTime = CMTimeAdd(currentTime, frameLength);
}
CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
CVPixelBufferRelease(pixiel_bufer4e);
CFRelease(renderTexture);
CFRelease(coreVideoTextureCashe);
It records a video and it's pretty quick, yet the video is just black. I think the texture cache ref is not the right one, or I'm filling it wrong.
As an update, here is another way I've tried. I must be missing something. In viewDidLoad, after I set up the OpenGL context, I do this:
CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err)
{
    NSAssert(NO, @"Error at CVOpenGLESTextureCacheCreate %d", err);
}
// creates the pixel buffer
pixel_buffer = NULL;
CVPixelBufferPoolCreatePixelBuffer (NULL, [pixelAdapter pixelBufferPool], &pixel_buffer);
CVOpenGLESTextureRef renderTexture;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, coreVideoTextureCashe, pixel_buffer,
                                             NULL, // texture attributes
                                             GL_TEXTURE_2D,
                                             GL_RGBA, // opengl format
                                             (int)screenWidth,
                                             (int)screenHeight,
                                             GL_BGRA, // native iOS format
                                             GL_UNSIGNED_BYTE,
                                             0,
                                             &renderTexture);
glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);
Then in drawInRect: I do this:
if (isRecording && writerInput.readyForMoreMediaData) {
    CVPixelBufferLockBaseAddress(pixel_buffer, 0);
    if ([pixelAdapter appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
    CVPixelBufferRelease(pixel_buffer);
}
Yet it crashes with a bad access (EXC_BAD_ACCESS) on the renderTexture, which is not nil but 0x000000001.
UPDATE
With the code below I actually managed to produce the video file, but there are some green and red flashes. I use the BGRA pixelFormatType.
Here I create the texture Cache:
CVReturn err2 = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, (__bridge void *)_context, NULL, &coreVideoTextureCashe);
if (err2)
{
    NSLog(@"Error at CVOpenGLESTextureCacheCreate %d", err2);
    return;
}
And then in drawInRect I call this:
if (isRecording && writerInput.readyForMoreMediaData) {
    [self cleanUpTextures];

    CFDictionaryRef empty; // empty value for attr value.
    CFMutableDictionaryRef attrs2;
    // our empty IOSurface properties dictionary
    empty = CFDictionaryCreate(kCFAllocatorDefault, NULL, NULL, 0,
                               &kCFTypeDictionaryKeyCallBacks,
                               &kCFTypeDictionaryValueCallBacks);
    attrs2 = CFDictionaryCreateMutable(kCFAllocatorDefault, 1,
                                       &kCFTypeDictionaryKeyCallBacks,
                                       &kCFTypeDictionaryValueCallBacks);
    CFDictionarySetValue(attrs2, kCVPixelBufferIOSurfacePropertiesKey, empty);

    //CVPixelBufferPoolCreatePixelBuffer(NULL, [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);
    CVPixelBufferRef pixiel_bufer4e = NULL;
    CVPixelBufferCreate(kCFAllocatorDefault,
                        (int)_screenWidth,
                        (int)_screenHeight,
                        kCVPixelFormatType_32BGRA,
                        attrs2,
                        &pixiel_bufer4e);

    CVOpenGLESTextureRef renderTexture;
    CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
                                                 coreVideoTextureCashe, pixiel_bufer4e,
                                                 NULL, // texture attributes
                                                 GL_TEXTURE_2D,
                                                 GL_RGBA, // opengl format
                                                 (int)_screenWidth,
                                                 (int)_screenHeight,
                                                 GL_BGRA, // native iOS format
                                                 GL_UNSIGNED_BYTE,
                                                 0,
                                                 &renderTexture);
    CFRelease(attrs2);
    CFRelease(empty);

    glBindTexture(CVOpenGLESTextureGetTarget(renderTexture), CVOpenGLESTextureGetName(renderTexture));
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, CVOpenGLESTextureGetName(renderTexture), 0);

    CVPixelBufferLockBaseAddress(pixiel_bufer4e, 0);
    if ([pixelAdapter appendPixelBuffer:pixiel_bufer4e withPresentationTime:currentTime]) {
        float result = currentTime.value;
        NSLog(@"\n\nreading data, and the current time is: %f\n\n", result);
        currentTime = CMTimeAdd(currentTime, frameLength);
    }
    CVPixelBufferUnlockBaseAddress(pixiel_bufer4e, 0);
    CVPixelBufferRelease(pixiel_bufer4e);
    CFRelease(renderTexture);
    // CFRelease(coreVideoTextureCashe);
}
I know I can optimize this a lot by not doing all of these things here, but I wanted to make it work first. In cleanUpTextures I flush the textureCache with:
CVOpenGLESTextureCacheFlush(coreVideoTextureCashe, 0);
Something might be wrong with the RGBA settings, or I don't know, but it seems that it's still getting the wrong cache somehow.
For recording video, this isn't the approach I'd use. You're creating a new pixel buffer for each rendered frame, which will be slow, and you're never releasing it, so it's no surprise you're getting memory warnings.
Instead, follow what I describe in this answer. I create a pixel buffer for the cached texture once, assign that texture to the FBO I'm rendering to, then append that pixel buffer using the AVAssetWriter's pixel buffer input on every frame. It's far faster to use the single pixel buffer than recreating one every frame. You also want to leave the pixel buffer associated with your FBO's texture target, rather than associating it on every frame.
I encapsulate this recording code within the GPUImageMovieWriter in my open source GPUImage framework, if you want to see how this works in practice. As I indicate in the above-linked answer, doing the recording in this fashion leads to extremely fast encodes.
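In outline, that approach looks like this (a condensed sketch, not GPUImage's exact code; assetWriterPixelBufferInput is the AVAssetWriterInputPixelBufferAdaptor, whose pixel buffer pool should be configured for BGRA with the IOSurface properties shown earlier, and error handling is omitted):

// One-time setup: create the pixel buffer and its cached texture once,
// and leave the texture attached to the offscreen FBO.
CVPixelBufferRef renderTarget = NULL;
CVPixelBufferPoolCreatePixelBuffer(NULL,
    [assetWriterPixelBufferInput pixelBufferPool], &renderTarget);

CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
    textureCache, renderTarget, NULL, GL_TEXTURE_2D, GL_RGBA,
    width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);

glBindTexture(CVOpenGLESTextureGetTarget(renderTexture),
              CVOpenGLESTextureGetName(renderTexture));
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D,
                       CVOpenGLESTextureGetName(renderTexture), 0);

// Per frame: render to the FBO, then hand the same buffer to the writer.
glFinish(); // make sure rendering has actually landed in the buffer
CVPixelBufferLockBaseAddress(renderTarget, 0);
[assetWriterPixelBufferInput appendPixelBuffer:renderTarget
                          withPresentationTime:frameTime];
CVPixelBufferUnlockBaseAddress(renderTarget, 0);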

OpenGL ES 2.0 Object Picking on iOS (Using Color Coding)

This might appear to be a related question:
OpenGL ES 2.0 Object Picking on iOS
which says a color picker is a good solution, and indeed, after reading about it:
http://www.lighthouse3d.com/opengl/picking/index.php?color1
it does seem like a very simple solution, which brings me to this question:
OpenGL ES color picking on the iPhone
which unfortunately uses OpenGL ES 1.0. I am trying to do it in 2.0, so I have no access to the functions described in that question.
But the theory seems simple and here is what I think I should do:
On touches began, I render my objects each with a unique color.
On touches ended, I get the pixel at that position and check its color to identify my object (probably with glReadPixels).
The problem is that I don't know how to do the "render to the back buffer and read from it" part.
My code so far simply uses "Draw"; I suspect I have to glBind the other buffer somewhere, but I would appreciate some help.
My Drawing code is like this:
glClearColor(0, 0, 0, 0.0);
glClear(GL_COLOR_BUFFER_BIT);
// Set the Projection Matrix
GLKMatrix4 projectionMatrix = GLKMatrix4MakePerspective(GLKMathDegreesToRadians(60), 2.0/3.0, 0, 50);
glUseProgram(_programHD);
glBindVertexArrayOES(_vao);
glActiveTexture(GL_TEXTURE1);
glBindTexture(GL_TEXTURE_2D, _textureBuffer[1]);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glUniform1i(uniforms[UNIFORM_TEXTURE_HD], 1);
// Drawing starts here //
// Pass the Model View Matrix to Open GL
_modelViewProjectionMatrix = GLKMatrix4Multiply(projectionMatrix,rotationMatrix);
glUniformMatrix4fv(uniforms[UNIFORM_MODELVIEWPROJECTION_MATRIX_HD], 1, GL_FALSE, _modelViewProjectionMatrix.m);
// Change texture coordinates to draw a different image
glUniform2fv(uniforms[TEXTURE_OFFSET_HD], 1, offSet.v);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, 0);
//glUniform2i(uniforms[TEXTURE_OFFSET], 7, -5);
glUniform2fv(uniforms[TEXTURE_OFFSET_HD], 1, borderHD.v);
glDrawElements(GL_LINE_STRIP, 6, GL_UNSIGNED_SHORT, 0);
glBindVertexArrayOES(0);
glUseProgram(0);
I have stripped the drawing calculations to make it more understandable.
The point is that I do not see anywhere where I specify "where" I am drawing to.
Thanks for your help.
I've actually just finished implementing a colour-picking function in my iPhone game, using OpenGL ES 2.0 and, funnily enough, the lighthouse tutorial.
You should be drawing to the frame buffer.
If you want to read from the frame buffer, then you're correct in that you want to use glReadPixels. More information is here:
http://www.opengl.org/sdk/docs/man/xhtml/glReadPixels.xml
The only thing that's different from the lighthouse tutorial is that you also want to store the alpha values.
Here's a quick function to get the colour of a specific pixel. Feel free to improve it or change it, but it does the job.
+ (void) ProcessColourPick : (GLubyte*) out : (Float32) x : (Float32) y
{
    GLint viewport[4];
    // Get the size of the screen
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLubyte pixel[4];
    // Read the pixel at a specific point (y is flipped because the
    // GL origin is the bottom-left corner)
    glReadPixels(x, viewport[3] - y, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, (void *)pixel);

    out[0] = pixel[0];
    out[1] = pixel[1];
    out[2] = pixel[2];
    out[3] = pixel[3];
}
Hope this helps.
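To go with that, here is one way to turn an object index into a pick colour and back (a sketch; it assumes at most 2^24 objects and that the pick pass renders each object with its own flat colour uniform):

// Encode a 24-bit object id into an RGB pick colour (one id per object).
static void PickColourForId(unsigned int id, GLfloat rgba[4])
{
    rgba[0] = ((id >> 16) & 0xFF) / 255.0f;
    rgba[1] = ((id >> 8)  & 0xFF) / 255.0f;
    rgba[2] = ( id        & 0xFF) / 255.0f;
    rgba[3] = 1.0f;
}

// Decode the pixel returned by ProcessColourPick back into the id.
static unsigned int IdForPickedPixel(const GLubyte pixel[4])
{
    return ((unsigned int)pixel[0] << 16) |
           ((unsigned int)pixel[1] << 8)  |
            (unsigned int)pixel[2];
}

Pass the encoded colour to a flat-colour shader for the pick pass, and make sure to call glReadPixels before the frame is presented, so you read from the framebuffer the pick pass was rendered into.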
