Zoom on an NSData or a CMSampleBufferRef - iOS

I'm developing an iOS application with the latest SDK.
The app works with OpenCV, and I have to zoom the camera, but this isn't available in the iOS SDK, so I think I have to do it programmatically.
I have to 'zoom' every video frame. This is where I have to do it:
#pragma mark - AVCaptureSession delegate
- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);

    /* Lock the image buffer */
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    /* Get information about the image */
    uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    //size_t stride = CVPixelBufferGetBytesPerRow(imageBuffer);

    // Put the buffer in OpenCV; no memory is copied
    cv::Mat image = cv::Mat(height, width, CV_8UC4, baseAddress);

    // Copy the image
    //cv::Mat copied_image = image.clone();

    _lastFrame = [NSData dataWithBytes:image.data
                                length:image.elemSize() * image.total()];
    [DataExchanger postFrame];

    /* Unlock the image buffer */
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
Do you know how to zoom in on an NSData or a CMSampleBufferRef?

One way would be to put your picture into a CGImageRef, choose a square region of that picture, and draw it again at the original size. Something like this (though there may be better ways):
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(baseAddress, width, height, 8,
    bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGContextRelease(context);
// Crop out the region to zoom into
CGImageRef smallQuartzImage = CGImageCreateWithImageInRect(quartzImage, CGRectMake(200, 200, 600, 600));
// Draw the cropped region back at the full frame size
cv::Mat image(height, width, CV_8UC4);
CGContextRef contextRef = CGBitmapContextCreate(image.data, width, height, 8,
    image.step[0], colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(contextRef, CGRectMake(0, 0, width, height), smallQuartzImage);
CGContextRelease(contextRef);
CGImageRelease(quartzImage);
CGImageRelease(smallQuartzImage);
CGColorSpaceRelease(colorSpace);
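Alternatively, since the frame is already wrapped in a cv::Mat, you could crop and resize with OpenCV itself and skip the Core Graphics round trip entirely. A minimal sketch of a 2x center zoom, reusing width, height, and the image Mat from the question's delegate method:
// Crop the center quarter of the frame and scale it back up to full size.
cv::Rect roi((int)width / 4, (int)height / 4, (int)width / 2, (int)height / 2);
cv::Mat cropped = image(roi); // a view into the frame; no pixels are copied
cv::Mat zoomed;
cv::resize(cropped, zoomed, cv::Size((int)width, (int)height), 0, 0, cv::INTER_LINEAR);
_lastFrame = [NSData dataWithBytes:zoomed.data
                            length:zoomed.elemSize() * zoomed.total()];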

Related

UIImage to MTLTexture when downloaded as PNG

My use case is that a user takes a photo of themselves on their phone and uploads it to an image hosting service as a JPEG. Other users can then download that image, which is then mapped to a Metal texture for use in a game.
My issue is that if I download that image and simply display it in a UIImageView, it looks correct, but when I take the downloaded image and transform it into a Metal texture, it gets mirrored and rotated 90 degrees clockwise. I understand the mirroring is due to Metal having a different coordinate system, but I don't understand the rotation issue. When I print the details of the image that has been passed into my function, it has all the same orientation details as the UIImageView that is displaying correctly, so I have no idea where the issue is. Attached is the function that gives me my MTLTexture.
- (id<MTLTexture>)createTextureFromImage:(UIImage *)image device:(id<MTLDevice>)device
{
    image = [UIImage imageWithCGImage:[image CGImage]
                                scale:[image scale]
                          orientation:UIImageOrientationLeft];
    NSLog(@"orientation and size and stuff %ld %f %f", (long)image.imageOrientation, image.size.width, image.size.height);
    CGImageRef imageRef = image.CGImage;
    size_t width = self.view.frame.size.width;
    size_t height = self.view.frame.size.height;
    size_t bitsPerComponent = CGImageGetBitsPerComponent(imageRef);
    size_t bitsPerPixel = CGImageGetBitsPerPixel(imageRef);
    CGColorSpaceRef colorSpace = CGImageGetColorSpace(imageRef);
    CGImageAlphaInfo alphaInfo = CGImageGetAlphaInfo(imageRef);
    // NSLog(@"%@ %u", colorSpace, alphaInfo);
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | alphaInfo;
    // NSLog(@"bitmap info %u", bitmapInfo);
    CGContextRef context = CGBitmapContextCreate(NULL, width, height, bitsPerComponent, (bitsPerPixel / 8) * width, colorSpace, bitmapInfo);
    if (!context)
    {
        NSLog(@"Failed to load image, probably an unsupported texture type");
        return nil;
    }
    CGContextDrawImage(context, CGRectMake(0, 0, width, height), image.CGImage);
    MTLPixelFormat format = MTLPixelFormatRGBA8Unorm;
    MTLTextureDescriptor *texDesc = [MTLTextureDescriptor texture2DDescriptorWithPixelFormat:format
                                                                                       width:width
                                                                                      height:height
                                                                                   mipmapped:NO];
    id<MTLTexture> texture = [device newTextureWithDescriptor:texDesc];
    [texture replaceRegion:MTLRegionMake2D(0, 0, width, height)
               mipmapLevel:0
                 withBytes:CGBitmapContextGetData(context)
               bytesPerRow:4 * width];
    return texture;
}
In Metal, coordinates are reversed. However, you now have a much simpler way to load textures, using MTKTextureLoader:
import MetalKit
let textureLoader = MTKTextureLoader(device: device)
let texture: MTLTexture = try textureLoader.newTextureWithContentsOfURL(filePath, options: nil)
This will create a new texture for you, with the appropriate coordinates, using the image located at filePath. If you don't want to use an NSURL, there are also newTextureWithData and newTextureWithCGImage variants.
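If you need control over the vertical flip at load time, MTKTextureLoader also accepts an origin option (available from iOS 10). A hedged Objective-C sketch, where fileURL and device are assumed to exist in your code:
#import <MetalKit/MetalKit.h>

MTKTextureLoader *loader = [[MTKTextureLoader alloc] initWithDevice:device];
NSError *error = nil;
// MTKTextureLoaderOriginFlippedVertically flips the image vertically at load
// time; pick whichever origin value suits your sampling convention.
id<MTLTexture> texture =
    [loader newTextureWithContentsOfURL:fileURL
                                options:@{ MTKTextureLoaderOptionOrigin :
                                               MTKTextureLoaderOriginFlippedVertically }
                                  error:&error];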

Create CVPixelBuffer with pixels data, but the final image is distorted

I get pixels with an OpenGL ES call (glReadPixels) or another way, then create a CVPixelBuffer (with or without a CGImage) for video recording, but the final picture is distorted. I tested on iPhone 5c, 5s, and 6, and the distortion happens on the iPhone 6.
Here is the code:
CGSize viewSize = self.glView.bounds.size;
NSInteger myDataLength = viewSize.width * viewSize.height * 4;

// Allocate array and read pixels into it.
GLubyte *buffer = (GLubyte *)malloc(myDataLength);
glReadPixels(0, 0, viewSize.width, viewSize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

// GL renders "upside down", so swap top to bottom into a new array.
// There's gotta be a better way, but this works.
GLubyte *buffer2 = (GLubyte *)malloc(myDataLength);
for (int y = 0; y < viewSize.height; y++)
{
    for (int x = 0; x < viewSize.width * 4; x++)
    {
        buffer2[(int)((viewSize.height - 1 - y) * viewSize.width * 4 + x)] = buffer[(int)(y * 4 * viewSize.width + x)];
    }
}
free(buffer);

// Make a data provider with the data.
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer2, myDataLength, NULL);

// Prep the ingredients
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * viewSize.width;
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

// Make the CGImage
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGImageRef imageRef = CGImageCreate(viewSize.width, viewSize.height, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
//UIImage *photo = [UIImage imageWithCGImage:imageRef];

int width = CGImageGetWidth(imageRef);
int height = CGImageGetHeight(imageRef);

CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, _recorder.pixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
NSAssert((status == kCVReturnSuccess && pixelBuffer != NULL), @"create pixel buffer failed.");

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pixelBuffer);
NSParameterAssert(pxdata != NULL);

CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width,
                                             height,
                                             CGImageGetBitsPerComponent(imageRef),
                                             CGImageGetBytesPerRow(imageRef),
                                             colorSpaceRef,
                                             kCGImageAlphaPremultipliedLast);
NSParameterAssert(context);

CGContextDrawImage(context, CGRectMake(0, 0, width, height), imageRef);
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

CGColorSpaceRelease(colorSpaceRef);
CGContextRelease(context);
CGImageRelease(imageRef);
free(buffer2);

//CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
// ...
CVPixelBufferRelease(pixelBuffer);
NOTE - this answer relates to the overall problem with the image, not to the specific code.
This sort of problem is usually the 'stride', and relates to the memory layout used to hold the image, where each row of pixels is not packed tightly together.
As an example, the source image may be 240 pixels wide, while the CVPixelBuffer may allocate 320 pixels for each row, where the first 240 pixels hold the image and the extra 80 pixels are padding.
In this case the width is 240 pixels and the stride is 320 pixels.
Strides usually mean you have to copy each row of pixels over one at a time, in a loop.
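A minimal sketch of such a row-by-row copy, assuming a tightly packed RGBA source in srcPixels with the dimensions width x height (both names are placeholders for your own variables):
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t dstStride = CVPixelBufferGetBytesPerRow(pixelBuffer); // may be larger than width * 4
size_t srcStride = width * 4; // packed RGBA source, 4 bytes per pixel
for (size_t row = 0; row < height; row++) {
    // Copy one row at a time so the destination's padding bytes are skipped.
    memcpy(dst + row * dstStride, srcPixels + row * srcStride, srcStride);
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);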
Use this size everywhere in your code; it rounds the width down to a multiple of 16 pixels (for example, 750 becomes 736), which typically makes each row's byte count match the pixel buffer's row alignment, so no padding is added:
// Round the width down to a multiple of 16 and scale the height
// to preserve the aspect ratio.
int width_16 = (int)yourImage.size.width - (int)yourImage.size.width % 16;
int height_ = (int)(yourImage.size.height / yourImage.size.width * width_16);
CGSize video_size_ = CGSizeMake(width_16, height_);
I had the same problem and I think the solution is the following:
Try changing CGImageGetBytesPerRow(imageRef) to CVPixelBufferGetBytesPerRow(pixelBuffer) in the CGBitmapContextCreate call. The reason is that your context is backed by the raw data of the pixel buffer you created, not by the CGImage you are drawing, and a CVPixelBuffer's bytes-per-row count may be greater than bytes per pixel * pixel buffer width. A corrected version of the call is sketched below.
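Applied to the question's code, the corrected call would look roughly like this:
// Use the pixel buffer's own stride, not the CGImage's, since `pxdata`
// points into the pixel buffer.
CGContextRef context = CGBitmapContextCreate(pxdata,
                                             width,
                                             height,
                                             CGImageGetBitsPerComponent(imageRef),
                                             CVPixelBufferGetBytesPerRow(pixelBuffer),
                                             colorSpaceRef,
                                             kCGImageAlphaPremultipliedLast);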

UIImage from CMSampleBufferRef conversion, resulting UIImage not rendering properly

I am working with AVFoundation, attempting to save a particular output CMSampleBufferRef as a UIImage in some variable. I am using the Manatee Works sample code, which uses kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange for kCVPixelBufferPixelFormatTypeKey:
NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
[captureOutput setVideoSettings:videoSettings];
But when I save the image, the output is just nil, or whatever the background of the image view is. I also tried not setting the output settings and just using the defaults, but to no avail; the image is still not rendered. I also tried setting kCVPixelFormatType_32BGRA, but then Manatee Works stops detecting the barcode.
I am using the context settings from the sample code provided by Apple on the developer website:
// Create a bitmap graphics context with the sample buffer data
CGContextRef context = CGBitmapContextCreate(NULL,
                                             CVPixelBufferGetWidth(imageBuffer),
                                             CVPixelBufferGetHeight(imageBuffer),
                                             8,
                                             0,
                                             CGColorSpaceCreateDeviceRGB(),
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
Can anybody help me with what is going wrong here? It should be simple, but I don't have much experience with the AVFoundation framework. Is this a color space problem, since the context is using CGColorSpaceCreateDeviceRGB()?
I can provide more info if needed. I searched Stack Overflow, and there were many entries regarding this, but none solved my problem.
Is there a reason you are passing 0 for bytesPerRow to CGBitmapContextCreate?
Also, you are passing NULL as the buffer instead of the address of the sample buffer's pixel data.
Creating the bitmap context should look approximately like this when sampleBuffer is your CMSampleBufferRef:
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(baseAddress,
                                             CVPixelBufferGetWidth(imageBuffer),
                                             CVPixelBufferGetHeight(imageBuffer),
                                             8,
                                             CVPixelBufferGetBytesPerRow(imageBuffer),
                                             colorSpace,
                                             kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
// Create a Quartz image from the pixel data in the bitmap graphics context
CGImageRef quartzImage = CGBitmapContextCreateImage(context);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGContextRelease(context);
Here is how I used to do it. The code is written in Swift, but it works.
Note the orientation parameter in the last line; it depends on the video settings.
extension UIImage {
    /**
     Creates a new UIImage from the video frame sample buffer passed.
     - parameter sampleBuffer: the sample buffer to be converted into a UIImage.
     */
    convenience init?(sampleBuffer: CMSampleBufferRef) {
        // Get a CMSampleBuffer's Core Video image buffer for the media data
        let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
        // Lock the base address of the pixel buffer
        CVPixelBufferLockBaseAddress(imageBuffer, 0)
        // Get the base address of the pixel buffer
        let baseAddress = CVPixelBufferGetBaseAddress(imageBuffer)
        // Get the number of bytes per row for the pixel buffer
        let bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer)
        // Get the pixel buffer width and height
        let width = CVPixelBufferGetWidth(imageBuffer)
        let height = CVPixelBufferGetHeight(imageBuffer)
        // Create a device-dependent RGB color space
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        // Create a bitmap graphics context with the sample buffer data
        let bitmap = CGBitmapInfo(CGBitmapInfo.ByteOrder32Little.rawValue | CGImageAlphaInfo.PremultipliedFirst.rawValue)
        let context = CGBitmapContextCreate(baseAddress, width, height, 8,
            bytesPerRow, colorSpace, bitmap)
        // Create a Quartz image from the pixel data in the bitmap graphics context
        let quartzImage = CGBitmapContextCreateImage(context)
        // Unlock the pixel buffer
        CVPixelBufferUnlockBaseAddress(imageBuffer, 0)
        // Create an image object from the Quartz image
        self.init(CGImage: quartzImage, scale: 1, orientation: UIImageOrientation.LeftMirrored)
    }
}
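Usage then reduces to a single call in the capture delegate, for example let image = UIImage(sampleBuffer: sampleBuffer).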
I use this regularly:
UIImage *image = [UIImage imageWithData:[self imageToBuffer:sampleBuffer]];

- (NSData *)imageToBuffer:(CMSampleBufferRef)source {
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(source);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);

    size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
    size_t width = CVPixelBufferGetWidth(imageBuffer);
    size_t height = CVPixelBufferGetHeight(imageBuffer);
    void *src_buff = CVPixelBufferGetBaseAddress(imageBuffer);

    NSData *data = [NSData dataWithBytes:src_buff length:bytesPerRow * height];

    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
    return data;
}
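A caveat: [UIImage imageWithData:] expects encoded image data (PNG, JPEG, and so on) rather than raw pixel bytes, so the snippet above may not produce a drawable image for every format. A hedged alternative that also copes with the biplanar YCbCr format from the question (which a plain RGB bitmap context cannot wrap directly) is to let Core Image do the conversion:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
// CIImage accepts BGRA as well as biplanar 420 YpCbCr pixel buffers.
CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
CIContext *ciContext = [CIContext contextWithOptions:nil];
CGImageRef cgImage = [ciContext createCGImage:ciImage fromRect:[ciImage extent]];
UIImage *image = [UIImage imageWithCGImage:cgImage];
CGImageRelease(cgImage);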

Handling various types of Bitmaps in iOS

ios-converting-uiimage-to-rgba8-bitmaps-and-back is a very good article I found that describes how to deal with bitmap buffers and UIImages. It deals with RGBA32/RGBA8 bitmap images. The bitmap images are created as a char * buffer of size width * height * 4, i.e. each pixel has 4 bytes of data, one byte each for red, green, blue, and alpha. When creating the bitmap image, the bitmap info kCGImageAlphaPremultipliedLast is given, and CGColorSpaceCreateDeviceRGB() is used for converting the bitmap buffer back to a UIImage. By changing the bitmap info, we can also deal with RGBA 24 images.
I need to deal with RGBA 5551 bitmap images: red, green, and blue are each given 5 bits to represent the respective colors, and 1 bit stores the alpha value. If we are creating such a bitmap, how can we allocate the char * buffer? Is it possible to convert it into a UIImage? Any help will be appreciated.
Here BITS_PER_COMPONENT is 5 and BITS_PER_PIXEL is 16. With this code I have succeeded. kCGBitmapByteOrder16Little indicates the byte order of the char buffer.
size_t bufferLength = width * height * 2;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
size_t bitsPerComponent = BITS_PER_COMPONENT;
size_t bitsPerPixel = BITS_PER_PIXEL;
size_t bytesPerRow = 2 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
if (colorSpaceRef == NULL) {
    NSLog(@"Error allocating color space");
    CGDataProviderRelease(provider);
    return nil;
}
CGBitmapInfo bitmapInfo = kCGBitmapByteOrder16Little | kCGImageAlphaNoneSkipFirst;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(width,
                                height,
                                bitsPerComponent,
                                bitsPerPixel,
                                bytesPerRow,
                                colorSpaceRef,
                                bitmapInfo,
                                provider,   // data provider
                                NULL,       // decode
                                YES,        // should interpolate
                                renderingIntent);
uint32_t *pixels = (uint32_t *)malloc(bufferLength);
if (pixels == NULL) {
    NSLog(@"Error: Memory not allocated for bitmap");
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(iref);
    return nil;
}
CGContextRef context = CGBitmapContextCreate(pixels,
                                             width,
                                             height,
                                             bitsPerComponent,
                                             bytesPerRow,
                                             colorSpaceRef,
                                             bitmapInfo);
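The snippet stops after creating the context; presumably it continues roughly as below to get a UIImage back out, following the pattern of the article the question cites (a hedged sketch, not the answerer's exact code):
// Draw the 16-bit image into the context, then wrap the result in a UIImage.
CGContextDrawImage(context, CGRectMake(0, 0, width, height), iref);
CGImageRef outputRef = CGBitmapContextCreateImage(context);
UIImage *result = [UIImage imageWithCGImage:outputRef];
// Release everything created above.
CGImageRelease(outputRef);
CGContextRelease(context);
CGImageRelease(iref);
CGColorSpaceRelease(colorSpaceRef);
CGDataProviderRelease(provider);
free(pixels);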
You need to create a 16-bit image, so provide bitsPerPixel (bpp) as 16 and bitsPerComponent (bpc) as 5; the code is below:
size_t width = CGImageGetWidth(screenShotImageRef);
size_t height = CGImageGetHeight(screenShotImageRef);
size_t bytesPerRow = width * (bpc == 5 ? 2 : 4);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
// `buffer` is the buffer you want the image written into.
CGContextRef bitmapContext = CGBitmapContextCreate(buffer, width, height, bpc, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrderDefault);
CGContextDrawImage(bitmapContext, CGRectMake(0, 0, width, height), screenShotImageRef);
CGContextRelease(bitmapContext);
CGColorSpaceRelease(colorSpace);

Fastest way to render OpenGL texture into CGContext

Here's the question in brief:
For some layer compositing, I have to render an OpenGL texture in a CGContext. What's the fastest way to do that?
Thoughts so far:
Obviously, calling renderInContext won't capture OpenGL content, and glReadPixels is too slow.
For some 'context', I'm calling this method in a delegate class of a layer:
- (void) drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx
I've considered using a CVOpenGLESTextureCache, but that requires an additional rendering, and it seems like some complicated conversion would be necessary post-rendering.
Here's my (terrible) implementation right now:
glBindRenderbuffer(GL_RENDERBUFFER, displayRenderbuffer);

NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
NSInteger dataLength = width * height * 4;
GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

glPixelStorei(GL_PACK_ALIGNMENT, 4);
glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                ref, NULL, true, kCGRenderingIntentDefault);

CGFloat scale = self.contentScaleFactor;
NSInteger widthInPoints = width / scale;
NSInteger heightInPoints = height / scale;

CGContextSetBlendMode(ctx, kCGBlendModeCopy);
CGContextDrawImage(ctx, CGRectMake(0.0, 0.0, widthInPoints, heightInPoints), iref);

// Clean up
free(data);
CFRelease(ref);
CFRelease(colorspace);
CGImageRelease(iref);
For anyone curious, the method shown above is not the fastest way.
When a UIView is asked for its contents, it will ask its layer (usually a CALayer) to draw them for it. The exception: OpenGL-based views, which use a CAEAGLLayer (a subclass of CALayer), use the same method but return nothing. No drawing happens.
So, if you call:
[someUIView.layer drawInContext:someContext];
it will work, while
[someOpenGLView.layer drawInContext:someContext];
won't.
This also becomes an issue if you're asking a superview of any OpenGL-based view for its content: it will recursively ask each of its subviews for theirs, and any subview that uses a CAEAGLLayer will hand back nothing (you'll see a black rectangle).
I set out above to find an implementation of a delegate method of CALayer, drawLayer:inContext:, which I could use in any OpenGL-based views so that the view object itself would provide its contents (rather than the layer). The delegate method is called automatically: Apple expects it to work this way.
Where performance isn't an issue, you can implement a variation of a simple snapshot method in your view. The method would look like this:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    GLint backingWidth, backingHeight;

    glBindRenderbufferOES(GL_RENDERBUFFER_OES, _colorRenderbuffer);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_WIDTH_OES, &backingWidth);
    glGetRenderbufferParameterivOES(GL_RENDERBUFFER_OES, GL_RENDERBUFFER_HEIGHT_OES, &backingHeight);

    NSInteger x = 0, y = 0, width = backingWidth, height = backingHeight;
    NSInteger dataLength = width * height * 4;
    GLubyte *data = (GLubyte *)malloc(dataLength * sizeof(GLubyte));

    // Read pixel data from the framebuffer
    glPixelStorei(GL_PACK_ALIGNMENT, 4);
    glReadPixels(x, y, width, height, GL_RGBA, GL_UNSIGNED_BYTE, data);

    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, data, dataLength, NULL);
    CGColorSpaceRef colorspace = CGColorSpaceCreateDeviceRGB();
    CGImageRef iref = CGImageCreate(width, height, 8, 32, width * 4, colorspace, kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                    ref, NULL, true, kCGRenderingIntentDefault);

    CGContextDrawImage(ctx, self.bounds, iref);

    // Clean up
    CGImageRelease(iref);
    CGColorSpaceRelease(colorspace);
    CGDataProviderRelease(ref);
    free(data);
}
BUT! This is not performance-effective.
glReadPixels, as noted just about everywhere, is not a fast call. Starting in iOS 5, Apple exposed CVOpenGLESTextureCacheRef - basically, a shared buffer that can be used both as a CVPixelBufferRef and as an OpenGL texture. Originally it was designed as a way of getting an OpenGL texture from a video frame; now it's more often used in reverse, to get a video frame from a texture.
So a much better implementation of the above idea is to use the CVPixelBufferRef you get from CVOpenGLESTextureCacheCreateTextureFromImage, get direct access to those pixels, and draw them into a CGImage which you cache and then draw into your context in the delegate method above.
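The answer doesn't show the cache setup itself; the following is a hedged sketch of what it presumably looks like, where eaglContext, width, and height are assumed to exist, and renderTarget matches the pixel buffer used in the rendering code below (the IOSurface attribute is needed so the buffer can back a texture):
// Create the texture cache bound to the GL context.
CVOpenGLESTextureCacheRef textureCache = NULL;
CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL, eaglContext, NULL, &textureCache);

// Create an IOSurface-backed pixel buffer to render into.
NSDictionary *attrs = @{ (id)kCVPixelBufferIOSurfacePropertiesKey : @{} };
CVPixelBufferRef renderTarget = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA,
                    (__bridge CFDictionaryRef)attrs, &renderTarget);

// Wrap the pixel buffer as an OpenGL texture; rendering to this texture
// writes directly into renderTarget's memory. GL_BGRA comes from Apple's
// BGRA8888 texture extension.
CVOpenGLESTextureRef renderTexture = NULL;
CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault, textureCache, renderTarget,
                                             NULL, GL_TEXTURE_2D, GL_RGBA,
                                             (GLsizei)width, (GLsizei)height,
                                             GL_BGRA, GL_UNSIGNED_BYTE, 0, &renderTexture);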
The code follows. On each rendering pass, you draw your texture into the texture cache, which is linked to the CVPixelBufferRef:
- (void)renderToCGImage {
    // Set up the drawing
    [ochrContext useProcessingContext];
    glBindFramebuffer(GL_FRAMEBUFFER, layerRenderingFramebuffer);
    glViewport(0, 0, (int)self.frame.size.width, (int)self.frame.size.height);
    [ochrContext setActiveShaderProgram:layerRenderingShaderProgram];

    // Do the actual drawing
    glActiveTexture(GL_TEXTURE4);
    glBindTexture(GL_TEXTURE_2D, self.inputTexture);
    glUniform1i(layerRenderingInputTextureUniform, 4);
    glVertexAttribPointer(layerRenderingShaderPositionAttribute, 2, GL_FLOAT, 0, 0, kRenderTargetVertices);
    glVertexAttribPointer(layerRenderingShaderTextureCoordinateAttribute, 2, GL_FLOAT, 0, 0, kRenderTextureVertices);

    // Draw and finish up
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
    glFinish();

    // Try running this code asynchronously to improve performance
    dispatch_async(PixelBufferReadingQueue, ^{
        // Lock the base address (can't get the address without locking it).
        CVPixelBufferLockBaseAddress(renderTarget, 0);

        // Get a pointer to the pixels
        uint32_t *pixels = (uint32_t *)CVPixelBufferGetBaseAddress(renderTarget);

        // Wrap the pixel data in a data-provider object.
        CGDataProviderRef pixelWrapper = CGDataProviderCreateWithData(NULL, pixels, CVPixelBufferGetDataSize(renderTarget), NULL);

        // Get a color-space ref... can't this be done only once?
        CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();

        // Release the existing CGImage
        CGImageRelease(currentCGImage);

        // Get a CGImage from the data (the CGImage is used in the drawLayer: delegate method above)
        currentCGImage = CGImageCreate(self.frame.size.width,
                                       self.frame.size.height,
                                       8,
                                       32,
                                       4 * self.frame.size.width,
                                       colorSpaceRef,
                                       kCGBitmapByteOrder32Big | kCGImageAlphaPremultipliedLast,
                                       pixelWrapper,
                                       NULL,
                                       NO,
                                       kCGRenderingIntentDefault);

        // Clean up
        CVPixelBufferUnlockBaseAddress(renderTarget, 0);
        CGDataProviderRelease(pixelWrapper);
        CGColorSpaceRelease(colorSpaceRef);
    });
}
And then implement the delegate method very simply:
- (void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx {
    CGContextDrawImage(ctx, self.bounds, currentCGImage);
}
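One caveat about this design: currentCGImage is replaced on a background queue while drawLayer:inContext: reads it, typically on the main thread, so in practice you would want to synchronize access to it (for example, by dispatching the assignment back to the main queue) to avoid drawing an image that is in the middle of being released.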
