Receive memory warning when I create video from images - iOS

I am creating a video from images and it works well, but unfortunately the application crashes on an iPhone 4s. Please see my code and share your suggestions.
- (void)createMovieFromImages:(NSArray *)images withCompletion:(CEMovieMakerCompletion)completion;
{
    self.completionBlock = completion;
    [self.assetWriter startWriting];
    [self.assetWriter startSessionAtSourceTime:kCMTimeZero];
    dispatch_queue_t mediaInputQueue = dispatch_queue_create("mediaInputQueue", NULL);
    __block NSInteger i = 0;
    NSInteger frameNumber = [images count];
    [self.writerInput requestMediaDataWhenReadyOnQueue:mediaInputQueue usingBlock:^{
        while (YES) {
            if (i >= frameNumber) {
                break;
            }
            if ([self.writerInput isReadyForMoreMediaData]) {
                CVPixelBufferRef sampleBuffer = [self newPixelBufferFromCGImage:[[images objectAtIndex:i] CGImage]];
                if (sampleBuffer) {
                    if (i == 0) {
                        [self.bufferAdapter appendPixelBuffer:sampleBuffer withPresentationTime:kCMTimeZero];
                    } else {
                        CMTime lastTime = CMTimeMake(i - 1, self.frameTime.timescale);
                        CMTime presentTime = CMTimeAdd(lastTime, self.frameTime);
                        [self.bufferAdapter appendPixelBuffer:sampleBuffer withPresentationTime:presentTime];
                    }
                    CFRelease(sampleBuffer);
                    i++;
                }
            }
        }
        [self.writerInput markAsFinished];
        [self.assetWriter finishWritingWithCompletionHandler:^{
            dispatch_async(dispatch_get_main_queue(), ^{
                self.completionBlock(self.fileURL);
            });
        }];
        CVPixelBufferPoolRelease(self.bufferAdapter.pixelBufferPool);
    }];
}
- (CVPixelBufferRef)newPixelBufferFromCGImage:(CGImageRef)image
{
    NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
                             [NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
                             nil];
    CVPixelBufferRef pxbuffer = NULL;
    CGFloat frameWidth = [[self.videoSettings objectForKey:AVVideoWidthKey] floatValue];
    CGFloat frameHeight = [[self.videoSettings objectForKey:AVVideoHeightKey] floatValue];
    CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
                                          frameWidth,
                                          frameHeight,
                                          kCVPixelFormatType_32ARGB,
                                          (__bridge CFDictionaryRef) options,
                                          &pxbuffer);
    NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
    NSParameterAssert(pxdata != NULL);
    CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(pxdata,
                                                 frameWidth,
                                                 frameHeight,
                                                 8,
                                                 4 * frameWidth,
                                                 rgbColorSpace,
                                                 (CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
    NSParameterAssert(context);
    CGContextConcatCTM(context, CGAffineTransformIdentity);
    CGContextDrawImage(context, CGRectMake(0,
                                           0,
                                           CGImageGetWidth(image),
                                           CGImageGetHeight(image)),
                       image);
    CGColorSpaceRelease(rgbColorSpace);
    CGContextRelease(context);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    return pxbuffer;
}
I pass in an array of images and get a video back, but it crashes on the iPhone 4s. Please help.

This issue is becoming more common, for a few reasons:
The iPhone 4S has only 512 MB of DDR2 RAM (per Wikipedia). Between the OS's numerous processes, their demands on the hardware, and the age of the device (wear and tear), an iPhone 4s is unlikely to cope with something as memory-intensive as this.
This question (ios app maximum memory budget) suggests that an app should consume no more than about 200 MB of memory at any given time (to be safe, stay around 120 MB).
To give this a chance of working, move as much non-UI work as possible onto background threads. Your entire - (CVPixelBufferRef)newPixelBufferFromCGImage:(CGImageRef)image method is handled on the main thread, as is your - (void)createMovieFromImages:(NSArray *)images... method.
There is no guarantee that moving these methods onto a background thread will fix the crash, but it is worth trying; a minimal sketch follows the links below. The following questions/answers/links cover some relevant points about threading, and even if you already know them, some of the points are interesting reading for a developer:
GCD - main vs background thread for updating a UIImageView
NSOperation and NSOperationQueue working thread vs main thread
http://pinkstone.co.uk/how-to-execute-a-method-on-a-background-thread-in-ios/
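As a rough illustration (not a guaranteed fix), here is a minimal sketch of what this could look like, using only names from your own code plus a hypothetical movieMaker instance. It starts the export from a background queue, and, as a common companion change that the links above also touch on, drains an @autoreleasepool once per frame inside the writer block so per-frame temporaries are freed as they go instead of accumulating:
// Hypothetical call site: start the export away from the main thread.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    [movieMaker createMovieFromImages:images withCompletion:^(NSURL *fileURL) {
        NSLog(@"Movie written to %@", fileURL); // completion already hops to the main queue
    }];
});

// Inside requestMediaDataWhenReadyOnQueue:usingBlock:, wrap each frame:
if ([self.writerInput isReadyForMoreMediaData]) {
    @autoreleasepool {
        CVPixelBufferRef sampleBuffer =
            [self newPixelBufferFromCGImage:[[images objectAtIndex:i] CGImage]];
        if (sampleBuffer) {
            CMTime presentTime = (i == 0) ? kCMTimeZero
                : CMTimeAdd(CMTimeMake(i - 1, self.frameTime.timescale), self.frameTime);
            [self.bufferAdapter appendPixelBuffer:sampleBuffer withPresentationTime:presentTime];
            CFRelease(sampleBuffer);
            i++;
        }
    }
}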

Related

CGContextDrawImage huge memory peak

I'm developing a movie maker application which applies effects to imported videos.
I'm using AVAssetWriter to implement it.
Everything works very well, but I have a big memory problem:
my app takes over 500 MB of RAM during the buffering process.
The algorithm for making a filtered video is roughly:
1- Import the video.
2- Extract all the frames of the video as CMSampleBuffer objects.
3- Convert each CMSampleBuffer to a UIImage.
4- Apply the filter to the UIImage.
5- Convert the UIImage back to a new CMSampleBuffer.
6- Append the new buffer to a writer output.
7- Finally, save the new movie to the photo gallery.
The problem is in step 5: I have a function which converts a UIImage to a CVPixelBuffer object and returns it.
Then I convert the CVPixelBuffer object to a CMSampleBuffer.
The function increases memory usage a lot and the application crashes at the end.
This is my code:
-(CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize)size
{
double height = CGImageGetHeight(image);
double width = CGImageGetWidth(image);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
if (status != kCVReturnSuccess) {
return NULL;
}
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata,size.width ,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
CGFloat Y ;
if (height == size.height)
Y = 0;
else
Y = (size.height /2) - (height/2) ;
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, Y,width,height), image);
CGContextSetInterpolationQuality(context, kCGInterpolationHigh);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
CGContextDrawImage increases the memory by 2~5 MB per frame conversion.
I tried the following solutions:
1- Releasing pxbuffer using CFRelease.
2- Using CGImageRelease to release the image ref.
3- Surrounding the code with an @autoreleasepool block.
4- Using CGContextRelease.
5- Calling UIGraphicsEndImageContext.
6- Running the Analyze tool in Xcode and fixing everything it flagged.
Here is the full code for Video filtering:
- (void)assetFilteringMethod:(FilterType)filterType AndAssetURL:(NSURL *)assetURL{
CMSampleBufferRef sbuff ;
[areader addOutput:rout];
[areader startReading];
UIImage* bufferedImage;
while ([areader status] != AVAssetReaderStatusCompleted) {
sbuff = [rout copyNextSampleBuffer];
if (sbuff == nil)
[areader cancelReading];
else{
if (writerInput.readyForMoreMediaData) {
@autoreleasepool {
bufferedImage = [self imageFromSampleBuffer:sbuff];
bufferedImage = [FrameFilterClass convertImageToFilterWithFilterType:filterType andImage: bufferedImage];
CVPixelBufferRef buffer = NULL;
buffer = [self pixelBufferFromCGImage:[bufferedImage CGImage] andSize:CGSizeMake(320,240)];
[adaptor appendPixelBuffer:buffer withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sbuff)];
CFRelease(buffer);
CFRelease(sbuff);
}
}
}
}
//Finished buffering
[videoWriter finishWritingWithCompletionHandler:^{
if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted){
dispatch_async(dispatch_get_main_queue(), ^{
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
if ([library
videoAtPathIsCompatibleWithSavedPhotosAlbum:[NSURL fileURLWithPath:moviePath]]) {
[library writeVideoAtPathToSavedPhotosAlbum:[NSURL fileURLWithPath:moviePath]
completionBlock:^(NSURL *assetURL, NSError *error){
}];
}
});
}
else
NSLog(#"Video writing failed: %#", videoWriter.error);
}];
}
I spent around 3 to 4 days trying to solve this problem...
Any help would be appreciated.
You have to release the image using this line:
CGImageRelease(image.CGImage);
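For context, this is the Core Foundation/Core Graphics ownership rule: anything you obtain from a function with Create or Copy in its name is yours to release. A hedged sketch of how that applies inside the conversion loop above; CGImageCreateCopy here is only a stand-in for whatever call hands you a CGImageRef you own:
@autoreleasepool {
    // Hypothetical: a CGImage you created or copied yourself is a +1 reference.
    CGImageRef ownedImage = CGImageCreateCopy(bufferedImage.CGImage);
    CVPixelBufferRef buffer = [self pixelBufferFromCGImage:ownedImage
                                                   andSize:CGSizeMake(320, 240)];
    CGImageRelease(ownedImage);       // balance the Create/Copy once it has been drawn
    if (buffer) {
        [adaptor appendPixelBuffer:buffer
              withPresentationTime:CMSampleBufferGetPresentationTimeStamp(sbuff)];
        CVPixelBufferRelease(buffer); // pixelBufferFromCGImage also follows the Create rule
    }
}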

Memory warning OpenGL iOS Application

I am working on a graphics-rich iOS application. At one point, the memory taken by our application is 250 MB. I take each frame from the camera, process it with OpenGL shaders, and extract some data. Each time I use the camera to get frames for processing I see the memory increase up to 280 MB. When I stop capturing frames, memory returns to normal at 250 MB. If I repeat the process of starting the camera and exiting, say, 10 times, I receive a memory warning (though no memory leak is observed). I am not using ARC here. I maintain an autorelease pool that covers the entire processing of a frame. I don't see any leaks while profiling. After 10 repetitions, the memory seems to settle at 250 MB. I am not sure of the reason for the memory warning. Any insights? I am happy to provide further information. OpenGL version: ES 2.0, iOS version: 7.0.
You have to use ARC; it will automatically release memory that is no longer needed and help keep your application optimized.
According to some other questions like this one (Crash running OpenGL on iOS after memory warning) and this one (instruments with iOS: Why does Memory Monitor disagree with Allocations?) the problem may be that you aren't deleting OpenGL resources (VBOs, textures, renderbuffers, whatever) when you're done with them.
Without seeing code, who knows? Are you simply rendering the frame buffer using the presentRenderbuffer method of EAGLContext? Then, what are you doing with the pixelBuffer you passed to CVOpenGLESTextureCacheCreateTextureFromImage? The pixel buffer is the only source of substantial memory in a typical use scenario.
However, if you're swapping the data in the render buffer to another buffer with, say, glReadPixels, then you've introduced one of several memory hogs. If the buffer you swapped to was a CoreGraphics buffer via, say, a CGDataProvider, did you include a data release callback, or did you pass nil as the parameter when you created the provider? Did you glFlush after you swapped buffers?
These are questions I could answer definitively if you provided code. If you would rather tackle this without doing so, but would like to see working code that successfully manages memory in just about the most arduous use case there is:
https://demonicactivity.blogspot.com/2016/11/tech-serious-ios-developers-use-every.html
For your convenience, I've provided some code below. Place it after any call to the presentRenderbuffer method, commenting out the call if you do not want to render the buffer to the display in the CAEAGLLayer (as I did in the sample below):
// [_context presentRenderbuffer:GL_RENDERBUFFER];
dispatch_async(dispatch_get_main_queue(), ^{
@autoreleasepool {
// To capture the output to an OpenGL render buffer...
NSInteger myDataLength = _backingWidth * _backingHeight * 4;
GLubyte *buffer = (GLubyte *) malloc(myDataLength);
glPixelStorei(GL_UNPACK_ALIGNMENT, 8);
glReadPixels(0, 0, _backingWidth, _backingHeight, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
// To swap the pixel buffer to a CoreGraphics context (as a CGImage)
CGDataProviderRef provider;
CGColorSpaceRef colorSpaceRef;
CGImageRef imageRef;
CVPixelBufferRef pixelBuffer;
@try {
provider = CGDataProviderCreateWithData(NULL, buffer, myDataLength, &releaseDataCallback);
int bitsPerComponent = 8;
int bitsPerPixel = 32;
int bytesPerRow = 4 * _backingWidth;
colorSpaceRef = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
imageRef = CGImageCreate(_backingWidth, _backingHeight, bitsPerComponent, bitsPerPixel, bytesPerRow, colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);
} @catch (NSException *exception) {
NSLog(@"Exception: %@", [exception reason]);
} @finally {
if (imageRef) {
// To convert the CGImage to a pixel buffer (for writing to a file using AVAssetWriter)
pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:imageRef];
// To verify the integrity of the pixel buffer (by converting it back to a CGImage, and then displaying it in a layer)
imageLayer.contents = (__bridge id)[CVCGImageUtil cgImageFromPixelBuffer:pixelBuffer context:_ciContext];
}
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(imageRef);
}
}
});
...
The callback to free the data in the instance of the CGDataProvider class:
static void releaseDataCallback (void *info, const void *data, size_t size) {
free((void*)data);
}
The CVCGImageUtil class interface and implementation files, respectively:
@import Foundation;
@import CoreMedia;
@import CoreGraphics;
@import QuartzCore;
@import CoreImage;
@import UIKit;
@interface CVCGImageUtil : NSObject
+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context;
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image;
+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image;
@end
#import "CVCGImageUtil.h"
@implementation CVCGImageUtil
+ (CGImageRef)cgImageFromPixelBuffer:(CVPixelBufferRef)pixelBuffer context:(CIContext *)context
{
// CVPixelBuffer to CoreImage
CIImage *image = [CIImage imageWithCVPixelBuffer:pixelBuffer];
image = [image imageByApplyingTransform:CGAffineTransformMakeRotation(M_PI)];
CGPoint origin = [image extent].origin;
image = [image imageByApplyingTransform:CGAffineTransformMakeTranslation(-origin.x, -origin.y)];
// CoreImage to CGImage via CoreImage context
CGImageRef cgImage = [context createCGImage:image fromRect:[image extent]];
// CGImage to UIImage (OPTIONAL)
//UIImage *uiImage = [UIImage imageWithCGImage:cgImage];
//return (CGImageRef)uiImage.CGImage;
return cgImage;
}
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
CGSize frameSize = CGSizeMake(CGImageGetWidth(image),
CGImageGetHeight(image));
NSDictionary *options =
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES],
kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES],
kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status =
CVPixelBufferCreate(
kCFAllocatorDefault, frameSize.width, frameSize.height,
kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(
pxdata, frameSize.width, frameSize.height,
8, CVPixelBufferGetBytesPerRow(pxbuffer),
rgbColorSpace,
(CGBitmapInfo)kCGBitmapByteOrder32Little |
kCGImageAlphaPremultipliedFirst);
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
+ (CMSampleBufferRef)sampleBufferFromCGImage:(CGImageRef)image
{
CVPixelBufferRef pixelBuffer = [CVCGImageUtil pixelBufferFromCGImage:image];
CMSampleBufferRef newSampleBuffer = NULL;
CMSampleTimingInfo timimgInfo = kCMTimingInfoInvalid;
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(
NULL, pixelBuffer, &videoInfo);
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
pixelBuffer,
true,
NULL,
NULL,
videoInfo,
&timimgInfo,
&newSampleBuffer);
return newSampleBuffer;
}
@end
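For orientation, a minimal (hypothetical) round trip through the utility class might look like the following; someCGImage is assumed to exist, and ciContext is a CIContext created elsewhere (e.g. [CIContext contextWithOptions:nil]):
CVPixelBufferRef pxbuffer = [CVCGImageUtil pixelBufferFromCGImage:someCGImage];
CGImageRef roundTripped = [CVCGImageUtil cgImageFromPixelBuffer:pxbuffer context:ciContext];
// Both results follow the Create/Copy ownership rule, so release them when finished.
CGImageRelease(roundTripped);
CVPixelBufferRelease(pxbuffer);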

Alpha channel gone when converting UIImage to CVPixelBufferRef

I'm using this code to create a movie from different UIImages with an AVAssetWriter. The code works great, but the problem is that the alpha channel is gone when I add the images to the writer. I can't figure out whether the alpha channel is already missing from the CVPixelBufferRef or whether the AVAssetWriter isn't able to process it.
My end goal isn't a movie with an alpha channel, but multiple images layered on top of each other and merged into a movie file. I can put images on top of other images in a single frame, but all the images (pixel buffers) end up with a black background...
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize) size {
@autoreleasepool {
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, (CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
}
This is never going to work, because H.264 does not support an alpha channel. You cannot encode a movie with an alpha channel using the built-in iOS logic, end of story. It is possible to composite layers before encoding, though. It is also possible to compose and encode with a 3rd-party library that does support an alpha channel. See this question for more info.
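One way to do that compositing, sketched against the pixelBufferFromCGImage:andSize: method above (backgroundImage is a hypothetical CGImageRef; this is an illustration, not a drop-in fix): draw whatever should sit behind the transparent image into the same bitmap context before drawing the image itself, so the flattened result is what gets encoded.
CGRect frame = CGRectMake(0, 0, size.width, size.height);
// Fill with an opaque base colour first (or draw a background image)...
CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
CGContextFillRect(context, frame);
CGContextDrawImage(context, frame, backgroundImage);   // optional bottom layer
// ...then composite the transparent image on top before the buffer is appended.
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);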

AVAssetWriter slowdown

I am using AVAssetWriter to save the live feed from the camera. This works well using this code
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if(videoWriter.status != AVAssetWriterStatusWriting){
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:lastSampleTime];
}
if(adaptor.assetWriterInput.readyForMoreMediaData) [adaptor appendPixelBuffer:imageBuffer withPresentationTime:lastSampleTime];
else NSLog(#"adaptor not ready",);
}
I am usually getting close to 30 fps (however not 60 fps on iPhone 4s as noted by others) and when timing [adaptor appendPixelBuffer] it only takes a few ms.
However, I don't need the full frame, but I need high quality (low compression, key frame every frame) and I am going to read it back a process several times later. I therefore would like to crop the image before writing. Fortunately I only need a strip in the middle so I can do a simple memcpy of the buffer. To do this I am creating a CVPixelBufferRef that I am copying into and writing with the adaptor:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CMTime lastSampleTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
if(videoWriter.status != AVAssetWriterStatusWriting){
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:lastSampleTime];
}
CVPixelBufferLockBaseAddress(imageBuffer,0);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
void * buffIn = CVPixelBufferGetBaseAddress(imageBuffer);
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, nil, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *buffOut = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(buffOut != NULL);
//Copy the whole buffer while testing
memcpy(buffOut, buffIn, width * height * 4);
//memcpy(buffOut, buffIn+sidecrop, width * 100 * 4);
if (adaptor.assetWriterInput.readyForMoreMediaData) [adaptor appendPixelBuffer:pxbuffer withPresentationTime:lastSampleTime];
else NSLog(#"adaptor not ready");
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
}
This also works and the video looks OK. However, it is very slow and the frame rate becomes unacceptable. Strangely, the big slowdown isn't the copying: the [adaptor appendPixelBuffer] step now takes 10-100 times longer than before. So I guess it doesn't like the pxbuffer I create, but I can't see why. I am using kCVPixelFormatType_32BGRA when setting up both the video out and the adaptor.
Can anyone suggest a better way to do the copying/cropping? Can you do that directly on the ImageBuffer?
I found a solution. In iOS 5 (I had missed the updates) you can have AVAssetWriter crop your video (as also noted by Steve): set AVVideoScalingModeKey to AVVideoScalingModeResizeAspectFill:
videoWriter = [[AVAssetWriter alloc] initWithURL:filmurl
fileType:AVFileTypeQuickTimeMovie
error:&error];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:1280], AVVideoWidthKey,
[NSNumber numberWithInt:200], AVVideoHeightKey,
AVVideoScalingModeResizeAspectFill, AVVideoScalingModeKey,// This turns the
// scale into a crop
nil];
videoWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
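If you do still need to copy pixels by hand (for example to crop a strip that AVVideoScalingModeResizeAspectFill cannot express), another thing worth trying, offered here only as an untested sketch, is to take the destination buffer from the adaptor's own pixelBufferPool rather than calling CVPixelBufferCreate per frame; pool buffers are typically IOSurface-backed and tend to append much faster:
// Sketch: reuse the adaptor's pool (only non-nil after startWriting/startSession).
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault,
                                                     adaptor.pixelBufferPool,
                                                     &pxbuffer);
if (status == kCVReturnSuccess && pxbuffer != NULL) {
    CVPixelBufferLockBaseAddress(pxbuffer, 0);
    void *buffOut = CVPixelBufferGetBaseAddress(pxbuffer);
    // Copy as in the question; in practice copy row by row and respect
    // CVPixelBufferGetBytesPerRow() of both buffers, which may differ from width * 4.
    memcpy(buffOut, buffIn, width * height * 4);
    CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
    if (adaptor.assetWriterInput.readyForMoreMediaData)
        [adaptor appendPixelBuffer:pxbuffer withPresentationTime:lastSampleTime];
    CVPixelBufferRelease(pxbuffer);
}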

CVOpenGLESTextureCacheCreateTextureFromImage on iPad 2 is too slow, it needs almost 30 ms

I use OpenGL ES to display BGR24 data on an iPad. I am new to OpenGL ES, so for the video-display part I use code from RosyWriter, an Apple sample. It works, but the CVOpenGLESTextureCacheCreateTextureFromImage call costs more than 30 ms, while in RosyWriter
its cost is negligible.
What I do is first convert BGR24 to the BGRA pixel format, then create a CVPixelBufferRef with CVPixelBufferCreateWithBytes, and then get a CVOpenGLESTextureRef via CVOpenGLESTextureCacheCreateTextureFromImage. My code is as follows:
- (void)transformBGRToBGRA:(const UInt8 *)pict width:(int)width height:(int)height
{
rgb.data = (void *)pict;
vImage_Error error = vImageConvert_RGB888toARGB8888(&rgb,NULL,0,&argb,NO,kvImageNoFlags);
if (error != kvImageNoError) {
NSLog(#"vImageConvert_RGB888toARGB8888 error");
}
const uint8_t permuteMap[4] = {1,2,3,0};
error = vImagePermuteChannels_ARGB8888(&argb,&bgra,permuteMap,kvImageNoFlags);
if (error != kvImageNoError) {
NSLog(#"vImagePermuteChannels_ARGB8888 error");
}
free((void *)pict);
}
After the conversion I generate a CVPixelBufferRef, as follows:
[self transformBGRToBGRA:pict width:width height:height];
CVPixelBufferRef pixelBuffer;
CVReturn err = CVPixelBufferCreateWithBytes(NULL,
width,
height,
kCVPixelFormatType_32BGRA,
(void*)bgraData,
bytesByRow,
NULL,
0,
NULL,
&pixelBuffer);
if(!pixelBuffer || err)
{
NSLog(#"CVPixelBufferCreateWithBytes failed (error: %d)", err);
return;
}
CVOpenGLESTextureRef texture = NULL;
err = CVOpenGLESTextureCacheCreateTextureFromImage(kCFAllocatorDefault,
videoTextureCache,
pixelBuffer,
NULL,
GL_TEXTURE_2D,
GL_RGBA,
width,
height,
GL_BGRA,
GL_UNSIGNED_BYTE,
0,
&texture);
if (!texture || err) {
NSLog(#"CVOpenGLESTextureCacheCreateTextureFromImage failed (error: %d)", err);
CVPixelBufferRelease(pixelBuffer);
return;
}
The rest of the code is almost the same as the RosyWriter sample, including the shaders. So I want to know why this happens
and how to fix this problem.
After researching this for a few days, I found out why CVOpenGLESTextureCacheCreateTextureFromImage takes so much time: when the data is big (here about 3 MB), the allocation, copy, and move operations are costly, the copy especially. Using a pixel buffer pool greatly improves the performance of CVOpenGLESTextureCacheCreateTextureFromImage, from 30 ms down to 5 ms, the same level as glTexImage2D(). My solution is as follows:
NSMutableDictionary* attributes;
attributes = [NSMutableDictionary dictionary];
[attributes setObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA] forKey:(NSString*)kCVPixelBufferPixelFormatTypeKey];
[attributes setObject:[NSNumber numberWithInt:videoWidth] forKey: (NSString*)kCVPixelBufferWidthKey];
[attributes setObject:[NSNumber numberWithInt:videoHeight] forKey: (NSString*)kCVPixelBufferHeightKey];
CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (CFDictionaryRef) attributes, &bufferPool);
CVPixelBufferPoolCreatePixelBuffer (NULL,bufferPool,&pixelBuffer);
CVPixelBufferLockBaseAddress(pixelBuffer,0);
UInt8 * baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer);
memcpy(baseAddress, bgraData, bytesByRow * videoHeight);
CVPixelBufferUnlockBaseAddress(pixelBuffer,0);
With this newly created pixelBuffer the upload is fast.
Adding the following configuration to the attributes dictionary brings the performance to its best, less than 1 ms:
NSDictionary *IOSurfaceProperties = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], #"IOSurfaceOpenGLESFBOCompatibility",[NSNumber numberWithBool:YES], #"IOSurfaceOpenGLESTextureCompatibility",nil];
[attributes setObject:IOSurfaceProperties forKey:(NSString*)kCVPixelBufferIOSurfacePropertiesKey];
