I am using this code to create CVPixelBufferRef:
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
AVVideoWidthKey: [NSNumber numberWithInt:size.width],
AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
self.writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
self.adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.writerInput sourcePixelBufferAttributes:nil];
CVPixelBufferRef buffer;
CVPixelBufferPoolCreatePixelBuffer(NULL, self.adaptor.pixelBufferPool, &buffer);
buffer = [self pixelBufferFromCGImage:[frame CGImage] size:self.videoSize];
This is the pixelBufferFromCGImage function:
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
size:(CGSize)imageSize
{
NSDictionary *options = @{(id)kCVPixelBufferCGImageCompatibilityKey: @YES,
(id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, imageSize.width,
imageSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, imageSize.width,
imageSize.height, 8, 4*imageSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0 + (imageSize.width-CGImageGetWidth(image))/2,
(imageSize.height-CGImageGetHeight(image))/2,
CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
The problem is that the buffer is always empty. Any idea why this happens?
I guess the buffer is always nil because the "videoSize" is not valid. I have tested your code in 3 scenarios, and here are the results.
In the first one I passed valid parameters, an image and a size of 200x200. The buffer was not nil.
In the second one I passed nil for the image and a video size of 200x200. The buffer was not nil.
In the last one I passed a valid image but an invalid video size of 0x0, and the buffer was nil. The return status is kCVReturnInvalidArgument (value -6661): "Invalid function parameter. For example, out of range or the wrong type."
Hope this will help you.
This is the code that I tested.
- (void)viewDidLoad
{
[super viewDidLoad];
UIImage *image = [UIImage imageNamed:@"Doge-Meme.jpg"];
CVPixelBufferRef bufferRef = [[self class] pixelBufferFromCGImage:image.CGImage size:CGSizeMake(200, 200)];
NSLog(@"## %@", bufferRef); //not nil
CVPixelBufferRef bufferRef1 = [[self class] pixelBufferFromCGImage:nil size:CGSizeMake(200, 200)];
NSLog(@"## %@", bufferRef1); //not nil
CVPixelBufferRef bufferRef2 = [[self class] pixelBufferFromCGImage:image.CGImage size:CGSizeMake(0, 0)];
NSLog(@"## %@", bufferRef2); //nil
}
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
size:(CGSize)imageSize
{
NSDictionary *options = @{(id)kCVPixelBufferCGImageCompatibilityKey: @YES,
(id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, imageSize.width,
imageSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, imageSize.width,
imageSize.height, 8, 4*imageSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0 + (imageSize.width-CGImageGetWidth(image))/2,
(imageSize.height-CGImageGetHeight(image))/2,
CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
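Given that diagnosis, a cheap guard before CVPixelBufferCreate turns the silent failure into an explicit one, rather than relying on NSParameterAssert (which is compiled out of release builds). A minimal sketch, assuming the zero-size cause above; the early return is my suggestion, not part of the original answer:
//Hypothetical guard at the top of pixelBufferFromCGImage:size:.
//CVPixelBufferCreate rejects zero dimensions with kCVReturnInvalidArgument (-6661).
if (imageSize.width < 1 || imageSize.height < 1) {
NSLog(@"Invalid pixel buffer size %@", NSStringFromCGSize(imageSize));
return NULL;
}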
Related question:
I'm converting UIImages to MP4 using HJImagesToVideo (source code from GitHub), but I found that it may have a memory leak: when converting more than 200 images, there is a memory warning and then a crash. The source code is here:
+ (void)writeImageAsMovie:(NSArray *)array
toPath:(NSString *)path
size:(CGSize)size
fps:(int)fps
animateTransitions:(BOOL)shouldAnimateTransitions
withCallbackBlock:(SuccessBlock)callbackBlock
{
NSLog(@"%@", path);
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
fileType:AVFileTypeMPEG4
error:&error];
if (error)
{
if (callbackBlock)
{
callbackBlock(NO);
}
return;
}
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
AVVideoWidthKey: [NSNumber numberWithInt:size.width],
AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer;
CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &buffer);
CMTime presentTime = CMTimeMake(0, fps);
int i = 0;
while (1)
{
if(writerInput.readyForMoreMediaData)
{
presentTime = CMTimeMake(i, fps);
if (i >= [array count])
{
buffer = NULL;
}
else
{
buffer = [HJImagesToVideo pixelBufferFromCGImage:[array[i] CGImage] size:CGSizeMake(480, 320)];
}
if (buffer)
{
//append buffer
BOOL appendSuccess = [HJImagesToVideo appendToAdapter:adaptor
pixelBuffer:buffer
atTime:presentTime
withInput:writerInput];
NSAssert(appendSuccess, @"Failed to append");
if (shouldAnimateTransitions && i + 1 < array.count)
{
//Create time each fade frame is displayed
CMTime fadeTime = CMTimeMake(1, fps*TransitionFrameCount);
//Add a delay, causing the base image to have more show time before fade begins.
for (int b = 0; b < FramesToWaitBeforeTransition; b++)
{
presentTime = CMTimeAdd(presentTime, fadeTime);
}
//Adjust fadeFrameCount so that the number and curve of the fade frames and their alpha stay consistent
NSInteger framesToFadeCount = TransitionFrameCount - FramesToWaitBeforeTransition;
//Apply fade frames
for (double j = 1; j < framesToFadeCount; j++)
{
buffer = [HJImagesToVideo crossFadeImage:[array[i] CGImage]
toImage:[array[i + 1] CGImage]
atSize:CGSizeMake(480, 320)
withAlpha:j/framesToFadeCount];
BOOL appendSuccess = [HJImagesToVideo appendToAdapter:adaptor
pixelBuffer:buffer
atTime:presentTime
withInput:writerInput];
presentTime = CMTimeAdd(presentTime, fadeTime);
NSAssert(appendSuccess, @"Failed to append");
}
}
i++;
}
else
{
//Finish the session:
[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(@"Successfully closed video writer");
if (videoWriter.status == AVAssetWriterStatusCompleted) {
if (callbackBlock) {
callbackBlock(YES);
}
} else {
if (callbackBlock) {
callbackBlock(NO);
}
}
}];
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
//CVPixelBufferRelease(buffer);
NSLog(@"Done");
break;
}
}
}
}
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
size:(CGSize)imageSize
{
NSDictionary *options = @{(id)kCVPixelBufferCGImageCompatibilityKey: @YES,
(id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, imageSize.width,
imageSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, imageSize.width,
imageSize.height, 8, 4*imageSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0 + (imageSize.width-CGImageGetWidth(image))/2,
(imageSize.height-CGImageGetHeight(image))/2,
CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
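One plausible source of the growth, judging from the listing above (a hedged reading, not a confirmed fix from the thread): every buffer returned by pixelBufferFromCGImage: carries a +1 retain count from CVPixelBufferCreate, and the write loop reassigns buffer each frame without releasing the previous one; the cross-fade loop does the same. A minimal sketch of the append step with the release added, reusing the names from the code above:
@autoreleasepool {
//CVPixelBufferCreate hands back a +1 reference, so release each frame
//once it has been appended; the pool bounds per-frame CGImage temporaries.
CVPixelBufferRef frameBuffer = [HJImagesToVideo pixelBufferFromCGImage:[array[i] CGImage] size:CGSizeMake(480, 320)];
if (frameBuffer) {
BOOL appendSuccess = [HJImagesToVideo appendToAdapter:adaptor pixelBuffer:frameBuffer atTime:presentTime withInput:writerInput];
CVPixelBufferRelease(frameBuffer); //the missing release
NSAssert(appendSuccess, @"Failed to append");
}
}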
I'm building an app using AVFoundation.
Just before I call [assetWriterInput appendSampleBuffer:sampleBuffer] in the
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
method, I manipulate the pixels in the sample buffer (using a pixel buffer to apply an effect).
The client also wants text (a timestamp & frame counter) on the frames, but I haven't found a way to do this yet.
I tried to convert the sample buffer to a UIImage, draw the text on the image, and convert the image back to a sample buffer, but then
CMSampleBufferDataIsReady(sampleBuffer)
fails.
Here are my UIImage category methods:
+ (UIImage *) imageFromSampleBuffer:(CMSampleBufferRef) sampleBuffer
{
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0);
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0); //balance the lock taken above
UIImage *newUIImage = [UIImage imageWithCGImage:newImage];
CFRelease(newImage);
return newUIImage;
}
And
- (CMSampleBufferRef) cmSampleBuffer
{
CGImageRef image = self.CGImage;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
self.size.width,
self.size.height,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, self.size.width,
self.size.height, 8, 4*self.size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
CMVideoFormatDescriptionRef videoInfo = NULL;
CMSampleBufferRef sampleBuffer = NULL;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault,
pxbuffer, true, NULL, NULL, videoInfo, NULL, &sampleBuffer);
return sampleBuffer;
}
Any ideas?
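For reference (an observation from reading the method above, not from the original thread): CMSampleBufferCreateForImageBuffer expects a video format description and sample timing, and cmSampleBuffer passes NULL for both, which would plausibly explain why the resulting buffer reports itself as not ready. A minimal sketch of the missing pieces, with an assumed 30 fps frame duration:
//Create a format description that matches the pixel buffer, plus timing.
CMVideoFormatDescriptionRef videoInfo = NULL;
CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, pxbuffer, &videoInfo);
CMSampleTimingInfo timing = { CMTimeMake(1, 30), kCMTimeZero, kCMTimeInvalid }; //duration, presentation time, decode time
CMSampleBufferRef sampleBuffer = NULL;
CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, pxbuffer, true, NULL, NULL, videoInfo, &timing, &sampleBuffer);
CFRelease(videoInfo);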
EDIT:
I changed my code with Tony's answer. (Thank you!)
This code works:
CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
EAGLContext *eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
CIContext *ciContext = [CIContext contextWithEAGLContext:eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
UIFont *font = [UIFont fontWithName:@"Helvetica" size:40];
NSDictionary *attributes = @{NSFontAttributeName: font,
NSForegroundColorAttributeName: [UIColor lightTextColor]};
UIImage *img = [UIImage imageFromText:@"01 - 13/02/2014 15:18:21:654" withAttributes:attributes];
CIImage *filteredImage = [[CIImage alloc] initWithCGImage:img.CGImage];
[ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);
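One small caveat with the snippet above (my observation): the CGColorSpaceCreateDeviceRGB() created inline in the render call is never released, so each processed frame leaks one color space. A hedged tweak:
CGColorSpaceRef cs = CGColorSpaceCreateDeviceRGB();
[ciContext render:filteredImage toCVPixelBuffer:pixelBuffer bounds:[filteredImage extent] colorSpace:cs];
CGColorSpaceRelease(cs); //create once, release when done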
You should refer to the CIFunHouse sample from Apple; you may use this API to draw directly to the buffer:
-(void)render:(CIImage *)image toCVPixelBuffer:(CVPixelBufferRef)buffer bounds:(CGRect)r colorSpace:(CGColorSpaceRef)cs
You can download it here: WWDC2013.
Create the context
_eaglContext = [[EAGLContext alloc] initWithAPI:kEAGLRenderingAPIOpenGLES2];
_ciContext = [CIContext contextWithEAGLContext:_eaglContext options:@{kCIContextWorkingColorSpace : [NSNull null]}];
Now render the image
CVPixelBufferRef renderedOutputPixelBuffer = NULL;
CVReturn err = CVPixelBufferPoolCreatePixelBuffer(nil, self.pixelBufferAdaptor.pixelBufferPool, &renderedOutputPixelBuffer);
[_ciContext render:filteredImage toCVPixelBuffer:renderedOutputPixelBuffer bounds:[filteredImage extent] colorSpace:CGColorSpaceCreateDeviceRGB()];
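To complete the loop (a sketch; pixelBufferAdaptor is from the code above, while presentTime is an assumed presentation timestamp maintained by the caller), the rendered buffer still needs to be appended and released:
[self.pixelBufferAdaptor appendPixelBuffer:renderedOutputPixelBuffer withPresentationTime:presentTime];
CVPixelBufferRelease(renderedOutputPixelBuffer); //pool buffers come back with a +1 retain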
I have been able to create a playing .mov by following the various tutorials on Stack Overflow.
However, my image contains alpha.
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = (CGBitmapInfo) kCGImageAlphaNoneSkipLast;
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
bitmapInfo);
This produces an incorrect result (the original post included a screenshot of the output here).
I have a feeling it's due to the bitmapInfo, but when I use kCGImageAlphaLast my app crashes with this error:
CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 32 bits/pixel; 3-component color space; kCGImageAlphaLast; 688 bytes/row.
I'm hoping there is an easy solution; I'm not sure what I'm missing.
Here is my full pixelBufferFromCGImage code:
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image {
CGSize size = CGSizeMake(self.imageWidth, self.imageHeight);
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
size.width,
size.height,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef) options,
&pxbuffer);
if (status != kCVReturnSuccess){
NSLog(@"Failed to create pixel buffer");
}
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGBitmapInfo bitmapInfo = (CGBitmapInfo) kCGImageAlphaLast;
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
bitmapInfo);
//kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
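For what it's worth (a suggestion, not a confirmed answer from this thread): Core Graphics does not support non-premultiplied kCGImageAlphaLast for 8-bit RGB bitmap contexts, which is exactly what the error message says. A combination it does accept, and that keeps an alpha channel, is premultiplied BGRA. A hedged sketch of the two lines to change:
//Assumed fix: request a BGRA pixel buffer instead of ARGB...
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32BGRA, (__bridge CFDictionaryRef)options, &pxbuffer);
//...and create the context with a supported, alpha-preserving combination,
//using the buffer's real bytes-per-row rather than assuming 4*width:
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace, kCGImageAlphaPremultipliedFirst | kCGBitmapByteOrder32Little);
Note that whether the alpha actually survives into the .mov also depends on the codec; plain H.264 has no alpha channel.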
I'm building an iOS app that does some basic detection.
I get the raw frames from AVCaptureVideoDataOutput, convert the CMSampleBufferRef to a UIImage, resize the UIImage, then convert it to a CVPixelBufferRef.
As far as I can tell with Instruments, the leak is in the last part, where I convert the CGImage to a CVPixelBufferRef.
Here's the code I use:
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
videof = [[ASMotionDetect alloc] initWithSampleImage:[self resizeSampleBuffer:sampleBuffer]];
// ASMotionDetect is my class for detection and I use videof to calculate the movement
}
-(UIImage*)resizeSampleBuffer:(CMSampleBufferRef) sampleBuffer {
UIImage *img;
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer,0); // Lock the image buffer
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddressOfPlane(imageBuffer, 0); // Get information of the image
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef newContext = CGBitmapContextCreate(baseAddress, width, height, 8, bytesPerRow, colorSpace, kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
CGImageRef newImage = CGBitmapContextCreateImage(newContext);
CGContextRelease(newContext);
CGColorSpaceRelease(colorSpace);
CVPixelBufferUnlockBaseAddress(imageBuffer,0);
/* CVBufferRelease(imageBuffer); */ // do not call this!
img = [UIImage imageWithCGImage:newImage];
CGImageRelease(newImage);
newContext = nil;
img = [self resizeImageToSquare:img];
return img;
}
-(UIImage*)resizeImageToSquare:(UIImage*)_temp {
UIImage *img;
int w = _temp.size.width;
int h = _temp.size.height;
CGRect rect;
if (w>h) {
rect = CGRectMake((w-h)/2,0,h,h);
} else {
rect = CGRectMake(0, (h-w)/2, w, w);
}
//
img = [self crop:_temp inRect:rect];
return img;
}
-(UIImage*) crop:(UIImage*)image inRect:(CGRect)rect{
UIImage *sourceImage = image;
CGRect selectionRect = rect;
CGRect transformedRect = TransformCGRectForUIImageOrientation(selectionRect, sourceImage.imageOrientation, sourceImage.size);
CGImageRef resultImageRef = CGImageCreateWithImageInRect(sourceImage.CGImage, transformedRect);
UIImage *resultImage = [[UIImage alloc] initWithCGImage:resultImageRef scale:1.0 orientation:image.imageOrientation];
CGImageRelease(resultImageRef);
return resultImage;
}
And in my detection class I have:
- (id)initWithSampleImage:(UIImage*)sampleImage {
if ((self = [super init])) {
_frame = new CVMatOpaque();
_histograms = new CVMatNDOpaque[kGridSize *
kGridSize];
[self extractFrameFromImage:sampleImage];
}
return self;
}
- (void)extractFrameFromImage:(UIImage*)sampleImage {
CGImageRef imageRef = [sampleImage CGImage];
CVImageBufferRef imageBuffer = [self pixelBufferFromCGImage:imageRef];
CVPixelBufferLockBaseAddress(imageBuffer, 0);
// Collect some information required to extract the frame.
void *baseAddress = CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
// Extract the frame, convert it to grayscale, and shove it in _frame.
cv::Mat frame(height, width, CV_8UC4, baseAddress, bytesPerRow);
cv::cvtColor(frame, frame, CV_BGR2GRAY);
_frame->matrix = frame;
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CGImageRelease(imageRef);
}
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image
{
CVPixelBufferRef pxbuffer = NULL;
int width = CGImageGetWidth(image)*2;
int height = CGImageGetHeight(image)*2;
NSMutableDictionary *attributes = [NSMutableDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, [NSNumber numberWithInt:width], kCVPixelBufferWidthKey, [NSNumber numberWithInt:height], kCVPixelBufferHeightKey, nil];
CVPixelBufferPoolRef pixelBufferPool;
CVReturn theError = CVPixelBufferPoolCreate(kCFAllocatorDefault, NULL, (__bridge CFDictionaryRef) attributes, &pixelBufferPool);
NSParameterAssert(theError == kCVReturnSuccess);
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, width,
height, 8, width*4, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
/* here is the problem: */
CGContextDrawImage(context, CGRectMake(0, 0, width, height), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
With Instruments I found out that the problem is with the CVPixelBufferRef allocations, but I don't understand why. Can someone see the problem?
Thank you
In -pixelBufferFromCGImage:, neither pxbuffer nor pixelBufferPool is released. That makes sense for pxbuffer, as it is the return value, but not for pixelBufferPool: you create and leak one pool per call of the method.
A quick fix would be to:
- Release pixelBufferPool in -pixelBufferFromCGImage:
- Release pxbuffer (the return value of -pixelBufferFromCGImage:) in -extractFrameFromImage:
You should also rename -pixelBufferFromCGImage: to -createPixelBufferFromCGImage: to make clear that it returns a retained object.
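A sketch of those fixes applied, reusing the names from the code above (the exact release placement is my reading of the suggestion, not code from the thread):
//In the renamed -createPixelBufferFromCGImage:, once the buffer exists,
//the pool can be released; outstanding buffers keep it alive internally.
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(NULL, pixelBufferPool, &pxbuffer);
CVPixelBufferPoolRelease(pixelBufferPool); //was leaked once per call
//...and in -extractFrameFromImage:, balance the +1 return value:
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
CVPixelBufferRelease(imageBuffer);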
Can somebody help me trace these CoreVideo memory leaks found when running Instruments in Xcode?
Basically, the memory leak happens when I press the "Record Video" button on my custom motion-JPEG player. I cannot tell exactly which part of my code is leaking, as the Leaks instrument is not pointing to any of my calls. BTW, I'm using an iPad device to test for the leaks.
Here are the messages from the Leaks instrument:
Responsible Library = CoreVideo
Responsible Frame:
CVPixelBufferBacking::initWithPixelBufferDescription(..)
CVObjectAlloc(...)
CVBuffer::init()
Here's my code that handles each motion-JPEG frame streamed by the server:
-(void)processServerData:(NSData *)data{
//render the video in the UIImage control
UIImage *image = [UIImage imageWithData:data];
self.imageCtrl.image = image;
//check if we are recording
if (myRecorder.isRecording) {
//create initial sample: todo:check if this is still needed
if (counter==0) {
self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
CVPixelBufferPoolCreatePixelBuffer (NULL, myRecorder.adaptor.pixelBufferPool, &buffer);
if(buffer)
{
CVBufferRelease(buffer);
}
}
if (counter < myRecorder.maxFrames)
{
if([myRecorder.writerInput isReadyForMoreMediaData])
{
CMTime frameTime = CMTimeMake(1, myRecorder.timeScale);
CMTime lastTime=CMTimeMake(counter, myRecorder.timeScale);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
[myRecorder.adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
if(buffer)
{
CVBufferRelease(buffer);
}
counter++;
if (counter==myRecorder.maxFrames)
{
[myRecorder finishSession];
counter=0;
myRecorder.isRecording = NO;
}
}
else
{
NSLog(@"adaptor not ready counter=%d ", counter);
}
}
}
}
Here's the pixelBufferFromCGImage function:
+ (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image size:(CGSize) size{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
Appreciate any help! Thanks
I refactored the processFrame method and I'm no longer getting the leaks.
-(void) processFrame:(UIImage *) image {
if (myRecorder.frameCounter < myRecorder.maxFrames)
{
if([myRecorder.writerInput isReadyForMoreMediaData])
{
CMTime frameTime = CMTimeMake(1, myRecorder.timeScale);
CMTime lastTime=CMTimeMake(myRecorder.frameCounter, myRecorder.timeScale);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
if(buffer)
{
[myRecorder.adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
myRecorder.frameCounter++;
CVBufferRelease(buffer);
if (myRecorder.frameCounter==myRecorder.maxFrames)
{
[myRecorder finishSession];
myRecorder.frameCounter=0;
myRecorder.isRecording = NO;
}
}
else
{
NSLog(@"Buffer is empty");
}
}
else
{
NSLog(@"adaptor not ready frameCounter=%d ", myRecorder.frameCounter);
}
}
}
I don't see anything too obvious. I did notice that you use both self.buffer and buffer here. If the property is retained, you might be leaking there: if CVPixelBufferPoolCreatePixelBuffer allocates a buffer into the ivar on the second line after self.buffer has retained one on the first line, the first buffer might be leaking.
self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
CVPixelBufferPoolCreatePixelBuffer (NULL, myRecorder.adaptor.pixelBufferPool, &buffer);
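A hedged sketch of that point, using the names from the code above: the counter == 0 block stores a +1 buffer in the retained self.buffer property and then immediately overwrites the backing ivar via the pool create, so the first buffer can never be released.
//The suspected leak, made explicit:
self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize]; //+1 buffer retained by the property
CVPixelBufferPoolCreatePixelBuffer(NULL, myRecorder.adaptor.pixelBufferPool, &buffer); //writes the ivar directly: the first buffer leaks
//One fix is to CVBufferRelease(self.buffer) before the overwrite; the
//refactored processFrame above goes further and drops this block entirely.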
Hope that helps.