Custom video size causes error in AVAssetWriter - iOS

I used the code below to render a simple video with a red rectangle. Everything works fine with _CanvasSize = CGSizeMake(320, 200);. However, the video tears if I change the size to _CanvasSize = CGSizeMake(321, 200); or (100, 100).
Does anyone know why, and which size should I choose? (I use Xcode 7.3.1 with the iOS 9 SDK.)
NSString *fileNameOut = @"temp.mp4";
NSString *directoryOut = @"tmp/";
NSString *outFile = [NSString stringWithFormat:@"%@%@", directoryOut, fileNameOut];
NSString *path = [NSHomeDirectory() stringByAppendingPathComponent:[NSString stringWithFormat:@"%@", outFile]];
NSURL *videoTempURL = [NSURL fileURLWithPath:[NSString stringWithFormat:@"%@%@", NSTemporaryDirectory(), fileNameOut]];
// WARNING: AVAssetWriter does not overwrite files for us, so remove the destination file if it already exists
NSFileManager *fileManager = [NSFileManager defaultManager];
[fileManager removeItemAtPath:[videoTempURL path] error:NULL];
CGSize _CanvasSize;// = CGSizeMake(size.width, size.height);
NSError *error = nil;
NSInteger FPS = 30;
AVAssetWriter* VIDCtrl = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path] fileType:AVFileTypeMPEG4 error:&error];
if (!VIDCtrl || error)
{
NSLog(#"Can NOT Create Video Writer");
return;
}
_CanvasSize = CGSizeMake(321, 200);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:_CanvasSize.width], AVVideoWidthKey,
[NSNumber numberWithInt:_CanvasSize.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([VIDCtrl canAddInput:writerInput]);
[VIDCtrl addInput:writerInput];
[VIDCtrl startWriting];
[VIDCtrl startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
double ftime = 600.0 / FPS;
double currenttime = 0;
double frametime = 1.0 / FPS;
int i = 0;
while (1)
{
// Check if the writer is ready for more data, if not, just wait
if(writerInput.readyForMoreMediaData){
CMTime frameTime = CMTimeMake(ftime, 600);
// CMTime = Value and Timescale.
// Timescale = the number of ticks per second you want
// Value is the number of ticks
// For us, each frame we add will be 1/FPS of a second (1/30 here)
// Apple recommends 600 ticks per second for video because it is a
// multiple of the standard video rates 24, 30, 60 fps, etc.
CMTime lastTime=CMTimeMake(i*ftime, 600);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
if (i == 0) {presentTime = CMTimeMake(0, 600);}
// This ensures the first frame starts at 0.
buffer = NULL;
if (i < 30)
{
NSLog(#"%d %d",i, presentTime.value);
CGSize sz = _CanvasSize;
int height = sz.height, width = sz.width;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
if (!pxbuffer)
{
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, width,
height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
}
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
NSUInteger bytesPerPixel = 4;
NSUInteger bytesPerRow = bytesPerPixel * sz.width;
NSUInteger bitsPerComponent = 8;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef gc = CGBitmapContextCreate(pxdata, sz.width, sz.height,
bitsPerComponent, bytesPerRow, colorSpace,
kCGImageAlphaNoneSkipFirst);
UIGraphicsPushContext(gc);
CGContextTranslateCTM(gc, 0, sz.height);
CGContextScaleCTM(gc, 1.0, -1.0);
CGContextSetFillColorWithColor(gc, [UIColor whiteColor].CGColor);
CGContextFillRect(gc, (CGRect){0,0,sz});
CGContextSetStrokeColorWithColor(gc, [UIColor redColor].CGColor);
CGContextStrokeRect(gc, CGRectMake(10, 10, 30, 30));
CGColorSpaceRelease(colorSpace);
CGContextRelease(gc);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
buffer = pxbuffer;
i++;
}
currenttime+=frametime;
if (buffer)
{
// Give the CGImage to the AVAssetWriter to add to your video
[adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
// CVBufferRelease(buffer);
CFRelease(buffer);
}
else
{
//Finish the session:
// This is important to be done exactly in this order
[writerInput markAsFinished];
// WARNING: finishWriting in the solution above is deprecated.
// You now need to give a completion handler.
[VIDCtrl finishWritingWithCompletionHandler:^{
NSLog(#"Finished writing...checking completion status...");
if (VIDCtrl.status != AVAssetWriterStatusFailed && VIDCtrl.status == AVAssetWriterStatusCompleted)
{
NSLog(#"Video writing succeeded To %#",path);
} else
{
NSLog(#"Video writing failed: %#", VIDCtrl.error);
}
}]; // end videoWriter finishWriting Block
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
NSLog (#"Done");
break;
}
}
}
This is the 320 x 200 canvas (renders correctly):
This is the 321 x 200 canvas (even 100 x 100 shows the same tearing):

Okay, after a day of testing: the width of the video should be divisible by 16 (32, 144, 320, 480, 1280, 1920, etc.).
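For reference, here is a minimal sketch of how that finding can be applied (the helper name is hypothetical, and the rest of the rendering code is assumed unchanged): round the requested canvas width up to the next multiple of 16 before handing it to the writer and the pixel buffers.
static CGSize RoundedCanvasSize(CGSize requested)
{
    size_t width = (size_t)requested.width;
    width = ((width + 15) / 16) * 16;   // e.g. 321 -> 336, 100 -> 112
    return CGSizeMake(width, requested.height);
}
// Usage:
// _CanvasSize = RoundedCanvasSize(CGSizeMake(321, 200)); // the writer gets 336 x 200
The answers further down this page point at the underlying cause (a bytes-per-row mismatch between the bitmap context and the pixel buffer), so querying CVPixelBufferGetBytesPerRow, as shown there, is the more general fix.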

Related

is it possible to set GIF image with video?

I am trying to combine a video with a GIF image. For this I am using MainCompositionInst.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer]; and I set the GIF image in the video layer, but unfortunately it does not animate. So my question is: is it possible to do this? Please suggest how.
Thanks in advance.
Apple's support for GIF is fairly limited.
You could use this code to convert from GIF to video:
(With the current code the GIF will be cropped to 480x480. For some resolutions the output image's colors are distorted, so try to use a fixed frame size that you know works.)
Usage:
#import "SCGIFConverter.h"
NSURL *tempFileURL = //create a NSURL to a tempfile for output
[SCGIFConverter processGIFData:data toFilePath:tempFileURL completed:^(NSString *outputFilePath, NSError *error)
{
//Now you can access your tempFileURL to read the movie
//outputFilePath can be 'nil' if there was a problem
}];
SCGIFConverter.h
FOUNDATION_EXTERN NSString * const kGIF2MP4ConversionErrorDomain;
typedef enum {
kGIF2MP4ConversionErrorInvalidGIFImage = 0,
kGIF2MP4ConversionErrorAlreadyProcessing,
kGIF2MP4ConversionErrorBufferingFailed,
kGIF2MP4ConversionErrorInvalidResolution,
kGIF2MP4ConversionErrorTimedOut,
} kGIF2MP4ConversionError;
typedef void (^kGIF2MP4ConversionCompleted) (NSString* outputFilePath, NSError* error);
@interface SCGIFConverter : NSObject
+ (BOOL) processGIFData: (NSData*) data
toFilePath: (NSURL*) outFilePath
completed: (kGIF2MP4ConversionCompleted)handler;
@end
SCGIFConverter.m
#import <AVFoundation/AVFoundation.h>
#import <ImageIO/ImageIO.h>
#import <MobileCoreServices/MobileCoreServices.h>
#import "SCGIFConverter.h"
#define FPS 30
NSString * const kGIF2MP4ConversionErrorDomain = @"GIF2MP4ConversionError";
@implementation SCGIFConverter
+ (BOOL) processGIFData: (NSData*) data
toFilePath: (NSURL*) outFilePath
completed: (kGIF2MP4ConversionCompleted) completionHandler {
[[NSFileManager defaultManager] removeItemAtURL:outFilePath error:nil];
CGImageSourceRef source = CGImageSourceCreateWithData((__bridge CFDataRef)data, NULL);
CGImageMetadataRef meta = CGImageSourceCopyMetadataAtIndex(source, 0, NULL);
NSLog(#"%#",meta);
unsigned char *bytes = (unsigned char*)data.bytes;
NSError* error = nil;
if( CGImageSourceGetStatus(source) != kCGImageStatusComplete ) {
error = [NSError errorWithDomain: kGIF2MP4ConversionErrorDomain
code: kGIF2MP4ConversionErrorInvalidGIFImage
userInfo: nil];
CFRelease(source);
completionHandler(outFilePath.absoluteString, error);
return NO;
}
size_t sourceWidth = bytes[6] + (bytes[7]<<8), sourceHeight = bytes[8] + (bytes[9]<<8);
sourceWidth = 480;
sourceHeight = 480;
//size_t sourceFrameCount = CGImageSourceGetCount(source);
__block size_t currentFrameNumber = 0;
__block Float64 totalFrameDelay = 0.f;
AVAssetWriter* videoWriter = [[AVAssetWriter alloc] initWithURL: outFilePath
fileType: AVFileTypeQuickTimeMovie
error: &error];
if( error ) {
CFRelease(source);
completionHandler(outFilePath.absoluteString, error);
return NO;
}
if( sourceWidth > 6400 || sourceWidth == 0) {
CFRelease(source);
error = [NSError errorWithDomain: kGIF2MP4ConversionErrorDomain
code: kGIF2MP4ConversionErrorInvalidResolution
userInfo: nil];
completionHandler(outFilePath.absoluteString, error);
return NO;
}
if( sourceHeight > 4800 || sourceHeight == 0 ) {
CFRelease(source);
error = [NSError errorWithDomain: kGIF2MP4ConversionErrorDomain
code: kGIF2MP4ConversionErrorInvalidResolution
userInfo: nil];
completionHandler(outFilePath.absoluteString, error);
return NO;
}
size_t totalFrameCount = CGImageSourceGetCount(source);
if( totalFrameCount <= 0 ) {
CFRelease(source);
error = [NSError errorWithDomain: kGIF2MP4ConversionErrorDomain
code: kGIF2MP4ConversionErrorInvalidGIFImage
userInfo: nil];
completionHandler(outFilePath.absoluteString, error);
return NO;
}
NSDictionary *videoSettings = @{
AVVideoCodecKey : AVVideoCodecH264,
AVVideoWidthKey : @(sourceWidth),
AVVideoHeightKey : @(sourceHeight)
};
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType: AVMediaTypeVideo
outputSettings: videoSettings];
videoWriterInput.expectsMediaDataInRealTime = YES;
NSAssert([videoWriter canAddInput: videoWriterInput], @"Video writer can not add video writer input");
[videoWriter addInput: videoWriterInput];
NSDictionary* attributes = @{
(NSString*)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32ARGB),
(NSString*)kCVPixelBufferWidthKey : @(sourceWidth),
(NSString*)kCVPixelBufferHeightKey : @(sourceHeight),
(NSString*)kCVPixelBufferCGImageCompatibilityKey : @YES,
(NSString*)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES
};
AVAssetWriterInputPixelBufferAdaptor* adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput: videoWriterInput
sourcePixelBufferAttributes: attributes];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime: CMTimeMakeWithSeconds(0, FPS)];
while(YES) {
if( videoWriterInput.isReadyForMoreMediaData ) {
#if DEBUG
//NSLog(#"Drawing frame %lu/%lu", currentFrameNumber, totalFrameCount);
#endif
NSDictionary* options = @{(NSString*)kCGImageSourceTypeIdentifierHint : (id)kUTTypeGIF};
CGImageRef imgRef = CGImageSourceCreateImageAtIndex(source, currentFrameNumber, (__bridge CFDictionaryRef)options);
if( imgRef ) {
CFDictionaryRef propertiesT = CGImageSourceCopyProperties(source, NULL);
CFDictionaryRef properties = CGImageSourceCopyPropertiesAtIndex(source, currentFrameNumber, NULL);
CFDictionaryRef gifProperties = CFDictionaryGetValue(properties, kCGImagePropertyGIFDictionary);
if( gifProperties ) {
CVPixelBufferRef pxBuffer = [self newBufferFrom: imgRef
withPixelBufferPool: adaptor.pixelBufferPool
andAttributes: adaptor.sourcePixelBufferAttributes];
if( pxBuffer ) {
NSNumber* delayTime = CFDictionaryGetValue(gifProperties, kCGImagePropertyGIFDelayTime);
if (currentFrameNumber!=0) {
totalFrameDelay += delayTime.floatValue;
}
CMTime time = CMTimeMakeWithSeconds(totalFrameDelay, FPS);
if( ![adaptor appendPixelBuffer: pxBuffer withPresentationTime: time] ) {
NSLog(#"Could not save pixel buffer!: %#", videoWriter.error);
CFRelease(properties);
CGImageRelease(imgRef);
CVBufferRelease(pxBuffer);
break;
}
CVBufferRelease(pxBuffer);
}
}
if( properties ) CFRelease(properties);
CGImageRelease(imgRef);
currentFrameNumber++;
}
else {
//was no image returned -> end of file?
[videoWriterInput markAsFinished];
void (^videoSaveFinished)(void) = ^{
AVAssetWriter * retainedVideoWriter = videoWriter;
completionHandler(outFilePath.absoluteString, nil);
retainedVideoWriter = nil;
};
if( [videoWriter respondsToSelector: @selector(finishWritingWithCompletionHandler:)]) {
[videoWriter finishWritingWithCompletionHandler: videoSaveFinished];
}
else {
[videoWriter finishWriting];
videoSaveFinished();
}
break;
}
}
else {
//NSLog(#"Was not ready...");
[NSThread sleepForTimeInterval: 0.1];
}
};
CFRelease(source);
return YES;
};
+ (CVPixelBufferRef) newBufferFrom: (CGImageRef) frame
withPixelBufferPool: (CVPixelBufferPoolRef) pixelBufferPool
andAttributes: (NSDictionary*) attributes {
NSParameterAssert(frame);
size_t width = 480;//CGImageGetWidth(frame);
size_t height = 480;//CGImageGetHeight(frame);
size_t frameHeight = height;
size_t frameWidth = CGImageGetWidth(frame)*height/CGImageGetHeight(frame);
if (frameWidth<width) {
frameWidth = width;
frameHeight = CGImageGetHeight(frame)*width/CGImageGetWidth(frame);
}
CGFloat relax = 0.12;
if (frameWidth>width) {
CGFloat factor = MAX(width/frameWidth,1-relax);
frameWidth*=factor;
}
if (frameHeight>height) {
CGFloat factor = MAX(height/frameHeight,1-relax);
frameHeight*=factor;
}
size_t bpc = 8;
CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
CVPixelBufferRef pxBuffer = NULL;
CVReturn status = kCVReturnSuccess;
if( pixelBufferPool )
status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pxBuffer);
else {
status = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)attributes, &pxBuffer);
}
NSAssert(status == kCVReturnSuccess, @"Could not create a pixel buffer");
CVPixelBufferLockBaseAddress(pxBuffer, 0);
void *pxData = CVPixelBufferGetBaseAddress(pxBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pxBuffer);
CGContextRef context = CGBitmapContextCreate(pxData,
width,
height,
bpc,
bytesPerRow,
colorSpace,
kCGImageAlphaNoneSkipFirst);
NSAssert(context, @"Could not create a context");
CGContextDrawImage(context,
CGRectMake(-(frameWidth-(CGFloat)width)/2, -(frameHeight-(CGFloat)height)/2, frameWidth, frameHeight), frame);
CVPixelBufferUnlockBaseAddress(pxBuffer, 0);
CGContextRelease(context);
CGColorSpaceRelease(colorSpace);
return pxBuffer;
}
@end

Make Video From Image iOS

I found this tutorial http://codethink.no-ip.org/wordpress/archives/673#comment-118063 via the SO question Screen capture video in iOS programmatically on how to do something like this. It was a bit outdated for iOS, so I renewed it, and I am very close to having it work, but putting the UIImages together just isn't quite working right now.
Here is how I call the method in viewDidLoad
[captureView performSelector:@selector(startRecording) withObject:nil afterDelay:1.0];
[captureView performSelector:@selector(stopRecording) withObject:nil afterDelay:5.0];
and captureView is an IBOutlet connected to my view.
And then I have the class ScreenCapture.h & .m
Here is .h
@protocol ScreenCaptureViewDelegate <NSObject>
- (void) recordingFinished:(NSString*)outputPathOrNil;
@end
@interface ScreenCaptureView : UIView {
//video writing
AVAssetWriter *videoWriter;
AVAssetWriterInput *videoWriterInput;
AVAssetWriterInputPixelBufferAdaptor *avAdaptor;
//recording state
BOOL _recording;
NSDate* startedAt;
void* bitmapData;
}
//for recording video
- (bool) startRecording;
- (void) stopRecording;
//for accessing the current screen and adjusting the capture rate, etc.
@property(retain) UIImage* currentScreen;
@property(assign) float frameRate;
@property(nonatomic, assign) id<ScreenCaptureViewDelegate> delegate;
@end
And here is my .m
@interface ScreenCaptureView(Private)
- (void) writeVideoFrameAtTime:(CMTime)time;
@end
@implementation ScreenCaptureView
@synthesize currentScreen, frameRate, delegate;
- (void) initialize {
// Initialization code
self.clearsContextBeforeDrawing = YES;
self.currentScreen = nil;
self.frameRate = 10.0f; // 10 frames per second
_recording = false;
videoWriter = nil;
videoWriterInput = nil;
avAdaptor = nil;
startedAt = nil;
bitmapData = NULL;
}
- (id) initWithCoder:(NSCoder *)aDecoder {
self = [super initWithCoder:aDecoder];
if (self) {
[self initialize];
}
return self;
}
- (id) init {
self = [super init];
if (self) {
[self initialize];
}
return self;
}
- (id)initWithFrame:(CGRect)frame {
self = [super initWithFrame:frame];
if (self) {
[self initialize];
}
return self;
}
- (CGContextRef) createBitmapContextOfSize:(CGSize) size {
CGContextRef context = NULL;
CGColorSpaceRef colorSpace;
int bitmapByteCount;
int bitmapBytesPerRow;
bitmapBytesPerRow = (size.width * 4);
bitmapByteCount = (bitmapBytesPerRow * size.height);
colorSpace = CGColorSpaceCreateDeviceRGB();
if (bitmapData != NULL) {
free(bitmapData);
}
bitmapData = malloc( bitmapByteCount );
if (bitmapData == NULL) {
fprintf (stderr, "Memory not allocated!");
return NULL;
}
context = CGBitmapContextCreate (bitmapData,
size.width,
size.height,
8, // bits per component
bitmapBytesPerRow,
colorSpace,
(CGBitmapInfo) kCGImageAlphaNoneSkipFirst);
CGContextSetAllowsAntialiasing(context,NO);
if (context== NULL) {
free (bitmapData);
fprintf (stderr, "Context not created!");
return NULL;
}
CGColorSpaceRelease( colorSpace );
return context;
}
static int frameCount = 0; //debugging
- (void) drawRect:(CGRect)rect {
NSDate* start = [NSDate date];
CGContextRef context = [self createBitmapContextOfSize:self.frame.size];
//not sure why this is necessary...image renders upside-down and mirrored
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
CGContextConcatCTM(context, flipVertical);
[self.layer renderInContext:context];
CGImageRef cgImage = CGBitmapContextCreateImage(context);
UIImage* background = [UIImage imageWithCGImage: cgImage];
CGImageRelease(cgImage);
self.currentScreen = background;
//debugging
if (frameCount < 40) {
NSString* filename = [NSString stringWithFormat:@"Documents/frame_%d.png", frameCount];
NSString* pngPath = [NSHomeDirectory() stringByAppendingPathComponent:filename];
[UIImagePNGRepresentation(self.currentScreen) writeToFile: pngPath atomically: YES];
frameCount++;
}
//NOTE: to record a scrollview while it is scrolling you need to implement your UIScrollViewDelegate such that it calls
// 'setNeedsDisplay' on the ScreenCaptureView.
if (_recording) {
float millisElapsed = [[NSDate date] timeIntervalSinceDate:startedAt] * 1000.0;
[self writeVideoFrameAtTime:CMTimeMake((int)millisElapsed, 1000)];
}
float processingSeconds = [[NSDate date] timeIntervalSinceDate:start];
float delayRemaining = (1.0 / self.frameRate) - processingSeconds;
CGContextRelease(context);
//redraw at the specified framerate
[self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:delayRemaining > 0.0 ? delayRemaining : 0.01];
}
- (void) cleanupWriter {
avAdaptor = nil;
videoWriterInput = nil;
videoWriter = nil;
startedAt = nil;
if (bitmapData != NULL) {
free(bitmapData);
bitmapData = NULL;
}
}
- (void)dealloc {
[self cleanupWriter];
}
- (NSURL*) tempFileURL {
NSString* outputPath = [[NSString alloc] initWithFormat:@"%@/%@", [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0], @"output.mp4"];
NSURL* outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
NSFileManager* fileManager = [NSFileManager defaultManager];
if ([fileManager fileExistsAtPath:outputPath]) {
NSError* error;
if ([fileManager removeItemAtPath:outputPath error:&error] == NO) {
NSLog(#"Could not delete old recording file at path: %#", outputPath);
}
}
return outputURL;
}
-(BOOL) setUpWriter {
NSError* error = nil;
videoWriter = [[AVAssetWriter alloc] initWithURL:[self tempFileURL] fileType:AVFileTypeQuickTimeMovie error:&error];
NSParameterAssert(videoWriter);
//Configure video
NSDictionary* videoCompressionProps = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithDouble:1024.0*1024.0], AVVideoAverageBitRateKey,
nil ];
NSDictionary* videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:self.frame.size.width], AVVideoWidthKey,
[NSNumber numberWithInt:self.frame.size.height], AVVideoHeightKey,
videoCompressionProps, AVVideoCompressionPropertiesKey,
nil];
videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
NSParameterAssert(videoWriterInput);
videoWriterInput.expectsMediaDataInRealTime = YES;
NSDictionary* bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
avAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput sourcePixelBufferAttributes:bufferAttributes];
//add input
[videoWriter addInput:videoWriterInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:CMTimeMake(0, 1000)];
return YES;
}
- (void) completeRecordingSession {
[videoWriterInput markAsFinished];
// Wait for the video
int status = videoWriter.status;
while (status == AVAssetWriterStatusUnknown) {
NSLog(#"Waiting...");
[NSThread sleepForTimeInterval:0.5f];
status = videoWriter.status;
}
@synchronized(self) {
[videoWriter finishWritingWithCompletionHandler:^{
[self cleanupWriter];
BOOL success = YES;
id delegateObj = self.delegate;
NSString *outputPath = [[NSString alloc] initWithFormat:@"%@/%@", [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0], @"output.mp4"];
NSURL *outputURL = [[NSURL alloc] initFileURLWithPath:outputPath];
NSLog(#"Completed recording, file is stored at: %#", outputURL);
if ([delegateObj respondsToSelector:#selector(recordingFinished:)]) {
[delegateObj performSelectorOnMainThread:#selector(recordingFinished:) withObject:(success ? outputURL : nil) waitUntilDone:YES];
}
}];
}
}
- (bool) startRecording {
bool result = NO;
@synchronized(self) {
if (! _recording) {
result = [self setUpWriter];
startedAt = [NSDate date];
_recording = true;
}
}
return result;
}
- (void) stopRecording {
@synchronized(self) {
if (_recording) {
_recording = false;
[self completeRecordingSession];
}
}
}
-(void) writeVideoFrameAtTime:(CMTime)time {
if (![videoWriterInput isReadyForMoreMediaData]) {
NSLog(#"Not ready for video data");
}
else {
@synchronized (self) {
UIImage *newFrame = self.currentScreen;
CVPixelBufferRef pixelBuffer = NULL;
CGImageRef cgImage = CGImageCreateCopy([newFrame CGImage]);
CFDataRef image = CGDataProviderCopyData(CGImageGetDataProvider(cgImage));
int status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, avAdaptor.pixelBufferPool, &pixelBuffer);
if(status != 0){
//could not get a buffer from the pool
NSLog(#"Error creating pixel buffer: status=%d", status);
}
// set image data into pixel buffer
CVPixelBufferLockBaseAddress( pixelBuffer, 0 );
uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
CFDataGetBytes(image, CFRangeMake(0, CFDataGetLength(image)), destPixels); //XXX: will work if the pixel buffer is contiguous and has the same bytesPerRow as the input data
if(status == 0){
BOOL success = [avAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time];
if (!success)
NSLog(#"Warning: Unable to write buffer to video");
}
//clean up
CVPixelBufferUnlockBaseAddress( pixelBuffer, 0 );
CVPixelBufferRelease( pixelBuffer );
CFRelease(image);
CGImageRelease(cgImage);
}
}
}
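The NOTE inside drawRect about recording a scroll view while it scrolls refers to a UIScrollViewDelegate callback roughly like the following (a minimal sketch, assuming captureView is the ScreenCaptureView outlet and its owner is the scroll view's delegate):
- (void)scrollViewDidScroll:(UIScrollView *)scrollView {
    // Force the capture view to redraw, and therefore record a frame,
    // every time the scroll view moves.
    [captureView setNeedsDisplay];
}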
As you can see in the drawRect method, I save all the images for debugging, and they look great. But when I try to make the video, it produces only a single skewed-looking still frame, even though the saved pictures themselves look normal (not slanted and all weird).
My question is: what is going wrong when the video is being made?
Thanks for the help and your time, I know this is a long question.
I found this post after having the same issue, with certain resolutions causing the exact same video effect, when I wanted to create a CVPixelBufferRef from a CGImageRef (coming from a UIImage).
The very short answer in my case was that I had hard-wired the bytes per row to be 4 times the width, which used to work all the time! Now I query the CVPixelBuffer itself to get this value and, poof, problem solved!
Code that created the problem was this:
CGContextRef context = CGBitmapContextCreate(pxdata, w, h, 8, 4*w, rgbColorSpace, bitMapInfo);
Code that fixed the problem was this:
CGContextRef context = CGBitmapContextCreate(
pxdata, w, h,
8, CVPixelBufferGetBytesPerRow(pxbuffer),
rgbColorSpace,bitMapInfo);
And in both cases, bitMapInfo was set as follows:
CGBitmapInfo bitMapInfo = kCGImageAlphaPremultipliedFirst; // According to Apple's doc, this is safe: June 26, 2014
Pixel buffer adaptors only work with certain pixel dimensions, so you will probably need to change the size of your images. You can imagine what is happening in your video: the writer is trying to write your, let's say, 361x241 images into a 360x240 space. Each row starts with the last pixel of the previous row, so the picture ends up diagonally skewed like you see. Check the Apple docs for supported dimensions; I believe I used 480x320 and it is supported. You can use this method to resize your images:
+(UIImage *)scaleImage:(UIImage*)image toSize:(CGSize)newSize {
CGRect scaledImageRect = CGRectZero;
// Aspect-fit the image inside newSize and center it
CGFloat aspectWidth = newSize.width / image.size.width;
CGFloat aspectHeight = newSize.height / image.size.height;
CGFloat aspectRatio = MIN(aspectWidth, aspectHeight);
scaledImageRect.size.width = image.size.width * aspectRatio;
scaledImageRect.size.height = image.size.height * aspectRatio;
scaledImageRect.origin.x = (newSize.width - scaledImageRect.size.width) / 2.0f;
scaledImageRect.origin.y = (newSize.height - scaledImageRect.size.height) / 2.0f;
UIGraphicsBeginImageContextWithOptions(newSize, NO, 0);
[image drawInRect:scaledImageRect];
UIImage* scaledImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return scaledImage;
}
I think this is because the pixelBuffer bytes per row does not match the UIImage bytes per row. In my case (iPhone 6, iOS 8.3) the UIImage is 568 x 320 and CFDataGetLength is 727040, so the bytes per row is 2272. But the pixelBuffer bytes per row is 2304. I think this extra 32 bytes is padding so that the pixelBuffer's bytes per row is divisible by 64. How you force the pixelBuffer to match the input data, or vice versa, across all devices, I'm not sure yet.
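One way around that mismatch (a minimal sketch, not part of the original answer, assuming the CGImage's own data is tightly packed at 4 bytes per pixel) is to copy into the pixel buffer row by row instead of with a single CFDataGetBytes call, so each destination row starts at the buffer's padded stride; cgImage, image, and pixelBuffer are the variables already created in writeVideoFrameAtTime: above.
CVPixelBufferLockBaseAddress(pixelBuffer, 0);
uint8_t *destPixels = CVPixelBufferGetBaseAddress(pixelBuffer);
size_t destBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); // e.g. 2304 (padded)
size_t srcBytesPerRow = CGImageGetBytesPerRow(cgImage);            // e.g. 2272 (tight)
const uint8_t *srcPixels = CFDataGetBytePtr(image);
size_t rowCount = CGImageGetHeight(cgImage);
for (size_t row = 0; row < rowCount; row++) {
    // Copy one row at a time so the pixel buffer's per-row padding is skipped over.
    memcpy(destPixels + row * destBytesPerRow,
           srcPixels + row * srcBytesPerRow,
           MIN(srcBytesPerRow, destBytesPerRow));
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);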
I've suffered a lot with this case. I tried many ways to create a video from an image array, but the result was almost the same as yours.
The problem was in the CVPixelBuffer: the buffer I created from the image was not correct.
But finally I got it working.
Main function to create a video at a path from an array.
You just have to input the array of images and the fps, and the size can be equal to the size of the images (if you want).
fps = number of images in the array / desired duration
for example: fps = 90 / 3 = 30
- (void)getVideoFrom:(NSArray *)array
toPath:(NSString*)path
size:(CGSize)size
fps:(int)fps
withCallbackBlock:(void (^) (BOOL))callbackBlock
{
NSLog(#"%#", path);
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
fileType:AVFileTypeMPEG4
error:&error];
if (error) {
if (callbackBlock) {
callbackBlock(NO);
}
return;
}
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecTypeH264,
AVVideoWidthKey: [NSNumber numberWithInt:size.width],
AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer;
CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &buffer);
CMTime presentTime = CMTimeMake(0, fps);
int i = 0;
while (1)
{
if(writerInput.readyForMoreMediaData){
presentTime = CMTimeMake(i, fps);
if (i >= [array count]) {
buffer = NULL;
} else {
buffer = [self pixelBufferFromCGImage:[array[i] CGImage] size:CGSizeMake(480, 320)];
}
if (buffer) {
//append buffer
BOOL appendSuccess = [self appendToAdapter:adaptor
pixelBuffer:buffer
atTime:presentTime
withInput:writerInput];
NSAssert(appendSuccess, @"Failed to append");
i++;
} else {
//Finish the session:
[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(#"Successfully closed video writer");
if (videoWriter.status == AVAssetWriterStatusCompleted) {
if (callbackBlock) {
callbackBlock(YES);
}
} else {
if (callbackBlock) {
callbackBlock(NO);
}
}
}];
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
NSLog (#"Done");
break;
}
}
}
}
Function to get CVPixelBuffer from CGImage
-(CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image size:(CGSize)imageSize
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace,
(int)kCGImageAlphaNoneSkipFirst);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image), CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
Function to append to adapter
-(BOOL)appendToAdapter:(AVAssetWriterInputPixelBufferAdaptor*)adaptor
pixelBuffer:(CVPixelBufferRef)buffer
atTime:(CMTime)presentTime
withInput:(AVAssetWriterInput*)writerInput
{
while (!writerInput.readyForMoreMediaData) {
usleep(1);
}
return [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
}

How to fix CVPixelBuffer memory leak

I convert UIImages to mp4 using HJImagesToVideo (source code from GitHub), but I found it may have a memory leak: when converting more than 200 images there is a memory warning and then a crash. The source code is here:
+ (void)writeImageAsMovie:(NSArray *)array
toPath:(NSString *)path
size:(CGSize)size
fps:(int)fps
animateTransitions:(BOOL)shouldAnimateTransitions
withCallbackBlock:(SuccessBlock)callbackBlock
{
NSLog(#"%#", path);
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
fileType:AVFileTypeMPEG4
error:&error];
if (error)
{
if (callbackBlock)
{
callbackBlock(NO);
}
return;
}
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = @{AVVideoCodecKey: AVVideoCodecH264,
AVVideoWidthKey: [NSNumber numberWithInt:size.width],
AVVideoHeightKey: [NSNumber numberWithInt:size.height]};
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer;
CVPixelBufferPoolCreatePixelBuffer(NULL, adaptor.pixelBufferPool, &buffer);
CMTime presentTime = CMTimeMake(0, fps);
int i = 0;
while (1)
{
if(writerInput.readyForMoreMediaData)
{
presentTime = CMTimeMake(i, fps);
if (i >= [array count])
{
buffer = NULL;
}
else
{
buffer = [HJImagesToVideo pixelBufferFromCGImage:[array[i] CGImage] size:CGSizeMake(480, 320)];
}
if (buffer)
{
//append buffer
BOOL appendSuccess = [HJImagesToVideo appendToAdapter:adaptor
pixelBuffer:buffer
atTime:presentTime
withInput:writerInput];
NSAssert(appendSuccess, @"Failed to append");
if (shouldAnimateTransitions && i + 1 < array.count)
{
//Create time each fade frame is displayed
CMTime fadeTime = CMTimeMake(1, fps*TransitionFrameCount);
//Add a delay, causing the base image to have more show time before fade begins.
for (int b = 0; b < FramesToWaitBeforeTransition; b++)
{
presentTime = CMTimeAdd(presentTime, fadeTime);
}
//Adjust fadeFrameCount so that the number and curve of the fade frames and their alpha stay consistant
NSInteger framesToFadeCount = TransitionFrameCount - FramesToWaitBeforeTransition;
//Apply fade frames
for (double j = 1; j < framesToFadeCount; j++)
{
buffer = [HJImagesToVideo crossFadeImage:[array[i] CGImage]
toImage:[array[i + 1] CGImage]
atSize:CGSizeMake(480, 320)
withAlpha:j/framesToFadeCount];
BOOL appendSuccess = [HJImagesToVideo appendToAdapter:adaptor
pixelBuffer:buffer
atTime:presentTime
withInput:writerInput];
presentTime = CMTimeAdd(presentTime, fadeTime);
NSAssert(appendSuccess, @"Failed to append");
}
}
i++;
}
else
{
//Finish the session:
[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(#"Successfully closed video writer");
if (videoWriter.status == AVAssetWriterStatusCompleted) {
if (callbackBlock) {
callbackBlock(YES);
}
} else {
if (callbackBlock) {
callbackBlock(NO);
}
}
}];
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
//CVPixelBufferRelease(buffer);
NSLog (#"Done");
break;
}
}
}
}
+ (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
size:(CGSize)imageSize
{
NSDictionary *options = @{(id)kCVPixelBufferCGImageCompatibilityKey: @YES,
(id)kCVPixelBufferCGBitmapContextCompatibilityKey: @YES};
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, imageSize.width,
imageSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, imageSize.width,
imageSize.height, 8, 4*imageSize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextDrawImage(context, CGRectMake(0 + (imageSize.width-CGImageGetWidth(image))/2,
(imageSize.height-CGImageGetHeight(image))/2,
CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}

Capturing the screen with AVAssetWriter-- Works fine on simulator but on device creates black video

I'm trying to capture the frame buffer data and convert it into a video for my iPhone game.
I'm using AVAssetWriter to accomplish this.
The code works fine on the simulator but not on the device itself; on the device it generates a black video.
I'm using the following code:
//This code initializes the AVAsetWriter and other things
- (void) testVideoWriter {
CGRect screenBoundst = [[UIScreen mainScreen] bounds];
//initialize global info
MOVIE_NAME = #"Documents/Movie5.mp4";
//CGSize size = CGSizeMake(screenBoundst.size.width, screenBoundst.size.height);
CGSize size = CGSizeMake(320, 480);
frameLength = CMTimeMake(1, 5);
currentTime = kCMTimeZero;
currentFrame = 0;
MOVIE_PATH = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
NSError *error = nil;
videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:MOVIE_PATH]
fileType:AVFileTypeMPEG4 error:&error];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width ], AVVideoWidthKey,
[NSNumber numberWithInt:size.height ], AVVideoHeightKey, nil];
writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
NSDictionary *sourcePixelBufferAttributesDictionary = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32BGRA],
kCVPixelBufferPixelFormatTypeKey, nil];
adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:
writerInput sourcePixelBufferAttributes:sourcePixelBufferAttributesDictionary];
[adaptor retain];
CGAffineTransform flipVertical = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
flipVertical = CGAffineTransformRotate(flipVertical,(90.0*3.14f/180.0f));
[writerInput setTransform:flipVertical];
[videoWriter addInput:writerInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
VIDEO_WRITER_IS_READY = true;
}
//this code capture the screen data
- (void) captureScreenVideo {
if (!writerInput.readyForMoreMediaData) {
return;
}
CGRect screenBounds = [[UIScreen mainScreen] bounds];
NSLog(#"width : %f Height : %f",screenBounds.size.width,screenBounds.size.height);
CGSize esize = CGSizeMake(screenBounds.size.width, screenBounds.size.height);
NSInteger myDataLength = esize.width * esize.height * 4;
GLuint *buffer = (GLuint *) malloc(myDataLength);
glReadPixels(0, 0, esize.width, esize.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
CVPixelBufferRef pixel_buffer = NULL;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, esize.width,
esize.height, kCVPixelFormatType_32BGRA, (CFDictionaryRef) options,
&pixel_buffer);
NSParameterAssert(status == kCVReturnSuccess && pixel_buffer != NULL);
CVPixelBufferLockBaseAddress(pixel_buffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pixel_buffer);
NSParameterAssert(pixel_buffer != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, esize.width,
esize.height, 8, 4*esize.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGAffineTransform flipVerticalq = CGAffineTransformMake(1, 0, 0, -1, 0, self.frame.size.height);
flipVerticalq = CGAffineTransformRotate(flipVerticalq,(90.0*3.14f/180.0f));
CGContextConcatCTM(context, flipVerticalq);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pixel_buffer, 0);
if(![adaptor appendPixelBuffer:pixel_buffer withPresentationTime:currentTime]) {
NSLog(#"FAIL");
} else {
NSLog(#"Success:%d", currentFrame);
currentTime = CMTimeAdd(currentTime, frameLength);
}
free(buffer);
CVPixelBufferRelease(pixel_buffer);
}
// this code saves the video to library on device itself
- (void) moveVideoToSavedPhotos {
ALAssetsLibrary *library = [[ALAssetsLibrary alloc] init];
NSString *localVid = [NSHomeDirectory() stringByAppendingPathComponent:MOVIE_NAME];
NSURL* fileURL = [NSURL fileURLWithPath:localVid];
NSLog(#"movie saved %#",fileURL);
BOOL isVideoOK = UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(localVid);
if (NO == isVideoOK)
NSLog(#"Video at %# is not compatible",localVid);
else {
NSLog(#"video ok");
}
[library writeVideoAtPathToSavedPhotosAlbum:fileURL
completionBlock:^(NSURL *assetURL, NSError *error) {
if (error) {
NSLog(#"%#: Error saving context: %#", [self class], [error localizedDescription]);
}
}];
[library release];
}
// following code stops the video recording after particular number of frames
if (VIDEO_WRITER_IS_READY) {
[self captureScreenVideo];
currentFrame++;
if (currentFrame > 500) {
VIDEO_WRITER_IS_READY = false;
[writerInput markAsFinished];
if (![videoWriter finishWriting]) {
NSLog(#"writing not finished");
}
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
[writerInput release];
[videoWriter release];
NSLog(#"saving the file");
[self moveVideoToSavedPhotos];
}
}
Also, the video captured on the simulator is mirrored. I don't have any idea where I'm going wrong; please clarify this for me, and I hope you guys don't mind reading the whole code.

iOS CoreVideo Memory Leaks

Can somebody help me trace these CoreVideo memory leaks when running Instruments in Xcode?
Basically, the memory leak happens when I press the "Record Video" button on my custom motion JPEG player. I cannot tell exactly which part of my code is leaking, as the Leaks instrument is not pointing to any of my calls. BTW, I'm using an iPad device to test for the leaks.
Here are the messages from the Leaks instrument:
Responsible Library = CoreVideo
Responsible Frame:
CVPixelBufferBacking::initWithPixelBufferDescription(..)
CVObjectAlloc(...)
CVBuffer::init()
Here's my code that handles each motion JPEG frame streamed by the server:
-(void)processServerData:(NSData *)data{
/*
//render the video in the UIImage control
*/
UIImage *image =[UIImage imageWithData:data];
self.imageCtrl.image = image;
/*
//check if we are recording
*/
if (myRecorder.isRecording) {
//create initial sample: todo:check if this is still needed
if (counter==0) {
self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
CVPixelBufferPoolCreatePixelBuffer (NULL, myRecorder.adaptor.pixelBufferPool, &buffer);
if(buffer)
{
CVBufferRelease(buffer);
}
}
if (counter < myRecorder.maxFrames)
{
if([myRecorder.writerInput isReadyForMoreMediaData])
{
CMTime frameTime = CMTimeMake(1, myRecorder.timeScale);
CMTime lastTime=CMTimeMake(counter, myRecorder.timeScale);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
[myRecorder.adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
if(buffer)
{
CVBufferRelease(buffer);
}
counter++;
if (counter==myRecorder.maxFrames)
{
[myRecorder finishSession];
counter=0;
myRecorder.isRecording = NO;
}
}
else
{
NSLog(#"adaptor not ready counter=%d ",counter );
}
}
}
}
Here's the pixelBufferFromCGImage function:
+ (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image size:(CGSize) size{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
Appreciate any help! Thanks.
I refactored the processFrame method and I'm no longer getting the leaks.
-(void) processFrame:(UIImage *) image {
if (myRecorder.frameCounter < myRecorder.maxFrames)
{
if([myRecorder.writerInput isReadyForMoreMediaData])
{
CMTime frameTime = CMTimeMake(1, myRecorder.timeScale);
CMTime lastTime=CMTimeMake(myRecorder.frameCounter, myRecorder.timeScale);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
if(buffer)
{
[myRecorder.adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
myRecorder.frameCounter++;
CVBufferRelease(buffer);
if (myRecorder.frameCounter==myRecorder.maxFrames)
{
[myRecorder finishSession];
myRecorder.frameCounter=0;
myRecorder.isRecording = NO;
}
}
else
{
NSLog(#"Buffer is empty");
}
}
else
{
NSLog(#"adaptor not ready frameCounter=%d ",myRecorder.frameCounter );
}
}
}
I don't see anything too obvious. I did notice you use self.buffer and buffer here. If the property is retained, you might be leaking there: when CVPixelBufferPoolCreatePixelBuffer allocates memory into the ivar on the second line after self.buffer was retained on the first line, the first buffer might be leaked.
self.buffer = [Recorder pixelBufferFromCGImage:image.CGImage size:myRecorder.imageSize];
CVPixelBufferPoolCreatePixelBuffer (NULL, myRecorder.adaptor.pixelBufferPool, &buffer);
Hope that helps.
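A minimal sketch of the pattern being described (not from the original answer; it reuses the names from the question's code): release whatever the ivar already points at before letting the pool overwrite it, so the buffer created by pixelBufferFromCGImage: cannot be orphaned.
// Hypothetical fix sketch: drop the previously created buffer before the pool
// call replaces the ivar, otherwise that first +1 buffer is never released.
if (buffer) {
    CVBufferRelease(buffer);
    buffer = NULL;
}
CVReturn poolStatus = CVPixelBufferPoolCreatePixelBuffer(NULL,
                                                         myRecorder.adaptor.pixelBufferPool,
                                                         &buffer);
if (poolStatus != kCVReturnSuccess) {
    NSLog(@"Could not create a pixel buffer from the pool: %d", poolStatus);
}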
