I want to animate images smoothly while converting them to video. Despite searching SO, I am unable to understand how to achieve it. I tried changing the rotation angle (CGAffineTransformMakeRotation), translations, and scaling, but didn't find a way to get smooth animations. Here's how I am converting an array of photos to video:
- (void)createVideoWithArrayImages:(NSMutableArray*)images size:(CGSize)size time:(float)time output:(NSURL*)output {
//getting a random path
NSError *error;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:output fileType:AVFileTypeMPEG4 error: &error];
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys: AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput: videoWriterInput sourcePixelBufferAttributes:nil];
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput: videoWriterInput];
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//convert uiimage to CGImage.
NSInteger fps = 30;
int frameCount = 0;
for(UIImage *img in images) {
//for(VideoFrame * frm in imageArray)
NSLog(#"**************************************************");
//UIImage * img = frm._imageFrame;
buffer = [self videoPixelBufferFromCGImage:[img CGImage] andSize:size andAngle:(int)[images indexOfObject:img]];
double numberOfSecondsPerFrame = time / images.count;
double frameDuration = fps * numberOfSecondsPerFrame;
BOOL append_ok = NO;
int j = 0;
while (!append_ok && j < fps) {
if (adaptor.assetWriterInput.readyForMoreMediaData) {
//print out status:
NSLog(#"Processing video frame (%d,%d)",frameCount,(int)[images count]);
CMTime frameTime = CMTimeMake(frameCount * frameDuration,(int32_t) fps);
NSLog(#"Frame Time : %f", CMTimeGetSeconds(frameTime));
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if(!append_ok) {
NSError *error = videoWriter.error;
if(error!=nil) {
NSLog(#"Unresolved error %#,%#.", error, [error userInfo]);
}
}
}
else {
printf("adaptor not ready %d, %d\n", frameCount, j);
[NSThread sleepForTimeInterval:0.1];
}
j++;
}
if (!append_ok) {
printf("error appending image %d times %d\n, with error.", frameCount, j);
}
//release this frame's buffer here, otherwise every buffer created in the loop leaks
if (buffer != NULL) {
CVPixelBufferRelease(buffer);
buffer = NULL;
}
frameCount++;
NSLog(@"**************************************************");
}
[videoWriterInput markAsFinished];
[videoWriter finishWriting];
videoWriter = nil;
if(buffer != NULL)
CVPixelBufferRelease(buffer);
NSLog(#"************ write standard video successful ************");
}
Here the CVPixelBufferRef is returned as follows:
- (CVPixelBufferRef)videoPixelBufferFromCGImage: (CGImageRef) image andSize:(CGSize) size andAngle:(int)angle {
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys: [NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey, [NSNumber numberWithBool:YES],kCVPixelBufferCGBitmapContextCompatibilityKey, nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options, &pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8, 4*size.width, rgbColorSpace, kCGImageAlphaPremultipliedFirst);
NSParameterAssert(context);
//NOTE: the angle parameter is never used here, so every frame is drawn with the
//same identity transform; varying the rotation/translation per frame is what
//would actually animate the output
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I tried adding translations to the CVPixelBufferRef, but things didn't work out for me. Any guidance or help would be very useful.
You don't specify exactly what you want to achieve, but I've used AVVideoCompositionCoreAnimationTool to include animated CALayers in an AVMutableVideoComposition.
Here is the reference to that class.
Hope that helps!
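To make that concrete, here is a minimal, untested sketch of the approach. The render size, the overlay image (someImage), and the export wiring are placeholders of mine, not from the original answer:

AVMutableVideoComposition *composition = [AVMutableVideoComposition videoComposition];
composition.renderSize = CGSizeMake(640, 480);
composition.frameDuration = CMTimeMake(1, 30);
//the video layer is where AVFoundation renders the source frames; the parent
//layer hosts it plus any animated overlay layers
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, 640, 480);
videoLayer.frame = parentLayer.frame;
[parentLayer addSublayer:videoLayer];
CALayer *overlay = [CALayer layer];
overlay.frame = CGRectMake(0, 0, 200, 200);
overlay.contents = (__bridge id)someImage.CGImage; //hypothetical image
[parentLayer addSublayer:overlay];
//animations must be timed against the video timeline, not the wall clock
CABasicAnimation *spin = [CABasicAnimation animationWithKeyPath:@"transform.rotation.z"];
spin.fromValue = @0;
spin.toValue = @(2 * M_PI);
spin.duration = 2.0;
spin.repeatCount = HUGE_VALF;
spin.beginTime = AVCoreAnimationBeginTimeAtZero;
spin.removedOnCompletion = NO;
[overlay addAnimation:spin forKey:@"spin"];
composition.animationTool = [AVVideoCompositionCoreAnimationTool
videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
inLayer:parentLayer];
//a valid composition also needs an AVMutableVideoCompositionInstruction covering
//the asset's time range; assign the composition to an AVAssetExportSession's
//videoComposition property to render it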
I am making images into a video, but it always crashes because of memory warnings: too much allocation in CVPixelBufferCreate. I don't know how to handle it properly. I've seen a lot of similar topics and none of them solved my problem.
Here's my code:
- (void) writeImagesArray:(NSArray*)array asMovie:(NSString*)path
{
NSError *error = nil;
UIImage *first = [array objectAtIndex:0];
CGSize frameSize = first.size;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:path] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithDouble:frameSize.width],AVVideoWidthKey,
[NSNumber numberWithDouble:frameSize.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
self.adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
[videoWriter addInput:writerInput];
//Start Session
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
int frameCount = 0;
CVPixelBufferRef buffer = NULL;
for(UIImage *img in array)
{
buffer = [self newPixelBufferFromCGImage:[img CGImage] andFrameSize:frameSize];
if (self.adaptor.assetWriterInput.readyForMoreMediaData)
{
CMTime frameTime = CMTimeMake(frameCount,FPS);
[self.adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
}
if(buffer)
CVPixelBufferRelease(buffer);
frameCount++;
}
[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
if (videoWriter.status == AVAssetWriterStatusFailed) {
NSLog(#"Movie save failed.");
}else{
NSLog(#"Movie saved.");
}
}];
NSLog(#"Finished.");
}
- (CVPixelBufferRef)newPixelBufferFromCGImage: (CGImageRef) image andFrameSize:(CGSize)frameSize
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
frameSize.width,
frameSize.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,
&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGBitmapInfo bitmapInfo = (CGBitmapInfo) kCGImageAlphaNoneSkipFirst;
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata,
frameSize.width,
frameSize.height,
8,
4*frameSize.width,
rgbColorSpace,
bitmapInfo);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
UPDATES:
I split my video into small segments.
After adding a [NSThread sleepForTimeInterval:0.00005]; in the loop, the memory is magically released.
But this line leaves my UI stuck for several seconds. Any better solution?
for(UIImage *img in array)
{
buffer = [self newPixelBufferFromCGImage:[img CGImage] andFrameSize:frameSize];
//CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, adaptor.pixelBufferPool, &buffer);
if (adaptor.assetWriterInput.readyForMoreMediaData)
{
CMTime frameTime = CMTimeMake(frameCount,FPS);
[adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
}
if(buffer)
CVPixelBufferRelease(buffer);
frameCount++;
[NSThread sleepForTimeInterval:0.00005];
}
Here's the memory graph (Instruments screenshot not reproduced):
From a quick review of your code, I can't see anything wrong in the management of the CVBuffer itself.
What I think could be the source of your issue is the array of UIImages.
UIImage has this behavior: until you request the CGImage property or draw it, the underlying image is not decoded into memory, so the memory impact of unused images is low.
Your enumeration calls the CGImage property on each image and you never get rid of those decoded bitmaps; this explains the continuous increase in memory allocation.
If you don't use the images later, you can do it like this:
[images enumerateObjectsUsingBlock:^(UIImage * _Nonnull img, NSUInteger idx, BOOL * _Nonnull stop) {
CVPixelBufferRef pixelBuffer = [self pixelBufferFromCGImage:img.CGImage frameSize:[VDVideoEncodeConfig globalConfig].size];
CMTime frameTime = CMTimeMake(frameCount, (int32_t)[VDVideoEncodeConfig globalConfig].frameRate);
frameCount++;
[_assetRW appendNewSampleBuffer:pixelBuffer pst:frameTime];
CVPixelBufferRelease(pixelBuffer);
// This can release the memory
// The Image.CGImageRef result in the memory leak you see in the Instruments
images[idx] = [NSNull null];
}];
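If you do need to keep the images, another common mitigation is to wrap each iteration in its own autorelease pool, so the decoded bitmap behind each CGImage is freed per frame rather than at the end of the whole loop. A sketch reusing the names from the question (newPixelBufferFromCGImage:andFrameSize:, FPS), untested:

for (UIImage *img in array)
{
@autoreleasepool {
CVPixelBufferRef buffer = [self newPixelBufferFromCGImage:[img CGImage] andFrameSize:frameSize];
if (self.adaptor.assetWriterInput.readyForMoreMediaData)
{
CMTime frameTime = CMTimeMake(frameCount, FPS);
[self.adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
}
if (buffer)
CVPixelBufferRelease(buffer);
frameCount++;
}
}

As for the UI freeze mentioned in the update: instead of sleeping, AVFoundation's pull model lets the writer request frames on a background queue. Again only a sketch under the same assumptions:

dispatch_queue_t writeQueue = dispatch_queue_create("video.write", DISPATCH_QUEUE_SERIAL);
__block int frameCount = 0;
[writerInput requestMediaDataWhenReadyOnQueue:writeQueue usingBlock:^{
while (writerInput.readyForMoreMediaData && frameCount < array.count)
{
@autoreleasepool {
UIImage *img = array[frameCount];
CVPixelBufferRef buffer = [self newPixelBufferFromCGImage:[img CGImage] andFrameSize:frameSize];
[self.adaptor appendPixelBuffer:buffer withPresentationTime:CMTimeMake(frameCount, FPS)];
CVPixelBufferRelease(buffer);
frameCount++;
}
}
if (frameCount >= array.count)
{
[writerInput markAsFinished];
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(@"Movie saved.");
}];
}
}];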
I have used the following code to create a video from images.
The code works fine when I select an image from the camera roll that was downloaded from the web, or a screenshot, but images taken with the camera appear zoomed in in the movie.
I don't know what is wrong with camera images.
Can anyone please help me resolve this issue?
-(IBAction)createV:(id)sender
{
NSString *documentsDirectory = [NSHomeDirectory()
stringByAppendingPathComponent:#"Documents"];
NSString *videoOutputPath = [documentsDirectory stringByAppendingPathComponent:#"test_output.mp4"];
CGSize imageSize = [DatabaseAccess getusersetsize];
double nospf = [[[NSUserDefaults standardUserDefaults] valueForKey:@"duration"] intValue];
NSUInteger fps = 10;
NSMutableArray *imageArray;// = [DatabaseAccess getimagelist:@"select imgname from tbl_userimage"];
NSArray* imagePaths = [DatabaseAccess getimagelist:@"select imgname,strftime('%d-%m-%Y', tdate) as tdate from tbl_userimage"];
imageArray = [[NSMutableArray alloc] initWithCapacity:imagePaths.count];
int i=0;
for (NSString* path in [imagePaths valueForKey:@"image"])
{
if ([[NSUserDefaults standardUserDefaults] boolForKey:@"disdate"])
{
CGSize imgsize = [DatabaseAccess getusersetsize];
//[imageArray addObject:[[DatabaseAccess drawText:[[imagePaths valueForKey:@"date"] objectAtIndex:i] inImage:[UIImage imageWithContentsOfFile:[DatabaseAccess documentsPathForFileName:path]] atPoint:CGPointMake(imgsize.width-250,imgsize.height-60) ] fixOrientation]];
[imageArray addObject:[DatabaseAccess drawText:[[imagePaths valueForKey:@"date"] objectAtIndex:i] inImage:[UIImage imageWithContentsOfFile:[DatabaseAccess documentsPathForFileName:path]] atPoint:CGPointMake(imgsize.width-250,imgsize.height-60) ]];
}
else
{
[imageArray addObject:[UIImage imageWithContentsOfFile:[DatabaseAccess documentsPathForFileName:path]]];
NSLog(#"%#",path);
// [imageArray addObject:[UIImage imageNamed:path]];
}
i++;
}
[self exportImages:imageArray asVideoToPath:videoOutputPath withFrameSize:imageSize framesPerSecond:fps numberOfSecondsPerFrame:nospf];
}
- (void)exportImages:(NSMutableArray *)imageArray asVideoToPath:(NSString *)videoOutputPath withFrameSize:(CGSize)imageSize framesPerSecond:(NSUInteger)fps numberOfSecondsPerFrame:(double)numberOfSecondsPerFrame {
NSError *error = nil;
NSString *documentsDirectory = [NSHomeDirectory()
stringByAppendingPathComponent:#"Documents"];
NSFileManager *fileMgr = [NSFileManager defaultManager];
if ([fileMgr removeItemAtPath:videoOutputPath error:&error] != YES)
NSLog(#"Unable to delete file: %#", [error localizedDescription]);
////////////// end setup ///////////////////////////////////
NSLog(#"Start building video from defined frames.");
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeMPEG4 error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:imageSize.width], AVVideoWidthKey,
[NSNumber numberWithInt:imageSize.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//convert uiimage to CGImage.
int frameCount = 0;
//double numberOfSecondsPerFrame = 6;
double frameDuration = fps * numberOfSecondsPerFrame;
//for(VideoFrame * frm in imageArray)
NSLog(#"**************************************************");
for(UIImage * img in imageArray)
{
//UIImage * img = frm._imageFrame;
buffer = [self pixelBufferFromCGImage:[img CGImage]];
BOOL append_ok = NO;
int j = 0;
while (!append_ok && j < 30) {
if (adaptor.assetWriterInput.readyForMoreMediaData) {
//print out status:
NSLog(#"Processing video frame (%d,%lu)",frameCount,(unsigned long)[imageArray count]);
CMTime frameTime = CMTimeMake(frameCount,(int32_t) numberOfSecondsPerFrame);
//CMTime frameTime = CMTimeMake(frameCount*frameDuration,(int32_t) fps);
// NSLog(#"%#",frameTime);
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if(!append_ok){
NSError *error = videoWriter.error;
if(error!=nil) {
NSLog(#"Unresolved error %#,%#.", error, [error userInfo]);
}
}
}
else {
printf("adaptor not ready %d, %d\n", frameCount, j);
[NSThread sleepForTimeInterval:0.1];
}
j++;
}
if (!append_ok) {
printf("error appending image %d times %d\n, with error.", frameCount, j);
}
frameCount++;
}
NSLog(#"**************************************************");
//Finish the session:
[videoWriterInput markAsFinished];
[videoWriter finishWriting];
NSLog(#"Write Ended");
[self playMovie:videoOutputPath];
}
- (CVPixelBufferRef) pixelBufferFromCGImage: (CGImageRef) image {
// CGSize size = CGSizeMake(400, 200);
CGSize size = [DatabaseAccess getusersetsize];
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
size.width,
size.height,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef) options,
&pxbuffer);
if (status != kCVReturnSuccess){
NSLog(#"Failed to create pixel buffer");
}
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaPremultipliedFirst);
//kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
//CGContextConcatCTM(context, frameTransform);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
I have solved the zoomed-in image issue using this code:
-(UIImage*)scaleImage:(UIImage*)image toSize:(CGSize)newSize
{
UIGraphicsBeginImageContext(newSize);
[image drawInRect:CGRectMake(0, 0, newSize.width, newSize.height)];
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
return newImage;
}
Resize the UIImage before converting it to a CGImage, and make sure the resized width is a multiple of 16 (H.264 encodes in 16x16 macroblocks, so other widths are padded by the encoder, and the zeroed padding can show up as a green strip).
CGSize your_size = CGSizeMake(1600, 800);
UIImage *tempImg = [self scaleImage:img toSize:your_size];
buffer = [self pixelBufferFromCGImage:[tempImg CGImage]];
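As an illustration, a dimension can be snapped up to the next multiple of 16 like this (my own helper, not from the original answer):

static CGFloat roundUpToMultipleOf16(CGFloat value) {
//round up so the frame never shrinks below the source image
return ceil(value / 16.0) * 16.0;
}

CGSize your_size = CGSizeMake(roundUpToMultipleOf16(img.size.width),
roundUpToMultipleOf16(img.size.height));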
In my application, I am converting an array of images to video. For that, I am using the code below.
NSFileManager *fileMgr = [NSFileManager defaultManager];
NSString *documentsDirectory = [NSHomeDirectory()
stringByAppendingPathComponent:#"Documents"];
NSString *videoOutputPath = [documentsDirectory stringByAppendingPathComponent:#"test_output.mp4"];
//NSLog(#"-->videoOutputPath= %#", videoOutputPath);
// get rid of existing mp4 if exists...
if ([fileMgr removeItemAtPath:videoOutputPath error:&error] != YES)
NSLog(#"Unable to delete file: %#", [error localizedDescription]);
NSUInteger fps = 30;
////////////// end setup ///////////////////////////////////
NSLog(#"Start building video from defined frames.");
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:videoOutputPath] fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:288], AVVideoWidthKey,
[NSNumber numberWithInt:352], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(videoWriterInput);
NSParameterAssert([videoWriter canAddInput:videoWriterInput]);
videoWriterInput.expectsMediaDataInRealTime = YES;
[videoWriter addInput:videoWriterInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//convert uiimage to CGImage.
int frameCount = 0;
//for(VideoFrame * frm in imageArray)
NSLog(#"**************************************************");
for(UIImage * img in finalArrayVal)
{
//UIImage * img = frm._imageFrame;
buffer = [self pixelBufferFromCGImage:[img CGImage]];
BOOL append_ok = NO;
int j = 0;
while (!append_ok && j < 30) {
if (adaptor.assetWriterInput.readyForMoreMediaData) {
//print out status:
NSLog(#"Processing video frame (%d,%lu)",frameCount,(unsigned long)[finalArrayVal count]);
CMTime frameTime = CMTimeMake(frameCount,(int32_t) fps);
append_ok = [adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if(!append_ok){
NSError *error = videoWriter.error;
if(error!=nil) {
NSLog(#"Unresolved error %#,%#.", error, [error userInfo]);
}
}
}
else {
printf("adaptor not ready %d, %d\n", frameCount, j);
[NSThread sleepForTimeInterval:0.1];
}
j++;
}
if (!append_ok) {
printf("error appending image %d times %d\n, with error.", frameCount, j);
}
frameCount++;
}
NSLog(#"**************************************************");
//Finish the session:
[videoWriterInput markAsFinished];
[videoWriter finishWriting];
It converts the array of images to a video file, but part of the image gets cropped, exactly as shown below, when I convert it to video.
Input image: (screenshot not reproduced)
Output video screenshot: (screenshot not reproduced)
You can see that the upper portion of the image is largely cut off. Note that I have set AVCaptureSessionPreset352x288 as the defaultAVCaptureSessionPreset for the camera. I need the full image converted into video frames. Can anyone provide a solution to this issue?
After some study, I replaced the line below so it passes an additional parameter (the size of the image):
buffer = [myController pixelBufferFromImage:[img CGImage] andSize:imageSize];
instead of this:
buffer = [self pixelBufferFromCGImage:[img CGImage]];
And added this method to myController:
+ (CVPixelBufferRef) pixelBufferFromImage: (CGImageRef) image andSize:(CGSize) size
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,nil];
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width,
size.height, kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef) options,&pxbuffer);
NSParameterAssert(status == kCVReturnSuccess && pxbuffer != NULL);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
NSParameterAssert(pxdata != NULL);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
NSParameterAssert(context);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
//NOTE: drawing at the image's own pixel size crops anything larger than the
//buffer; drawing into CGRectMake(0, 0, size.width, size.height) scales to fit
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
Every file I write with AVAssetWriter has a black background if the images I include do not fill the entire render area. Is there any way to write with transparency? Here's the method I use to get the pixel buffer:
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image {
CGSize size = self.renderSize;
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
//NOTE: this pool-created buffer is immediately overwritten by the
//CVPixelBufferCreate call below (and leaked); use one or the other, not both
CVPixelBufferPoolCreatePixelBuffer(NULL, self.adaptor.pixelBufferPool, &pxbuffer);
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault,
size.width,
size.height,
kCVPixelFormatType_32ARGB,
(__bridge CFDictionaryRef) options,
&pxbuffer);
if (status != kCVReturnSuccess){
NSLog(#"Failed to create pixel buffer");
}
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width,
size.height, 8, 4*size.width, rgbColorSpace,
(CGBitmapInfo)kCGImageAlphaPremultipliedFirst);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
And the AVAssetWriter code:
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:self.renderSize.width], AVVideoWidthKey,
[NSNumber numberWithInt:self.renderSize.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* videoWriterInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kCVPixelFormatType_32ARGB], kCVPixelBufferPixelFormatTypeKey, nil];
self.adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:videoWriterInput
sourcePixelBufferAttributes:bufferAttributes];
buffer = [self pixelBufferFromCGImage:[stickerImage CGImage]];
BOOL append_ok = YES;
int j = 0;
while (append_ok && j < totalFrames) {
if (self.adaptor.assetWriterInput.readyForMoreMediaData) {
CMTime frameTime = CMTimeMake(frameCount,(int32_t) fps);
append_ok = [self.adaptor appendPixelBuffer:buffer withPresentationTime:frameTime];
if(!append_ok){
NSError *error = self.assetWriter.error;
if(error!=nil) {
NSLog(#"Unresolved error %#,%#.", error, [error userInfo]);
}
}
}
else {
printf("adaptor not ready %d, %d\n", frameCount, j);
[NSThread sleepForTimeInterval:0.1];
}
j++;
frameCount++;
}
if (!append_ok) {
printf("error appending image %d times %d\n, with error.", frameCount, j);
}
In short, this is not possible. AVAssetWriter outputs using the H.264 video codec, and H.264 does not support an alpha channel, so you are unfortunately out of luck.
A more involved solution would be to render the alpha channel (transparency) into a separate grayscale H.264-encoded video and composite the RGB and alpha information back together when viewing.
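To sketch that idea: after drawing an image into an ARGB pixel buffer exactly as in pixelBufferFromCGImage: above, a pass like the following would copy the alpha channel into R, G and B, producing the grayscale matte frame you would encode as the second video. This is my own illustration, assuming the default big-endian ARGB byte layout that kCGImageAlphaPremultipliedFirst produces:

static void convertToAlphaMatte(CVPixelBufferRef pxbuffer)
{
CVPixelBufferLockBaseAddress(pxbuffer, 0);
uint8_t *base = CVPixelBufferGetBaseAddress(pxbuffer);
size_t width = CVPixelBufferGetWidth(pxbuffer);
size_t height = CVPixelBufferGetHeight(pxbuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(pxbuffer);
for (size_t y = 0; y < height; y++)
{
uint8_t *row = base + y * bytesPerRow;
for (size_t x = 0; x < width; x++)
{
uint8_t alpha = row[4 * x]; //byte 0 is alpha in ARGB order
row[4 * x + 1] = alpha; //R
row[4 * x + 2] = alpha; //G
row[4 * x + 3] = alpha; //B
}
}
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
}

At playback you would then have to combine the color video and the matte video yourself, for example with a custom compositor or a shader; nothing does that for you automatically.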
I'm using the code below to take a series of UIImages and convert them into a .mov. For some reason I keep getting a green border at the bottom or on the right side, depending on whether it's a landscape or portrait photo. The images are 215x320.
How can I remove the green borders? Is there a better way of creating a .mov from UIImages?
- (void)createMov
{
UIImage *first = [self.videoFrames objectAtIndex:0];
CGSize frameSize = first.size;
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:
[NSURL fileURLWithPath:fileName] fileType:AVFileTypeQuickTimeMovie
error:&error];
if (error) {
NSLog(#"error creating AssetWriter: %#",[error description]);
}
int numPixels = first.size.width * first.size.height;
int bitsPerSecond = numPixels * 11.04;
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:frameSize.width], AVVideoWidthKey,
[NSNumber numberWithInt:frameSize.height], AVVideoHeightKey,
[NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInteger:bitsPerSecond], AVVideoAverageBitRateKey,
[NSNumber numberWithInteger:30], AVVideoMaxKeyFrameIntervalKey,
nil], AVVideoCompressionPropertiesKey,
nil];
AVAssetWriterInput* writerInput = [[AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings] retain];
NSMutableDictionary *attributes = [[NSMutableDictionary alloc] init];
[attributes setObject:[NSNumber numberWithUnsignedInt:kCVPixelFormatType_32ARGB] forKey: (NSString*)kCVPixelBufferPixelFormatTypeKey];
[attributes setObject:[NSNumber numberWithUnsignedInt:frameSize.width] forKey:(NSString*)kCVPixelBufferWidthKey];
[attributes setObject:[NSNumber numberWithUnsignedInt:frameSize.height] forKey:(NSString*)kCVPixelBufferHeightKey];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:attributes];
[videoWriter addInput:writerInput];
// fixes all errors
writerInput.expectsMediaDataInRealTime = YES;
//Start a session:
BOOL start = [videoWriter startWriting];
NSLog(#"Session started? %d", start);
[videoWriter startSessionAtSourceTime:kCMTimeZero];
// Writing.
CVPixelBufferRef buffer = NULL;
buffer = [self pixelBufferFromCGImage:[first CGImage]];
BOOL result = [adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
if (result == NO) //fails on 3GS, but works on iPhone 4
NSLog(#"failed to append buffer");
if(buffer)
CVBufferRelease(buffer);
[NSThread sleepForTimeInterval:0.05];
int fps = [Utils frameRate];
int i = 0;
for (UIImage *imgFrame in self.videoFrames) {
i = [self addFrame:adaptor videoWriter:videoWriter buffer:buffer imgFrame:imgFrame i:i fps:fps];
}
//Finish the session:
[writerInput markAsFinished];
[videoWriter finishWriting];
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
[videoWriter release];
[writerInput release];
}
- (int)addFrame:(AVAssetWriterInputPixelBufferAdaptor *)adaptor videoWriter:(AVAssetWriter *)videoWriter buffer:(CVPixelBufferRef)buffer imgFrame:(UIImage *)imgFrame i:(int)i fps: (float)fps
{
if (adaptor.assetWriterInput.readyForMoreMediaData) {
i++;
NSLog(#"inside for loop %d",i);
CMTime frameTime = CMTimeMake(1, fps);
CMTime lastTime=CMTimeMake(i, fps);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
buffer = [self pixelBufferFromCGImage:[imgFrame CGImage]];
BOOL result = [adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
if (result == NO) { //fails on 3GS, but works on iphone 4
NSLog(#"failed to append buffer");
NSLog(#"The error is %#", [videoWriter error]);
}
if (buffer) {
CVBufferRelease(buffer);
}
[NSThread sleepForTimeInterval:0.05];
} else {
NSLog(#"error");
i--;
}
[NSThread sleepForTimeInterval:0.02];
return i;
}
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image
{
NSDictionary *options = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithBool:YES], kCVPixelBufferCGImageCompatibilityKey,
[NSNumber numberWithBool:YES], kCVPixelBufferCGBitmapContextCompatibilityKey,
nil];
CVPixelBufferRef pxbuffer = NULL;
CVPixelBufferCreate(kCFAllocatorDefault, CGImageGetWidth(image),
CGImageGetHeight(image), kCVPixelFormatType_32ARGB, (CFDictionaryRef) options,
&pxbuffer);
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
size_t rowBytes = CVPixelBufferGetBytesPerRow(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, CGImageGetWidth(image),
CGImageGetHeight(image), 8, rowBytes, rgbColorSpace,
kCGImageAlphaNoneSkipFirst);
CGContextConcatCTM(context, CGAffineTransformMakeRotation(0));
CGContextDrawImage(context, CGRectMake(0, 0, CGImageGetWidth(image),
CGImageGetHeight(image)), image);
CGColorSpaceRelease(rgbColorSpace);
CGContextRelease(context);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer;
}
It seems I ran into a similar issue, and an idea came to me after reading the two posts below.
http://forum.videohelp.com/threads/314973-green-line-at-bottom-of-video-window
http://en.wikipedia.org/wiki/H.264/MPEG-4_AVC#Features
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithUnsignedLong:sourceWidth], AVVideoWidthKey,
[NSNumber numberWithUnsignedLong:sourceHeight], AVVideoHeightKey,
nil];
AVAssetWriterInput *writerInput =
[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
Before you start writing data with the AVAssetWriter, reset sourceWidth and sourceHeight with this method:
size_t fixedLineValue(size_t sourceValue){
size_t divide = sourceValue%4;
size_t finalValue = sourceValue;
if (divide) {
finalValue=(sourceValue/4+1)*4;
}
return finalValue;
}
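For example (my own usage sketch, assuming the frame size is taken from the first image, as in the question's code):

CGSize firstSize = [(UIImage *)[self.videoFrames objectAtIndex:0] size];
size_t sourceWidth = fixedLineValue((size_t)firstSize.width);
size_t sourceHeight = fixedLineValue((size_t)firstSize.height);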
I can't figure out exactly why this works, but after that, the ugly green borders were removed. Please comment if there's a better answer and let us know. Thanks.