iOS - video frame processing optimization

In my project, I need to copy a chunk of each frame of a video onto a single resulting image.
Capturing the video frames is not a big deal. It would be something like:
// duration is the movie length in seconds.
// frameDuration is 1/fps (for 24 fps, frameDuration = 1/24).
// player is an MPMoviePlayerController
for (NSTimeInterval i = 0; i < duration; i += frameDuration) {
    UIImage *image = [player thumbnailImageAtTime:i timeOption:MPMovieTimeOptionExact];
    CGRect destinationRect = [self getDestinationRect:i];
    [self drawImage:image inRect:destinationRect fromRect:originRect];
    // UI feedback
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i / duration] waitUntilDone:NO];
}
The problem comes when I try to implement the drawImage:inRect:fromRect: method.
I tried code which:
1/ creates a new CGImage with CGImageCreateWithImageInRect from the video frame, to extract the chunk of the image;
2/ calls CGContextDrawImage on the image context to draw the chunk.
But when the video reaches 12-14 s, my iPhone 4S issues its third memory warning and crashes. I've profiled the app with the Leaks tool, and it found no leak at all...
I'm not very strong in Quartz. Is there a better-optimized way to achieve this?
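For reference, the method body isn't shown above, but a minimal sketch of that approach could look like the following (the coordinate flip and the already-open UIGraphics image context are assumptions). Two things worth checking for memory are releasing the cropped CGImage and wrapping each frame in its own autorelease pool so intermediate thumbnails don't pile up:
// Sketch only: assumes an image context is already open (UIGraphicsBeginImageContextWithOptions)
// and that originRect is expressed in the source image's pixel coordinates.
- (void)drawImage:(UIImage *)image inRect:(CGRect)destinationRect fromRect:(CGRect)originRect
{
    @autoreleasepool {
        CGImageRef chunk = CGImageCreateWithImageInRect(image.CGImage, originRect);
        if (chunk == NULL) return;
        CGContextRef context = UIGraphicsGetCurrentContext();
        CGContextSaveGState(context);
        // Flip the coordinate system so the CGImage isn't drawn upside down.
        CGContextTranslateCTM(context, 0, CGRectGetMaxY(destinationRect));
        CGContextScaleCTM(context, 1.0, -1.0);
        CGContextDrawImage(context, CGRectMake(destinationRect.origin.x, 0,
                                               destinationRect.size.width,
                                               destinationRect.size.height), chunk);
        CGContextRestoreGState(context);
        CGImageRelease(chunk); // CGImageCreateWithImageInRect returns a +1 reference
    }
}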

Finally I kept the Quartz part of my code and changed the way I retrieved the images.
Now I use AVFoundation, which is a far faster solution.
// Creating the tools: 1/ the video asset, 2/ the image generator, 3/ the composition, which helps to retrieve video properties.
AVURLAsset *asset = [[[AVURLAsset alloc] initWithURL:moviePathURL
                                             options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], AVURLAssetPreferPreciseDurationAndTimingKey, nil]] autorelease];
AVAssetImageGenerator *generator = [[[AVAssetImageGenerator alloc] initWithAsset:asset] autorelease];
generator.appliesPreferredTrackTransform = YES; // if I omit this, the frames are rotated 90° (didn't try in landscape)
AVVideoComposition *composition = [AVVideoComposition videoCompositionWithPropertiesOfAsset:asset];

// Retrieving the video properties
NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
frameDuration = CMTimeGetSeconds(composition.frameDuration);
CGSize renderSize = composition.renderSize;
CGFloat totalFrames = round(duration / frameDuration);

// Selecting each frame we want to extract: all of them.
NSMutableArray *times = [NSMutableArray arrayWithCapacity:round(duration / frameDuration)];
for (int i = 0; i < totalFrames; i++) {
    NSValue *time = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(i * frameDuration, composition.frameDuration.timescale)];
    [times addObject:time];
}

__block int i = 0;
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
    if (result == AVAssetImageGeneratorSucceeded) {
        int x = round(CMTimeGetSeconds(requestedTime) / frameDuration);
        CGRect destinationStrip = CGRectMake(x, 0, 1, renderSize.height);
        [self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context];
    }
    else
        NSLog(@"Ouch: %@", error.description);
    i++;
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i / totalFrames] waitUntilDone:NO];
    if (i == totalFrames) {
        [self performSelectorOnMainThread:@selector(performVideoDidFinish) withObject:nil waitUntilDone:NO];
    }
};

// Launching the process...
generator.requestedTimeToleranceBefore = kCMTimeZero;
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.maximumSize = renderSize;
[generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler];
Even with very long videos it takes time, but it never crashes!

In addition to Martin's answer I'd suggest shrinking the sizes of the images obtained by that call; that is, setting the generator.maximumSize property, e.g. generator.maximumSize = CGSizeMake(width, height). Make the images as small as possible so they don't take up too much memory.
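For example, a sketch (the divisor is arbitrary and depends on how much detail you need from each frame):
// Cap the decoded frames at half the render size; AVAssetImageGenerator preserves the
// aspect ratio, so this only bounds the decoded size.
generator.maximumSize = CGSizeMake(renderSize.width / 2.0, renderSize.height / 2.0);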

Related

ios UICollectionView slow scrolling and memory warning

I am currently working on a project that involves a UICollectionView populated by AVAssets imported from a UIImagePickerController. After 10 or so items are in the collection, scrolling becomes laggy and slow, and occasionally I receive memory warnings. I believe the problem is in the thumbnail generation, which happens in real time. Here is the code I use:
- (void)setAsset:(AVAsset *)asset
{
    _asset = asset;
    AVAssetImageGenerator *generate = [[AVAssetImageGenerator alloc] initWithAsset:_asset];
    NSError *err = NULL;
    CMTime time = CMTimeMake(1, 60);
    generate.appliesPreferredTrackTransform = YES;
    CGImageRef imgRef = [generate copyCGImageAtTime:time actualTime:NULL error:&err];
    self.VideoImageView.image = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef); // copyCGImageAtTime: returns a +1 reference, so release it to avoid leaking
}
Is there another, less "expensive", way to achieve this without the delay?
Any help on the matter would be greatly appreciated.
You can do the real-time thumbnail generation on a background thread, and then jump back to the main thread when the operation is done to set the actual thumbnail.
Right now you're doing everything on the main thread, which blocks the UI and makes the scrolling jerky.
You can set a placeholder image in your cell and generate the thumbnail on a background queue.
Then set it on your image view.
AsynImageView might be of some use to you.
Open another thread to deal with the image work. When the image is done, switch back to the main thread to update the cell. You can do this with GCD, as sketched below.
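A minimal sketch of that GCD approach, assuming the cell exposes an asset property backing _asset and using a hypothetical placeholder image name; the check in the completion block guards against the cell having been reused by the time the thumbnail arrives:
- (void)setAsset:(AVAsset *)asset
{
    _asset = asset;
    self.VideoImageView.image = [UIImage imageNamed:@"placeholder"]; // hypothetical placeholder asset

    __weak typeof(self) weakSelf = self;
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
        generator.appliesPreferredTrackTransform = YES;
        generator.maximumSize = CGSizeMake(256, 256); // decode small images; thumbnails don't need full resolution
        CGImageRef imgRef = [generator copyCGImageAtTime:CMTimeMake(1, 60) actualTime:NULL error:NULL];
        if (imgRef == NULL) return;
        UIImage *thumb = [UIImage imageWithCGImage:imgRef];
        CGImageRelease(imgRef); // copyCGImageAtTime: returns a +1 reference

        dispatch_async(dispatch_get_main_queue(), ^{
            // Only update if this cell still shows the same asset (it may have been reused).
            if (weakSelf.asset == asset) {
                weakSelf.VideoImageView.image = thumb;
            }
        });
    });
}
In practice you would also cache the generated thumbnails (for example in an NSCache keyed by the asset) so scrolling back does not regenerate them.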
Here is code that asynchronously generates a thumbnail image for a video.
-(void)generateImage
{
    NSURL *url = [NSURL fileURLWithPath:_videoPath];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = TRUE;
    CMTime thumbTime = CMTimeMakeWithSeconds(30, 30);
    AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
        if (result != AVAssetImageGeneratorSucceeded) {
            NSLog(@"couldn't generate thumbnail, error: %@", error);
        }
        // TODO Do something with the image
    };
    CGSize maxSize = CGSizeMake(128, 128);
    generator.maximumSize = maxSize;
    [generator generateCGImagesAsynchronouslyForTimes:[NSArray arrayWithObject:[NSValue valueWithCMTime:thumbTime]] completionHandler:handler];
}
Hope this helps you.

Need assistance regarding CMTimeMakeWithSeconds

I am trying to fetch all the frames of a video and convert and store them as individual images.
I am using the code from the AV Foundation Programming Guide.
The code for getting multiple images is:
CMTime firstThird = CMTimeMakeWithSeconds(durationSeconds/3.0, 600);
CMTime secondThird = CMTimeMakeWithSeconds(durationSeconds*2.0/3.0, 600);
CMTime end = CMTimeMakeWithSeconds(durationSeconds, 600);
This is hard-coded, but I want to convert the whole video. I know I can use a for loop, but what do I do with durationSeconds? That is, how can I go from the beginning to the end to get all the frames?
Here is my attempt:
for(float f=0.0; f<=durationSeconds; f++) {
[times addObject:[NSValue valueWithCMTime:CMTimeMakeWithSeconds(durationSeconds, 600)]];
}
Any time you're about to write hundreds of lines of nearly identical code is probably a time when you should be using a loop of some sort:
for (int currentFrame = 0; currentFrame < durationSeconds; ++currentFrame) {
    CMTime currentTime = CMTimeMakeWithSeconds(currentFrame, 600);
    // the rest of the code you need to create the image or whatever
}
That snippet will grab one frame per second. If you wanted to grab 30 frames per second, it'd look more like this:
const CGFloat framesPerSecond = 30.0;
for (int currentFrame = 0; currentFrame < (durationSeconds * framesPerSecond); ++currentFrame) {
    CMTime currentTime = CMTimeMakeWithSeconds(currentFrame / framesPerSecond, 600);
    // again, the code you need to create the image from this time
}
Just set the value of framesPerSecond to however many frames per second you want to capture.
As a disclaimer, I'm not completely familiar with this stuff, so a <= might be appropriate in the conditional statements here.
ADDENDUM: The code I've posted only computes the timestamps at which to grab an image. The rest of the code should look something like this:
AVAsset *myAsset = // your asset here
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:myAsset];
NSError *error = nil;
CMTime actualTime;
CGImageRef currentImage = [imageGenerator copyCGImageAtTime:currentTime
                                                 actualTime:&actualTime
                                                      error:&error];
if (currentImage != NULL) {
    [someMutableArray addObject:[[UIImage alloc] initWithCGImage:currentImage]];
    CGImageRelease(currentImage); // copyCGImageAtTime: returns a +1 reference
}
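Putting the two pieces together, a minimal sketch of the whole extraction loop might look like this (framesPerSecond and the frames array are assumptions; requesting images one by one with copyCGImageAtTime: is the simplest approach, while generateCGImagesAsynchronouslyForTimes:, shown elsewhere on this page, scales better for long videos):
const CGFloat framesPerSecond = 30.0; // how many frames to capture per second of video
AVAssetImageGenerator *imageGenerator = [[AVAssetImageGenerator alloc] initWithAsset:myAsset];
imageGenerator.appliesPreferredTrackTransform = YES;
NSMutableArray *frames = [NSMutableArray array];

Float64 durationSeconds = CMTimeGetSeconds(myAsset.duration);
for (int currentFrame = 0; currentFrame < durationSeconds * framesPerSecond; ++currentFrame) {
    @autoreleasepool {
        CMTime currentTime = CMTimeMakeWithSeconds(currentFrame / framesPerSecond, 600);
        NSError *error = nil;
        CMTime actualTime;
        CGImageRef imageRef = [imageGenerator copyCGImageAtTime:currentTime
                                                     actualTime:&actualTime
                                                          error:&error];
        if (imageRef != NULL) {
            [frames addObject:[UIImage imageWithCGImage:imageRef]];
            CGImageRelease(imageRef);
        }
    }
}
Note that keeping every full-resolution frame in memory will not scale to long videos; set imageGenerator.maximumSize or process each frame as soon as it is produced instead of collecting them all.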

ios - generateCGImagesAsynchronouslyForTimes taking too long

The problem I have is that loading 20 images from a video takes too long. The more thumbnails I want to get, the longer I have to wait. The method I use is generateCGImagesAsynchronouslyForTimes. Does anyone know why I have this problem?
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.requestedTimeToleranceBefore = kCMTimeZero;
CGSize maxSize = CGSizeMake(320, 180);
generator.maximumSize = maxSize;
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error) {
    if (result != AVAssetImageGeneratorSucceeded) {
        NSLog(@"couldn't generate thumbnail, error: %@", error);
    }
    UIImage *frameImage = [UIImage imageWithCGImage:im];
    dispatch_async(dispatch_get_main_queue(), ^{
        [_frameImageView setImage:frameImage];
    });
};
[generator generateCGImagesAsynchronouslyForTimes:timeArray completionHandler:handler];
I know your issue.
It takes a lot of time to generate the thumbnails because you set requestedTimeToleranceAfter and requestedTimeToleranceBefore to kCMTimeZero.
Long answer:
If you specify a zero time tolerance, image generation is tuned for precision rather than performance. If you just want video thumbnails, you don't need to generate them with high precision.
It's similar to seekToTime: with a tolerance. See https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/02_Playback.html#//apple_ref/doc/uid/TP40010188-CH3-SW3 , section "Seeking—Repositioning the Playhead".
Short answer:
Just remove the requestedTimeToleranceAfter and requestedTimeToleranceBefore lines; the default tolerance (kCMTimePositiveInfinity) lets the generator return the nearest frame it can decode cheaply.
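A sketch of the relaxed configuration (the half-second tolerance is an arbitrary example; anything looser than kCMTimeZero lets the generator snap to nearby keyframes instead of decoding every frame up to the exact requested time):
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;
generator.maximumSize = CGSizeMake(320, 180);
// Either leave the tolerances at their default (kCMTimePositiveInfinity) or set a modest one:
generator.requestedTimeToleranceBefore = CMTimeMakeWithSeconds(0.5, 600);
generator.requestedTimeToleranceAfter = CMTimeMakeWithSeconds(0.5, 600);
[generator generateCGImagesAsynchronouslyForTimes:timeArray completionHandler:handler];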

Saving high quality images, doing live processing - what's the best approach?

I'm still learning about AVFoundation, so I'm unsure how best to approach the problem of needing to capture a high-quality still image while processing a low-quality preview video stream.
I've got an app that needs to take high-quality images (AVCaptureSessionPresetPhoto), but process the preview video stream using OpenCV, for which a much lower resolution is acceptable. Simply using the base OpenCV video camera class is no good, as setting the defaultAVCaptureSessionPreset to AVCaptureSessionPresetPhoto results in the full-resolution frame being passed to processImage, which is very slow indeed.
How can I have a high-quality connection to the device that I can use for capturing the still image, and a low-quality connection that can be processed and displayed? A description of how I need to set up the sessions/connections would be very helpful. Is there an open-source example of such an app?
I did something similar: I grabbed the pixels in the delegate method, made a CGImageRef from them, then dispatched that to a normal-priority queue where it was modified. Since AVFoundation must be using a CADisplayLink for the callback method, it has the highest priority. In my particular case I was not grabbing all the pixels, so it worked on an iPhone 4 at 30 fps. Depending on which devices you want to run on, you have trade-offs in the number of pixels, fps, etc.
Another idea is to grab a power-of-2 subset of the pixels, for instance every 4th pixel in each row and every 4th row. Again, I did something similar in my app at 20-30 fps. You can then further operate on this smaller image in dispatched blocks, as in the sketch below.
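A minimal sketch of that subsampling idea, to be run inside captureOutput:didOutputSampleBuffer:fromConnection: (the kCVPixelFormatType_32BGRA output format and the 4x factor are assumptions):
// Assumes the video data output delivers BGRA pixel buffers.
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
CVPixelBufferLockBaseAddress(imageBuffer, 0);

size_t width = CVPixelBufferGetWidth(imageBuffer);
size_t height = CVPixelBufferGetHeight(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
uint8_t *base = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);

const size_t step = 4;                                  // keep every 4th pixel in each direction
size_t smallWidth = width / step;
size_t smallHeight = height / step;
uint8_t *small = malloc(smallWidth * smallHeight * 4);  // tightly packed BGRA copy

for (size_t y = 0; y < smallHeight; y++) {
    uint8_t *srcRow = base + (y * step) * bytesPerRow;
    uint8_t *dstRow = small + y * smallWidth * 4;
    for (size_t x = 0; x < smallWidth; x++) {
        memcpy(dstRow + x * 4, srcRow + (x * step) * 4, 4); // copy one BGRA pixel
    }
}

CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
// Hand `small` off to a dispatched block for processing; that block is responsible for free(small).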
If this seems daunting, offer a bounty for working code.
CODE:
// Image is oriented with bottle neck to the left and the bottle bottom on the right
- (void)captureOutput:(AVCaptureVideoDataOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
#if 1
AVCaptureDevice *camera = [(AVCaptureDeviceInput *)[captureSession.inputs lastObject] device];
if(camera.adjustingWhiteBalance || camera.adjustingExposure) NSLog(@"GOTCHA: %d %d", camera.adjustingWhiteBalance, camera.adjustingExposure);
printf("foo\n");
#endif
if(saveState != saveOne && saveState != saveAll) return;
@autoreleasepool {
CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
//NSLog(#"PE: value=%lld timeScale=%d flags=%x", prStamp.value, prStamp.timescale, prStamp.flags);
/*Lock the image buffer*/
CVPixelBufferLockBaseAddress(imageBuffer,0);
NSRange captureRange;
if(saveState == saveOne) {
#if 0 // B G R A MODE !
NSLog(#"PIXEL_TYPE: 0x%lx", CVPixelBufferGetPixelFormatType(imageBuffer));
uint8_t *newPtr = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
NSLog(#"ONE VAL %x %x %x %x", newPtr[0], newPtr[1], newPtr[2], newPtr[3]);
}
exit(0);
#endif
[edgeFinder setupImageBuffer:imageBuffer];
BOOL success = [edgeFinder delineate:1];
if(!success) {
dispatch_async(dispatch_get_main_queue(), ^{ edgeFinder = nil; [delegate error]; });
saveState = saveNone;
} else
bottleRange = edgeFinder.sides;
xRange.location = edgeFinder.shoulder;
xRange.length = edgeFinder.bottom - xRange.location;
NSLog(#"bottleRange 1: %# neck=%d bottom=%d", NSStringFromRange(bottleRange), edgeFinder.shoulder, edgeFinder.bottom );
//searchRows = [edgeFinder expandRange:bottleRange];
rowsPerSwath = lrintf((bottleRange.length*NUM_DEGREES_TO_GRAB)*(float)M_PI/360.0f);
NSLog(#"rowsPerSwath = %d", rowsPerSwath);
saveState = saveIdling;
captureRange = NSMakeRange(0, [WLIPBase numRows]);
dispatch_async(dispatch_get_main_queue(), ^
{
[delegate focusDone];
edgeFinder = nil;
captureOutput.alwaysDiscardsLateVideoFrames = YES;
});
} else {
NSInteger rows = rowsPerSwath;
NSInteger newOffset = bottleRange.length - rows;
if(newOffset & 1) {
--newOffset;
++rows;
}
captureRange = NSMakeRange(bottleRange.location + newOffset/2, rows);
}
//NSLog(#"captureRange=%u %u", captureRange.location, captureRange.length);
/*Get information about the image*/
uint8_t *baseAddress = (uint8_t *)CVPixelBufferGetBaseAddress(imageBuffer);
size_t bytesPerRow = CVPixelBufferGetBytesPerRow(imageBuffer);
size_t width = CVPixelBufferGetWidth(imageBuffer);
// Note Apple sample code cheats big time - the phone is big endian so this reverses the "apparent" order of bytes
CGContextRef newContext = CGBitmapContextCreate(NULL, width, captureRange.length, 8, bytesPerRow, colorSpace, kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Little); // Video in ARGB format
assert(newContext);
uint8_t *newPtr = (uint8_t *)CGBitmapContextGetData(newContext);
size_t offset = captureRange.location * bytesPerRow;
memcpy(newPtr, baseAddress + offset, captureRange.length * bytesPerRow);
CVPixelBufferUnlockBaseAddress(imageBuffer, 0);
OSAtomicIncrement32(&totalImages);
int32_t curDepth = OSAtomicIncrement32(&queueDepth);
if(curDepth > maxDepth) maxDepth = curDepth;
#define kImageContext @"kImageContext"
#define kState @"kState"
#define kPresTime @"kPresTime"
CMTime prStamp = CMSampleBufferGetPresentationTimeStamp(sampleBuffer); // when it was taken?
//CMTime deStamp = CMSampleBufferGetDecodeTimeStamp(sampleBuffer); // now?
NSDictionary *dict = [NSDictionary dictionaryWithObjectsAndKeys:
[NSValue valueWithBytes:&saveState objCType:#encode(saveImages)], kState,
[NSValue valueWithNonretainedObject:(__bridge id)newContext], kImageContext,
[NSValue valueWithBytes:&prStamp objCType:#encode(CMTime)], kPresTime,
nil ];
dispatch_async(imageQueue, ^
{
// could be on any thread now
OSAtomicDecrement32(&queueDepth);
if(!isCancelled) {
saveImages state; [(NSValue *)[dict objectForKey:kState] getValue:&state];
CGContextRef context; [(NSValue *)[dict objectForKey:kImageContext] getValue:&context];
CMTime stamp; [(NSValue *)[dict objectForKey:kPresTime] getValue:&stamp];
CGImageRef newImageRef = CGBitmapContextCreateImage(context);
CGContextRelease(context);
UIImageOrientation orient = state == saveOne ? UIImageOrientationLeft : UIImageOrientationUp;
UIImage *image = [UIImage imageWithCGImage:newImageRef scale:1.0 orientation:orient]; // imageWithCGImage: UIImageOrientationUp UIImageOrientationLeft
CGImageRelease(newImageRef);
NSData *data = UIImagePNGRepresentation(image);
// NSLog(#"STATE:[%d]: value=%lld timeScale=%d flags=%x", state, stamp.value, stamp.timescale, stamp.flags);
{
NSString *name = [NSString stringWithFormat:#"%d.png", num];
NSString *path = [[wlAppDelegate snippetsDirectory] stringByAppendingPathComponent:name];
BOOL ret = [data writeToFile:path atomically:NO];
//NSLog(#"WROTE %d err=%d w/time %f path:%#", num, ret, (double)stamp.value/(double)stamp.timescale, path);
if(!ret) {
++errors;
} else {
dispatch_async(dispatch_get_main_queue(), ^
{
if(num) [delegate progress:(CGFloat)num/(CGFloat)(MORE_THAN_ONE_REV * SNAPS_PER_SEC) file:path];
} );
}
++num;
}
} else NSLog(#"CANCELLED");
} );
}
}
With AVCaptureSessionPresetPhoto the session uses a small video preview (about 1000x700 on an iPhone 6) and a high-resolution photo (about 3000x2000).
So I use a modified CvPhotoCamera class to process the small preview and take the full-size picture. I posted that code here: https://stackoverflow.com/a/31478505/1994445

Memory management in dispatch

I'm trying to create thumbnails of all the views in my iPad app in the background, using the following code:
NSString *path = [self.page previewPathForOrientation:currentOrientation];
dispatch_async( dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_LOW, 0), ^{
@autoreleasepool {
UIGraphicsBeginImageContextWithOptions(self.previewView.bounds.size, NO, 0.0);
[self.previewView.layer renderInContext:UIGraphicsGetCurrentContext()];
UIImage *image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
self.previewView = nil;
float scale = [UIScreen mainScreen].scale;
CGRect previewRect = currentOrientation == Landscape ? [[OrientationLandscape singleton] frameForPreviewImage] : [[OrientationPortrait singleton] frameForPreviewImage];
CGSize previewSize = CGSizeMake(previewRect.size.width * scale, previewRect.size.height * scale);
UIImage *scaledImage = [image scaleImageToSize:previewSize];
CGImageDestinationRef imageDestination = CGImageDestinationCreateWithURL((__bridge CFURLRef)[[NSURL alloc] initFileURLWithPath:path], (__bridge CFStringRef)@"public.png", 1, NULL);
CGImageDestinationAddImage(imageDestination, [scaledImage CGImage], NULL);
CGImageDestinationFinalize(imageDestination);
CFRelease(imageDestination);
NSFileManager *fileMngr = [[NSFileManager alloc] init];
if(![fileMngr fileExistsAtPath:path])
{
ZAssert(0, @"could not save preview file");
}
dispatch_async(dispatch_get_main_queue(), ^{
rendered++;
//DLog(#"rendered %d items", rendered);
[GetController addSkipBackupAttributeToItemAtPath:path];
[self.page setPreviewRenderedForOrientation:currentOrientation];
contentsCount = 0;
currentContentIndex = 0;
//[self prepareOtherOrientation];
if(self.journal == nil && (![self.page previewRenderedForOrientation:Landscape] || ![self.page previewRenderedForOrientation:Portrait])){
[self appendPage:self.page];
}
DLog(#"rendered page %# in orientation %d", self.page, currentOrientation);
self.page = nil;
[self retry];
});
}
});
The retry function uses an NSTimer to start the same function again after a short delay and with a different page. Using the Allocations tool, I can see the heap just keeps growing. After a while I get memory warnings, and shortly after that the app crashes.
Everything works fine when I remove all the dispatch calls, but of course that's not what I want. Also, when I increase the delay in the retry method to, say, 5 seconds, the problem disappears too, so it seems memory isn't released when things are processed in quick succession.
I absolutely ensured that this method isn't running more than once at a time... Any ideas what's going on here?
