The problem I have is that loading 20 thumbnail images from a video takes too long; the more thumbnails I request, the longer I have to wait. The method I use is generateCGImagesAsynchronouslyForTimes:. Does anyone know why I have this problem?
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.requestedTimeToleranceBefore = kCMTimeZero;
CGSize maxSize = CGSizeMake(320, 180);
generator.maximumSize = maxSize;
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error){
    if (result != AVAssetImageGeneratorSucceeded) {
        NSLog(@"couldn't generate thumbnail, error:%@", error);
        return; // im is NULL on failure, so don't try to use it
    }
    UIImage *frameImage = [UIImage imageWithCGImage:im];
    dispatch_async(dispatch_get_main_queue(), ^{
        [_frameImageView setImage:frameImage];
    });
};
[generator generateCGImagesAsynchronouslyForTimes:timeArray completionHandler:handler];
I know your issue.
It takes a long time to generate the thumbnails because you set requestedTimeToleranceAfter and requestedTimeToleranceBefore to kCMTimeZero.
Long Answer:
If you specify a zero time tolerance, the generator is tuned for precision rather than performance. If you just want a video thumbnail, you don't need to generate it with high precision.
It's similar to seekToTime: with a tolerance. See https://developer.apple.com/library/ios/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/02_Playback.html#//apple_ref/doc/uid/TP40010188-CH3-SW3 , section "Seeking—Repositioning the Playhead".
Short Answer:
Just remove the requestedTimeToleranceAfter and requestedTimeToleranceBefore assignments; the default tolerances let the generator return a nearby keyframe, which is much cheaper to decode.
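For illustration, a minimal sketch of the corrected setup, reusing the asset, timeArray, and handler from the question; only the two tolerance assignments are dropped, leaving the defaults of kCMTimePositiveInfinity:
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;
generator.maximumSize = CGSizeMake(320, 180);
// No tolerance assignments: the generator may return the nearest keyframe,
// which avoids decoding every frame up to the exact requested time.
[generator generateCGImagesAsynchronouslyForTimes:timeArray completionHandler:handler];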
I am using AVAssetImageGenerator to create an image from the last frame of a video. This usually works fine, but every now and then copyCGImageAtTime fails with the error
NSLocalizedDescription = "Cannot Open";
NSLocalizedFailureReason = "This media cannot be used.";
NSUnderlyingError = "Error Domain=NSOSStatusErrorDomain Code=-12431";
I am verifying that the AVAsset is not nil and I'm pulling the CMTime directly from the asset, so I do not understand why this keeps happening. This only happens when trying to get the last frame; if I use kCMTimeZero instead, it seems to work.
- (void)getLastFrameFromAsset:(AVAsset *)asset completionHandler:(void (^)(UIImage *image))completion
{
    NSAssert(asset, @"Tried to generate last frame from nil asset");
    AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    gen.requestedTimeToleranceBefore = kCMTimeZero;
    gen.requestedTimeToleranceAfter = kCMTimeZero;
    gen.appliesPreferredTrackTransform = YES;
    CMTime time = [asset duration];
    NSError *error = nil;
    CMTime actualTime;
    CGImageRef imageRef = [gen copyCGImageAtTime:time actualTime:&actualTime error:&error];
    UIImage *image = [[UIImage alloc] initWithCGImage:imageRef];
    NSAssert(image, @"Failed at generating image from asset's last frame");
    completion(image);
    CGImageRelease(imageRef);
}
This seems to be related, but it did not solve my problem.
Nothing guarantees that your asset's video track extends all the way to [asset duration]. Its duration can be shorter than that of the whole asset. Since you set the tolerance to kCMTimeZero, the only possible outcome is failure.
Edit: To clarify, the issue emerges when you have an asset whose audio track is slightly longer than its video track.
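A hedged sketch of a workaround under that assumption: request the end of the video track itself rather than [asset duration], backing off a fraction of a frame so the zero-tolerance request still lands inside the track:
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
CMTime lastVideoTime = videoTrack ? CMTimeRangeGetEnd(videoTrack.timeRange) : kCMTimeZero;
// Step back slightly from the track's end so the request lands on a real frame.
CMTime time = CMTimeSubtract(lastVideoTime, CMTimeMake(1, 600));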
I am currently working on a project that involves a UICollectionView populated by AVAssets imported from a UIImagePickerController. After 10 or so items are in the collection, scrolling becomes laggy and slow, and occasionally I receive memory warnings. I believe the problem is the thumbnail generation, which happens in real time. Here is the code I use:
- (void)setAsset:(AVAsset *)asset
{
    _asset = asset;
    AVAssetImageGenerator *generate = [[AVAssetImageGenerator alloc] initWithAsset:_asset];
    NSError *err = NULL;
    CMTime time = CMTimeMake(1, 60);
    generate.appliesPreferredTrackTransform = YES;
    CGImageRef imgRef = [generate copyCGImageAtTime:time actualTime:NULL error:&err];
    self.VideoImageView.image = [UIImage imageWithCGImage:imgRef];
    CGImageRelease(imgRef); // the copied image must be released, or every cell leaks one
}
Is there another, less expensive way to achieve this without the delay?
Any help on the matter would be greatly appreciated.
You can do the real time thumbnail generation on a background thread, and then jump back to the main thread when the operation is done to set the actual thumbnail.
Right now you're doing everything on the main thread, which blocks the UI, and makes the scrolling jerky.
You can set a placeholder image in your cell and generate the thumbnail on a background queue.
Then set it on your image view when it's ready.
AsynImageView might be of some use to you.
Open another thread to deal with the image work, and when the image is done, switch back to the main thread to update the cell. You can do this with GCD, as in the sketch below.
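A minimal GCD sketch of that pattern, assuming cell and asset come from your collection view code and placeholderImage is any stand-in image:
cell.VideoImageView.image = placeholderImage; // shown immediately while we work
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    AVAssetImageGenerator *gen = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    gen.appliesPreferredTrackTransform = YES;
    gen.maximumSize = CGSizeMake(320, 180); // decode small thumbnails, not full frames
    CGImageRef imgRef = [gen copyCGImageAtTime:CMTimeMake(1, 60) actualTime:NULL error:NULL];
    UIImage *thumb = imgRef ? [UIImage imageWithCGImage:imgRef] : nil;
    CGImageRelease(imgRef);
    dispatch_async(dispatch_get_main_queue(), ^{
        cell.VideoImageView.image = thumb; // back on the main thread for UIKit
    });
});
In a real collection view you would also check that the cell has not been reused for another asset before assigning the image.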
Here is code that asynchronously generates a thumbnail image for a video.
-(void)generateImage
{
    NSURL *url = [NSURL fileURLWithPath:_videoPath];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES;
    CMTime thumbTime = CMTimeMakeWithSeconds(30, 30);
    AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error){
        if (result != AVAssetImageGeneratorSucceeded) {
            NSLog(@"couldn't generate thumbnail, error:%@", error);
            return;
        }
        // TODO Do something with the image
    };
    CGSize maxSize = CGSizeMake(128, 128);
    generator.maximumSize = maxSize;
    [generator generateCGImagesAsynchronouslyForTimes:[NSArray arrayWithObject:[NSValue valueWithCMTime:thumbTime]] completionHandler:handler];
}
Hope this helps you.
I know this question has been asked many times on stackoverflow. But my problem is different.
I am iterating over the albums of the Photos library to get all videos and their thumbnails.
Now, the problem is that my code is very slow at getting the thumbnail of each video. For example, there are 14 videos in my camera roll and the total time taken to generate the thumbnails is around 3-4 seconds. I am using this code:
+ (UIImage *)imageFromVideoAtURL:(NSURL *)contentURL {
    UIImage *theImage = nil;
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:contentURL options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.appliesPreferredTrackTransform = YES;
    NSError *err = NULL;
    CMTime time = CMTimeMake(1, 60);
    CGImageRef imgRef = [generator copyCGImageAtTime:time actualTime:NULL error:&err];
    theImage = [[[UIImage alloc] initWithCGImage:imgRef] autorelease];
    CGImageRelease(imgRef);
    [asset release];
    [generator release];
    return theImage;
}
I am looking for a way to get the thumbnails of all videos very quickly so that the user does not have to wait. I have seen apps on the App Store that do the same thing almost instantly. Please help.
I have also tried this code to generate the thumbnail; it is also very slow:
MPMoviePlayerController *moviePlayer = [[MPMoviePlayerController alloc] initWithContentURL:mediaUrl];
moviePlayer.shouldAutoplay = NO;
UIImage *thumbnail = [[moviePlayer thumbnailImageAtTime:0.0 timeOption:MPMovieTimeOptionNearestKeyFrame] retain];
[imageView setImage:thumbnail]; //imageView is a UIImageView
If you are loading videos from the image library, each video should already have an embedded thumbnail. This can be accessed using the thumbnail or aspectRatioThumbnail methods of the ALAsset class.
So in your case the thumbnails could be loaded like:
ALAssetsLibrary *lib = [ALAssetsLibrary new];
[lib assetForURL:contentURL resultBlock:^(ALAsset *asset) {
    CGImageRef thumb = [asset thumbnail];
    dispatch_async(dispatch_get_main_queue(), ^{
        // do any UI operation here with thumb
    });
} failureBlock:nil];
Please make sure to make any UIKit calls on the main queue, as the assetForURL:resultBlock:failureBlock: method may invoke the result block on a background thread.
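On newer systems, where ALAssetsLibrary is deprecated, the Photos framework serves the same cached thumbnails. A hedged sketch, assuming phAsset is a PHAsset you have fetched elsewhere and imageView is your target view:
#import <Photos/Photos.h>

PHImageRequestOptions *options = [PHImageRequestOptions new];
options.deliveryMode = PHImageRequestOptionsDeliveryModeFastFormat; // favor speed over quality
[[PHImageManager defaultManager] requestImageForAsset:phAsset
                                           targetSize:CGSizeMake(320, 180)
                                          contentMode:PHImageContentModeAspectFill
                                              options:options
                                        resultHandler:^(UIImage *result, NSDictionary *info) {
    imageView.image = result; // the handler is called on the main thread by default
}];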
In my project, I need to copy a chunk of each frame of a video onto a single resulting image.
Capturing video frames is not a big deal. It would be something like :
// duration is the movie length in s.
// frameDuration is 1/fps. (for 24 fps, frameDuration = 1/24)
// player is an MPMoviePlayerController
for (NSTimeInterval i = 0; i < duration; i += frameDuration) {
    UIImage *image = [player thumbnailImageAtTime:i timeOption:MPMovieTimeOptionExact];
    CGRect destinationRect = [self getDestinationRect:i];
    [self drawImage:image inRect:destinationRect fromRect:originRect];
    // UI feedback
    [self performSelectorOnMainThread:@selector(setProgressValue:)
                           withObject:[NSNumber numberWithFloat:(float)(i / duration)]
                        waitUntilDone:NO];
}
The problem comes when I try to implement the drawImage:inRect:fromRect: method.
I tried an implementation that:
creates a new CGImage with CGImageCreateWithImageInRect from the video frame to extract the chunk of the image,
makes a CGContextDrawImage call on the image context to draw the chunk.
But when the video reaches 12-14 s, my iPhone 4S issues its third memory warning and crashes. I've profiled the app with the Leaks tool, and it found no leak at all...
I'm not very strong in Quartz. Is there a better-optimized way to achieve this?
Finally I kept the Quartz part of my code and changed the way I retrieved the images.
Now I use AVFoundation, which is a far faster solution.
// Creating the tools: 1/ the video asset, 2/ the image generator, 3/ the composition, which helps to retrieve video properties.
AVURLAsset *asset = [[[AVURLAsset alloc] initWithURL:moviePathURL
                                             options:[NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithBool:YES], AVURLAssetPreferPreciseDurationAndTimingKey, nil]] autorelease];
AVAssetImageGenerator *generator = [[[AVAssetImageGenerator alloc] initWithAsset:asset] autorelease];
generator.appliesPreferredTrackTransform = YES; // if I omit this, the frames are rotated 90° (didn't try in landscape)
AVVideoComposition *composition = [AVVideoComposition videoCompositionWithPropertiesOfAsset:asset];

// Retrieving the video properties
NSTimeInterval duration = CMTimeGetSeconds(asset.duration);
frameDuration = CMTimeGetSeconds(composition.frameDuration);
CGSize renderSize = composition.renderSize;
CGFloat totalFrames = round(duration / frameDuration);

// Selecting each frame we want to extract: all of them.
NSMutableArray *times = [NSMutableArray arrayWithCapacity:round(duration / frameDuration)];
for (int i = 0; i < totalFrames; i++) {
    NSValue *time = [NSValue valueWithCMTime:CMTimeMakeWithSeconds(i * frameDuration, composition.frameDuration.timescale)];
    [times addObject:time];
}
__block int i = 0;
AVAssetImageGeneratorCompletionHandler handler = ^(CMTime requestedTime, CGImageRef im, CMTime actualTime, AVAssetImageGeneratorResult result, NSError *error){
    if (result == AVAssetImageGeneratorSucceeded) {
        int x = round(CMTimeGetSeconds(requestedTime) / frameDuration);
        CGRect destinationStrip = CGRectMake(x, 0, 1, renderSize.height);
        [self drawImage:im inRect:destinationStrip fromRect:originStrip inContext:context];
    }
    else
        NSLog(@"Ouch: %@", error.description);
    i++;
    [self performSelectorOnMainThread:@selector(setProgressValue:) withObject:[NSNumber numberWithFloat:i / totalFrames] waitUntilDone:NO];
    if (i == totalFrames) {
        [self performSelectorOnMainThread:@selector(performVideoDidFinish) withObject:nil waitUntilDone:NO];
    }
};
// Launching the process...
generator.requestedTimeToleranceBefore = kCMTimeZero;
generator.requestedTimeToleranceAfter = kCMTimeZero;
generator.maximumSize = renderSize;
[generator generateCGImagesAsynchronouslyForTimes:times completionHandler:handler];
Even with very long videos it takes time, but it never crashes!
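The drawImage:inRect:fromRect:inContext: helper isn't shown in the post; a minimal sketch of what it could look like, following the CGImageCreateWithImageInRect / CGContextDrawImage steps the question describes (creation and ownership of the bitmap context are assumed to live elsewhere):
- (void)drawImage:(CGImageRef)image inRect:(CGRect)destRect fromRect:(CGRect)srcRect inContext:(CGContextRef)ctx
{
    // Extract just the strip we need from the full frame...
    CGImageRef chunk = CGImageCreateWithImageInRect(image, srcRect);
    if (!chunk) return;
    // ...and composite it into the destination rect of the shared bitmap context.
    CGContextDrawImage(ctx, destRect, chunk);
    CGImageRelease(chunk);
}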
In addition to Martin's answer, I'd suggest shrinking the size of the images obtained by that call; that is, setting the property generator.maximumSize = CGSizeMake(width, height); Make the images as small as possible so they don't take up too much memory.
I'm using UIImagePicker to allow the user to create a video and then trim it. I need to split that video into multiple frames and let the user choose one of them.
In order to show the frames, I likely must convert them to UIImage. How can I do this? I must use AVFoundation, but I couldn't find a tutorial on how to get and convert the frames.
Should I do the image capture with AVFoundation too? If so, do I have to implement trimming myself?
I think the answer to this question is what you are looking for:
iPhone Read UIimage (frames) from video with AVFoundation.
There are two methods specified in the accepted answer; you can use either one according to your requirements. The simpler, synchronous one looks roughly like the sketch below.
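A minimal sketch of the synchronous approach, assuming asset is the AVAsset for the trimmed video:
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform = YES;
NSError *error = nil;
CMTime actualTime;
// Grab the frame nearest to the 1-second mark; actualTime reports which frame was actually used.
CGImageRef cgImage = [generator copyCGImageAtTime:CMTimeMakeWithSeconds(1.0, 600) actualTime:&actualTime error:&error];
UIImage *frame = cgImage ? [UIImage imageWithCGImage:cgImage] : nil;
CGImageRelease(cgImage);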
Here is code to extract frames from a video at a given FPS.
1) Import
#import <Photos/Photos.h>
2) In viewDidLoad
videoUrl = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:@"VfE_html5" ofType:@"mp4"]];
[self createImage:5]; // 5 is frames per second (FPS); you can change the FPS as per your requirement.
3) Functions
-(void)createImage:(int)withFPS {
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoUrl options:nil];
    AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
    generator.requestedTimeToleranceAfter = kCMTimeZero;
    generator.requestedTimeToleranceBefore = kCMTimeZero;
    for (Float64 i = 0; i < CMTimeGetSeconds(asset.duration) * withFPS; i++){
        @autoreleasepool {
            CMTime time = CMTimeMake(i, withFPS);
            NSError *err;
            CMTime actualTime;
            CGImageRef image = [generator copyCGImageAtTime:time actualTime:&actualTime error:&err];
            UIImage *generatedImage = [[UIImage alloc] initWithCGImage:image];
            [self savePhotoToAlbum:generatedImage]; // Saves the image to the photo library rather than keeping it in memory
            CGImageRelease(image);
        }
    }
}
-(void)savePhotoToAlbum:(UIImage *)imageToSave {
    [[PHPhotoLibrary sharedPhotoLibrary] performChanges:^{
        PHAssetChangeRequest *changeRequest = [PHAssetChangeRequest creationRequestForAssetFromImage:imageToSave];
    } completionHandler:^(BOOL success, NSError *error) {
        if (success) {
            NSLog(@"success.");
        }
        else {
            NSLog(@"fail.");
        }
    }];
}
You can also use the library VideoBufferReader (see GitHub), which is based on AVFoundation.
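VideoBufferReader wraps AVAssetReader, which decodes frames sequentially and is usually much faster than repeated copyCGImageAtTime: calls when you need every frame. A minimal sketch of that underlying approach, assuming videoUrl as above:
NSError *error = nil;
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:videoUrl options:nil];
AVAssetTrack *track = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
NSDictionary *settings = @{(id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA)};
AVAssetReaderTrackOutput *output = [[AVAssetReaderTrackOutput alloc] initWithTrack:track outputSettings:settings];
[reader addOutput:output];
[reader startReading];

CMSampleBufferRef sample = NULL;
while ((sample = [output copyNextSampleBuffer])) {
    CVImageBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sample);
    // Wrap the decoded pixel buffer; rendering or saving it is up to the caller.
    CIImage *ciImage = [CIImage imageWithCVPixelBuffer:pixelBuffer];
    UIImage *frame = [UIImage imageWithCIImage:ciImage];
    // ... use `frame` here, then release the sample buffer.
    CFRelease(sample);
}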