AVAssetExportSession exporting too slow - ios

I am trying to export an AVMutableComposition using AVAssetExportSession.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL=url;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.videoComposition = mainCompositionInst;
exporter.shouldOptimizeForNetworkUse = YES;
[exporter exportAsynchronouslyWithCompletionHandler:^
{
switch (exporter.status)
{
case AVAssetExportSessionStatusCompleted:
{
NSLog(#"Video Merge SuccessFullt");
}
break;
case AVAssetExportSessionStatusFailed:
NSLog(#"Failed:%#", exporter.error.description);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Canceled:%#", exporter.error);
break;
case AVAssetExportSessionStatusExporting:
NSLog(#"Exporting!");
break;
case AVAssetExportSessionStatusWaiting:
NSLog(#"Waiting");
break;
default:
break;
}
}];
But exporting even a 1-minute video takes around 30 seconds, which is too much considering the iPad's built-in camera app takes less than 2 seconds.
Also, if I remove the videoComposition from the exporter, the time drops to 7 seconds, which is still bad considering the video is only 1 minute long.
So, how can I reduce the export time to a minimum?
Also, does AVAssetExportSession generally take this much time, or is it just my case?
Update:
Merge Code:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo
preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableVideoCompositionLayerInstruction *videoTrackLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
NSMutableArray *instructions = [NSMutableArray new];
CGSize size = CGSizeZero;
CMTime time = kCMTimeZero;
for (AVURLAsset *asset in assets)
{
AVAssetTrack *assetTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioAssetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
NSError *error;
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration )
ofTrack:assetTrack
atTime:time
error:&error];
[videoTrackLayerInstruction setTransform:assetTrack.preferredTransform atTime:time];
if (error) {
NSLog(#"asset url :: %#",assetTrack.asset);
NSLog(#"Error1 - %#", error.debugDescription);
}
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAssetTrack.timeRange.duration)
ofTrack:audioAssetTrack
atTime:time
error:&error];
if (error) {
NSLog(#"Error2 - %#", error.debugDescription);
}
time = CMTimeAdd(time, assetTrack.timeRange.duration);
if (CGSizeEqualToSize(size, CGSizeZero)) {
size = assetTrack.naturalSize;
}
}
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, time);
mainInstruction.layerInstructions = [NSArray arrayWithObject:videoTrackLayerInstruction];
AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
mainCompositionInst.frameDuration = CMTimeMake(1, 30);
mainCompositionInst.renderSize = size;

I think the issue here is AVAssetExportPresetHighestQuality: it will cause conversion or upsampling, and will slow things WAY down. Try using AVAssetExportPresetPassthrough instead. That took my export times from ~35+ seconds down to less than a second.
I also disabled optimizing for network use, as all of our videos are only used within the app and never streamed or passed over a network.
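Roughly, the passthrough setup would look like this (reusing the variable names from the question); keep in mind that passthrough copies samples without re-encoding, so it ignores any videoComposition and only applies when you don't need per-frame transforms or overlays:
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetPassthrough];
exporter.outputURL = url;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = NO; // skip the extra pass that rewrites the file for streaming
[exporter exportAsynchronouslyWithCompletionHandler:^{
    if (exporter.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Passthrough export finished");
    } else {
        NSLog(@"Export failed: %@", exporter.error);
    }
}];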

I built an app that merged together different video fragments, and I can safely say it is just your case. My video files are ~10 MB, so maybe they are a little smaller, but it takes less than a second to merge them all together, even if there are 10 or 20 segments.
Now, as for why it is actually happening, I have checked my configuration against yours and the differences are the following:
I use export.outputFileType = AVFileTypeMPEG4
I have network optimization disabled; if you are not planning to stream the video from your device, you should disable it too
Other than that it should be the same, but I can't really compare without seeing how you actually create the composition. There are some things to check though:
If you are using AVURLAssetPreferPreciseDurationAndTimingKey when creating the AVURLAsset and you don't have enough keyframes, it can actually take quite some time to seek through and find them, which slows things down
Consider whether you really need the highest quality for the video
Consider the resolution of the video and possibly lower it
I should be able to help you more if you provide more information, but maybe some of this stuff will work. Give it a try and then report back.
Hope it helps a little!
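For reference, a minimal sketch of the configuration suggested above; assetURL, outputURL and composition are placeholder names, and AVAssetExportPresetMediumQuality is just one example of trading quality for speed:
// No AVURLAssetPreferPreciseDurationAndTimingKey, so the asset is not scanned for precise keyframe timing
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL options:nil];
AVAssetExportSession *export = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetMediumQuality];
export.outputURL = outputURL;
export.outputFileType = AVFileTypeMPEG4;   // MP4 container
export.shouldOptimizeForNetworkUse = NO;   // disabled when the video never leaves the device
[export exportAsynchronouslyWithCompletionHandler:^{ /* check export.status */ }];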
Edit 1:
I forgot to mention that if you run out of options, you should try the FFmpeg library, as it is very high performance, though due to licensing it might not be suitable for you.

Keep in mind that the asset you're trying to export may not be stored locally; in that case the content is downloaded first and only then exported.
If you don't want to download any content, disallow network access:
let videoRequestOptions: PHVideoRequestOptions = PHVideoRequestOptions()
videoRequestOptions.isNetworkAccessAllowed = false
You are also going to receive, within the requestExportSession completion handler, an info dictionary with a couple of useful values.
https://developer.apple.com/documentation/photokit/phimagemanager/image_result_info_keys
Otherwise, if you want to download your asset from iCloud and make it as fast as possible, you can play with the following parameters:
let videoRequestOptions: PHVideoRequestOptions = PHVideoRequestOptions()
// automatic, highQualityFormat, mediumQualityFormat, fastFormat
videoRequestOptions.deliveryMode = .fastFormat
videoRequestOptions.isNetworkAccessAllowed = true
Another important property is the export preset; there are a bunch of available presets:
let lowQualityPreset1 = AVAssetExportPresetLowQuality
let lowQualityPreset2 = AVAssetExportPreset640x480
let lowQualityPreset3 = AVAssetExportPreset960x540
let lowQualityPreset4 = AVAssetExportPreset1280x720
let manager = PHImageManager()
manager.requestExportSession(forVideo: asset,
options: videoRequestOptions,
exportPreset: lowQualityPreset1) { (session, info) in
session?.outputURL = outputUrl
session?.outputFileType = .mp4
session?.shouldOptimizeForNetworkUse = true
session?.exportAsynchronously {
}
}

Related

Merging Clips with Different Resolutions

I have a set of video clips that I would like to merge together and then put a watermark on it.
I am able to do both individually; however, problems arise when performing them together.
All clips that will be merged are either 1920x1080 or 960x540.
For some reason, AVAssetExportSession does not display them well together.
Here are the 2 bugs based on 3 different scenarios:
This image is a result of:
Merging Clips together
As you can see, there is nothing wrong here, the output video produces the desired effect.
However, when I then try to add a watermark, it creates the following issue:
This image is a result of:
Merging Clips together
Putting a watermark on it
BUG 1: Some clips in the video get resized for whatever reason while other clips do not.
This image is a result of:
Merging Clips together
Resizing clips that are 960x540 to 1920x1080
Putting a watermark on it
BUG 2: Now the clips that need to be resized get resized; however, the old unresized clip is still there.
Merging/Resizing Code:
-(void) mergeClips{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
AVMutableCompositionTrack *mutableVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *mutableAudioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
// loop through the list of videos and add them to the track
CMTime currentTime = kCMTimeZero;
NSMutableArray* instructionArray = [[NSMutableArray alloc] init];
if (_clipsArray){
for (int i = 0; i < (int)[_clipsArray count]; i++){
NSURL* url = [_clipsArray objectAtIndex:i];
AVAsset *asset = [AVAsset assetWithURL:url];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
CGSize size = videoTrack.naturalSize;
CGFloat widthScale = 1920.0f/size.width;
CGFloat heightScale = 1080.0f/size.height;
// lines that performs resizing
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableVideoTrack];
CGAffineTransform scale = CGAffineTransformMakeScale(widthScale,heightScale);
CGAffineTransform move = CGAffineTransformMakeTranslation(0,0);
[layerInstruction setTransform:CGAffineTransformConcat(scale, move) atTime:currentTime];
[instructionArray addObject:layerInstruction];
[mutableVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
ofTrack:videoTrack
atTime:currentTime error:nil];
[mutableAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
ofTrack:audioTrack
atTime:currentTime error:nil];
currentTime = CMTimeMakeWithSeconds(CMTimeGetSeconds(asset.duration) + CMTimeGetSeconds(currentTime), asset.duration.timescale);
}
}
AVMutableVideoCompositionInstruction * mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, currentTime);
mainInstruction.layerInstructions = instructionArray;
// 4 - Get path
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *lastPostedDayPath = [documentsDirectory stringByAppendingPathComponent:@"lastPostedDay"];
//Check if folder exists, if not create folder
if (![[NSFileManager defaultManager] fileExistsAtPath:lastPostedDayPath]){
[[NSFileManager defaultManager] createDirectoryAtPath:lastPostedDayPath withIntermediateDirectories:NO attributes:nil error:nil];
}
NSString *fileName = [NSString stringWithFormat:@"%li_%li_%li.mov", (long)_month, (long)_day, (long)_year];
NSString *finalDayPath = [lastPostedDayPath stringByAppendingPathComponent:fileName];
NSURL *url = [NSURL fileURLWithPath:finalDayPath];
BOOL fileExists = [[NSFileManager defaultManager] fileExistsAtPath:finalDayPath];
if (fileExists){
NSLog(#"file exists");
[[NSFileManager defaultManager] removeItemAtURL:url error:nil];
}
AVMutableVideoComposition *mainComposition = [AVMutableVideoComposition videoComposition];
mainComposition.instructions = [NSArray arrayWithObject:mainInstruction];
mainComposition.frameDuration = CMTimeMake(1, 30);
mainComposition.renderSize = CGSizeMake(1920.0f, 1080.0f);
// 5 - Create exporter
_exportSession = [[AVAssetExportSession alloc] initWithAsset:mixComposition
presetName:AVAssetExportPresetHighestQuality];
_exportSession.outputURL=url;
_exportSession.outputFileType = AVFileTypeQuickTimeMovie;
_exportSession.shouldOptimizeForNetworkUse = YES;
_exportSession.videoComposition = mainComposition;
[_exportSession exportAsynchronouslyWithCompletionHandler:^{
[merge_timer invalidate];
merge_timer = nil;
switch (_exportSession.status) {
case AVAssetExportSessionStatusFailed:
NSLog(#"Export failed -> Reason: %#, User Info: %#",
_exportSession.error.localizedDescription,
_exportSession.error.userInfo.description);
[self showSavingFailedDialog];
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export cancelled");
[self showSavingFailedDialog];
break;
case AVAssetExportSessionStatusCompleted:
NSLog(#"Export finished");
[self addWatermarkToExportSession:_exportSession];
break;
default:
break;
}
}];
});
}
Once this finishes, I run it through a different export session that simply adds the watermark.
Is there something I am doing wrong in my code or process?
Is there an easier way for achieving this?
Thank you for your time!
I was able to solve my issue.
For some reason, AVAssetExportSession does not actually create a 'flat' video file of the merged clips, so it still recognized the lower-resolution clips and their locations when adding the watermark, which caused them to resize.
What I did to solve this was first use AVAssetWriter to merge my clips and create one 'flat' file. I could then add the watermark without the resizing issue.
Hope this helps anyone who may come across this problem in the future!
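This is not my exact code, but a rough sketch of that two-pass idea under assumed names (mixComposition, mainComposition, flatURL): pass 1 renders the merged composition to a single flat file with AVAssetReader/AVAssetWriter, and pass 2 (not shown) runs the watermark export on that file. Audio handling and error checks are omitted for brevity.
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:mixComposition error:&error];
AVAssetReaderVideoCompositionOutput *videoOut =
    [AVAssetReaderVideoCompositionOutput assetReaderVideoCompositionOutputWithVideoTracks:[mixComposition tracksWithMediaType:AVMediaTypeVideo]
                                                                            videoSettings:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange) }];
videoOut.videoComposition = mainComposition; // the scaling instructions are applied while reading
[reader addOutput:videoOut];

AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:flatURL fileType:AVFileTypeQuickTimeMovie error:&error];
AVAssetWriterInput *videoIn = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                                                  outputSettings:@{ AVVideoCodecKey  : AVVideoCodecH264,
                                                                                    AVVideoWidthKey  : @1920,
                                                                                    AVVideoHeightKey : @1080 }];
[writer addInput:videoIn];

[writer startWriting];
[reader startReading];
[writer startSessionAtSourceTime:kCMTimeZero];
dispatch_queue_t queue = dispatch_queue_create("com.example.flatten", NULL); // queue label is arbitrary
[videoIn requestMediaDataWhenReadyOnQueue:queue usingBlock:^{
    while (videoIn.isReadyForMoreMediaData) {
        CMSampleBufferRef sample = [videoOut copyNextSampleBuffer];
        if (sample) {
            [videoIn appendSampleBuffer:sample];
            CFRelease(sample);
        } else {
            [videoIn markAsFinished];
            [writer finishWritingWithCompletionHandler:^{
                // Pass 2: run the watermark export session against the flat file at flatURL.
            }];
            break;
        }
    }
}];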
I also encountered the same problem.
You can set the opacity to 0 after one video ends, like this:
[layerInstruction setOpacity:0.0 atTime:duration];
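For context, a small sketch of where that call sits; videoTrack and duration (the end time of the clip on that layer's track) are assumed placeholders:
AVMutableVideoCompositionLayerInstruction *layerInstruction =
    [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
[layerInstruction setOpacity:1.0 atTime:kCMTimeZero]; // visible while this clip plays
[layerInstruction setOpacity:0.0 atTime:duration];    // hide the track once this clip ends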

Compositing 2 videos with half opacity using AVFoundation without CALayer becomes appending videos?

What I wanted: insert multiple video layers with some opacity, ALL at time 0:00 of an AVMutableCompositionTrack.
I read the official AVFoundation documentation carefully, and also many WWDC discussions on this topic, but I couldn't understand why the result does NOT follow what the API states.
I can achieve the overlay result with 2 AVPlayerLayers during playback. That could also mean I could use AVVideoCompositionCoreAnimationTool to achieve something similar during export.
But I would rather leave CALayers for the subtitle/image overlays or animations.
What I tried, for each AVAsset being inserted:
- (void)addVideo:(AVAsset *)asset_in withOpacity:(float)opacity
{
// This is demo for composition of opaque videos. So we all insert video at time - 0:00
[_videoCompositionTrack insertTimeRange:CMTimeRangeMake( kCMTimeZero, asset_in.duration )
ofTrack:[ [asset_in tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ]
atTime:kCMTimeZero error:nil ];
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
AVAssetTrack *assettrack_in = [ [asset_in tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake( kCMTimeZero, assettrack_in.timeRange.duration );
AVMutableVideoCompositionLayerInstruction *videoCompositionLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:_videoCompositionTrack];
[videoCompositionLayerInstruction setTransform:assettrack_in.preferredTransform atTime:kCMTimeZero];
[videoCompositionLayerInstruction setOpacity:opacity atTime:kCMTimeZero];
mutableVideoCompositionInstruction.layerInstructions = @[videoCompositionLayerInstruction];
[_arrayVideoCompositionInstructions addObject:mutableVideoCompositionInstruction];
}
Please be aware that insertTimeRange is called with atTime:kCMTimeZero, so I expect the videos to be placed at the beginning of the video composition.
What I tried for exporting:
- (IBAction)ExportAndPlay:(id)sender
{
_mutableVideoComposition.instructions = [_arrayVideoCompositionInstructions copy];
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:_mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = _mutableVideoComposition;
_mutableVideoComposition.instructions = [_arrayVideoCompositionInstructions copy];
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
switch ([exporter status]) {
case AVAssetExportSessionStatusFailed:
{
NSLog(@"Export failed: %@ %@", [[exporter error] localizedDescription], [[exporter error] debugDescription]);
break;
}
case AVAssetExportSessionStatusCancelled:
{
NSLog(#"Export canceled");
break;
}
case AVAssetExportSessionStatusCompleted:
{
NSLog(#"Export complete!");
NSLog( #"Export URL = %#", [exporter.outputURL absoluteString] );
[self altPlayWithUrl:exporter.outputURL];
}
default:
{
NSLog(#"default");
}
}
} );
}];
}
What turns out: it exports a video with the second video appended after the first, if I select 2 video clips.
This is not the same behaviour as what I read in the AVMutableCompositionTrack documentation.
Can anyone shed some light for this helpless lamb?
Edit: Is there any detail missing that is stopping anyone from lending me a hand? If so, please leave a comment so I can fill it in.
Okay, sorry; it turns out this was caused by a misunderstanding of the AVMutableCompositionTrack API.
If you want to blend 2 videos as 2 overlays, as I do, you're going to need 2 AVMutableCompositionTrack instances, both added to the same AVMutableComposition, like this:
// 0. Setup AVMutableCompositionTracks <= FOR EACH AVAssets !!!
AVMutableCompositionTrack *mutableCompositionVideoTrack1 = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *mutableCompositionVideoTrack2 = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
And insert both AVAssets you want into THEIR OWN AVMutableCompositionTrack:
AVAssetTrack *videoAssetTrack1 = [ [ [_arrayVideoAssets firstObject] tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ];
AVAssetTrack *videoAssetTrack2 = [ [ [_arrayVideoAssets lastObject] tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ];
[ mutableCompositionVideoTrack1 insertTimeRange:CMTimeRangeMake( kCMTimeZero, videoAssetTrack1.timeRange.duration ) ofTrack:videoAssetTrack1 atTime:kCMTimeZero error:nil ];
[ mutableCompositionVideoTrack2 insertTimeRange:CMTimeRangeMake( kCMTimeZero, videoAssetTrack2.timeRange.duration ) ofTrack:videoAssetTrack2 atTime:kCMTimeZero error:nil ];
Then set up the AVMutableVideoComposition with a layer instruction for each AVMutableCompositionTrack:
AVMutableVideoCompositionInstruction *compInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
compInstruction.timeRange = CMTimeRangeMake( kCMTimeZero, videoAssetTrack1.timeRange.duration );
AVMutableVideoCompositionLayerInstruction *layerInstruction1 = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack1];
[layerInstruction1 setOpacity:0.5f atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *layerInstruction2 = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack2];
[layerInstruction2 setOpacity:0.8f atTime:kCMTimeZero];
CGAffineTransform transformScale = CGAffineTransformMakeScale( 0.5f, 0.5f );
CGAffineTransform transformTransition = CGAffineTransformMakeTranslation( videoComposition.renderSize.width / 2, videoComposition.renderSize.height / 2 );
[ layerInstruction2 setTransform:CGAffineTransformConcat(transformScale, transformTransition) atTime:kCMTimeZero ];
compInstruction.layerInstructions = @[ layerInstruction1, layerInstruction2 ];
videoComposition.instructions = @[ compInstruction ];
Finally, it should be fine during exporting.
Sorry for the bother, if anyone did take a look.

iOS7 AVMutableVideoCompositionLayerInstruction causes video frame to freeze

I'm working with modifying some video via AVMutableVideoCompositionLayerInstruction in the iOS7 SDK.
The following code used to work on iOS 6.1.3, but in iOS7 the video is frozen on the first frame (though I can still hear the audio ok). I got rid of all actual transformations I was applying to verify that adding a video composition alone causes problems.
AVURLAsset* videoAsset = [[AVURLAsset alloc] initWithURL:inputFileURL options:NULL];
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction *layerInstruction =
[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoAssetTrack];
AVMutableVideoComposition *mainComposition = [AVMutableVideoComposition videoComposition];
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration);
mainComposition.instructions = [NSArray arrayWithObject:mainInstruction];
mainComposition.frameDuration = videoAsset.duration;
mainComposition.renderSize = CGSizeMake(320, 320);
...
exportSession.videoComposition = mainComposition;
If I do not set the videoComposition property of exportSession then the video exports okay, but I cannot apply any transformations. Does anyone know what could be causing this?
Thanks.
A good way to debug issues with the video composition is to use [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset]. The returned AVMutableVideoComposition should work correctly. Then you can compare the contents of the instructions array with your instructions.
To increase the confusion levels, the asset there can also be an AVComposition. I think the AVFoundation team didn't do the best job when naming these things....
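As a quick illustration of that debugging approach (composition here is a placeholder for your asset or AVComposition), you can log the generated, known-good instructions and compare them with your own:
AVMutableVideoComposition *reference =
    [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:composition];
NSLog(@"renderSize: %.0f x %.0f", reference.renderSize.width, reference.renderSize.height);
CMTimeShow(reference.frameDuration);
for (AVVideoCompositionInstruction *instruction in reference.instructions) {
    CMTimeRangeShow(instruction.timeRange); // log each instruction's time range
    NSLog(@"layerInstructions: %@", instruction.layerInstructions);
}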
I've been struggling as well with AVMutableVideoCompositionLayerInstruction and mixing video with CALayers. After a few days of trying different approaches, what I realised is that the timing of the assets is pretty important.
The proper way to find out the duration of each asset is to load the property asynchronously:
loadValuesAsynchronouslyForKeys:@[@"duration"]
//Asset url
NSURL *assetUrl = [NSURL fileURLWithPath:_firstVideoFilePath];
//audio/video assets
AVURLAsset * videoAsset = [[AVURLAsset alloc]initWithURL:assetUrl options:nil];
//var to store the duration
CMTime __block durationTime;
//And here we'll be able to proper get the asset duration
[videoAsset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler: ^{
Float64 durationSeconds = CMTimeGetSeconds([videoAsset duration]);
durationTime = [videoAsset duration];
//At this point you have the proper asset duration value, you can start any video processing from here.
}];
Hope this helps to anyone with the same issue.

AVAssetExportSession ignoring videoComposition rotation & stripping metadata

I am attempting to rotate video prior to upload on my iOS device because other platforms (such as Android) do not properly interpret the rotation information in iOS-recorded videos and, as a result, play them improperly rotated.
I have looked at the following Stack Overflow posts but have not had success applying any of them to my case:
iOS rotate every frame of video
Rotating Video w/ AVMutableVideoCompositionLayerInstruction
AVMutableVideoComposition rotated video captured in portrait mode
iOS AVFoundation: Setting Orientation of Video
I copied the Apple AVSimpleEditor sample project, but unfortunately all that ever happens is that, upon creating an AVAssetExportSession and calling exportAsynchronouslyWithCompletionHandler, no rotation is performed and, what's worse, the rotation metadata is stripped out of the resulting file.
Here is the code that runs the export:
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:[_mutableComposition copy] presetName:AVAssetExportPresetPassthrough];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileType3GPP;
exportSession.shouldOptimizeForNetworkUse = YES;
exportSession.videoComposition = _mutableVideoComposition;
[exportSession exportAsynchronouslyWithCompletionHandler:^(void)
{
NSLog(#"Status is %d %#", exportSession.status, exportSession.error);
handler(exportSession);
[exportSession release];
}];
The values _mutableComposition and _mutableVideoComposition are initialized by this method here:
- (void) getVideoComposition:(AVAsset*)asset
{
AVMutableComposition *mutableComposition = nil;
AVMutableVideoComposition *mutableVideoComposition = nil;
AVMutableVideoCompositionInstruction *instruction = nil;
AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
CGAffineTransform t1;
CGAffineTransform t2;
AVAssetTrack *assetVideoTrack = nil;
AVAssetTrack *assetAudioTrack = nil;
// Check if the asset contains video and audio tracks
if ([[asset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
}
if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
}
CMTime insertionPoint = kCMTimeZero;
NSError *error = nil;
// Step 1
// Create a composition with the given asset and insert audio and video tracks into it from the asset
// Check whether a composition has already been created, i.e, some other tool has already been applied
// Create a new composition
mutableComposition = [AVMutableComposition composition];
// Insert the video and audio tracks from AVAsset
if (assetVideoTrack != nil) {
AVMutableCompositionTrack *compositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];
}
if (assetAudioTrack != nil) {
AVMutableCompositionTrack *compositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
}
// Step 2
// Translate the composition to compensate the movement caused by rotation (since rotation would cause it to move out of frame)
t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0.0);
// Rotate transformation
t2 = CGAffineTransformRotate(t1, degreesToRadians(90.0));
// Step 3
// Set the appropriate render sizes and rotational transforms
// Create a new video composition
mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height,assetVideoTrack.naturalSize.width);
mutableVideoComposition.frameDuration = CMTimeMake(1, 30);
// The rotate transform is set on a layer instruction
instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mutableComposition duration]);
layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(mutableComposition.tracks)[0]];
[layerInstruction setTransform:t2 atTime:kCMTimeZero];
// Step 4
// Add the transform instructions to the video composition
instruction.layerInstructions = @[layerInstruction];
mutableVideoComposition.instructions = @[instruction];
TT_RELEASE_SAFELY(_mutableComposition);
_mutableComposition = [mutableComposition retain];
TT_RELEASE_SAFELY(_mutableVideoComposition);
_mutableVideoComposition = [mutableVideoComposition retain];
}
I pulled this method from AVSERotateCommand from here. Can anyone suggest why this method would not successfully rotate my video by the necessary 90 degrees?
Because you are using AVAssetExportPresetPassthrough, the AVAssetExportSession will ignore the videoComposition. Use any other preset.
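As a minimal sketch of that fix (reusing the variable names from the question), switch to a non-passthrough preset so the videoComposition, and with it the rotation transform, is actually rendered; note that non-passthrough presets may not support every container, so check supportedFileTypes instead of assuming 3GPP output:
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:[_mutableComposition copy]
                                                                        presetName:AVAssetExportPresetHighestQuality];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie; // pick a type listed in exportSession.supportedFileTypes
exportSession.shouldOptimizeForNetworkUse = YES;
exportSession.videoComposition = _mutableVideoComposition; // now honored, including the 90-degree transform
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    NSLog(@"Status is %ld %@", (long)exportSession.status, exportSession.error);
    handler(exportSession);
}];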

iOS AVFoundation - Show a time display over a video and export

I want to show a time display overlay over a video and export that video including this display. I had a look into the AVFoundation framework, AVCompositions, AVAssets etc., but I still have no idea how to achieve this. There is a class called AVSynchronizedLayer which lets you animate things synchronously with the video, but I do not want to animate; I just want to overlay the time display onto every single frame of the video. Any advice?
Regards
Something like this...
(NB: culled from a much larger project, so I may have included some unnecessary pieces by accident).
You'll need to grab the CALayer of your clock/animation and assign it to the variable myClockLayer (used a third of the way down by the animation tool).
This also assumes your incoming video has just two tracks - audio and video. If you have more, you'll need to set the track ID in "asTrackID:2" more carefully.
AVURLAsset* url = [AVURLAsset URLAssetWithURL:incomingVideo options:nil];
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[url tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSError *error = nil;
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [url duration]) ofTrack:clipVideoTrack atTime:kCMTimeZero error:&error];
AVMutableVideoComposition* videoComposition = [[AVMutableVideoComposition videoComposition]retain];
videoComposition.renderSize = CGSizeMake(320, 240);
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithAdditionalLayer:myClockLayer asTrackID:2];
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30) );
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:clipVideoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComposition.instructions = [NSArray arrayWithObject: instruction];
exporter = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputURL=url3;
exporter.outputFileType=AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^(void){}];
I think you can use AVCaptureVideoDataOutput to process each frame and AVAssetWriter to record the processed frames. You can refer to this answer:
https://stackoverflow.com/a/4944594/379941
Use AVAssetWriterPixelBufferAdaptor's appendPixelBuffer:withPresentationTime: method to export.
I also strongly suggest using OpenCV to process the frames; this is a nice tutorial:
http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
The OpenCV library is great.
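For illustration, a bare-bones sketch of the appendPixelBuffer:withPresentationTime: flow mentioned above; writerInput, pixelBuffer and frameTime are assumed placeholders, and the per-frame processing itself is omitted:
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                      sourcePixelBufferAttributes:@{ (id)kCVPixelBufferPixelFormatTypeKey : @(kCVPixelFormatType_32BGRA) }];
// For every processed frame (e.g. after drawing the time display onto pixelBuffer):
if (adaptor.assetWriterInput.isReadyForMoreMediaData) {
    [adaptor appendPixelBuffer:pixelBuffer withPresentationTime:frameTime];
}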
