I have 3 videos that I am sequencing using an AVMutableComposition, then playing the sequence with an AVPlayer and grabbing the frames using an AVPlayerItemVideoOutput. The video sequence is as follows:
[Logo Video - n seconds][Main video - m seconds][Logo Video - l seconds]
The code looks like this:
// Build the composition.
pComposition = [AVMutableComposition composition];
// Fill in the assets that make up the composition
AVMutableCompositionTrack* pCompositionVideoTrack = [pComposition addMutableTrackWithMediaType: AVMediaTypeVideo preferredTrackID: 1];
AVMutableCompositionTrack* pCompositionAudioTrack = [pComposition addMutableTrackWithMediaType: AVMediaTypeAudio preferredTrackID: 2];
CMTime time = kCMTimeZero;
CMTimeRange keyTimeRange = kCMTimeRangeZero;
for( AVAsset* pAssetsAsset in pAssets )
{
    AVAssetTrack* pAssetsAssetVideoTrack = [pAssetsAsset tracksWithMediaType: AVMediaTypeVideo].firstObject;
    AVAssetTrack* pAssetsAssetAudioTrack = [pAssetsAsset tracksWithMediaType: AVMediaTypeAudio].firstObject;
    CMTimeRange timeRange = CMTimeRangeMake( kCMTimeZero, pAssetsAsset.duration );
    NSError* pVideoError = nil;
    NSError* pAudioError = nil;
    if ( pAssetsAssetVideoTrack )
    {
        [pCompositionVideoTrack insertTimeRange: timeRange ofTrack: pAssetsAssetVideoTrack atTime: time error: &pVideoError];
    }
    if ( pAssetsAssetAudioTrack )
    {
        [pCompositionAudioTrack insertTimeRange: timeRange ofTrack: pAssetsAssetAudioTrack atTime: time error: &pAudioError];
    }
    if ( pAssetsAsset == pKeyAsset )
    {
        keyTimeRange = CMTimeRangeMake( time, timeRange.duration );
    }
    NSLog( @"%@", [pVideoError description] );
    NSLog( @"%@", [pAudioError description] );
    time = CMTimeAdd( time, pAssetsAsset.duration );
}
The logo videos are silent and merely display my logo. I manually create these videos, so everything is perfect there. The "Main Video", however, can end up with the wrong orientation. To combat this, an AVMutableVideoComposition looks like the perfect way forward. So I set up a simple video composition that applies a single setTransform as follows:
pAsset = pComposition;
pPlayerItem = [AVPlayerItem playerItemWithAsset: pAsset];
pPlayer = [AVPlayer playerWithPlayerItem: pPlayerItem];
NSArray* pPlayerTracks = [pAsset tracksWithMediaType: AVMediaTypeVideo];
AVAssetTrack* pPlayerTrack = pPlayerTracks[0];
pVideoCompositionLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstruction];
[pVideoCompositionLayerInstruction setTransform: [[pKeyAsset tracksWithMediaType: AVMediaTypeVideo].firstObject preferredTransform] atTime: kCMTimeZero];
pVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
pVideoCompositionInstruction.backgroundColor = [[UIColor blackColor] CGColor];
pVideoCompositionInstruction.timeRange = keyTimeRange;
pVideoCompositionInstruction.layerInstructions = @[ pVideoCompositionLayerInstruction ];
pVideoComposition = [AVMutableVideoComposition videoComposition];
pVideoComposition.renderSize = [[pKeyAsset tracksWithMediaType: AVMediaTypeVideo].firstObject naturalSize];
pVideoComposition.frameDuration = [[pKeyAsset tracksWithMediaType: AVMediaTypeVideo].firstObject minFrameDuration];
pVideoComposition.instructions = @[ pVideoCompositionInstruction ];
pPlayerItem.videoComposition = pVideoComposition;
However, when I come to play the video sequence I get no output returned: AVPlayerItemVideoOutput hasNewPixelBufferForItemTime: always returns NO. If I comment out the last line in the code above (i.e. setting the videoComposition) then everything works as before (with videos in the wrong orientation). Does anybody know what I'm doing wrong? Any thoughts much appreciated!
The issue here is that keyTimeRange may not start at time zero if your logo video has nonzero duration. pVideoCompositionInstruction will start at keyTimeRange.start rather than kCMTimeZero (where the AVMutableComposition starts), which violates the rules for AVVideoCompositionInstructions:
"For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (typically kCMTimeZero)", according to the docs
To solve this, set pVideoComposition.instructions to an array containing three AVMutableVideoCompositionInstruction objects, each with their own AVMutableVideoCompositionLayerInstruction according to each AVAsset's transform. The time range for each of the three instructions should be the times at which these assets appear in the composition track. Make sure they line up exactly.
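A rough, untested sketch of that fix, reusing the question's variable names (pAssets, pCompositionVideoTrack, pVideoComposition) and the same insertion order, so that the instruction time ranges tile the whole composition from kCMTimeZero; assign the resulting array in place of the single-instruction array:
NSMutableArray* pInstructions = [NSMutableArray array];
CMTime instructionTime = kCMTimeZero;
for( AVAsset* pAssetsAsset in pAssets )
{
    AVAssetTrack* pAssetsAssetVideoTrack = [pAssetsAsset tracksWithMediaType: AVMediaTypeVideo].firstObject;
    if ( pAssetsAssetVideoTrack )
    {
        // One layer instruction per asset, targeting the single composition video
        // track and carrying that asset's preferred transform.
        AVMutableVideoCompositionLayerInstruction* pLayerInstruction =
            [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack: pCompositionVideoTrack];
        [pLayerInstruction setTransform: pAssetsAssetVideoTrack.preferredTransform atTime: kCMTimeZero];

        // The instruction covers exactly the span this asset occupies in the composition.
        AVMutableVideoCompositionInstruction* pInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
        pInstruction.timeRange = CMTimeRangeMake( instructionTime, pAssetsAsset.duration );
        pInstruction.layerInstructions = @[ pLayerInstruction ];
        [pInstructions addObject: pInstruction];
    }
    instructionTime = CMTimeAdd( instructionTime, pAssetsAsset.duration );
}
pVideoComposition.instructions = pInstructions;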
I am trying to export an AVMutableComposition using AVAssetExportSession.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL=url;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.videoComposition = mainCompositionInst;
exporter.shouldOptimizeForNetworkUse = YES;
[exporter exportAsynchronouslyWithCompletionHandler:^
{
    switch (exporter.status)
    {
        case AVAssetExportSessionStatusCompleted:
        {
            NSLog(@"Video Merge Successful");
        }
            break;
        case AVAssetExportSessionStatusFailed:
            NSLog(@"Failed:%@", exporter.error.description);
            break;
        case AVAssetExportSessionStatusCancelled:
            NSLog(@"Canceled:%@", exporter.error);
            break;
        case AVAssetExportSessionStatusExporting:
            NSLog(@"Exporting!");
            break;
        case AVAssetExportSessionStatusWaiting:
            NSLog(@"Waiting");
            break;
        default:
            break;
    }
}];
But exporting even a 1-minute video takes around 30 seconds, which is too much considering that the iPad's built-in camera app takes less than 2 seconds.
Also, if I remove the videoComposition from the exporter, the time drops to 7 seconds, which is still bad considering the video is only 1 minute long.
So, I want to know how to decrease the export time to a minimum.
Also, I want to know: does AVAssetExportSession generally take this much time, or is it just my case?
Update:
Merge Code:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo
preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableVideoCompositionLayerInstruction *videoTrackLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
NSMutableArray *instructions = [NSMutableArray new];
CGSize size = CGSizeZero;
CMTime time = kCMTimeZero;
for (AVURLAsset *asset in assets)
{
    AVAssetTrack *assetTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    AVAssetTrack *audioAssetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
    NSError *error;
    [videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration)
                                   ofTrack:assetTrack
                                    atTime:time
                                     error:&error];
    [videoTrackLayerInstruction setTransform:assetTrack.preferredTransform atTime:time];
    if (error) {
        NSLog(@"asset url :: %@", assetTrack.asset);
        NSLog(@"Error1 - %@", error.debugDescription);
    }
    [audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAssetTrack.timeRange.duration)
                                   ofTrack:audioAssetTrack
                                    atTime:time
                                     error:&error];
    if (error) {
        NSLog(@"Error2 - %@", error.debugDescription);
    }
    time = CMTimeAdd(time, assetTrack.timeRange.duration);
    if (CGSizeEqualToSize(size, CGSizeZero)) {
        size = assetTrack.naturalSize;
    }
}
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, time);
mainInstruction.layerInstructions = [NSArray arrayWithObject:videoTrackLayerInstruction];
AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
mainCompositionInst.frameDuration = CMTimeMake(1, 30);
mainCompositionInst.renderSize = size;
I think the issue here is AVAssetExportPresetHighestQuality; this will cause conversion or upsampling, and will slow things WAY down. Try using AVAssetExportPresetPassthrough instead. That took my export times from ~35+ seconds down to less than a second.
I also disabled optimizing for network use, as all of our videos are only used within the app and are never streamed or passed over a network.
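For reference, a minimal sketch of that configuration, reusing `mutableComposition` and `url` from the question. Note that a passthrough export copies samples without re-encoding, so it cannot apply a videoComposition:
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition
                                                                  presetName:AVAssetExportPresetPassthrough];
exporter.outputURL = url;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = NO; // not streamed, so skip the extra pass
[exporter exportAsynchronouslyWithCompletionHandler:^{
    if (exporter.status == AVAssetExportSessionStatusCompleted) {
        NSLog(@"Export finished");
    } else {
        NSLog(@"Export failed: %@", exporter.error);
    }
}];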
I built an app that merges together different video fragments, and I can safely say the slowness is specific to your case. My video files are ~10 MB, so maybe they are a little smaller than yours, but it takes less than a second to merge them all together, even with 10 or 20 segments.
Now, as for what is actually happening: I have checked my configuration against yours and the differences are as follows:
I use export.outputFileType = AVFileTypeMPEG4
I have network optimization disabled, and if you are not planning to stream the video from your device, you should disable it too
Other than that it should be the same. I can't really compare further, as you would have to provide the code for how you actually create the composition. There are some things to check, though:
If you are using AVURLAssetPreferPreciseDurationAndTimingKey when creating the AVURLAsset and you don't have enough keyframes, it can actually take quite some time to seek through and find the keyframes, which slows things down (see the sketch after this list)
Consider if you really need highest quality for the video
Consider resolution of the video and possibly lower it down
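A small sketch of the difference, assuming `videoURL` points at a local movie file (the variable name is just for illustration):
// Precise duration/timing asks AVFoundation to parse the whole file up front,
// which can be slow for long movies with few keyframes:
AVURLAsset *preciseAsset = [AVURLAsset URLAssetWithURL:videoURL
                                               options:@{AVURLAssetPreferPreciseDurationAndTimingKey : @YES}];
// If approximate timing is acceptable, the default options are cheaper:
AVURLAsset *fastAsset = [AVURLAsset URLAssetWithURL:videoURL options:nil];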
I should be able to help you more if you provide more information, but maybe some of this stuff will work. Give it a try and then report back.
Hope it helps a little!
Edit 1:
I forgot to mention that if you are out of options, you should try the FFmpeg library, as it is very high performance, though due to licensing it might not be suitable for you.
Keep in mind that the asset you're trying to export may not be stored locally; in that case the content is first downloaded and then your asset is exported.
If you don't want to download any content:
let videoRequestOptions: PHVideoRequestOptions = PHVideoRequestOptions()
videoRequestOptions.isNetworkAccessAllowed = true
You are also going to receive an info dictionary within the requestExportSession completion handler, with a couple of useful values:
https://developer.apple.com/documentation/photokit/phimagemanager/image_result_info_keys
Otherwise, if you want to download your asset from iCloud and make it as fast as possible, you can play with the following parameters:
let videoRequestOptions: PHVideoRequestOptions = PHVideoRequestOptions()
// automatic, highQualityFormat, mediumQualityFormat, fastFormat
videoRequestOptions.deliveryMode = .fastFormat
videoRequestOptions.isNetworkAccessAllowed = true
Another important property is the export preset; there are a bunch of available presets:
let lowQualityPreset1 = AVAssetExportPresetLowQuality
let lowQualityPreset2 = AVAssetExportPreset640x480
let lowQualityPreset3 = AVAssetExportPreset960x540
let lowQualityPreset4 = AVAssetExportPreset1280x720
let manager = PHImageManager()
manager.requestExportSession(forVideo: asset,
                             options: videoRequestOptions,
                             exportPreset: lowQualityPreset1) { (session, info) in
    session?.outputURL = outputUrl
    session?.outputFileType = .mp4
    session?.shouldOptimizeForNetworkUse = true
    session?.exportAsynchronously {
    }
}
I am using scaleTimeRange:toDuration: to produce a fast-motion effect of up to 10x the original video speed. However, I have noticed that videos with a higher bit rate (say 20 Mbit/s and above) begin to stutter when played through an AVPlayer at anything above 4x the normal speed, enough to crash the AVPlayerLayer (it disappears) if it runs for a while.
The code:
//initialize the player
self.player = [[AVPlayer alloc] init];
//load up the asset
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[[NSBundle mainBundle] URLForResource:@"sample-video" withExtension:@"mov"] options:nil];
[asset loadValuesAsynchronouslyForKeys:@[@"playable", @"hasProtectedContent", @"tracks"] completionHandler:
^{
    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
    // composition set to play at 60fps
    videoComposition.frameDuration = CMTimeMake(1,60);
    //add video track to composition
    AVMutableCompositionTrack *videoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
    //1080p composition
    CGSize renderSize = CGSizeMake(1920.0, 1080.0);
    CMTime currentTime = kCMTimeZero;
    CGFloat scale = 1.0;
    AVAssetTrack *assetTrack = nil;
    assetTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
    [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration) ofTrack:assetTrack atTime:currentTime error:nil];
    CMTimeRange scaleTimeRange = CMTimeRangeMake(currentTime, asset.duration);
    //Speed it up to 8x.
    CMTime scaledDuration = CMTimeMultiplyByFloat64(asset.duration, 1.0/8.0);
    [videoTrack scaleTimeRange:scaleTimeRange toDuration:scaledDuration];
    AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
    //ensure video is scaled up/down to match the composition
    scale = renderSize.width/assetTrack.naturalSize.width;
    [layerInstruction setTransform:CGAffineTransformMakeScale(scale, scale) atTime:currentTime];
    AVMutableVideoCompositionInstruction *videoInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    videoInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration);
    videoInstruction.layerInstructions = @[ layerInstruction ];
    videoComposition.instructions = @[ videoInstruction ];
    videoComposition.renderSize = renderSize;
    //pass the stuff to AVPlayer for playback
    self.playerItem = [AVPlayerItem playerItemWithAsset:composition];
    self.playerItem.videoComposition = videoComposition;
    [self.player replaceCurrentItemWithPlayerItem:self.playerItem];
    //playerView is a custom view with AVPlayerLayer, picked up from https://developer.apple.com/library/mac/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/02_Playback.html#//apple_ref/doc/uid/TP40010188-CH3-SW11
    [self.playerView setPlayer:self.player];
    //call [self.player play] when ready.
}];
Some notes:
All testing done on iPhone 6
I am deliberately not adding any audio tracks to rule out the possibility of audio playing a part here.
Normal-bitrate videos (16 Mbit/s on average) play fine at 10x
Same composition code produces a smooth playback on an OSX application
The stutter is more obvious with higher bitrates.
All videos being tested are 1080p 60fps
A high-bitrate video behaves well if it is first opened and re-exported at 1080p, so as to tone down the bit rate while maintaining the FPS.
There is no rendering/exporting of video involved.
Has anyone else run into this and found a way around it?
I'm trying to use AVMutableComposition to play a sequence of sound files at precise times.
When the view loads, I create the composition with the intent of playing 4 sounds evenly spaced over 1 second. It shouldn't matter how long or short the sounds are; I just want them to fire at exactly 0, 0.25, 0.5 and 0.75 seconds:
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
for (NSInteger i = 0; i < 4; i++)
{
    AVMutableCompositionTrack* track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    NSURL *url = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"sound_file_%i", i] withExtension:@"caf"];
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
    AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
    CMTimeRange timeRange = [assetTrack timeRange];
    Float64 t = i * 0.25;
    NSError *error;
    BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMakeWithSeconds(t, 1) error:&error];
    if (!success)
    {
        NSLog(@"unsuccessful creation of composition");
    }
    if (error)
    {
        NSLog(@"composition creation error: %@", error);
    }
}
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];
The composition is created successfully with no errors. Later, when I want to play the sequence I do this:
[self.avPlayer seekToTime:CMTimeMakeWithSeconds(0, 1)];
[self.avPlayer play];
For some reason, the sounds are not evenly spaced at all, but play almost all at once. I tried the same thing spaced over 4 seconds, replacing the time calculation like this:
Float64 t = i * 1.0;
And this plays perfectly. Any time interval under 1 second seems to generate unexpected results. What am I missing? Are AVCompositions not supposed to be used for time intervals under 1 second? Or perhaps I'm misunderstanding the time intervals?
Your CMTimeMakeWithSeconds(t, 1) is in whole second 'slices' because your timescale is set to 1. No matter what fraction t is, the atTime: will always end up as 0. This is why it works when you increase it to 1 second (t=i*1).
You need to set the timescale to 4 to get your desired 0.25-second slices. Since the CMTime is now in 0.25-second slices, you won't need the i * 0.25 calculation. Just use i directly: atTime:CMTimeMake(i, 4)
If you might need to get more precise in the future, you should account for it now so you won't have to adjust your code later. Apple recommends using a timescale of 600, as it is a multiple of the common video frame rates (24, 25, and 30 FPS), but it works fine for audio-only too. At timescale 600, your 0.25-second value is 150 slices: atTime:CMTimeMake(i * 150, 600)
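To make that concrete, a quick sketch (i being the loop index from the question); all three values represent i * 0.25 seconds:
CMTime coarse = CMTimeMake(i, 4);                           // i quarter-second slices
CMTime fine = CMTimeMake(i * 150, 600);                     // same value on Apple's recommended 600 timescale
CMTime fromSeconds = CMTimeMakeWithSeconds(i * 0.25, 600);  // also lands on the 600 grid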
As for your issue of all 4 sounds playing almost all at once, be aware of this unanswered SO question where it only happens on the first play. Even with the changes above, you might still run into this issue.
Unless each track is exactly 0.25 seconds long this is your problem:
Float64 t = i * 0.25;
NSError *error;
BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMakeWithSeconds(t, 1) error:&error];
You need to be keeping track of the cumulative time range added so far, and inserting the next track at that time:
CMTime currentTime = kCMTimeZero;
for (NSInteger i = 0; i < 4; i++) {
    /* Code to create track for insertion */
    CMTimeRange trackTimeRange = [assetTrack timeRange];
    BOOL success = [track insertTimeRange:trackTimeRange
                                  ofTrack:assetTrack
                                   atTime:currentTime
                                    error:&error];
    /* Error checking code */
    //Update time range for insertion
    currentTime = CMTimeAdd(currentTime, trackTimeRange.duration);
}
I changed your code a bit; sorry, I had no time to test it.
AVMutableComposition *composition = [AVMutableComposition composition];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
CMTime totalDuration = kCMTimeZero;
for (NSInteger i = 0; i < 4; i++)
{
    AVMutableCompositionTrack* track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
    NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"Record_%i", i] ofType:@"caf"]];
    AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:options];
    AVAssetTrack *assetTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
    CMTimeRange timeRange = [assetTrack timeRange];
    NSError *error;
    BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTIME_COMPARE_INLINE(totalDuration, >, kCMTimeZero) ? CMTimeAdd(totalDuration, CMTimeMake(1, 4)) : totalDuration error:&error];
    if (!success)
    {
        NSLog(@"unsuccessful creation of composition");
    }
    if (error)
    {
        NSLog(@"composition creation error: %@", error);
    }
    totalDuration = CMTimeAdd(CMTimeAdd(totalDuration, CMTimeMake(1, 4)), asset.duration);
}
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];
P.S. use kCMTimeZero instead of CMTimeMakeWithSeconds(0, 1).
I'm working with modifying some video via AVMutableVideoCompositionLayerInstruction in the iOS7 SDK.
The following code used to work on iOS 6.1.3, but in iOS7 the video is frozen on the first frame (though I can still hear the audio ok). I got rid of all actual transformations I was applying to verify that adding a video composition alone causes problems.
AVURLAsset* videoAsset = [[AVURLAsset alloc] initWithURL:inputFileURL options:NULL];
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction *layerInstruction =
[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoAssetTrack];
AVMutableVideoComposition *mainComposition = [AVMutableVideoComposition videoComposition];
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration);
mainComposition.instructions = [NSArray arrayWithObject:mainInstruction];
mainComposition.frameDuration = videoAsset.duration;
mainComposition.renderSize = CGSizeMake(320, 320);
...
exportSession.videoComposition = mainComposition;
If I do not set the videoComposition attribute of exportSession then the video records ok, but I cannot apply any transformations. Anyone know what could be causing this?
Thanks.
A good way to debug issues with the video composition is to use [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset]. The returned AVMutableVideoComposition should work correctly. Then you can compare the contents of the instructions array with your instructions.
To increase the confusion levels, the asset there can also be an AVComposition. I think the AVFoundation team didn't do the best job when naming these things....
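A minimal sketch of that debugging approach, assuming `asset` is the asset (or composition) you pass to the export session; compare what it prints against your own instructions:
AVMutableVideoComposition *referenceComposition =
    [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset];
NSLog(@"renderSize: %@", NSStringFromCGSize(referenceComposition.renderSize));
for (AVVideoCompositionInstruction *instruction in referenceComposition.instructions) {
    CMTimeRangeShow(instruction.timeRange); // prints the instruction's time range to the console
    for (AVVideoCompositionLayerInstruction *layer in instruction.layerInstructions) {
        NSLog(@"layer instruction for track %d", (int)layer.trackID);
    }
}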
I've been struggling as well with AVMutableVideoCompositionLayerInstruction and mixing video with CALayers. After a few days of trying different approaches, what I realised is that the timing of the assets is pretty important.
The proper way to find out the duration of each asset is to use:
loadValuesAsynchronouslyForKeys:@[@"duration"]
//Asset url
NSURL *assetUrl = [NSURL fileURLWithPath:_firstVideoFilePath];
//audio/video assets
AVURLAsset * videoAsset = [[AVURLAsset alloc]initWithURL:assetUrl options:nil];
//var to store the duration
CMTime __block durationTime;
//And here we'll be able to proper get the asset duration
[videoAsset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler: ^{
    Float64 durationSeconds = CMTimeGetSeconds([videoAsset duration]);
    durationTime = [videoAsset duration];
    //At this point you have the proper asset duration value, you can start any video processing from here.
}];
Hope this helps anyone with the same issue.
I am attempting to rotate video prior to upload on my iOS device, because other platforms (such as Android) do not properly interpret the rotation metadata in iOS-recorded videos and, as a result, play them improperly rotated.
I have looked at the following stack posts but have not had success applying any of them to my case:
iOS rotate every frame of video
Rotating Video w/ AVMutableVideoCompositionLayerInstruction
AVMutableVideoComposition rotated video captured in portrait mode
iOS AVFoundation: Setting Orientation of Video
I copied the Apple AVSimpleEditor sample project, but unfortunately all that ever happens is that, upon creating an AVAssetExportSession and calling exportAsynchronouslyWithCompletionHandler, no rotation is performed and, what's worse, the rotation metadata is stripped out of the resulting file.
Here is the code that runs the export:
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:[_mutableComposition copy] presetName:AVAssetExportPresetPassthrough];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileType3GPP;
exportSession.shouldOptimizeForNetworkUse = YES;
exportSession.videoComposition = _mutableVideoComposition;
[exportSession exportAsynchronouslyWithCompletionHandler:^(void)
{
    NSLog(@"Status is %d %@", exportSession.status, exportSession.error);
    handler(exportSession);
    [exportSession release];
}];
The values _mutableComposition and _mutableVideoComposition are initialized by this method here:
- (void) getVideoComposition:(AVAsset*)asset
{
AVMutableComposition *mutableComposition = nil;
AVMutableVideoComposition *mutableVideoComposition = nil;
AVMutableVideoCompositionInstruction *instruction = nil;
AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
CGAffineTransform t1;
CGAffineTransform t2;
AVAssetTrack *assetVideoTrack = nil;
AVAssetTrack *assetAudioTrack = nil;
// Check if the asset contains video and audio tracks
if ([[asset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
}
if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
}
CMTime insertionPoint = kCMTimeZero;
NSError *error = nil;
// Step 1
// Create a composition with the given asset and insert audio and video tracks into it from the asset
// Check whether a composition has already been created, i.e, some other tool has already been applied
// Create a new composition
mutableComposition = [AVMutableComposition composition];
// Insert the video and audio tracks from AVAsset
if (assetVideoTrack != nil) {
AVMutableCompositionTrack *compositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];
}
if (assetAudioTrack != nil) {
AVMutableCompositionTrack *compositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
}
// Step 2
// Translate the composition to compensate the movement caused by rotation (since rotation would cause it to move out of frame)
t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0.0);
// Rotate transformation
t2 = CGAffineTransformRotate(t1, degreesToRadians(90.0));
// Step 3
// Set the appropriate render sizes and rotational transforms
// Create a new video composition
mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height,assetVideoTrack.naturalSize.width);
mutableVideoComposition.frameDuration = CMTimeMake(1, 30);
// The rotate transform is set on a layer instruction
instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mutableComposition duration]);
layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(mutableComposition.tracks)[0]];
[layerInstruction setTransform:t2 atTime:kCMTimeZero];
// Step 4
// Add the transform instructions to the video composition
instruction.layerInstructions = @[layerInstruction];
mutableVideoComposition.instructions = @[instruction];
TT_RELEASE_SAFELY(_mutableComposition);
_mutableComposition = [mutableComposition retain];
TT_RELEASE_SAFELY(_mutableVideoComposition);
_mutableVideoComposition = [mutableVideoComposition retain];
}
I pulled this method from AVSERotateCommand from here. Can anyone suggest why this method would not successfully rotate my video by the necessary 90 degrees?
Because you are using AVAssetExportPresetPassthrough, the AVAssetExportSession will ignore the videoComposition; use any other preset.
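For example, a sketch of the same export with a re-encoding preset (same variable names as the question; any non-passthrough preset works, the output file type may need to change depending on the preset, and memory management is ARC-style here):
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:[_mutableComposition copy]
                                                                       presetName:AVAssetExportPresetHighestQuality];
exportSession.outputURL = outputURL;
// A re-encoding preset may not support 3GPP output; QuickTime is a safe choice here.
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
exportSession.shouldOptimizeForNetworkUse = YES;
exportSession.videoComposition = _mutableVideoComposition; // now honoured, so the rotation is applied
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    NSLog(@"Status is %ld %@", (long)exportSession.status, exportSession.error);
}];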