AVMutableComposition - scaleTimeRange causing high bit-rate video to stutter on fast motion - iOS

I am using scaleTimeRange:toDuration: to produce a fast-motion effect of up to 10x the original video speed. However, I have noticed that videos with a higher bit-rate (say 20 Mbit/s and above) begin to stutter when played through an AVPlayer at anything above 4x normal speed, badly enough that the AVPlayerLayer crashes (disappears) if playback runs for a while.
The code:
//initialize the player
self.player = [[AVPlayer alloc] init];

//load up the asset
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[[NSBundle mainBundle] URLForResource:@"sample-video" withExtension:@"mov"] options:nil];
[asset loadValuesAsynchronouslyForKeys:@[@"playable", @"hasProtectedContent", @"tracks"] completionHandler:
^{
    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];

    // composition set to play at 60fps
    videoComposition.frameDuration = CMTimeMake(1, 60);

    //add video track to composition
    AVMutableCompositionTrack *videoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];

    //1080p composition
    CGSize renderSize = CGSizeMake(1920.0, 1080.0);
    CMTime currentTime = kCMTimeZero;
    CGFloat scale = 1.0;

    AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
    [videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration) ofTrack:assetTrack atTime:currentTime error:nil];

    //Speed it up to 8x.
    CMTimeRange scaleTimeRange = CMTimeRangeMake(currentTime, asset.duration);
    CMTime scaledDuration = CMTimeMultiplyByFloat64(asset.duration, 1.0/8.0);
    [videoTrack scaleTimeRange:scaleTimeRange toDuration:scaledDuration];

    //ensure video is scaled up/down to match the composition
    AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
    scale = renderSize.width / assetTrack.naturalSize.width;
    [layerInstruction setTransform:CGAffineTransformMakeScale(scale, scale) atTime:currentTime];

    AVMutableVideoCompositionInstruction *videoInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    videoInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration);
    videoInstruction.layerInstructions = @[layerInstruction];

    videoComposition.instructions = @[videoInstruction];
    videoComposition.renderSize = renderSize;

    //pass the stuff to AVPlayer for playback
    self.playerItem = [AVPlayerItem playerItemWithAsset:composition];
    self.playerItem.videoComposition = videoComposition;
    [self.player replaceCurrentItemWithPlayerItem:self.playerItem];

    //playerView is a custom view with AVPlayerLayer, picked up from https://developer.apple.com/library/mac/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/02_Playback.html#//apple_ref/doc/uid/TP40010188-CH3-SW11
    [self.playerView setPlayer:self.player];

    //call [self.player play] when ready.
}];
Some notes:
All testing done on iPhone 6
I am deliberately not adding any audio tracks to rule out the possibility of audio playing a part here.
Normal bit-rate videos (averaging 16 Mbit/s) play fine at 10x
The same composition code produces smooth playback in an OS X application
The stutter is more obvious at higher bit-rates.
All videos being tested are 1080p 60fps
A high bit-rate video behaves well if it is first re-exported at 1080p, so as to tone down the bit-rate while maintaining the FPS (a sketch of that workaround follows these notes).
There is no rendering/exporting of video involved.
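For reference, this is roughly what that re-export workaround would look like. It is only a sketch: tempURL is a hypothetical writable file URL, error handling is omitted, and it adds an export pass, which is exactly what I am trying to avoid.
//Sketch of the re-export workaround: tame the bit-rate at 1080p first,
//then build the sped-up composition from the exported file.
AVAssetExportSession *preExport = [AVAssetExportSession exportSessionWithAsset:asset presetName:AVAssetExportPreset1920x1080];
preExport.outputURL = tempURL; //hypothetical writable file URL
preExport.outputFileType = AVFileTypeQuickTimeMovie;
[preExport exportAsynchronouslyWithCompletionHandler:^{
    if (preExport.status == AVAssetExportSessionStatusCompleted) {
        AVURLAsset *tamedAsset = [AVURLAsset URLAssetWithURL:tempURL options:nil];
        //...build the scaled composition from tamedAsset instead of the original asset...
    }
}];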
Has anyone else run into this and found a way around it?

Related

Is it possible to fully crop a video with AVAssetWriter?

I am using AVAssetWriter to create a video file and I want to crop it, so I came upon the keys for AVVideoCleanApertureKey. These let me set a viewport for the video, which looks like it is cropping the video. But in reality it doesn't: the full frame is still present in the video file, and whether the "crop" is honoured seems to be up to the player.
So, is there a way to make AVAssetWriter discard the data outside the viewport? Or do I need to change my approach completely?
These images show the same video file, but only QuickTime cares about the viewport.
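For context, this is roughly how I was setting the clean aperture on the writer input (a sketch; the codec, dimensions and offsets shown here are placeholders, not my real values):
//Sketch: clean aperture set via the compression properties of the writer input.
NSDictionary *cleanAperture = @{ AVVideoCleanApertureWidthKey : @320,
                                 AVVideoCleanApertureHeightKey : @320,
                                 AVVideoCleanApertureHorizontalOffsetKey : @0,
                                 AVVideoCleanApertureVerticalOffsetKey : @0 };
NSDictionary *outputSettings = @{ AVVideoCodecKey : AVVideoCodecH264,
                                  AVVideoWidthKey : @640,
                                  AVVideoHeightKey : @480,
                                  AVVideoCompressionPropertiesKey : @{ AVVideoCleanApertureKey : cleanAperture } };
AVAssetWriterInput *writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];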
I believe the direct answer to the question is no. So in the end I switched over to AVMutableVideoComposition and AVAssetExportSession, as other answers around the web suggested.
AVAsset *video = [AVAsset assetWithURL:outputURL];
AVAssetTrack *assetVideoTrack = [[video tracksWithMediaType:AVMediaTypeVideo] lastObject];

AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:assetVideoTrack];
CGAffineTransform transform = CGAffineTransformMakeTranslation(-self.rect.origin.x, -self.rect.origin.y);
[layerInstruction setTransform:transform atTime:kCMTimeZero];

AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.layerInstructions = @[layerInstruction];
instruction.timeRange = assetVideoTrack.timeRange;

AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
// https://stackoverflow.com/questions/22883525/avassetexportsession-giving-me-a-green-border-on-right-and-bottom-of-output-vide
videoComposition.renderSize = CGSizeMake(floor(self.rect.size.width / 16) * 16,
                                         floor(self.rect.size.height / 16) * 16);
videoComposition.renderScale = 1.0;
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.instructions = @[instruction];

AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:video presetName:AVAssetExportPreset1280x720];
exportSession.shouldOptimizeForNetworkUse = NO;
exportSession.outputFileType = AVFileTypeMPEG4;
exportSession.videoComposition = videoComposition;
exportSession.outputURL = outputURL2;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    NSLog(@"done processing video!");
}];

No rendering with AVMutableVideoComposition

I have 3 videos that I am sequencing using an AVMutableComposition, then playing the video using an AVPlayer and grabbing the frames using an AVPlayerItemVideoOutput. The video sequence is as follows:
[Logo Video - n seconds][Main video - m seconds][Logo Video - l seconds]
The code looks like this:
// Build the composition.
pComposition = [AVMutableComposition composition];

// Fill in the assets that make up the composition
AVMutableCompositionTrack* pCompositionVideoTrack = [pComposition addMutableTrackWithMediaType: AVMediaTypeVideo preferredTrackID: 1];
AVMutableCompositionTrack* pCompositionAudioTrack = [pComposition addMutableTrackWithMediaType: AVMediaTypeAudio preferredTrackID: 2];

CMTime time = kCMTimeZero;
CMTimeRange keyTimeRange = kCMTimeRangeZero;
for( AVAsset* pAssetsAsset in pAssets )
{
    AVAssetTrack* pAssetsAssetVideoTrack = [pAssetsAsset tracksWithMediaType: AVMediaTypeVideo].firstObject;
    AVAssetTrack* pAssetsAssetAudioTrack = [pAssetsAsset tracksWithMediaType: AVMediaTypeAudio].firstObject;

    CMTimeRange timeRange = CMTimeRangeMake( kCMTimeZero, pAssetsAsset.duration );

    NSError* pVideoError = nil;
    NSError* pAudioError = nil;

    if ( pAssetsAssetVideoTrack )
    {
        [pCompositionVideoTrack insertTimeRange: timeRange ofTrack: pAssetsAssetVideoTrack atTime: time error: &pVideoError];
    }
    if ( pAssetsAssetAudioTrack )
    {
        [pCompositionAudioTrack insertTimeRange: timeRange ofTrack: pAssetsAssetAudioTrack atTime: time error: &pAudioError];
    }
    if ( pAssetsAsset == pKeyAsset )
    {
        keyTimeRange = CMTimeRangeMake( time, timeRange.duration );
    }

    NSLog( @"%@", [pVideoError description] );
    NSLog( @"%@", [pAudioError description] );

    time = CMTimeAdd( time, pAssetsAsset.duration );
}
The logo videos are silent and merely display my logo. I create these videos manually, so everything is perfect there. The "Main Video", however, can end up with the wrong orientation. To combat this, an AVMutableVideoComposition looks like the perfect way forward, so I set up a simple video composition that does a single setTransform as follows:
pAsset = pComposition;
pPlayerItem = [AVPlayerItem playerItemWithAsset: pAsset];
pPlayer = [AVPlayer playerWithPlayerItem: pPlayerItem];
NSArray* pPlayerTracks = [pAsset tracksWithMediaType: AVMediaTypeVideo];
AVAssetTrack* pPlayerTrack = pPlayerTracks[0];
pVideoCompositionLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstruction];
[pVideoCompositionLayerInstruction setTransform: [[pKeyAsset tracksWithMediaType: AVMediaTypeVideo].firstObject preferredTransform] atTime: kCMTimeZero];
pVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
pVideoCompositionInstruction.backgroundColor = [[UIColor blackColor] CGColor];
pVideoCompositionInstruction.timeRange = keyTimeRange;
pVideoCompositionInstruction.layerInstructions = @[ pVideoCompositionLayerInstruction ];
pVideoComposition = [AVMutableVideoComposition videoComposition];
pVideoComposition.renderSize = [[pKeyAsset tracksWithMediaType: AVMediaTypeVideo].firstObject naturalSize];
pVideoComposition.frameDuration = [[pKeyAsset tracksWithMediaType: AVMediaTypeVideo].firstObject minFrameDuration];
pVideoComposition.instructions = @[ pVideoCompositionInstruction ];
pPlayerItem.videoComposition = pVideoComposition;
However, when I come to play the video sequence I get no output: AVPlayerItemVideoOutput's hasNewPixelBufferForItemTime: always returns NO. If I comment out the last line in the code above (i.e. setting the videoComposition), then everything works as before (with some videos in the wrong orientation). Does anybody know what I'm doing wrong? Any thoughts much appreciated!
The issue here is that keyTimeRange may not start at time zero if your logo video has a nonzero duration. pVideoCompositionInstruction will start at keyTimeRange.start rather than kCMTimeZero (where the AVMutableComposition starts), which violates the rules for AVVideoCompositionInstructions:
"For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (typically kCMTimeZero)", according to the docs.
To solve this, set pVideoComposition.instructions to an array containing three AVMutableVideoCompositionInstruction objects, each with its own AVMutableVideoCompositionLayerInstruction set according to each AVAsset's transform. The time range for each of the three instructions should be the times at which those assets appear in the composition track. Make sure they line up exactly, something like the sketch below.
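A rough sketch of what that could look like, reusing the variables from the question (the transform handling is simplified here; adjust it to whatever orientation fix each asset needs):
//Sketch: one instruction per segment, with time ranges that line up exactly.
NSMutableArray *instructions = [NSMutableArray array];
CMTime cursor = kCMTimeZero;
for ( AVAsset* pAssetsAsset in pAssets )
{
    AVAssetTrack* pSourceVideoTrack = [pAssetsAsset tracksWithMediaType: AVMediaTypeVideo].firstObject;

    AVMutableVideoCompositionLayerInstruction* pLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack: pCompositionVideoTrack];
    [pLayerInstruction setTransform: pSourceVideoTrack.preferredTransform atTime: cursor];

    AVMutableVideoCompositionInstruction* pInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    pInstruction.timeRange = CMTimeRangeMake( cursor, pAssetsAsset.duration );
    pInstruction.layerInstructions = @[ pLayerInstruction ];
    [instructions addObject: pInstruction];

    cursor = CMTimeAdd( cursor, pAssetsAsset.duration );
}
pVideoComposition.instructions = instructions;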

iOS7 AVMutableVideoCompositionLayerInstruction causes video frame to freeze

I'm modifying some video via AVMutableVideoCompositionLayerInstruction in the iOS 7 SDK.
The following code used to work on iOS 6.1.3, but on iOS 7 the video is frozen on the first frame (though I can still hear the audio OK). I removed all of the actual transformations I was applying to verify that adding a video composition alone causes the problem.
AVURLAsset* videoAsset = [[AVURLAsset alloc] initWithURL:inputFileURL options:NULL];
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction *layerInstruction =
[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoAssetTrack];
AVMutableVideoComposition *mainComposition = [AVMutableVideoComposition videoComposition];
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration);
mainComposition.instructions = [NSArray arrayWithObject:mainInstruction];
mainComposition.frameDuration = videoAsset.duration;
mainComposition.renderSize = CGSizeMake(320, 320);
...
exportSession.videoComposition = mainComposition;
If I do not set the videoComposition attribute of exportSession then the video exports OK, but I cannot apply any transformations. Anyone know what could be causing this?
Thanks.
A good way to debug issues with the video composition is to use [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset]. The returned AVMutableVideoComposition should work correctly, so you can compare the contents of its instructions array with your own instructions.
To add to the confusion, the asset there can also be an AVComposition. I think the AVFoundation team didn't do the best job of naming these things...
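For instance, a minimal sketch of that debugging approach (videoAsset stands in for whatever asset or composition you are exporting):
//Sketch: let AVFoundation build a known-good video composition, then compare it with yours.
AVMutableVideoComposition *reference = [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:videoAsset];
NSLog(@"frameDuration: %lld/%d renderSize: %@", reference.frameDuration.value, reference.frameDuration.timescale, NSStringFromCGSize(reference.renderSize));
NSLog(@"instructions: %@", reference.instructions);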
I've also been struggling with AVMutableVideoCompositionLayerInstruction and mixing video with CALayers. After a few days of trying different approaches, what I realised is that the timing of the assets is pretty important.
The proper way to find out the duration of each asset is to load the "duration" key asynchronously:
loadValuesAsynchronouslyForKeys:@[@"duration"]
//Asset url
NSURL *assetUrl = [NSURL fileURLWithPath:_firstVideoFilePath];

//audio/video asset
AVURLAsset *videoAsset = [[AVURLAsset alloc] initWithURL:assetUrl options:nil];

//var to store the duration
CMTime __block durationTime;

//And here we'll be able to properly get the asset duration
[videoAsset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    Float64 durationSeconds = CMTimeGetSeconds([videoAsset duration]);
    durationTime = [videoAsset duration];
    //At this point you have the proper asset duration value; you can start any video processing from here.
}];
Hope this helps anyone with the same issue.

AVAssetExportSession ignoring videoComposition rotation & stripping metadata

I am attempting to rotate video prior to upload from my iOS device because other platforms (such as Android) do not properly interpret the rotation information in iOS-recorded videos and, as a result, play them improperly rotated.
I have looked at the following Stack Overflow posts but have not had success applying any of them to my case:
iOS rotate every frame of video
Rotating Video w/ AVMutableVideoCompositionLayerInstruction
AVMutableVideoComposition rotated video captured in portrait mode
iOS AVFoundation: Setting Orientation of Video
I copied the Apple AVSimpleEditor sample project, but unfortunately all that ever happens is that, upon creating an AVAssetExportSession and calling exportAsynchronouslyWithCompletionHandler, no rotation is performed and, what's worse, the rotation metadata is stripped out of the resulting file.
Here is the code that runs the export:
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:[_mutableComposition copy] presetName:AVAssetExportPresetPassthrough];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileType3GPP;
exportSession.shouldOptimizeForNetworkUse = YES;
exportSession.videoComposition = _mutableVideoComposition;
[exportSession exportAsynchronouslyWithCompletionHandler:^(void)
{
    NSLog(@"Status is %d %@", exportSession.status, exportSession.error);
    handler(exportSession);
    [exportSession release];
}];
The values _mutableComposition and _mutableVideoComposition are initialized by this method here:
- (void)getVideoComposition:(AVAsset *)asset
{
    AVMutableComposition *mutableComposition = nil;
    AVMutableVideoComposition *mutableVideoComposition = nil;
    AVMutableVideoCompositionInstruction *instruction = nil;
    AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
    CGAffineTransform t1;
    CGAffineTransform t2;

    AVAssetTrack *assetVideoTrack = nil;
    AVAssetTrack *assetAudioTrack = nil;
    // Check if the asset contains video and audio tracks
    if ([[asset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
        assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
    }
    if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
        assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
    }

    CMTime insertionPoint = kCMTimeZero;
    NSError *error = nil;

    // Step 1
    // Create a composition with the given asset and insert audio and video tracks into it from the asset
    // Check whether a composition has already been created, i.e. some other tool has already been applied
    // Create a new composition
    mutableComposition = [AVMutableComposition composition];

    // Insert the video and audio tracks from AVAsset
    if (assetVideoTrack != nil) {
        AVMutableCompositionTrack *compositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
        [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];
    }
    if (assetAudioTrack != nil) {
        AVMutableCompositionTrack *compositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
        [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
    }

    // Step 2
    // Translate the composition to compensate for the movement caused by rotation (since rotation would cause it to move out of frame)
    t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0.0);
    // Rotate transformation
    t2 = CGAffineTransformRotate(t1, degreesToRadians(90.0));

    // Step 3
    // Set the appropriate render sizes and rotational transforms
    // Create a new video composition
    mutableVideoComposition = [AVMutableVideoComposition videoComposition];
    mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height, assetVideoTrack.naturalSize.width);
    mutableVideoComposition.frameDuration = CMTimeMake(1, 30);

    // The rotate transform is set on a layer instruction
    instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mutableComposition duration]);
    layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(mutableComposition.tracks)[0]];
    [layerInstruction setTransform:t2 atTime:kCMTimeZero];

    // Step 4
    // Add the transform instructions to the video composition
    instruction.layerInstructions = @[layerInstruction];
    mutableVideoComposition.instructions = @[instruction];

    TT_RELEASE_SAFELY(_mutableComposition);
    _mutableComposition = [mutableComposition retain];
    TT_RELEASE_SAFELY(_mutableVideoComposition);
    _mutableVideoComposition = [mutableVideoComposition retain];
}
I pulled this method from AVSERotateCommand from here. Can anyone suggest why this method would not successfully rotate my video by the necessary 90 degrees?
Because you are using AVAssetExportPresetPassthrough, the AVAssetExportSession will ignore the videoComposition. Use any other preset.
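For example, something along these lines should make the export honour the composition (a sketch; note that with a non-passthrough preset you may also need to switch the output file type, e.g. to a QuickTime movie):
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:[_mutableComposition copy] presetName:AVAssetExportPresetHighestQuality];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie; //3GPP may not be supported by every preset
exportSession.shouldOptimizeForNetworkUse = YES;
exportSession.videoComposition = _mutableVideoComposition;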

iOS AVFoundation - Show a time display over a video and export

I want to show a display overlay over a video and export that video including the display. I had a look into the AVFoundation framework (AVComposition, AVAsset, etc.) but I still have no idea how to achieve this. There is a class called AVSynchronizedLayer which lets you animate things in sync with the video, but I do not want to animate; I just want to overlay a time display onto every single frame of the video. Any advice?
Regards
Something like this...
(NB: culled from a much larger project, so I may have included some unnecessary pieces by accident.)
You'll need to grab the CALayer of your clock / animation and set it to the var myClockLayer (used a third of the way down by the animation tool).
This also assumes your incoming video has just two tracks - audio and video. If you have more, you'll need to set the track ID in "asTrackID:2" more carefully.
AVURLAsset *url = [AVURLAsset URLAssetWithURL:incomingVideo options:nil];

NSError *error = nil;
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[url tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [url duration]) ofTrack:clipVideoTrack atTime:kCMTimeZero error:&error];

AVMutableVideoComposition *videoComposition = [[AVMutableVideoComposition videoComposition] retain];
videoComposition.renderSize = CGSizeMake(320, 240);
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithAdditionalLayer:myClockLayer asTrackID:2];

AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30));
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:clipVideoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComposition.instructions = [NSArray arrayWithObject:instruction];

exporter = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputURL = url3;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^(void){}];
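If it helps, myClockLayer can be something as simple as a CATextLayer whose string you update yourself (a sketch; the frame, font size and text are placeholders):
CATextLayer *myClockLayer = [CATextLayer layer];
myClockLayer.frame = CGRectMake(10.0, 10.0, 120.0, 30.0); //placeholder position and size
myClockLayer.string = @"00:00:00";
myClockLayer.fontSize = 24.0;
myClockLayer.foregroundColor = [[UIColor whiteColor] CGColor];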
I think you can use AVCaptureVideoDataOutput to process each frame and AVAssetWriter to record the processed frames. You can refer to this answer: https://stackoverflow.com/a/4944594/379941.
Use AVAssetWriterInputPixelBufferAdaptor's appendPixelBuffer:withPresentationTime: method to write the frames out, as sketched below.
And I strongly suggest using OpenCV to process the frames; this is a nice tutorial: http://aptogo.co.uk/2011/09/opencv-framework-for-ios/.
The OpenCV library is very good.
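A minimal sketch of the writing side (adaptor, writerInput, processedBuffer and presentationTime are assumed to come from your AVAssetWriter setup and frame processing; they are not defined here):
//Sketch: append each processed frame through the pixel buffer adaptor.
if (writerInput.readyForMoreMediaData) {
    [adaptor appendPixelBuffer:processedBuffer withPresentationTime:presentationTime];
}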
