I'm modifying some video via AVMutableVideoCompositionLayerInstruction in the iOS 7 SDK.
The following code used to work on iOS 6.1.3, but on iOS 7 the video is frozen on the first frame (though I can still hear the audio fine). I removed all of the actual transformations I was applying to verify that adding a video composition alone causes the problem.
AVURLAsset* videoAsset = [[AVURLAsset alloc] initWithURL:inputFileURL options:nil];
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction *layerInstruction =
[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoAssetTrack];
AVMutableVideoComposition *mainComposition = [AVMutableVideoComposition videoComposition];
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration);
mainComposition.instructions = [NSArray arrayWithObject:mainInstruction];
mainComposition.frameDuration = videoAsset.duration;
mainComposition.renderSize = CGSizeMake(320, 320);
...
exportSession.videoComposition = mainComposition;
If I do not set the videoComposition property of exportSession then the video exports OK, but then I cannot apply any transformations. Does anyone know what could be causing this?
Thanks.
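One detail in the snippet above worth double-checking (flagging this as a guess, not a confirmed cause): frameDuration is set to the whole asset duration, which tells the compositor to render a single composed frame for the entire timeline. The usual setting is a per-frame interval, for example:
// Render a composed frame every 1/30th of a second instead of once per asset duration.
mainComposition.frameDuration = CMTimeMake(1, 30);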
A good way to debug issues with the video composition is to use [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset]. The returned AVMutableVideoComposition should work correctly; you can then compare the contents of its instructions array with your own instructions.
To add to the confusion, the asset there can also be an AVComposition. I don't think the AVFoundation team did the best job naming these things...
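For illustration, a minimal sketch of that comparison (videoAsset here is the asset from the question):
// Build a video composition the way AVFoundation would, then inspect it.
AVMutableVideoComposition *referenceComposition =
    [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:videoAsset];
NSLog(@"frameDuration: %f s", CMTimeGetSeconds(referenceComposition.frameDuration));
NSLog(@"renderSize: %@", NSStringFromCGSize(referenceComposition.renderSize));
NSLog(@"instructions: %@", referenceComposition.instructions);
// Compare these values with the ones you set on your own composition.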
I've been struggling as well with AVMutableVideoCompositionLayerInstruction and mixing video with CALayers. After a few days of trying different approaches, what I realised is that the timing of the assets is pretty important.
The proper way to find out the duration of each asset is to load it asynchronously with:
loadValuesAsynchronouslyForKeys:@[@"duration"]
//Asset url
NSURL *assetUrl = [NSURL fileURLWithPath:_firstVideoFilePath];
//audio/video assets
AVURLAsset * videoAsset = [[AVURLAsset alloc]initWithURL:assetUrl options:nil];
//var to store the duration
CMTime __block durationTime;
//And here we'll be able to proper get the asset duration
[videoAsset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler: ^{
Float64 durationSeconds = CMTimeGetSeconds([videoAsset duration]);
durationTime = [videoAsset duration];
//At this point you have the proper asset duration value, you can start any video processing from here.
}];
Hope this helps to anyone with the same issue.
I am using AVAssetWriter to create a video file; I want to crop it, and came upon the AVVideoCleanApertureKey settings. These let me set a viewport for the video, which looks like it crops the video. But in reality it doesn't: the full frame is still present in the video file, and whether the "crop" is honoured seems to be up to the player.
So, is there a way to make AVAssetWriter discard the data outside the viewport? Or do I need to change my approach completely?
These images show the same video file, but only QuickTime cares about the viewport.
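For context, a sketch of how the clean-aperture keys are typically supplied to an AVAssetWriter input (the dimensions and offsets below are placeholder values, not taken from the question):
NSDictionary *cleanAperture = @{
    AVVideoCleanApertureWidthKey            : @320,
    AVVideoCleanApertureHeightKey           : @320,
    AVVideoCleanApertureHorizontalOffsetKey : @0,
    AVVideoCleanApertureVerticalOffsetKey   : @0
};
NSDictionary *outputSettings = @{
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : @640,
    AVVideoHeightKey : @480,
    AVVideoCompressionPropertiesKey : @{ AVVideoCleanApertureKey : cleanAperture }
};
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:outputSettings];
// The full 640x480 frame is still encoded; the clean aperture is only metadata
// that a player may or may not honour, which is exactly the behaviour described above.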
I believe the direct answer to the question is NO. So in the end I switched over to AVMutableVideoComposition and AVAssetExportSession as other answers around the web suggested.
AVAsset *video = [AVAsset assetWithURL:outputURL];
AVAssetTrack *assetVideoTrack = [[video tracksWithMediaType:AVMediaTypeVideo] lastObject];
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:assetVideoTrack];
CGAffineTransform transform = CGAffineTransformMakeTranslation(-self.rect.origin.x, -self.rect.origin.y);
[layerInstruction setTransform:transform atTime:kCMTimeZero];
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.layerInstructions = @[layerInstruction];
instruction.timeRange = assetVideoTrack.timeRange;
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
// https://stackoverflow.com/questions/22883525/avassetexportsession-giving-me-a-green-border-on-right-and-bottom-of-output-vide
videoComposition.renderSize = CGSizeMake(floor(self.rect.size.width / 16) * 16,
floor(self.rect.size.height / 16) * 16);
videoComposition.renderScale = 1.0;
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.instructions = #[instruction];
AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:video presetName:AVAssetExportPreset1280x720];
exportSession.shouldOptimizeForNetworkUse = NO;
exportSession.outputFileType = AVFileTypeMPEG4;
exportSession.videoComposition = videoComposition;
exportSession.outputURL = outputURL2;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
NSLog(#"done processing video!");
}];
I am trying to export an AVMutableComposition using AVAssetExportSession.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL=url;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.videoComposition = mainCompositionInst;
exporter.shouldOptimizeForNetworkUse = YES;
[exporter exportAsynchronouslyWithCompletionHandler:^
{
switch (exporter.status)
{
case AVAssetExportSessionStatusCompleted:
{
NSLog(#"Video Merge SuccessFullt");
}
break;
case AVAssetExportSessionStatusFailed:
NSLog(#"Failed:%#", exporter.error.description);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Canceled:%#", exporter.error);
break;
case AVAssetExportSessionStatusExporting:
NSLog(#"Exporting!");
break;
case AVAssetExportSessionStatusWaiting:
NSLog(#"Waiting");
break;
default:
break;
}
}];
But exporting even a 1-minute video takes around 30 seconds, which is too much considering the iPad's built-in camera app takes less than 2 seconds.
Also, if I remove the videoComposition from the exporter, the time drops to 7 seconds, which is still bad considering the video is only 1 minute long.
So, I want to know how to reduce the export time to a minimum.
Also, I want to know: does AVAssetExportSession generally take this much time, or is it just my case?
Update:
Merge Code:
AVMutableComposition *mutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo
preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioCompositionTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableVideoCompositionLayerInstruction *videoTrackLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoCompositionTrack];
NSMutableArray *instructions = [NSMutableArray new];
CGSize size = CGSizeZero;
CMTime time = kCMTimeZero;
for (AVURLAsset *asset in assets)
{
AVAssetTrack *assetTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioAssetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
NSError *error;
[videoCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetTrack.timeRange.duration )
ofTrack:assetTrack
atTime:time
error:&error];
[videoTrackLayerInstruction setTransform:assetTrack.preferredTransform atTime:time];
if (error) {
NSLog(#"asset url :: %#",assetTrack.asset);
NSLog(#"Error1 - %#", error.debugDescription);
}
[audioCompositionTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAssetTrack.timeRange.duration)
ofTrack:audioAssetTrack
atTime:time
error:&error];
if (error) {
NSLog(#"Error2 - %#", error.debugDescription);
}
time = CMTimeAdd(time, assetTrack.timeRange.duration);
if (CGSizeEqualToSize(size, CGSizeZero)) {
size = assetTrack.naturalSize;
}
}
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, time);
mainInstruction.layerInstructions = [NSArray arrayWithObject:videoTrackLayerInstruction];
AVMutableVideoComposition *mainCompositionInst = [AVMutableVideoComposition videoComposition];
mainCompositionInst.instructions = [NSArray arrayWithObject:mainInstruction];
mainCompositionInst.frameDuration = CMTimeMake(1, 30);
mainCompositionInst.renderSize = size;
I think the issue here is AVAssetExportPresetHighestQuality: it forces a conversion or upsampling and slows things way down. Try using AVAssetExportPresetPassthrough instead; it took my export times from ~35+ seconds down to less than a second.
I also disabled optimizing for network use, as all of our videos are only used within the app and never streamed or passed over a network.
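A minimal sketch of that configuration, reusing the exporter names from the question above (note that passthrough copies the source samples without re-encoding, so it only applies when you do not need a videoComposition to be rendered):
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetPassthrough];
exporter.outputURL = url;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
// Not needed when the file never leaves the device.
exporter.shouldOptimizeForNetworkUse = NO;
[exporter exportAsynchronouslyWithCompletionHandler:^{
    NSLog(@"Passthrough export finished with status: %ld", (long)exporter.status);
}];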
I built an app that merges together different video fragments and I can safely say it is just your case. My video files are ~10 MB, so maybe they are a little smaller, but it takes less than a second to merge them all together, even with 10 or 20 segments.
Now, as for what is actually happening: I checked my configuration against yours and the differences are the following:
I use export.outputFileType = AVFileTypeMPEG4
I have network optimization disabled; if you are not planning to stream the video from your device, you should disable it too
Other than that, it should be the same. I can't really compare further, since you would have to provide the code showing how you actually create the composition. There are some things to check, though:
If you are using AVURLAssetPreferPreciseDurationAndTimingKey when creating the AVURLAsset and you don't have enough keyframes, it can actually take quite some time to seek through the file to find them, which slows things down (see the sketch below)
Consider whether you really need the highest quality for the video
Consider the resolution of the video and possibly lower it
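A hedged sketch of those configuration points (the asset and output URLs are placeholders, and AVAssetExportPresetMediumQuality is just one example of a lower-quality preset):
// Create the asset without requesting precise duration/timing,
// which avoids an expensive scan for keyframes up front.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:assetURL
                                        options:@{AVURLAssetPreferPreciseDurationAndTimingKey : @NO}];
AVAssetExportSession *exporter =
    [[AVAssetExportSession alloc] initWithAsset:asset
                                     presetName:AVAssetExportPresetMediumQuality]; // lower quality if acceptable
exporter.outputFileType = AVFileTypeMPEG4;
exporter.outputURL = outputURL;
exporter.shouldOptimizeForNetworkUse = NO; // disable if the video is never streamed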
I should be able to help you more if you provide more information, but maybe some of this stuff will work. Give it a try and then report back.
Hope it helps a little!
Edit 1:
I forgot to mention that if you run out of options, you should try the FFmpeg library, as it is very high-performance, though its licensing might not be suitable for you.
Keep in mind that the asset you're trying to export may not be stored locally; in that case the system first downloads the content and then exports your asset.
If you don't want to download any content:
let videoRequestOptions: PHVideoRequestOptions = PHVideoRequestOptions()
videoRequestOptions.isNetworkAccessAllowed = false
You are also going to receive a message within the requestExportSession completion handler, with a couple of useful info values:
https://developer.apple.com/documentation/photokit/phimagemanager/image_result_info_keys
Otherwise, if you want to download your asset from iCloud and make it as fast as possible, you can play with the following parameters:
let videoRequestOptions: PHVideoRequestOptions = PHVideoRequestOptions()
// automatic, highQualityFormat, mediumQualityFormat, fastFormat
videoRequestOptions.deliveryMode = .fastFormat
videoRequestOptions.isNetworkAccessAllowed = true
Another important property is the export preset; there are a number of available presets:
let lowQualityPreset1 = AVAssetExportPresetLowQuality
let lowQualityPreset2 = AVAssetExportPreset640x480
let lowQualityPreset3 = AVAssetExportPreset960x540
let lowQualityPreset4 = AVAssetExportPreset1280x720
let manager = PHImageManager()
manager.requestExportSession(forVideo: asset,
options: videoRequestOptions,
exportPreset: lowQualityPreset1) { (session, info) in
session?.outputURL = outputUrl
session?.outputFileType = .mp4
session?.shouldOptimizeForNetworkUse = true
session?.exportAsynchronously {
}
}
I am using scaleTimeRange:toDuration: to produce a fast-motion effect of up to 10x the original video speed. However, I have noticed that videos with a higher bit-rate (say 20 Mbit/s and above) begin to stutter when played through an AVPlayer at anything above 4x normal speed, enough to make the AVPlayerLayer crash (disappear) if it runs for a while.
The code.
//initialize the player
self.player = [[AVPlayer alloc] init];
//load up the asset
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:[[NSBundle mainBundle] URLForResource:@"sample-video" withExtension:@"mov"] options:nil];
[asset loadValuesAsynchronouslyForKeys:@[@"playable", @"hasProtectedContent", @"tracks"] completionHandler:
^{
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
// composition set to play at 60fps
videoComposition.frameDuration = CMTimeMake(1,60);
//add video track to composition
AVMutableCompositionTrack *videoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
//1080p composition
CGSize renderSize = CGSizeMake(1920.0, 1080.0);
CMTime currentTime = kCMTimeZero;
CGFloat scale = 1.0;
AVAssetTrack *assetTrack = nil;
assetTrack = [asset tracksWithMediaType:AVMediaTypeVideo].firstObject;
[videoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration) ofTrack:assetTrack atTime:currentTime error:nil];
CMTimeRange scaleTimeRange = CMTimeRangeMake(currentTime, asset.duration);
//Speed it up to 8x.
CMTime scaledDuration = CMTimeMultiplyByFloat64(asset.duration,1.0/8.0);
[videoTrack scaleTimeRange:scaleTimeRange toDuration:scaledDuration];
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
//ensure video is scaled up/down to match the composition
scale = renderSize.width/assetTrack.naturalSize.width;
[layerInstruction setTransform:CGAffineTransformMakeScale(scale, scale) atTime:currentTime];
AVMutableVideoCompositionInstruction *videoInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
videoInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, composition.duration);
videoInstruction.layerInstructions = @[layerInstruction];
videoComposition.instructions = @[videoInstruction];
videoComposition.renderSize = renderSize;
//pass the stuff to AVPlayer for playback
self.playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.playerItem.videoComposition = videoComposition;
[self.player replaceCurrentItemWithPlayerItem:self.playerItem];
//playerView is a custom view with AVPlayerLayer, picked up from https://developer.apple.com/library/mac/documentation/AudioVideo/Conceptual/AVFoundationPG/Articles/02_Playback.html#//apple_ref/doc/uid/TP40010188-CH3-SW11
[self.playerView setPlayer:self.player];
//call [self.player play] when ready.
}];
Some notes:
All testing done on iPhone 6
I am deliberately not adding any audio tracks to rule out the possibility of audio playing a part here.
Normal-bitrate videos (16 Mbit/s on average) play fine at 10x
Same composition code produces a smooth playback on an OSX application
The stutter is more obvious with higher bitrates.
All videos being tested are 1080p 60fps
A high-bitrate video behaves well if it is opened and re-exported at 1080p first, so as to tone down the bit-rate while maintaining FPS (a sketch of that re-export follows these notes).
There is no rendering/exporting of video involved.
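For reference, a hedged sketch of that kind of 1080p pre-export (the URLs here are placeholders; this is only the workaround mentioned above, not a fix for the stutter itself):
// Re-encode the source at 1080p so the high-bitrate original is toned down
// before it is fed into the scaled composition.
AVURLAsset *sourceAsset = [AVURLAsset URLAssetWithURL:sourceURL options:nil];
AVAssetExportSession *reencode =
    [AVAssetExportSession exportSessionWithAsset:sourceAsset
                                      presetName:AVAssetExportPreset1920x1080];
reencode.outputURL = reencodedURL;
reencode.outputFileType = AVFileTypeQuickTimeMovie;
[reencode exportAsynchronouslyWithCompletionHandler:^{
    if (reencode.status == AVAssetExportSessionStatusCompleted) {
        // Build the fast-motion composition from reencodedURL instead of sourceURL.
    }
}];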
Has anyone else run into this and found a way around it?
I guess what makes my question/problem different from other postings is that I am not scaling a view, but rather an asset layer instruction (AVMutableVideoCompositionLayerInstruction). So setting anchor points, view.center, and CGRect scaling all do not apply.
Beyond moving the asset with CGAffineTransformMakeTranslation to make it look centered, which is highly inaccurate, I cannot figure out how to make it scale from the center. Is there a property I'm missing? The docs and guides aren't very helpful, but maybe I missed something.
Code is below. Thank you all in advance!!! :)
Also, for those looking for a way to export an AVAsset with CGAffineTransforms, the code below has all the steps to get there; of course you need to fill out details like the CMTimeRanges, but I hope this helps someone figure this confusing thing out.
-(void) goAssetsExport {
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
AVMutableCompositionTrack *firstTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1,30);
videoComposition.renderScale = 1.0;
videoComposition.renderSize = CGSizeMake(self.view.bounds.size.width, self.view.bounds.size.height);
NSURL *movieURL = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"%@", [preloadEffectsArray objectAtIndex:i]] withExtension:@"mov"];
AVURLAsset *firstAsset = [AVURLAsset URLAssetWithURL:movieURL options:nil];
AVAssetTrack *firstAssetTrack = [[firstAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[firstTrack insertTimeRange:CMTimeRangeMake(firstTrackRangeMin, duration) ofTrack:firstAssetTrack atTime:firstTrackRangeMin error:nil];
AVMutableVideoCompositionInstruction *transitionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
AVMutableVideoCompositionLayerInstruction *fromLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
//**This is where problem might be?**//
CGAffineTransform scaler = CGAffineTransformMakeScale(scaleNumber, scaleNumber);
CGAffineTransform mover = CGAffineTransformMakeTranslation(scaleNumber * -100, scaleNumber * -150);
[fromLayer setTransform:CGAffineTransformConcat(scaler, mover) atTime:firstTrackRangeMin];
transitionInstruction.layerInstructions = [NSArray arrayWithObject:fromLayer];
transitionInstruction.timeRange = CMTimeRangeMake(firstTrackRangeMin, duration); // the instruction needs a time range
videoComposition.instructions = [NSArray arrayWithObject:transitionInstruction];
[self exportVideo:composition withInstructionComposition:videoComposition];
}
CGAffineTransforms work in the Quartz coordinate system; that's the point from which you should start:
CGAffineTransform translate = CGAffineTransformMakeTranslation(x, y);
CGAffineTransform scale = CGAffineTransformMakeScale(q, z);
CGAffineTransform finalTransform = CGAffineTransformConcat(scale, CGAffineTransformConcat(translate, assetTrackForVideo1.preferredTransform));
[layerInstructionForVideo setTransform:finalTransform atTime:kCMTimeZero];
[layerInstructions addObject:layerInstructionForVideo];
That kind of transform works for me.
EDIT: my mistake, I forgot to edit all the code!
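Since the question asks specifically about scaling from the center, here is a hedged sketch of one way to build such a transform using the question's fromLayer and firstAssetTrack (the scaleFactor value is an assumption): translate the track's center to the origin, scale, then translate back.
CGFloat scaleFactor = 0.5; // hypothetical scale
CGSize trackSize = firstAssetTrack.naturalSize;
// Scale about the center: T(center) * S * T(-center), built up innermost-first.
CGAffineTransform t = CGAffineTransformMakeTranslation(trackSize.width / 2.0, trackSize.height / 2.0);
t = CGAffineTransformScale(t, scaleFactor, scaleFactor);
t = CGAffineTransformTranslate(t, -trackSize.width / 2.0, -trackSize.height / 2.0);
[fromLayer setTransform:t atTime:kCMTimeZero];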
I want to show a time display overlay over a video and export the video including this overlay. I had a look into the AVFoundation framework (AVComposition, AVAsset, etc.) but I still have no idea how to achieve this. There is a class called AVSynchronizedLayer which lets you animate things synchronously with the video, but I do not want to animate; I just want to overlay the time display into every single frame of the video. Any advice?
Regards
Something like this...
(NB: culled from a much larger project, so I may have included some unnecessary pieces by accident).
You'll need to grab the CALayer of your clock / animation and set it to the var myClockLayer (used a third of the way down, by the animation tool).
This also assumes your incoming video has just two tracks - audio and video. If you have more, you'll need to set the track id in "asTrackID:2" more carefully.
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:incomingVideo options:nil];
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSError *error = nil;
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:clipVideoTrack atTime:kCMTimeZero error:&error];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = CGSizeMake(320, 240);
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithAdditionalLayer:myClockLayer asTrackID:2];
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30) );
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:clipVideoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComposition.instructions = [NSArray arrayWithObject: instruction];
exporter = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputURL=url3;
exporter.outputFileType=AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^(void){}];
I think you can use AVCaptureVideoDataOutput to process each frame and AVAssetWriter to record the processed frames. You can refer to this answer:
https://stackoverflow.com/a/4944594/379941
Use AVAssetWriterInputPixelBufferAdaptor's appendPixelBuffer:withPresentationTime: method to write the frames out.
I also strongly suggest using OpenCV to process the frames; this is a nice tutorial:
http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
The OpenCV library is great.
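A rough sketch of the append step under that approach (writerInput, processedPixelBuffer, and presentationTime are placeholders for whatever your writer setup and frame processing produce):
// Assumes an AVAssetWriter and AVAssetWriterInput have already been configured and started.
AVAssetWriterInputPixelBufferAdaptor *adaptor =
    [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
                                                                     sourcePixelBufferAttributes:nil];
// Inside the capture callback, after processing the frame (e.g. drawing the overlay):
if (writerInput.isReadyForMoreMediaData) {
    [adaptor appendPixelBuffer:processedPixelBuffer withPresentationTime:presentationTime];
}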