I'm trying to use AVFoundation to crop videos I'm recording. So let's say I create an AVCaptureVideoPreviewLayer and set its frame to 300x300:
AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [AVCaptureVideoPreviewLayer layerWithSession:session];
captureVideoPreviewLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
captureVideoPreviewLayer.delegate = self;
captureVideoPreviewLayer.frame = CGRectMake(0,0, 300, 300);
[previewView.layer addSublayer:captureVideoPreviewLayer];
The user sees the video cropped. I'd like to save the video exactly the way the user is viewing it. Using AVCaptureMovieFileOutput, the video obviously gets saved without cropping. I was considering using an AVCaptureVideoDataOutput to intercept the frames and crop them myself, but I was wondering if there is a more efficient way to do this, perhaps with AVAssetExportSession and an AVVideoComposition.
Any guidance would be appreciated.
Something like this. 99% of this code just sets things up to apply a custom CGAffineTransform and then save out the result.
I'm assuming that you want the cropped video to take up the full size/width of the output, so that a scale affine transform is the correct solution (you zoom in on the video, giving the effect of having cropped and resized).
AVAsset* asset = // your input
AVAssetTrack *clipVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = CGSizeMake(320, 240);
videoComposition.frameDuration = CMTimeMake(1, 30);
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30) );
AVMutableVideoCompositionLayerInstruction* transformer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:clipVideoTrack];
CGAffineTransform finalTransform = // setup a transform that grows the video, effectively causing a crop
[transformer setTransform:finalTransform atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject:transformer];
videoComposition.instructions = [NSArray arrayWithObject: instruction];
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputURL = url3; // your output file URL
exporter.outputFileType = AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^(void){}];
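The finalTransform line above is a placeholder. As a rough, hedged sketch, a centre zoom-crop could be built like this, reusing the variables from the snippet above (the 1.5 scale factor is an arbitrary example value):

// Sketch only: scale up, then translate so the scaled frame is centred
// on the render area, which effectively crops in on the middle of the video.
CGSize naturalSize = clipVideoTrack.naturalSize;
CGFloat scale = 1.5; // zoom factor you actually want
CGAffineTransform scaleTransform = CGAffineTransformMakeScale(scale, scale);
CGAffineTransform moveTransform = CGAffineTransformMakeTranslation(
    -(naturalSize.width  * scale - videoComposition.renderSize.width)  / 2.0,
    -(naturalSize.height * scale - videoComposition.renderSize.height) / 2.0);
// Apply the scale first, then the translation.
CGAffineTransform finalTransform = CGAffineTransformConcat(scaleTransform, moveTransform);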
iOS 7 added a layer instruction method specifically for cropping:
-[AVMutableVideoCompositionLayerInstruction setCropRectangle:atTime:]
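A minimal sketch of how that might be used, assuming a layer instruction and video composition of your own (the 300x300 values are placeholders):

// Hedged sketch: crop to a 300x300 square at the track origin (iOS 7+).
CGRect cropRect = CGRectMake(0, 0, 300, 300);
[layerInstruction setCropRectangle:cropRect atTime:kCMTimeZero];
// Match the composition's renderSize to the crop rect so the output
// frame is actually 300x300 rather than the original dimensions.
videoComposition.renderSize = cropRect.size;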
I am using AVAssetWriter to create a video file and I want to crop it, so I came upon the AVVideoCleanApertureKey keys. These let me set a viewport for the video, which looks like it is cropping the video. But in reality it doesn't: the full frame is still present in the video file, and whether the "crop" is honoured seems to be up to the video player.
So, is there a way to make AVAssetWriter discard the "outside" data around the viewport? Or do I need to change my approach completely?
These images show the same video file, but only QuickTime cares about the viewport.
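For context, this is roughly how the clean-aperture keys get passed to the writer input; the sizes and the writerInput name are placeholders, and as described above this only records a display viewport in metadata rather than discarding pixels:

// Sketch: clean aperture goes inside the compression properties of the
// AVAssetWriterInput output settings. The full frame is still encoded.
NSDictionary *cleanAperture = @{
    AVVideoCleanApertureWidthKey            : @(300),
    AVVideoCleanApertureHeightKey           : @(300),
    AVVideoCleanApertureHorizontalOffsetKey : @(0),
    AVVideoCleanApertureVerticalOffsetKey   : @(0)
};
NSDictionary *outputSettings = @{
    AVVideoCodecKey  : AVVideoCodecH264,
    AVVideoWidthKey  : @(640),
    AVVideoHeightKey : @(480),
    AVVideoCompressionPropertiesKey : @{ AVVideoCleanApertureKey : cleanAperture }
};
AVAssetWriterInput *writerInput =
    [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
                                       outputSettings:outputSettings];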
I believe the direct answer to the question is NO. So in the end I switched over to AVMutableVideoComposition and AVAssetExportSession as other answers around the web suggested.
AVAsset *video = [AVAsset assetWithURL:outputURL];
AVAssetTrack *assetVideoTrack = [[video tracksWithMediaType:AVMediaTypeVideo] lastObject];
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:assetVideoTrack];
CGAffineTransform transform = CGAffineTransformMakeTranslation(-self.rect.origin.x, -self.rect.origin.y);
[layerInstruction setTransform:transform atTime:kCMTimeZero];
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.layerInstructions = @[layerInstruction];
instruction.timeRange = assetVideoTrack.timeRange;
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
// https://stackoverflow.com/questions/22883525/avassetexportsession-giving-me-a-green-border-on-right-and-bottom-of-output-vide
videoComposition.renderSize = CGSizeMake(floor(self.rect.size.width / 16) * 16,
floor(self.rect.size.height / 16) * 16);
videoComposition.renderScale = 1.0;
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.instructions = @[instruction];
AVAssetExportSession *exportSession = [AVAssetExportSession exportSessionWithAsset:video presetName:AVAssetExportPreset1280x720];
exportSession.shouldOptimizeForNetworkUse = NO;
exportSession.outputFileType = AVFileTypeMPEG4;
exportSession.videoComposition = videoComposition;
exportSession.outputURL = outputURL2;
[exportSession exportAsynchronouslyWithCompletionHandler:^{
    NSLog(@"done processing video!");
}];
I want to crop a specified CGRect region out of my video file.
I followed this tutorial.
I'm using the renderSize property of AVMutableVideoComposition to get to a specified size. Here is the code I use to crop a video to a specified size.
- (void) cropVideoAtPath:(NSString *) path
{
//load our movie Asset
AVAsset *asset = [AVAsset assetWithURL:[NSURL fileURLWithPath:path]];
//create an avassetrack with our asset
AVAssetTrack *clipVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
//create a video composition and preset some settings
AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1, 30);
//here we set the render size to the track's width x (height - deltaHeight), clipping deltaHeight off the bottom
float deltaHeight = 100;
videoComposition.renderSize = CGSizeMake(clipVideoTrack.naturalSize.width, clipVideoTrack.naturalSize.height-deltaHeight);
//create a video instruction
AVMutableVideoCompositionInstruction *instruction =
[AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30));
AVMutableVideoCompositionLayerInstruction* transformer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:clipVideoTrack];
// CGAffineTransform t1 = CGAffineTransformMakeTranslation(0, 1000);
// CGAffineTransform finalTransform = t1;
// [transformer setTransform:finalTransform atTime:kCMTimeZero];
instruction.layerInstructions = [NSArray arrayWithObject:transformer];
videoComposition.instructions = [NSArray arrayWithObject: instruction];
//Create an Export Path to store the cropped video
NSString * documentsPath = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0];
NSString *exportPathLoc = [documentsPath stringByAppendingFormat:@"/CroppedVideo.mp4"];
NSURL *exportUrl = [NSURL fileURLWithPath:exportPathLoc];
//Remove any previous videos at that path
[[NSFileManager defaultManager] removeItemAtURL:exportUrl error:nil];
//Export
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputURL = exportUrl;
exporter.outputFileType = AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^{
    dispatch_async(dispatch_get_main_queue(), ^{
        NSLog(@"Saved");
        UISaveVideoAtPathToSavedPhotosAlbum([[exporter outputURL] path], nil, nil, nil);
    });
}];
}
My original video looks like this.
My video after applying renderSize to the AVMutableVideoComposition looks like this.
As we can see, the bottom portion is successfully clipped away from the original video.
Let's say I have a video with size (1000, 1000) and I want only the center (500, 500) region of that video. So my CGRect would be (0, 250, 500, 500).
In my case I want only the region where the image is present, so I want to crop out the top bar and bottom bar.
So I applied a CGAffineTransform to the AVMutableVideoCompositionLayerInstruction like this:
CGAffineTransform t1 = CGAffineTransformMakeTranslation(0, deltaHeight);
CGAffineTransform finalTransform = t1;
[transformer setTransform:finalTransform atTime:kCMTimeZero];
which resulted in the image below.
So how can I apply this (x, y) value when cropping the video, and what am I doing wrong here? I would appreciate any help.
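For reference (not from the original post), the usual pattern for cropping an arbitrary rect is to translate by the negative origin and set the render size to the rect's size; a sketch using the variables from the code above, with example values only:

// Sketch: crop cropRect out of the source track. The translation moves the
// wanted region to the render origin; renderSize then discards everything else.
CGRect cropRect = CGRectMake(0, 250, 500, 500); // example values only
videoComposition.renderSize = cropRect.size;
CGAffineTransform cropTransform =
    CGAffineTransformMakeTranslation(-cropRect.origin.x, -cropRect.origin.y);
[transformer setTransform:cropTransform atTime:kCMTimeZero];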
I'm modifying some video via AVMutableVideoCompositionLayerInstruction in the iOS 7 SDK.
The following code used to work on iOS 6.1.3, but in iOS 7 the video is frozen on the first frame (though I can still hear the audio fine). I removed all the actual transformations I was applying to verify that adding a video composition alone causes the problem.
AVURLAsset* videoAsset = [[AVURLAsset alloc] initWithURL:inputFileURL options:NULL];
AVAssetTrack *videoAssetTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction *layerInstruction =
[AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoAssetTrack];
AVMutableVideoComposition *mainComposition = [AVMutableVideoComposition videoComposition];
AVMutableVideoCompositionInstruction *mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration);
mainComposition.instructions = [NSArray arrayWithObject:mainInstruction];
mainComposition.frameDuration = videoAsset.duration;
mainComposition.renderSize = CGSizeMake(320, 320);
...
exportSession.videoComposition = mainComposition;
If I do not set the videoComposition property of exportSession, the video exports fine, but then I cannot apply any transformations. Does anyone know what could be causing this?
Thanks.
A good way to debug issues with the video composition is to use [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:asset]. The returned AVMutableVideoComposition should work correctly. Then you can compare the contents of the instructions array with your instructions.
To increase the confusion levels, the asset there can also be an AVComposition. I think the AVFoundation team didn't do the best job when naming these things....
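As a rough illustration of that debugging approach, using the videoAsset from the question (the logging is just an example):

// Sketch: build a known-good composition from the asset and inspect it,
// then compare its instructions with your own.
AVMutableVideoComposition *referenceComposition =
    [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:videoAsset];
NSLog(@"renderSize: %@", NSStringFromCGSize(referenceComposition.renderSize));
NSLog(@"frameDuration: %f s", CMTimeGetSeconds(referenceComposition.frameDuration));
for (AVVideoCompositionInstruction *ins in referenceComposition.instructions) {
    NSLog(@"instruction timeRange: %@",
          CFBridgingRelease(CMTimeRangeCopyDescription(NULL, ins.timeRange)));
}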
I've been struggling as well with AVMutableVideoCompositionLayerInstruction and mixing video with CALayers. After a few days of trying different approaches, what I realised is that the timing of the assets is pretty important.
The proper way to find out the duration of each asset is to load it asynchronously:
loadValuesAsynchronouslyForKeys:@[@"duration"]
//Asset url
NSURL *assetUrl = [NSURL fileURLWithPath:_firstVideoFilePath];
//audio/video assets
AVURLAsset * videoAsset = [[AVURLAsset alloc]initWithURL:assetUrl options:nil];
//var to store the duration
CMTime __block durationTime;
//And here we'll be able to proper get the asset duration
[videoAsset loadValuesAsynchronouslyForKeys:@[@"duration"] completionHandler:^{
    Float64 durationSeconds = CMTimeGetSeconds([videoAsset duration]);
    durationTime = [videoAsset duration];
    //At this point you have the proper asset duration value, you can start any video processing from here.
}];
Hope this helps anyone with the same issue.
EDIT: The strangest thing: it seems that when running this code from a full app everything works, but I was always running the movie creation from my unit tests, and only there did it fail. Trying to figure out why that is...
I'm trying to combine video + audio + text using AVMutableComposition and export it to a new video.
My code is based on the AVEditDemo from WWDC '10
I added a purple background to the CATextLayer so I know for a fact it is exported to the movie, but no text is shown... I tried playing with various fonts, positions, and color definitions, but nothing helped, so I decided to post the code here and see if anyone has stumbled across something similar and can tell me what I'm missing.
Here's the code (self.audio and self.video are AVURLAssets):
CMTime exportDuration = self.audio.duration;
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *videoTrack = [[self.video tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// add the video in loop until the audio ends
CMTime currStartTime = kCMTimeZero;
while (CMTimeCompare(currStartTime, exportDuration) < 0) {
CMTime timeRemaining = CMTimeSubtract(exportDuration, currStartTime);
CMTime currLoopDuration = self.video.duration;
if (CMTimeCompare(currLoopDuration, timeRemaining) > 0) {
currLoopDuration = timeRemaining;
}
CMTimeRange currLoopTimeRange = CMTimeRangeMake(kCMTimeZero, currLoopDuration);
[compositionVideoTrack insertTimeRange:currLoopTimeRange ofTrack:videoTrack
atTime:currStartTime error:nil];
currStartTime = CMTimeAdd(currStartTime, currLoopDuration);
}
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *audioTrack = [self.audio.tracks objectAtIndex:0];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, self.audio.duration) ofTrack:audioTrack atTime:kCMTimeZero error:nil];
AVMutableVideoComposition *videoComposition;
// the text layer part - THIS IS THE PART THAT DOESN'T WORK WELL
CALayer *animatedTitleLayer = [CALayer layer];
CATextLayer *titleLayer = [[CATextLayer alloc] init];
titleLayer.string = @"asdfasdf";
titleLayer.alignmentMode = kCAAlignmentCenter;
titleLayer.bounds = CGRectMake(0, 0, self.video.naturalSize.width / 2, self.video.naturalSize.height / 2);
titleLayer.opacity = 1.0;
titleLayer.backgroundColor = [UIColor purpleColor].CGColor;
[animatedTitleLayer addSublayer:titleLayer];
animatedTitleLayer.position = CGPointMake(self.video.naturalSize.width / 2.0, self.video.naturalSize.height / 2.0);
// build a Core Animation tree that contains both the animated title and the video.
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, self.video.naturalSize.width, self.video.naturalSize.height);
videoLayer.frame = CGRectMake(0, 0, self.video.naturalSize.width, self.video.naturalSize.height);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:animatedTitleLayer];
videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
AVMutableVideoCompositionInstruction *passThroughInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
passThroughInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, exportDuration);
AVMutableVideoCompositionLayerInstruction *passThroughLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTrack];
passThroughInstruction.layerInstructions = [NSArray arrayWithObject:passThroughLayer];
videoComposition.instructions = [NSArray arrayWithObject:passThroughInstruction];
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.renderSize = self.video.naturalSize;
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:composition presetName:AVAssetExportPresetMediumQuality];
exportSession.videoComposition = videoComposition;
exportSession.outputURL = [NSURL fileURLWithPath:self.outputFilePath];
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^() {
// save the video ...
}];
I ran into the same issue in a different context. In my case, I had moved preparation of the AVMutableComposition to a background thread. Moving that part of preparation back to the main queue/thread made CATextLayer overlays work properly again.
This likely doesn't exactly apply to your unit testing context, but my guess is that CATextLayer/AVFoundation depend on some part of UIKit/AppKit being running/available (a drawing context? a current screen?) in that thread context, which might explain the failure we are both seeing.
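As a rough sketch of that workaround (buildComposition and buildVideoCompositionFor: are hypothetical helpers standing in for your own setup code):

// Sketch: build the composition (including the CATextLayer animation tool)
// on the main queue, then kick off the export from there.
dispatch_async(dispatch_get_main_queue(), ^{
    AVMutableComposition *composition = [self buildComposition]; // hypothetical helper
    AVMutableVideoComposition *videoComposition = [self buildVideoCompositionFor:composition]; // hypothetical helper
    AVAssetExportSession *exportSession =
        [[AVAssetExportSession alloc] initWithAsset:composition
                                         presetName:AVAssetExportPresetMediumQuality];
    exportSession.videoComposition = videoComposition;
    [exportSession exportAsynchronouslyWithCompletionHandler:^{
        // handle the result
    }];
});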
I had the problem that almost everything rendered great, including images in a CALayer with contents set to a CGImage. The exception was the CATextLayer: if I set a background color, a beginTime and a duration on it, those were rendered perfectly too - just the actual text didn't want to appear.
That was all on the simulator; then I ran it on the phone, and it was perfect.
Conclusion: The simulator renders nice videos... Until you use CATextLayer.
Further investigation is needed, but as far as I can tell right now, CATextLayer inside an AVMutableVideoComposition simply doesn't work from within a logic unit test target, and this feature must be tested from a regular target.
My problem was that I needed to set contentsGravity = kCAGravityBottomLeft -- otherwise my text was off the screen.
I want to show a time display overlay over a video and export that video including this overlay. I had a look at the AVFoundation framework, AVCompositions, AVAssets etc., but I still have no idea how to achieve this. There is a class called AVSynchronizedLayer which lets you animate things in sync with the video, but I do not want to animate; I just want to overlay the time display onto every single frame of the video. Any advice?
Regards
Something like this...
(NB: culled from a much larger project, so I may have included some unnecessary pieces by accident).
You'll need to grab the CALayer of your clock/animation and assign it to the variable myClockLayer (used about a third of the way down by the animation tool).
This also assumes your incoming video has just two tracks - audio and video. If you have more, you'll need to set the track ID in "asTrackID:2" more carefully.
AVURLAsset* url = [AVURLAsset URLAssetWithURL:incomingVideo options:nil];
AVMutableComposition *saveComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [saveComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVAssetTrack *clipVideoTrack = [[url tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSError *error = nil;
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [url duration]) ofTrack:clipVideoTrack atTime:kCMTimeZero error:&error];
AVMutableVideoComposition* videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = CGSizeMake(320, 240);
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithAdditionalLayer:myClockLayer asTrackID:2];
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeMakeWithSeconds(60, 30) );
AVMutableVideoCompositionLayerInstruction* layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:clipVideoTrack];
instruction.layerInstructions = [NSArray arrayWithObject:layerInstruction];
videoComposition.instructions = [NSArray arrayWithObject: instruction];
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:saveComposition presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition;
exporter.outputURL = url3; // your output file URL
exporter.outputFileType=AVFileTypeQuickTimeMovie;
[exporter exportAsynchronouslyWithCompletionHandler:^(void){}];
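Not part of the original answer, but myClockLayer could, for example, be a simple CATextLayer along these lines (the text and geometry are placeholders; you would update the string yourself as time passes):

// Hypothetical clock layer; update its string as the video plays if needed.
CATextLayer *myClockLayer = [CATextLayer layer];
myClockLayer.frame = CGRectMake(10, 10, 120, 30);
myClockLayer.fontSize = 24.0;
myClockLayer.alignmentMode = kCAAlignmentCenter;
myClockLayer.foregroundColor = [UIColor whiteColor].CGColor;
myClockLayer.string = @"00:00:00";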
I think you can use AVCaptureVideoDataOutput to process each frame and use AVAssetWriter to record the processed frames. You can refer to this answer:
https://stackoverflow.com/a/4944594/379941 .
Use AVAssetWriterInputPixelBufferAdaptor's appendPixelBuffer:withPresentationTime: method to write out each processed frame.
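A minimal sketch of the writer side inside the capture delegate; it assumes a writerInput and an AVAssetWriterInputPixelBufferAdaptor (adaptor) have already been set up elsewhere, and the actual frame cropping/processing is left out:

// Sketch: inside the AVCaptureVideoDataOutputSampleBufferDelegate callback,
// append each (already cropped/processed) pixel buffer to the writer.
- (void)captureOutput:(AVCaptureOutput *)output
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    CMTime presentationTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer);
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    // ... crop/process pixelBuffer here before appending ...
    if (self.writerInput.readyForMoreMediaData) {
        // self.adaptor is the AVAssetWriterInputPixelBufferAdaptor created
        // with self.writerInput; both are assumed to exist in your setup.
        [self.adaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentationTime];
    }
}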
I also strongly suggest using OpenCV to process the frames. This is a nice tutorial:
http://aptogo.co.uk/2011/09/opencv-framework-for-ios/
The OpenCV library is great.