I need to create a single video composed of two videos stitched side by side, like the following image:
Right now I'm playing the two videos simultaneously with two instances of AVPlayer, but I need to produce one final video from them: Final video = [1 | 2]
Any ideas on how to do this? This is the code I use to play the two videos:
- (void)viewDidLoad
{
[super viewDidLoad];
NSURL *url = [[NSBundle mainBundle] URLForResource:@"video_1" withExtension:@"mp4"];
self.mPlayer = [AVPlayer playerWithURL:url];
self.mPlayer2 = [AVPlayer playerWithURL:url];
[self.mPlayer addObserver:self forKeyPath:@"status" options:0 context:AVPlayerDemoPlaybackViewControllerStatusObservationContext];
[self.mPlayer2 addObserver:self forKeyPath:@"status" options:0 context:AVPlayerDemoPlaybackViewControllerStatusObservationContext];
}
- (void)observeValueForKeyPath:(NSString*)path ofObject:(id)object change:(NSDictionary*)change context:(void*)context
{
if ([object isKindOfClass:[AVPlayer class]]) {
if ([path isEqualToString:@"status"]) {
AVPlayer *player = (AVPlayer *)object;
switch (player.status) {
case AVPlayerStatusFailed:
NSLog(@"player status failed");
break;
case AVPlayerStatusReadyToPlay:
NSLog(@"player status is ready to play");
[self.mPlaybackView2 setPlayer:self.mPlayer2];
[self.mPlaybackView setPlayer:self.mPlayer];
[self.mPlayer play];
[self.mPlayer2 play];
break;
case AVPlayerStatusUnknown:
NSLog(@"player status is unknown");
break;
}
}
}
}
Source: https://abdulazeem.wordpress.com/2012/04/02/
I found the solution. Here it is:
- (void)stitchAudio:(NSURL *)file1 audio2:(NSURL *)file2 {
AVAsset *video1Asset = [AVAsset assetWithURL:file1];
AVAsset *video2Asset = [AVAsset assetWithURL:file2];
AVMutableComposition* mixComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *firstTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[firstTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, video1Asset.duration)
ofTrack:[[video1Asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
atTime:kCMTimeZero error:nil];
AVMutableCompositionTrack *secondTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[secondTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, video2Asset.duration)
ofTrack:[[video2Asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0]
atTime:kCMTimeZero error:nil];
//Here we create the AVMutableVideoCompositionInstruction object. It will contain the array of our AVMutableVideoCompositionLayerInstruction objects. Set its time range to a length equal to the duration of the longer asset.
AVMutableVideoCompositionInstruction * MainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
MainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, video1Asset.duration);
//We create 2 AVMutableVideoCompositionLayerInstruction objects, one for each AVMutableCompositionTrack. Here we create the layer instruction for the first track. Note how we use an affine transform to scale and position the first track so it is displayed at a smaller size on the left side of the frame. (The first track is the one that remains on top.)
AVMutableVideoCompositionLayerInstruction *FirstlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:firstTrack];
CGAffineTransform Scale = CGAffineTransformMakeScale(0.68f,0.68f);
CGAffineTransform Move = CGAffineTransformMakeTranslation(0,0);
[FirstlayerInstruction setTransform:CGAffineTransformConcat(Scale,Move) atTime:kCMTimeZero];
//Here we create the AVMutableVideoCompositionLayerInstruction for our second track. Note how we use an affine transform to scale the second track and move it to the right side of the frame.
AVMutableVideoCompositionLayerInstruction *SecondlayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:secondTrack];
CGAffineTransform SecondScale = CGAffineTransformMakeScale(0.68f,0.68f);
CGAffineTransform SecondMove = CGAffineTransformMakeTranslation(318,0);
[SecondlayerInstruction setTransform:CGAffineTransformConcat(SecondScale,SecondMove) atTime:kCMTimeZero];
//Now we add our two AVMutableVideoCompositionLayerInstruction objects to the AVMutableVideoCompositionInstruction as an array.
MainInstruction.layerInstructions = [NSArray arrayWithObjects:FirstlayerInstruction,SecondlayerInstruction,nil];
//Now we create the AVMutableVideoComposition object. We can add multiple AVMutableVideoCompositionInstruction objects to it; in this example we have only one. You can use multiple AVMutableVideoCompositionInstruction objects to add layered effects such as fades and transitions, but make sure their time ranges don't overlap.
AVMutableVideoComposition *MainCompositionInst = [AVMutableVideoComposition videoComposition];
MainCompositionInst.instructions = [NSArray arrayWithObject:MainInstruction];
MainCompositionInst.frameDuration = CMTimeMake(1, 30);
MainCompositionInst.renderSize = CGSizeMake(640, 480);
//Finally just add the newly created AVMutableComposition with multiple tracks to an AVPlayerItem and play it using AVPlayer.
AVPlayerItem * newPlayerItem = [AVPlayerItem playerItemWithAsset:mixComposition];
newPlayerItem.videoComposition = MainCompositionInst;
self.mPlayer = [AVPlayer playerWithPlayerItem:newPlayerItem];
[self.mPlaybackView setPlayer:self.mPlayer];
[self.mPlayer play];
// [self.mPlayer addObserver:self forKeyPath:@"status" options:0 context:AVPlayerDemoPlaybackViewControllerStatusObservationContext];
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [self newUniqueAudioFileURL];
exporter.videoComposition = MainCompositionInst;
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
if (exporter.status == AVAssetExportSessionStatusCompleted) {
ALAssetsLibrary *assetsLibrary = [[ALAssetsLibrary alloc] init];
self.mPlayer = [AVPlayer playerWithURL:exporter.outputURL];
//self.mPlayer2 = [AVPlayer playerWithURL:url];
//[self.mPlayer addObserver:self forKeyPath:#"status" options:0 context:AVPlayerDemoPlaybackViewControllerStatusObservationContext];
if ([assetsLibrary videoAtPathIsCompatibleWithSavedPhotosAlbum:exporter.outputURL]) {
[assetsLibrary writeVideoAtPathToSavedPhotosAlbum:exporter.outputURL completionBlock:NULL];
}
}
});
}];
}
Source: https://abdulazeem.wordpress.com/2012/04/02/
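The snippet above calls a helper, newUniqueAudioFileURL, that isn't shown in the answer. A minimal sketch of what such a helper might look like is below; writing into the temporary directory and using a .mov extension are my assumptions, not something stated in the original post:
// Hypothetical implementation of the helper used above. Writing into
// NSTemporaryDirectory() and the .mov extension are assumptions.
- (NSURL *)newUniqueAudioFileURL
{
NSString *fileName = [NSString stringWithFormat:@"stitched-%@.mov", [[NSProcessInfo processInfo] globallyUniqueString]];
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:fileName];
// Remove any stale file so AVAssetExportSession does not fail because the file already exists.
[[NSFileManager defaultManager] removeItemAtPath:path error:NULL];
return [NSURL fileURLWithPath:path];
}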
Related
In my app, I need to edit a video recorded with AVFoundation.
Users can add multiple images to the video, and they may drag the images anywhere on the full video screen.
Editing view:
Editing view and Exported View
Here is the code:
AVMutableVideoCompositionInstruction *instruction = nil;
AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
AVAssetTrack *assetVideoTrack = nil;
AVAssetTrack *assetAudioTrack = nil;
// Check if the asset contains video and audio tracks
if ([[self.inputAsset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
assetVideoTrack = [self.inputAsset tracksWithMediaType:AVMediaTypeVideo][0];
}
if ([[self.inputAsset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
assetAudioTrack = [self.inputAsset tracksWithMediaType:AVMediaTypeAudio][0];
}
CMTime insertionPoint = kCMTimeZero;
NSError *error = nil;
// Check if a composition already exists, else create a composition using the input asset
self.composition = [AVMutableComposition composition];
// Insert the video and audio tracks from AVAsset
if (assetVideoTrack != nil) {
AVMutableCompositionTrack *compositionVideoTrack = [self.composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [self.inputAsset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];
}
if (assetAudioTrack != nil) {
AVMutableCompositionTrack *compositionAudioTrack = [self.composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [self.inputAsset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
}
CGAffineTransform t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height,0);
// Rotate transformation
CGAffineTransform t2 = CGAffineTransformRotate(t1, ( ( 90 ) / 180.0 * M_PI ));
// Create a new video composition
self.videoComposition = [AVMutableVideoComposition videoComposition];
//self.videoComposition.renderSize = CGSizeMake([UIScreen mainScreen].bounds.size.width, [UIScreen mainScreen].bounds.size.height);
self.videoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height, assetVideoTrack.naturalSize.width);
self.videoComposition.frameDuration = CMTimeMake(1, 30);
// The rotate transform is set on a layer instruction
instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [self.composition duration]);
layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(self.composition.tracks)[0]];
[layerInstruction setTransform:t2 atTime:kCMTimeZero];
instruction.layerInstructions = @[layerInstruction];
self.videoComposition.instructions = @[instruction];
if I change
self.videoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height, assetVideoTrack.naturalSize.width);
to
self.videoComposition.renderSize = CGSizeMake([UIScreen mainScreen].bounds.size.width, [UIScreen mainScreen].bounds.size.height);
In addition, I use this code to add the image layers to the video:
- (void)exportWillBegin
{
//CALayer *exportWatermarkLayer = [self copyWatermarkLayer:self.watermarkLayer];
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, self.videoComposition.renderSize.width, self.videoComposition.renderSize.height);
videoLayer.frame = CGRectMake(0, 0, self.videoComposition.renderSize.width, self.videoComposition.renderSize.height);
[parentLayer addSublayer:videoLayer];
//exportWatermarkLayer.position = CGPointMake(self.videoComposition.renderSize.width/2, self.videoComposition.renderSize.height/4);
//[parentLayer addSublayer:exportWatermarkLayer];
for (NSInteger i = 0; i < self.stickerViews.count; i++) {
WTStickerView *sticker = [self.stickerViews objectAtIndex:i];
CALayer *layer = sticker.contentView.layer;
// CGAffineTransform transform = CGAffineTransformIdentity;
layer.affineTransform = sticker.transform;
layer.position = sticker.center;
[parentLayer addSublayer:layer];
}
self.videoComposition.animationTool = [AVVideoCompositionCoreAnimationTool videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
}
Can anyone help me fix this? The image layers do not end up in the position I dragged them to.
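One likely cause (a guess, since the editing view setup isn't shown): layers added through AVVideoCompositionCoreAnimationTool are positioned in the video composition's render coordinate space, whose size is videoComposition.renderSize and whose origin is at the bottom-left, while sticker.center is in the editing view's top-left-origin coordinate space. A sketch of converting each sticker's position before adding its layer, assuming the editing view is called self.editingView (a name introduced here for illustration) and the video fills that view:
// Sketch only: map sticker positions from view coordinates into the
// composition's render space. self.editingView is an assumed property name.
CGSize renderSize = self.videoComposition.renderSize;
CGSize viewSize = self.editingView.bounds.size;
CGFloat scaleX = renderSize.width / viewSize.width;
CGFloat scaleY = renderSize.height / viewSize.height;
for (WTStickerView *sticker in self.stickerViews) {
CALayer *layer = sticker.contentView.layer;
layer.affineTransform = sticker.transform;
// Scale into render pixels and flip the Y axis, because the exported
// layer tree has its origin at the bottom-left corner.
layer.position = CGPointMake(sticker.center.x * scaleX, renderSize.height - sticker.center.y * scaleY);
[parentLayer addSublayer:layer];
}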
I'm trying to loop some fragments of a recorded video and merge them into one video.
I've successfully merged and exported a composition with up to 16 tracks. But when I first play the composition using AVPlayer before merging, I can only export a maximum of 8 tracks.
First, I create AVComposition and AVVideoComposition
+(void)previewUserClipDanceWithAudio:(NSURL*)videoURL audioURL:(NSURL*)audioFile loop:(NSArray*)loopTime slowMotion:(NSArray*)slowFactor showInViewController:(UIViewController*)viewController completion:(void(^)(BOOL success, AVVideoComposition* videoComposition, AVComposition* composition))completion{
AVMutableComposition* mixComposition = [[AVMutableComposition alloc] init];
NSMutableArray *arrayInstruction = [[NSMutableArray alloc] init];
AVMutableVideoCompositionInstruction *videoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
AVURLAsset *audioAsset = [[AVURLAsset alloc]initWithURL:audioFile options:nil];
//NSLog(#"audio File %#",audioFile);
CMTime duration = kCMTimeZero;
AVAsset *currentAsset = [AVAsset assetWithURL:videoURL];
BOOL isCurrentAssetPortrait = YES;
for(NSInteger i=0;i< [loopTime count]; i++) {
//handle looptime array
NSInteger loopDur = [[loopTime objectAtIndex:i] intValue];
NSInteger value = labs(loopDur);
//NSLog(#"loopInfo %d value %d",loopInfo,value);
//handle slowmotion array
double slowInfo = [[slowFactor objectAtIndex:i] doubleValue];
double videoScaleFactor = fabs(slowInfo);
AVMutableCompositionTrack *currentTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *audioTrack;
audioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
if (i==0) {
[currentTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, currentAsset.duration) ofTrack:[[currentAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:duration error:nil];
[audioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, currentAsset.duration) ofTrack:[[currentAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:duration error:nil];
} else {
[currentTrack insertTimeRange:CMTimeRangeMake(CMTimeSubtract(currentAsset.duration, CMTimeMake(value, 10)), CMTimeMake(value, 10)) ofTrack:[[currentAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:duration error:nil];
if (videoScaleFactor==1) {
[audioTrack insertTimeRange:CMTimeRangeMake(CMTimeSubtract(currentAsset.duration, CMTimeMake(value, 10)), CMTimeMake(value, 10)) ofTrack:[[currentAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:duration error:nil];
}
//slow motion here
if (videoScaleFactor!=1) {
[currentTrack scaleTimeRange:CMTimeRangeMake(CMTimeSubtract(currentAsset.duration, CMTimeMake(value, 10)), CMTimeMake(value, 10))
toDuration:CMTimeMake(value*videoScaleFactor, 10)];
NSLog(#"slowmo %f",value*videoScaleFactor);
}
}
AVMutableVideoCompositionLayerInstruction *currentAssetLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:currentTrack];
AVAssetTrack *currentAssetTrack = [[currentAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
BOOL isCurrentAssetPortrait = YES;
//CGFloat assetScaleToFitRatio;
//assetScaleToFitRatio = [self getScaleToFitRatioCurrentTrack:currentTrack];
if(isCurrentAssetPortrait){
//NSLog(#"portrait");
if (slowInfo<0) {
CGRect screenRect = [[UIScreen mainScreen] bounds];
CGFloat ratio = screenRect.size.height / screenRect.size.width;
// we have to adjust the ratio for 16:9 screens
if (ratio == 1.775) ratio = 1.77777777777778;
CGFloat complimentSize = (currentAssetTrack.naturalSize.height*ratio);
CGFloat tx = (currentAssetTrack.naturalSize.width-complimentSize)/2;
// invert translation because of portrait
tx *= -1;
// t1: rotate and position video since it may have been cropped to screen ratio
CGAffineTransform t1 = CGAffineTransformTranslate(currentAssetTrack.preferredTransform, tx, 0);
// t2/t3: mirror video vertically
CGAffineTransform t2 = CGAffineTransformTranslate(t1, currentAssetTrack.naturalSize.width, 0);
CGAffineTransform t3 = CGAffineTransformScale(t2, -1, 1);
[currentAssetLayerInstruction setTransform:t3 atTime:duration];
} else if (loopDur<0) {
CGRect screenRect = [[UIScreen mainScreen] bounds];
CGFloat ratio = screenRect.size.height / screenRect.size.width;
// we have to adjust the ratio for 16:9 screens
if (ratio == 1.775) ratio = 1.77777777777778;
CGFloat complimentSize = (currentAssetTrack.naturalSize.height*ratio);
CGFloat tx = (currentAssetTrack.naturalSize.width-complimentSize)/2;
// invert translation because of portrait
tx *= -1;
// t1: rotate and position video since it may have been cropped to screen ratio
CGAffineTransform t1 = CGAffineTransformTranslate(currentAssetTrack.preferredTransform, tx, 0);
// t2/t3: mirror video horizontally
CGAffineTransform t2 = CGAffineTransformTranslate(t1, 0, currentAssetTrack.naturalSize.height);
CGAffineTransform t3 = CGAffineTransformScale(t2, 1, -1);
[currentAssetLayerInstruction setTransform:t3 atTime:duration];
} else {
[currentAssetLayerInstruction setTransform:currentAssetTrack.preferredTransform atTime:duration];
}
}else{
// CGFloat translateAxisX = (currentTrack.naturalSize.width > MAX_WIDTH )?(0.0):0.0;// if use <, 640 video will be moved left by 10px. (float)(MAX_WIDTH - currentTrack.naturalSize.width)/(float)4.0
// CGAffineTransform FirstAssetScaleFactor = CGAffineTransformMakeScale(assetScaleToFitRatio,assetScaleToFitRatio);
// [currentAssetLayerInstruction setTransform:
// CGAffineTransformConcat(CGAffineTransformConcat(currentAssetTrack.preferredTransform, FirstAssetScaleFactor),CGAffineTransformMakeTranslation(translateAxisX, 0)) atTime:duration];
}
if (i==0) {
duration=CMTimeAdd(duration, currentAsset.duration);
} else {
if (videoScaleFactor!=1) {
duration=CMTimeAdd(duration, CMTimeMake(value*videoScaleFactor, 10));
} else {
duration=CMTimeAdd(duration, CMTimeMake(value, 10));
}
}
[currentAssetLayerInstruction setOpacity:0.0 atTime:duration];
[arrayInstruction addObject:currentAssetLayerInstruction];
}
AVMutableCompositionTrack *AudioBGTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[AudioBGTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, audioAsset.duration) ofTrack:[[audioAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0] atTime:CMTimeSubtract(duration, audioAsset.duration) error:nil];
videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, duration);
videoCompositionInstruction.layerInstructions = arrayInstruction;
CGSize naturalSize;
if(isCurrentAssetPortrait){
naturalSize = CGSizeMake(MAX_HEIGHT,MAX_WIDTH);//currentAssetTrack.naturalSize.height,currentAssetTrack.naturalSize.width);
} else {
naturalSize = CGSizeMake(MAX_WIDTH,MAX_HEIGHT);//currentAssetTrack.naturalSize;
}
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.instructions = [NSArray arrayWithObject:videoCompositionInstruction];
videoComposition.frameDuration = CMTimeMake(1, 30);
videoComposition.renderSize = CGSizeMake(naturalSize.width,naturalSize.height);
NSLog(#"prepared");
AVVideoComposition *composition = [videoComposition copy];
AVComposition *mixedComposition = [mixComposition copy];
completion(YES, composition, mixedComposition);
}
Then I set up the AVPlayer:
-(void)playVideoWithComposition:(AVVideoComposition*)videoComposition inMutableComposition:(AVComposition*)composition{
MBProgressHUD *hud = [MBProgressHUD showHUDAddedTo:self.view animated:YES];
hud.label.text = myLanguage(@"kMergeClip");
savedComposition = [composition copy];
savedVideoComposition = [videoComposition copy];
playerItem = [AVPlayerItem playerItemWithAsset:composition];
playerItem.videoComposition = videoComposition;
[[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(repeatVideo:) name:AVPlayerItemDidPlayToEndTimeNotification object:playerItem];
if (!player) {
player = [AVPlayer playerWithPlayerItem:playerItem];
layer = [AVPlayerLayer playerLayerWithPlayer:player];
layer.frame = [UIScreen mainScreen].bounds;
[self.ibPlayerView.layer insertSublayer:layer atIndex:0];
NSLog(#"create new player");
}
if (player.currentItem != playerItem ) {
[player replaceCurrentItemWithPlayerItem:playerItem];
}
player.actionAtItemEnd = AVPlayerActionAtItemEndNone;
//[player seekToTime:kCMTimeZero];
[playerItem addObserver:self
forKeyPath:@"status"
options:NSKeyValueObservingOptionInitial | NSKeyValueObservingOptionNew
context:@"AVPlayerStatus"];
}
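The repeatVideo: selector registered above isn't shown in the question. Because actionAtItemEnd is set to AVPlayerActionAtItemEndNone, a typical handler (my assumption, not the poster's actual code) simply seeks the item back to the start and keeps playing:
// Assumed looping handler for AVPlayerItemDidPlayToEndTimeNotification.
-(void)repeatVideo:(NSNotification*)notification{
AVPlayerItem *item = notification.object;
[item seekToTime:kCMTimeZero];
[player play];
}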
When the user has previewed all the videos they want and hits Save, I use this method to export:
+(void)mergeUserCLip:(AVVideoComposition*)videoComposition withAsset:(AVComposition*)mixComposition showInViewController:(UIViewController*)viewController completion:(void(^)(BOOL success, NSURL *fileURL))completion{
MBProgressHUD *hud = [MBProgressHUD showHUDAddedTo:viewController.view animated:YES];
hud.mode = MBProgressHUDModeDeterminateHorizontalBar;
hud.label.text = myLanguage(@"kMergeClip");
//Name merge clip using beat name
//NSString* beatName = [[[NSString stringWithFormat:@"%@",audioFile] lastPathComponent] stringByDeletingPathExtension];
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *tmpDir = [[documentsDirectory stringByDeletingLastPathComponent] stringByAppendingPathComponent:#"tmp"];
NSString *myPathDocs = [tmpDir stringByAppendingPathComponent:[NSString stringWithFormat:@"merge-beat.mp4"]];
//Not remove here, will remove when call previewPlayVC
[[NSFileManager defaultManager] removeItemAtPath:myPathDocs error:nil];
// 1 - set up the overlay
CALayer *overlayLayer = [CALayer layer];
UIImage *overlayImage = [UIImage imageNamed:@"watermark.png"];
[overlayLayer setContents:(id)[overlayImage CGImage]];
overlayLayer.frame = CGRectMake(720-221, 1280-109, 181, 69);
[overlayLayer setMasksToBounds:YES];
// aLayer = [CALayer layer];
// [aLayer addSublayer:labelLogo.layer];
// aLayer.frame = CGRectMake(MAX_WIDTH- labelLogo.width - 10.0, MAX_HEIGHT-50.0, 20.0, 20.0);
// aLayer.opacity = 1;
// 2 - set up the parent layer
CALayer *parentLayer = [CALayer layer];
CALayer *videoLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, MAX_HEIGHT,MAX_WIDTH);
videoLayer.frame = CGRectMake(0, 0, MAX_HEIGHT,MAX_WIDTH);
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:overlayLayer];
// 3 - apply magic
AVMutableVideoComposition *mutableVideoComposition = [videoComposition mutableCopy];
mutableVideoComposition.animationTool = [AVVideoCompositionCoreAnimationTool
videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer inLayer:parentLayer];
NSURL *url = [NSURL fileURLWithPath:myPathDocs];
myLog(#"Path: %#", myPathDocs);
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPreset1280x720];
exporter.outputURL = url;
exporter.outputFileType = AVFileTypeMPEG4;
exporter.videoComposition = mutableVideoComposition;
exporter.shouldOptimizeForNetworkUse = NO;
[exporter exportAsynchronouslyWithCompletionHandler:^ {
//NSLog(#"exporting");
switch (exporter.status) {
case AVAssetExportSessionStatusCompleted: {
NSURL *url = [NSURL fileURLWithPath:myPathDocs];
hud.progress = 1.0f;
dispatch_async(dispatch_get_main_queue(), ^{
[MBProgressHUD hideHUDForView:viewController.view animated:YES];
});
[self checkTmpSize];
if (completion) {
completion(YES, url);
}
}
break;
case AVAssetExportSessionStatusExporting:
myLog(#"Exporting!");
break;
case AVAssetExportSessionStatusWaiting:
myLog(#"Waiting");
break;
default:
break;
}
}];
}
If the selected options loop fewer than 8 times, the above code works fine.
If the options loop more than 8 times, the export session freezes and exporter.progress stays at 0.0000000.
If I remove this line
playerItem.videoComposition = videoComposition;
then I cannot preview the mixed video, but I am able to export normally (up to 16 tracks).
Or, if I remove this line from the export code:
exporter.videoComposition = mutableVideoComposition;
then I can preview the mixed video and export normally, but WITHOUT the video composition.
So I guess there's something wrong with AVVideoComposition and/or the way I implement it.
I would appreciate any suggestion.
Many thanks.
I strongly suspect the reason for this is that using AVPlayer to preview the video somehow hinders AVAssetExportSession, as described in the posts below:
iOS 5: Error merging 3 videos with AVAssetExportSession
AVPlayerItem fails with AVStatusFailed and error code “Cannot Decode”
I ran into this issue while attempting to concatenate N videos while playing up to 3 of them in AVPlayer instances in a UICollectionView. As discussed in the Stack Overflow question you linked, iOS can only handle so many instances of AVPlayer; each instance uses up a "render pipeline". I discovered that each instance of AVMutableCompositionTrack also uses up one of these render pipelines.
Therefore if you use too many AVPlayer instances or try to create an AVMutableComposition with too many AVMutableCompositionTrack tracks, you can run out of resources to decode H264 and you will receive the "Cannot Decode" error. I was able to get around the issue by only using two instances of AVMutableCompositionTrack. This way I could "overlap" segments of video while also applying transitions (which requires two video tracks to "play" concurrently).
In short: minimize your usage of AVMutableCompositionTrack as well as AVPlayer. You can check out the AVCustomEdit sample code by Apple for an example of this. Specifically, check out the buildTransitionComposition method inside the APLSimpleEditor class.
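To make the two-track approach concrete, here is a minimal sketch (not the APLSimpleEditor code itself, and the clips array is an assumption) of inserting any number of clips into just two reused video tracks by alternating between them:
// Sketch: reuse two video tracks for N clips so only two render pipelines are needed.
// `clips` is assumed to be an NSArray of AVAsset objects.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *videoTracks[2];
videoTracks[0] = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
videoTracks[1] = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime cursor = kCMTimeZero;
for (NSUInteger i = 0; i < clips.count; i++) {
AVAsset *clip = [clips objectAtIndex:i];
AVAssetTrack *source = [[clip tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
// Alternate 0, 1, 0, 1, ... so consecutive clips can overlap for transitions
// without ever adding a third video track.
[videoTracks[i % 2] insertTimeRange:CMTimeRangeMake(kCMTimeZero, clip.duration) ofTrack:source atTime:cursor error:nil];
cursor = CMTimeAdd(cursor, clip.duration);
}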
Try this: clear the player item before exporting:
[self.player replaceCurrentItemWithPlayerItem:nil];
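In the code from the question, that would mean detaching the composition from the preview player just before creating the export session, and re-attaching it afterwards if the preview is still needed. A rough sketch, reusing the exporter and playerItem names from the question's code:
// Free the preview's decoder pipelines before exporting.
[self.player replaceCurrentItemWithPlayerItem:nil];
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
// Optionally restore the preview once the export has finished.
[self.player replaceCurrentItemWithPlayerItem:playerItem];
});
}];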
I want to export and compress a video from the iPod library, but exportAsynchronouslyWithCompletionHandler works correctly for some videos and not for others. When it fails, it simply does nothing and doesn't throw an exception.
Strangely, if I comment out the call to setVideoComposition:, exportAsynchronouslyWithCompletionHandler works normally.
Here is my code:
AVAsset *_videoAsset = [AVAsset assetWithURL:[NSURL URLWithString:filmElementModel.alassetUrl]];
CMTime assetTime = [_videoAsset duration];
AVAssetTrack *avAssetTrack = [[_videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
Float64 duration = CMTimeGetSeconds(assetTime);
AVMutableComposition *avMutableComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *avMutableCompositionTrack = [avMutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
NSError *error = nil;
[avMutableCompositionTrack insertTimeRange:CMTimeRangeMake(CMTimeMakeWithSeconds(0.0f, 30), CMTimeMakeWithSeconds(duration>8.0f?8.0f:duration, 30))
ofTrack:avAssetTrack
atTime:kCMTimeZero
error:&error];
AVMutableVideoComposition *avMutableVideoComposition = [AVMutableVideoComposition videoComposition];
avMutableVideoComposition.frameDuration = CMTimeMake(1, 30);
AVMutableVideoCompositionLayerInstruction *layerInstruciton = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:avMutableComposition.tracks[0]];
[layerInstruciton setTransform:[[[_videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] preferredTransform] atTime:kCMTimeZero];
[layerInstruciton setOpacity:0.0f atTime:[_videoAsset duration]];
AVMutableVideoCompositionInstruction *avMutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
[avMutableVideoCompositionInstruction setTimeRange:CMTimeRangeMake(kCMTimeZero, [avMutableComposition duration])];
avMutableVideoCompositionInstruction.layerInstructions = [NSArray arrayWithObject:layerInstruciton];
if (avAssetTrack.preferredTransform.a) {
NSLog(#"横向");
avMutableVideoComposition.renderSize = CGSizeMake(avAssetTrack.naturalSize.width, avAssetTrack.naturalSize.height);
}else
{
avMutableVideoComposition.renderSize = CGSizeMake(avAssetTrack.naturalSize.height, avAssetTrack.naturalSize.width);
}
avMutableVideoComposition.instructions = [NSArray arrayWithObject:avMutableVideoCompositionInstruction];
// the url for save video
NSString *outUrlString = ITTPathForBabyShotResource([NSString stringWithFormat:@"%@/%@.mp4",DATA_ENV.userModel.userId,filmElementModel.filmElementId]);
NSFileManager *fm = [[NSFileManager alloc] init];
if ([fm fileExistsAtPath:outUrlString]) {
NSLog(#"video is have. then delete that");
if ([fm removeItemAtPath:outUrlString error:&error]) {
NSLog(#"delete is ok");
}else {
NSLog(#"delete is no error = %#",error.description);
}
}
CGSize renderSize = CGSizeMake(1280, 720);
if (MIN(avAssetTrack.naturalSize.width, avAssetTrack.naturalSize.height)<720) {
renderSize =avAssetTrack.naturalSize;
}
long long fileLimite =renderSize.width*renderSize.height*(duration>8.0f?8.0f:duration)/2;
_avAssetExportSession = [[AVAssetExportSession alloc] initWithAsset:avMutableComposition presetName:AVAssetExportPreset1280x720];
[_avAssetExportSession setVideoComposition:avMutableVideoComposition];
[_avAssetExportSession setOutputURL:[NSURL fileURLWithPath:outUrlString]];
[_avAssetExportSession setOutputFileType:AVFileTypeQuickTimeMovie];
[_avAssetExportSession setFileLengthLimit: fileLimite];
[_avAssetExportSession setShouldOptimizeForNetworkUse:YES];
[_avAssetExportSession exportAsynchronouslyWithCompletionHandler:^(void){
switch (_avAssetExportSession.status) {
case AVAssetExportSessionStatusFailed:
{
}
break;
case AVAssetExportSessionStatusCompleted:
{
}
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"export cancelled");
break;
case AVAssetExportSessionStatusExporting:
NSLog(#"AVAssetExportSessionStatusExporting");
break;
case AVAssetExportSessionStatusWaiting:
NSLog(#"AVAssetExportSessionStatusWaiting");
break;
}
}];
if (_avAssetExportSession.status != AVAssetExportSessionStatusCompleted){
NSLog(#"Retry export");
}
I solved this problem!
[avMutableCompositionTrack insertTimeRange:CMTimeRangeMake(CMTimeMakeWithSeconds(0.0f, 30), CMTimeMakeWithSeconds(duration>8.0f?8.0f:duration, 30))
ofTrack:avAssetTrack
atTime:kCMTimeZero
error:&error];
I replaced CMTimeMakeWithSeconds(0.0f, 30) with CMTimeMakeWithSeconds(0.1f, 30) in the insertTimeRange call.
But I don't know why this makes it work properly.
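For clarity, the changed call looks like this; only the start time differs from the snippet above (skipping the very first instant at time zero is what made the export work for the problematic assets):
// Same insertion as before, but starting 0.1 s into the source track.
[avMutableCompositionTrack insertTimeRange:CMTimeRangeMake(CMTimeMakeWithSeconds(0.1f, 30), CMTimeMakeWithSeconds(duration>8.0f?8.0f:duration, 30))
ofTrack:avAssetTrack
atTime:kCMTimeZero
error:&error];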
I have an application that is basically a single view, and when the user taps a button, the camera view starts.
I want to allow all orientations when the camera isn't shown, but while the camera is shown I need to force the application into portrait mode, because otherwise the video comes out rotated. When the camera is closed, the app may rotate again.
Do you know if I can solve the problem with the video orientation?
Or how can I force the app into portrait mode? I know that on earlier iOS versions you could use [UIDevice setOrientation:], but it's deprecated in the latest iOS.
How can I do this for iOS 5 and iOS 6?
I've tried with:
[self presentViewController:[UIViewController new] animated:NO
completion:^{ [self dismissViewControllerAnimated:NO completion:nil]; }];
And in shouldAutorotateToInterfaceOrientation method:
if (state == CAMERA) {
return (interfaceOrientation == UIInterfaceOrientationPortrait);
}else{
return YES;
}
It works OK and forces the app into portrait. But when the camera is closed, it doesn't work properly; it doesn't rotate well.
I mean, when the camera is closed this is what happens:
The app is in portrait.
If I try to rotate the app, the device rotates but the application does not. I can see that the iPad status bar (time, battery, etc.) rotates, but the app does not.
If I go back to portrait and then rotate the device, it works OK.
Do you know what could be the problem?
Thanks in advance.
I think I've found a solution to the problem of changing the camera and device orientation at the same time.
I call this code when I initialize the camera, and also in the shouldAutorotateToInterfaceOrientation: method, where I allow all orientations.
AVCaptureVideoOrientation newOrientation;
UIInterfaceOrientation deviceOrientation = [UIApplication sharedApplication].statusBarOrientation;
NSLog(#"deviceOrientation %c",deviceOrientation);
switch (deviceOrientation)
{
case UIInterfaceOrientationPortrait:
NSLog(#"UIInterfaceOrientationPortrait");
newOrientation = AVCaptureVideoOrientationPortrait;
break;
case UIInterfaceOrientationLandscapeRight:
NSLog(#"UIInterfaceOrientationLandscapeRight");
newOrientation = AVCaptureVideoOrientationLandscapeRight;
break;
case UIInterfaceOrientationLandscapeLeft:
NSLog(#"UIInterfaceOrientationLandscapeLeft");
newOrientation = AVCaptureVideoOrientationLandscapeLeft;
break;
default:
NSLog(#"default");
newOrientation = AVCaptureVideoOrientationPortrait;
break;
}
if ([self.prevLayer respondsToSelector:@selector(connection)]){
if ([self.prevLayer.connection isVideoOrientationSupported]){
self.prevLayer.connection.videoOrientation = newOrientation;
}else{
NSLog(@"NO respond to selector connection");
}
}else{
if ([self.prevLayer isOrientationSupported]){
self.prevLayer.orientation = newOrientation;
}else{
NSLog(@"NO isOrientationSupported");
}
}
Please try the following code for the orientation; I think it may solve your problem.
- (void)encodeVideoOrientation:(NSURL *)anOutputFileURL
{
CGAffineTransform rotationTransform;
CGAffineTransform rotateTranslate;
CGSize renderSize;
switch (self.recordingOrientation)
{
// set these 3 values based on orientation
}
AVURLAsset * videoAsset = [[AVURLAsset alloc]initWithURL:anOutputFileURL options:nil];
AVAssetTrack *sourceVideoTrack = [[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *sourceAudioTrack = [[videoAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
AVMutableComposition* composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
ofTrack:sourceVideoTrack
atTime:kCMTimeZero error:nil];
[compositionVideoTrack setPreferredTransform:sourceVideoTrack.preferredTransform];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, videoAsset.duration)
ofTrack:sourceAudioTrack
atTime:kCMTimeZero error:nil];
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTrack];
[layerInstruction setTransform:rotateTranslate atTime:kCMTimeZero];
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.frameDuration = CMTimeMake(1,30);
videoComposition.renderScale = 1.0;
videoComposition.renderSize = renderSize;
instruction.layerInstructions = [NSArray arrayWithObject: layerInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, videoAsset.duration);
videoComposition.instructions = [NSArray arrayWithObject: instruction];
AVAssetExportSession * assetExport = [[AVAssetExportSession alloc] initWithAsset:composition
presetName:AVAssetExportPresetMediumQuality];
NSString* videoName = @"export.mov";
NSString *exportPath = [NSTemporaryDirectory() stringByAppendingPathComponent:videoName];
NSURL * exportUrl = [NSURL fileURLWithPath:exportPath];
if ([[NSFileManager defaultManager] fileExistsAtPath:exportPath])
{
[[NSFileManager defaultManager] removeItemAtPath:exportPath error:nil];
}
assetExport.outputFileType = AVFileTypeMPEG4;
assetExport.outputURL = exportUrl;
assetExport.shouldOptimizeForNetworkUse = YES;
assetExport.videoComposition = videoComposition;
[assetExport exportAsynchronouslyWithCompletionHandler:
^(void ) {
switch (assetExport.status)
{
case AVAssetExportSessionStatusCompleted:
// export complete
NSLog(#"Export Complete");
break;
case AVAssetExportSessionStatusFailed:
NSLog(#"Export Failed");
NSLog(#"ExportSessionError: %#", [assetExport.error localizedDescription]);
// export error (see exportSession.error)
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export Failed");
NSLog(#"ExportSessionError: %#", [assetExport.error localizedDescription]);
// export cancelled
break;
}
}];
}
Hope this helps!
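The switch at the top of encodeVideoOrientation: is deliberately left empty in the answer ("set these 3 values based on orientation"). Purely as an illustration, and assuming recordingOrientation is a UIDeviceOrientation and the source buffer has a landscape naturalSize of 1920x1080, the portrait case could assign values like these:
// Illustration only: one possible body for the portrait case of the empty
// switch above. The 1920x1080 naturalSize is an assumption; in real code you
// would read it from the source video track before the switch.
case UIDeviceOrientationPortrait:
// Rotate 90 degrees and translate so the frame stays inside the render area.
rotationTransform = CGAffineTransformMakeRotation(M_PI_2);
rotateTranslate = CGAffineTransformTranslate(rotationTransform, 0, -1080.0);
renderSize = CGSizeMake(1080.0, 1920.0);
break;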
I am currently working on an iOS app that merges a desired number of videos. Once the user taps the button to merge the videos, they are joined and then played using AVPlayer as follows:
CMTime nextClipStartTime = kCMTimeZero;
NSInteger i;
CMTime transitionDuration = CMTimeMake(1, 1); // Default transition duration is one second.
// Add two video tracks and two audio tracks.
AVMutableCompositionTrack *compositionVideoTracks[2];
AVMutableCompositionTrack *compositionAudioTracks[2];
compositionVideoTracks[0] = [self.mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
compositionVideoTracks[1] = [self.mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
compositionAudioTracks[0] = [self.mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
compositionAudioTracks[1] = [self.mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
CMTimeRange *passThroughTimeRanges = alloca(sizeof(CMTimeRange) * [self.selectedAssets count]);
CMTimeRange *transitionTimeRanges = alloca(sizeof(CMTimeRange) * [self.selectedAssets count]);
// Place clips into alternating video & audio tracks in composition, overlapped by transitionDuration.
for (i = 0; i < [self.selectedAssets count]; i++ )
{
NSInteger alternatingIndex = i % 2; // alternating targets: 0, 1, 0, 1, ...
AVURLAsset *asset = [self.selectedAssets objectAtIndex:i];
NSLog(#"number of tracks %d",asset.tracks.count);
CMTimeRange assetTimeRange;
assetTimeRange.start = kCMTimeZero;
assetTimeRange.duration = asset.duration;
NSValue *clipTimeRange = [NSValue valueWithCMTimeRange:assetTimeRange];
CMTimeRange timeRangeInAsset;
if (clipTimeRange)
timeRangeInAsset = [clipTimeRange CMTimeRangeValue];
else
timeRangeInAsset = CMTimeRangeMake(kCMTimeZero, [asset duration]);
AVAssetTrack *clipVideoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[compositionVideoTracks[alternatingIndex] insertTimeRange:timeRangeInAsset ofTrack:clipVideoTrack atTime:nextClipStartTime error:nil];
AVAssetTrack *clipAudioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
[compositionAudioTracks[alternatingIndex] insertTimeRange:timeRangeInAsset ofTrack:clipAudioTrack atTime:nextClipStartTime error:nil];
// Remember the time range in which this clip should pass through.
// Every clip after the first begins with a transition.
// Every clip before the last ends with a transition.
// Exclude those transitions from the pass through time ranges.
passThroughTimeRanges[i] = CMTimeRangeMake(nextClipStartTime, timeRangeInAsset.duration);
if (i > 0) {
passThroughTimeRanges[i].start = CMTimeAdd(passThroughTimeRanges[i].start, transitionDuration);
passThroughTimeRanges[i].duration = CMTimeSubtract(passThroughTimeRanges[i].duration, transitionDuration);
}
if (i+1 < [self.selectedAssets count]) {
passThroughTimeRanges[i].duration = CMTimeSubtract(passThroughTimeRanges[i].duration, transitionDuration);
}
// The end of this clip will overlap the start of the next by transitionDuration.
// (Note: this arithmetic falls apart if timeRangeInAsset.duration < 2 * transitionDuration.)
nextClipStartTime = CMTimeAdd(nextClipStartTime, timeRangeInAsset.duration);
nextClipStartTime = CMTimeSubtract(nextClipStartTime, transitionDuration);
// Remember the time range for the transition to the next item.
transitionTimeRanges[i] = CMTimeRangeMake(nextClipStartTime, transitionDuration);
}
// Set up the video composition if we are to perform crossfade or push transitions between clips.
NSMutableArray *instructions = [NSMutableArray array];
// Cycle between "pass through A", "transition from A to B", "pass through B", "transition from B to A".
for (i = 0; i < [self.selectedAssets count]; i++ )
{
NSInteger alternatingIndex = i % 2; // alternating targets
// Pass through clip i.
AVMutableVideoCompositionInstruction *passThroughInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
passThroughInstruction.timeRange = passThroughTimeRanges[i];
AVMutableVideoCompositionLayerInstruction *passThroughLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTracks[alternatingIndex]];
passThroughInstruction.layerInstructions = [NSArray arrayWithObject:passThroughLayer];
[instructions addObject:passThroughInstruction];
AVMutableVideoCompositionLayerInstruction *fromLayer;
AVMutableVideoCompositionLayerInstruction *toLayer;
if (i+1 < [self.selectedAssets count])
{
// Add transition from clip i to clip i+1.
AVMutableVideoCompositionInstruction *transitionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
transitionInstruction.timeRange = transitionTimeRanges[i];
fromLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTracks[alternatingIndex]];
toLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:compositionVideoTracks[1-alternatingIndex]];
// Fade out the fromLayer by setting a ramp from 1.0 to 0.0.
[fromLayer setOpacityRampFromStartOpacity:1.0 toEndOpacity:0.0 timeRange:transitionTimeRanges[i]];
transitionInstruction.layerInstructions = [NSArray arrayWithObjects:fromLayer, toLayer, nil];
[instructions addObject:transitionInstruction];
}
AVURLAsset *sourceAsset = [AVURLAsset URLAssetWithURL:[self.selectedItemsURL objectAtIndex:i] options:[NSDictionary dictionaryWithObject:[NSNumber numberWithBool:YES] forKey:AVURLAssetPreferPreciseDurationAndTimingKey]];
AVAssetTrack *sourceVideoTrack = [[sourceAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
CGSize temp = CGSizeApplyAffineTransform(sourceVideoTrack.naturalSize, sourceVideoTrack.preferredTransform);
CGSize size = CGSizeMake(fabsf(temp.width), fabsf(temp.height));
CGAffineTransform transform = sourceVideoTrack.preferredTransform;
self.videoComposition.renderSize = sourceVideoTrack.naturalSize;
if (size.width > size.height) {
[fromLayer setTransform:transform atTime:sourceAsset.duration];
} else {
float s = size.width/size.height;
CGAffineTransform new = CGAffineTransformConcat(transform, CGAffineTransformMakeScale(s,s));
float x = (size.height - size.width*s)/2;
CGAffineTransform newer = CGAffineTransformConcat(new, CGAffineTransformMakeTranslation(x, 0));
[fromLayer setTransform:newer atTime:sourceAsset.duration];
}
}
self.videoComposition.instructions = instructions;
self.videoComposition.frameDuration = CMTimeMake(1, 30);
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *myPathDocs = [documentsDirectory stringByAppendingPathComponent:[NSString stringWithFormat:@"mergeVideo-%d.mov",arc4random() % 1000]];
NSURL *url = [NSURL fileURLWithPath:myPathDocs];
self.exporter = [[AVAssetExportSession alloc] initWithAsset:self.mixComposition presetName:AVAssetExportPresetMediumQuality];
self.exporter.outputURL=url;
self.exporter.outputFileType = AVFileTypeQuickTimeMovie;
self.exporter.videoComposition = self.videoComposition;
self.exporter.shouldOptimizeForNetworkUse = YES;
self.playerItem = [AVPlayerItem playerItemWithAsset:self.mixComposition];
self.playerItem.videoComposition = self.videoComposition;
AVPlayer *player = [AVPlayer playerWithPlayerItem:self.playerItem];
AVPlayerLayer *playerLayer = [AVPlayerLayer playerLayerWithPlayer:player];
[playerLayer setFrame:CGRectMake(0, 0, self.imageView.frame.size.width, self.imageView.frame.size.height)];
[[[self imageView] layer] addSublayer:playerLayer];
playerLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;
[player play];
[[NSNotificationCenter defaultCenter]
addObserver:self selector:@selector(checkPlayEnded) name:AVPlayerItemDidPlayToEndTimeNotification object:self.playerItem];
I am currently facing the following issues:
If one video is in portrait and the other is in landscape, how can I rotate the portrait video to landscape? My view is in landscape orientation, but the portrait video keeps its original orientation.
(I am loading videos stored in the camera roll, not recording them inside my app.)
Setting the above issue aside: if I merge any number of videos, they work fine. But once I save that new video to my library, load it into my app again, and try to join it with another new video, the resolution gets disturbed, even though both videos play fine separately in the app. How can I solve that?
(I have tried to follow the WWDC 2010 video editing tutorial, so this code is extracted from there.)
You can check the orientation of the video at run time in the above code, at the point where you create the AVMutableVideoCompositionInstruction object.
The code to append in order to fix the issue is:
AVMutableVideoCompositionInstruction *instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mutableComposition duration]);
AVAssetTrack *videoTrack = [[mutableComposition tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVMutableVideoCompositionLayerInstruction * layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:videoTrack];
UIImageOrientation videoAssetOrientation_ = UIImageOrientationUp;
BOOL isVideoAssetPortrait_ = NO;
CGAffineTransform videoTransform = assetVideoTrack.preferredTransform;
if(videoTransform.a == 0 && videoTransform.b == 1.0 && videoTransform.c == -1.0 && videoTransform.d == 0)
{
videoAssetOrientation_= UIImageOrientationRight;
isVideoAssetPortrait_ = YES;
}
if(videoTransform.a == 0 && videoTransform.b == -1.0 && videoTransform.c == 1.0 && videoTransform.d == 0)
{
videoAssetOrientation_ = UIImageOrientationLeft;
isVideoAssetPortrait_ = YES;
}
CGFloat FirstAssetScaleToFitRatio = 320.0 / assetVideoTrack.naturalSize.width;
if(isVideoAssetPortrait_)
{
videoSize=CGSizeMake(350,400);
FirstAssetScaleToFitRatio = 320.0/assetVideoTrack.naturalSize.height;
CGAffineTransform FirstAssetScaleFactor = CGAffineTransformMakeScale(FirstAssetScaleToFitRatio,FirstAssetScaleToFitRatio);
[layerInstruction setTransform:CGAffineTransformConcat(assetVideoTrack.preferredTransform, FirstAssetScaleFactor) atTime:kCMTimeZero];
}
else
{
videoSize=CGSizeMake(assetVideoTrack.naturalSize.width,assetVideoTrack.naturalSize.height);
}
The above code will keep a landscape video in landscape and prevent a portrait video from being converted to landscape.
I hope this helps. Instead of first converting to the proper orientation and then applying the editing, appending this code removes one step: you can do both things (editing and orientation) in one pass, and more quickly.