My app captures a 3-second video clip, and I want to programmatically create a 15-second clip from it by looping it 5 times. Finally, I have to save the 15-second clip to the Camera Roll.
I recorded my 3-second clip via AVCaptureMovieFileOutput, and I have its NSURL from the delegate; the file is currently in NSTemporaryDirectory().
I am using AVAssetWriterInput for the looping, but it asks for a CMSampleBufferRef, like:
[writerInput appendSampleBuffer:sampleBuffer];
How can I get this CMSampleBufferRef from the video in NSTemporaryDirectory()?
I have seen code for converting a UIImage to a CMSampleBufferRef, but I can't find any for a video file.
Any suggestion will be helpful. :)
Finally, I fixed my problem using AVMutableComposition. Here is my code:
AVMutableComposition *mixComposition = [AVMutableComposition new];
AVMutableCompositionTrack *mutableCompVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVURLAsset *videoAsset = [[AVURLAsset alloc] initWithURL:threeSecFileURL options:nil]; // threeSecFileURL is the NSURL of the recorded 3-second clip
CMTimeRange video_timeRange = CMTimeRangeMake(kCMTimeZero, [videoAsset duration]);
CGAffineTransform rotationTransform = CGAffineTransformMakeRotation(M_PI_2);
[mutableCompVideoTrack setPreferredTransform:rotationTransform];
CMTime currentCMTime = kCMTimeZero;
for (NSInteger count = 0 ; count < 5 ; count++)
{
[mutableCompVideoTrack insertTimeRange:video_timeRange ofTrack:[[videoAsset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0] atTime:currentCMTime error:nil];
currentCMTime = CMTimeAdd(currentCMTime, [videoAsset duration]);
}
NSString *fullMoviePath = [NSTemporaryDirectory() stringByAppendingPathComponent:[@"moviefull" stringByAppendingPathExtension:@"mov"]];
NSURL *finalVideoFileURL = [NSURL fileURLWithPath:fullMoviePath];
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPresetPassthrough];
[exportSession setOutputFileType:AVFileTypeQuickTimeMovie];
[exportSession setOutputURL:finalVideoFileURL];
// Use the composition's duration directly so the timescale stays correct.
CMTimeRange range = CMTimeRangeMake(kCMTimeZero, [mixComposition duration]);
[exportSession setTimeRange:range];
[exportSession exportAsynchronouslyWithCompletionHandler:^{
switch ([exportSession status])
{
    case AVAssetExportSessionStatusFailed:
    {
        NSLog(@"Export failed: %@ %@", [[exportSession error] localizedDescription], [[exportSession error] debugDescription]);
        break;
    }
    case AVAssetExportSessionStatusCancelled:
    {
        NSLog(@"Export canceled");
        break;
    }
    case AVAssetExportSessionStatusCompleted:
    {
        NSLog(@"Export complete!");
        break;
    }
    default:
        NSLog(@"default");
        break;
}
}];
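The original goal also included saving the 15-second clip to the Camera Roll. As a hedged sketch of that last step (assuming it runs inside the AVAssetExportSessionStatusCompleted case above, with finalVideoFileURL still in scope), UIKit's save functions could finish the job:
NSString *exportedPath = [finalVideoFileURL path];
// Guard against incompatible formats before writing to the Camera Roll.
if (UIVideoAtPathIsCompatibleWithSavedPhotosAlbum(exportedPath))
{
    // nil target/selector means no save-completion callback is delivered.
    UISaveVideoAtPathToSavedPhotosAlbum(exportedPath, nil, NULL, NULL);
}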
Take a look at AVAssetReader; it can return a CMSampleBufferRef. Keep in mind, you'll need to manipulate the timestamps for your approach to work.
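For reference, a minimal AVAssetReader sketch (assuming fileURL is the NSURL in NSTemporaryDirectory() and writerInput is the AVAssetWriterInput from the question) might look like this:
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:fileURL options:nil];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
NSError *error = nil;
AVAssetReader *reader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
// nil outputSettings vends the samples in their stored (compressed) format.
AVAssetReaderTrackOutput *output = [AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:nil];
[reader addOutput:output];
[reader startReading];
CMSampleBufferRef sampleBuffer = NULL;
while ((sampleBuffer = [output copyNextSampleBuffer]))
{
    // For looping, each pass needs retimed buffers (e.g. via CMSampleBufferCreateCopyWithNewTiming),
    // and appendSampleBuffer: should only run while writerInput.readyForMoreMediaData is YES.
    [writerInput appendSampleBuffer:sampleBuffer];
    CFRelease(sampleBuffer);
}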
It seems that AVFoundation cannot accept one of my videos. I really don't know why. It works with other videos, but not this one.
I'm not even modifying the video, I'm just doing a composition with the video track, and exporting it with the preset "AVAssetExportPresetHighestQuality".
I get this error:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSUnderlyingError=0x60000045a8e0 {Error Domain=NSOSStatusErrorDomain Code=-12769 "(null)"}, NSLocalizedFailureReason=An unknown error occurred (-12769), NSLocalizedDescription=The operation could not be completed}
Do you know if there is something wrong in my code, or if the video is just not supported by AVFoundation ?
Here's the project on Github (it just exports the video to the camera roll):
https://github.com/moonshaped/ExportSessionCrash
Or if you don't want to use Github:
Here's the video:
Dropbox link : https://www.dropbox.com/s/twgah26gqgsv9y9/localStoreTempVideoPath.mp4?dl=0
Or WeTransfer link : https://wetransfer.com/downloads/8f8ab257068461a2c9a051542610725220170606122640/8d934c
And here's the code:
- (void)exportVideo:(AVAsset *)videoAsset
videoDuration:(Float64)videoAssetDuration
to:(NSString *)resultPath{
[Utilities deleteFileIfExists:resultPath];
AVMutableComposition *mainComposition = [[AVMutableComposition alloc] init];
AVMutableCompositionTrack *compositionVideoTrack = [mainComposition addMutableTrackWithMediaType:AVMediaTypeVideo
preferredTrackID:kCMPersistentTrackID_Invalid];
int timeScale = 100000;
int videoDurationI = (int) (videoAssetDuration * (float) timeScale);
CMTime videoDuration = CMTimeMake(videoDurationI, timeScale);
CMTimeRange videoTimeRange = CMTimeRangeMake(kCMTimeZero, videoDuration);
NSArray<AVAssetTrack *> *videoTracks = [videoAsset tracksWithMediaType:AVMediaTypeVideo];
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
[compositionVideoTrack insertTimeRange:videoTimeRange
ofTrack:videoTrack
atTime:kCMTimeZero
error:nil];
NSURL *outputVideoUrl = [NSURL fileURLWithPath:resultPath];
self.exporter = [[AVAssetExportSession alloc] initWithAsset:mainComposition
presetName:AVAssetExportPresetHighestQuality];
self.exporter.outputURL = outputVideoUrl;
self.exporter.outputFileType = AVFileTypeMPEG4;
self.exporter.shouldOptimizeForNetworkUse = YES;
[self.exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
switch (self.exporter.status) {
case AVAssetExportSessionStatusFailed:{
@throw [NSException exceptionWithName:@"failed export"
                               reason:[self.exporter.error description]
                             userInfo:nil];
}
case AVAssetExportSessionStatusCancelled:
    @throw [NSException exceptionWithName:@"cancelled export"
                                   reason:@"Export cancelled"
                                 userInfo:nil];
case AVAssetExportSessionStatusCompleted: {
NSLog(#"Export finished");
}
break;
default:
break;
}
});
}];
}
I did an experiment and came to this: if you trim one or more milliseconds off the videoTimeRange, it works, presumably because the computed duration otherwise slightly overshoots the real end of the track. Try replacing the code block above with:
int timeScale = 100000;
Float64 seconds = CMTimeGetSeconds([videoAsset duration]) - 0.001;
NSUInteger videoDurationI = (NSUInteger) (seconds * (float) timeScale);
CMTime videoDuration = CMTimeMake(videoDurationI, timeScale);
CMTimeRange videoTimeRange = CMTimeRangeMake(kCMTimeZero, videoDuration);
The device you are testing on is not able to decode it. Please try it on a newer device, e.g. an iPhone 6. I tested your media on an iPad simulator running iOS 10.3 and it worked fine there, so it must be something to do with the encoding.
What I wanted: to insert multiple video layers, each with some opacity, ALL at time 0:00 of an AVMutableCompositionTrack.
I read the official AVFoundation documents carefully, along with many WWDC discussions of this topic, but I couldn't understand why the result does NOT follow what the API states.
I can achieve the overlay result with 2 AVPlayerLayers during playback. That should also mean I could use AVVideoCompositionCoreAnimationTool to achieve something similar during export.
But I would rather reserve CALayer for subtitle/image overlays and animations.
What I tried for each inserted AVAsset:
- (void)addVideo:(AVAsset *)asset_in withOpacity:(float)opacity
{
// This is demo for composition of opaque videos. So we all insert video at time - 0:00
[_videoCompositionTrack insertTimeRange:CMTimeRangeMake( kCMTimeZero, asset_in.duration )
ofTrack:[ [asset_in tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ]
atTime:kCMTimeZero error:nil ];
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
AVAssetTrack *assettrack_in = [ [asset_in tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake( kCMTimeZero, assettrack_in.timeRange.duration );
AVMutableVideoCompositionLayerInstruction *videoCompositionLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:_videoCompositionTrack];
[videoCompositionLayerInstruction setTransform:assettrack_in.preferredTransform atTime:kCMTimeZero];
[videoCompositionLayerInstruction setOpacity:opacity atTime:kCMTimeZero];
mutableVideoCompositionInstruction.layerInstructions = @[videoCompositionLayerInstruction];
[_arrayVideoCompositionInstructions addObject:mutableVideoCompositionInstruction];
}
Please be aware that insertTimeRange has atTime:kCMTimeZero as a parameter, so I expect the clips to be placed at the beginning of the video composition.
What I tried for exporting:
- (IBAction)ExportAndPlay:(id)sender
{
_mutableVideoComposition.instructions = [_arrayVideoCompositionInstructions copy];
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:_mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = _mutableVideoComposition;
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
switch ([exporter status]) {
    case AVAssetExportSessionStatusFailed:
    {
        NSLog(@"Export failed: %@ %@", [[exporter error] localizedDescription], [[exporter error] debugDescription]);
        break;
    }
    case AVAssetExportSessionStatusCancelled:
    {
        NSLog(@"Export canceled");
        break;
    }
    case AVAssetExportSessionStatusCompleted:
    {
        NSLog(@"Export complete!");
        NSLog(@"Export URL = %@", [exporter.outputURL absoluteString]);
        [self altPlayWithUrl:exporter.outputURL];
        break;
    }
    default:
    {
        NSLog(@"default");
        break;
    }
}
} );
}];
}
What happens: if I select 2 video clips, it exports a video with the second video appended after the first.
This is not the behaviour I expected from what I read about AVMutableCompositionTrack.
Can anyone shed some light for this helpless lamb?
Edit: Is there some detail missing, so that no one can lend me a hand? If so, please leave a comment so I can fill it in.
Okay, sorry for this; it came down to a misunderstanding of the AVMutableCompositionTrack API.
If you want to blend 2 videos as 2 overlays, as I do, you need 2 AVMutableCompositionTrack instances, both created from the same AVMutableComposition, like this:
// 0. Setup AVMutableCompositionTracks <= FOR EACH AVAssets !!!
AVMutableCompositionTrack *mutableCompositionVideoTrack1 = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *mutableCompositionVideoTrack2 = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
And insert each AVAsset you want into ITS OWN AVMutableCompositionTrack:
AVAssetTrack *videoAssetTrack1 = [ [ [_arrayVideoAssets firstObject] tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ];
AVAssetTrack *videoAssetTrack2 = [ [ [_arrayVideoAssets lastObject] tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ];
[ mutableCompositionVideoTrack1 insertTimeRange:CMTimeRangeMake( kCMTimeZero, videoAssetTrack1.timeRange.duration ) ofTrack:videoAssetTrack1 atTime:kCMTimeZero error:nil ];
[ mutableCompositionVideoTrack2 insertTimeRange:CMTimeRangeMake( kCMTimeZero, videoAssetTrack2.timeRange.duration ) ofTrack:videoAssetTrack2 atTime:kCMTimeZero error:nil ];
Then set up the AVMutableVideoComposition with one layer instruction per AVMutableCompositionTrack:
AVMutableVideoCompositionInstruction *compInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
compInstruction.timeRange = CMTimeRangeMake( kCMTimeZero, videoAssetTrack1.timeRange.duration );
AVMutableVideoCompositionLayerInstruction *layerInstruction1 = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack1];
[layerInstruction1 setOpacity:0.5f atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *layerInstruction2 = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack2];
[layerInstruction2 setOpacity:0.8f atTime:kCMTimeZero];
CGAffineTransform transformScale = CGAffineTransformMakeScale( 0.5f, 0.5f );
CGAffineTransform transformTransition = CGAffineTransformMakeTranslation( videoComposition.renderSize.width / 2, videoComposition.renderSize.height / 2 );
[ layerInstruction2 setTransform:CGAffineTransformConcat(transformScale, transformTransition) atTime:kCMTimeZero ];
compInstruction.layerInstructions = @[ layerInstruction1, layerInstruction2 ];
videoComposition.instructions = @[ compInstruction ];
Finally, the export should work fine.
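For completeness, note that the video composition built above only takes effect if it is attached to the exporter, roughly as in the export code earlier in the question (using the names from this answer):
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mutableComposition presetName:AVAssetExportPresetHighestQuality];
exporter.videoComposition = videoComposition; // without this, the opacity/transform layer instructions are ignored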
Sorry for the bother, to anyone who took a look.
I have an app which combines video files together to make one long video. There can be a delay between videos (e.g. V1 starts at t=0s and runs for 5 seconds, V2 starts at t=10s). In this case, I want the video to freeze on the last frame of V1 until V2 starts.
I'm using the code below, but between videos, the whole video goes white.
Any ideas how I can get the effect I'm looking for?
Thanks!
@interface VideoJoins : NSObject
-(instancetype)initWithURL:(NSURL*)url
                  andDelay:(NSTimeInterval)delay;
@property (nonatomic, strong) NSURL* url;
@property (nonatomic) NSTimeInterval delay;
@end
and
+(void)joinVideosSequentially:(NSArray*)videoJoins
withFileType:(NSString*)fileType
toOutput:(NSURL*)outputVideoURL
onCompletion:(dispatch_block_t) onCompletion
onError:(ErrorBlock) onError
onCancel:(dispatch_block_t) onCancel
{
//From original question on http://stackoverflow.com/questions/6575128/how-to-combine-video-clips-with-different-orientation-using-avfoundation
// Didn't add support for portrait+landscape.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime startTime = kCMTimeZero;
/* videoClipPaths is an array of paths of the recorded video clips */
//for loop to combine clips into a single video
for (NSInteger i=0; i < [videoJoins count]; i++)
{
VideoJoins* vj = videoJoins[i];
NSURL *url = vj.url;
NSTimeInterval nextDelayTI = 0;
if(i+1 < [videoJoins count])
{
VideoJoins* vjNext = videoJoins[i+1];
nextDelayTI = vjNext.delay;
}
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
CMTime assetDuration = [asset duration];
CMTime assetDurationWithNextDelay = assetDuration;
if(nextDelayTI != 0)
{
CMTime nextDelay = CMTimeMakeWithSeconds(nextDelayTI, 1000000);
assetDurationWithNextDelay = CMTimeAdd(assetDuration, nextDelay);
}
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
//set the orientation
if(i == 0)
{
[compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
}
BOOL ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDurationWithNextDelay) ofTrack:videoTrack atTime:startTime error:nil];
ok = [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDuration) ofTrack:audioTrack atTime:startTime error:nil];
startTime = CMTimeAdd(startTime, assetDurationWithNextDelay);
}
//Delete output video if it exists
NSString* outputVideoPath = [outputVideoURL path]; // use the file-system path, not absoluteString
if ([[NSFileManager defaultManager] fileExistsAtPath:outputVideoPath])
{
    [[NSFileManager defaultManager] removeItemAtPath:outputVideoPath error:nil];
}
//export the combined video
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = outputVideoURL;
exporter.outputFileType = fileType;
exporter.shouldOptimizeForNetworkUse = YES;
[exporter exportAsynchronouslyWithCompletionHandler:^(void)
{
switch (exporter.status)
{
case AVAssetExportSessionStatusCompleted: {
onCompletion();
break;
}
case AVAssetExportSessionStatusFailed:
{
NSLog(#"Export Failed");
NSError* err = exporter.error;
NSLog(#"ExportSessionError: %#", [err localizedDescription]);
onError(err);
break;
}
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export Cancelled");
NSLog(#"ExportSessionError: %#", [exporter.error localizedDescription]);
onCancel();
break;
}
}];
}
EDIT: Got it working. Here is how I extract the images and generate the videos from those images:
+ (void)writeImageAsMovie:(UIImage*)image
toPath:(NSURL*)url
fileType:(NSString*)fileType
duration:(NSTimeInterval)duration
completion:(VoidBlock)completion
{
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:url
fileType:fileType
error:&error];
NSParameterAssert(videoWriter);
CGSize size = image.size;
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
//Write samples:
CMTime halfTime = CMTimeMakeWithSeconds(duration/2, 100000);
CMTime endTime = CMTimeMakeWithSeconds(duration, 100000);
CVPixelBufferRef buffer = [VideoCreator pixelBufferFromCGImage:image.CGImage];
[adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
[adaptor appendPixelBuffer:buffer withPresentationTime:halfTime];
[adaptor appendPixelBuffer:buffer withPresentationTime:endTime];
//Finish the session:
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:endTime];
[videoWriter finishWritingWithCompletionHandler:^{
if(videoWriter.error)
{
NSLog(#"Error:%#", [error localizedDescription]);
}
if(completion)
{
completion();
}
}];
}
+(void)generateVideoImageFromURL:(NSURL*)url
atTime:(CMTime)thumbTime
withMaxSize:(CGSize)maxSize
completion:(ImageBlock)handler
{
AVURLAsset *asset=[[AVURLAsset alloc] initWithURL:url options:nil];
if(!asset)
{
    if(handler)
    {
        handler(nil);
    }
    return; // bail out whether or not a handler was supplied
}
if(CMTIME_IS_POSITIVE_INFINITY(thumbTime))
{
thumbTime = asset.duration;
}
else if(CMTIME_IS_NEGATIVE_INFINITY(thumbTime) || CMTIME_IS_INVALID(thumbTime) || CMTIME_IS_INDEFINITE(thumbTime))
{
thumbTime = CMTimeMake(0, 30);
}
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform=TRUE;
generator.maximumSize = maxSize;
CMTime actualTime;
NSError* error;
CGImageRef image = [generator copyCGImageAtTime:thumbTime actualTime:&actualTime error:&error];
UIImage *thumb = [[UIImage alloc] initWithCGImage:image];
CGImageRelease(image);
if(handler)
{
handler(thumb);
}
}
AVMutableComposition can only stitch videos together. I did it by doing two things:
Extracting the last frame of the first video as an image.
Making a video from this image (the duration depends on your requirement).
Then you can compose these three videos (V1, V2, and your single-image video). Both tasks are very easy to do.
For extracting the image out of the video, look at this link. If you don't want to use MPMoviePlayerController, which the accepted answer uses, then look at the other answer by Steve.
For making a video from the image, check out this link. The question there is about an audio issue, but I don't think you need audio, so just look at the method mentioned in the question itself.
UPDATE:
There is an easier way, but it comes with a disadvantage. You can have two AVPlayers. The first one plays your video, the one with the white frames in between. The other one sits behind it, paused at the last frame of video 1. So when the middle part comes, you will see the second AVPlayer showing the last frame, and as a whole it will look like video 1 is paused. Trust me, the naked eye can't tell when the player changes. But the obvious disadvantage is that your exported video will still contain the blank frames. So if you are only going to play it back inside your app, you can go with this approach.
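A minimal sketch of that two-player idea (assuming url1 points at video 1; the player names are mine) would park the second player on the last frame:
AVPlayer *backgroundPlayer = [AVPlayer playerWithURL:url1];
// Roughly one frame before the end, assuming 30 fps.
CMTime lastFrame = CMTimeSubtract(backgroundPlayer.currentItem.asset.duration, CMTimeMake(1, 30));
[backgroundPlayer seekToTime:lastFrame toleranceBefore:kCMTimeZero toleranceAfter:kCMTimeZero];
[backgroundPlayer pause];
// When the foreground player hits the gap, hide its layer so this frozen frame shows through.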
In my case, the first frame of a video asset was always black or white, so I skipped one frame at the start of each clip:
CMTime delta = CMTimeMake(1, 25); //1 frame (if fps = 25)
CMTimeRange timeRangeInVideoAsset = CMTimeRangeMake(delta,clipVideoTrack.timeRange.duration);
nextVideoClipStartTime = CMTimeAdd(nextVideoClipStartTime, timeRangeInVideoAsset.duration);
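For context, a hedged sketch of the per-clip insertion those lines imply (compositionVideoTrack is a name I'm assuming from a typical merge loop; the insert happens before the start-time update):
[compositionVideoTrack insertTimeRange:timeRangeInVideoAsset
                               ofTrack:clipVideoTrack
                                atTime:nextVideoClipStartTime
                                 error:nil]; // skips the first (black/white) frame of each clip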
I merged more than 400 short videos into one this way.
I am attempting to rotate video prior to upload on my iOS device because other platforms (such as android) do not properly interpret the rotation information in iOS-recorded videos and, as a result, play them improperly rotated.
I have looked at the following Stack Overflow posts, but have not had success applying any of them to my case:
iOS rotate every frame of video
Rotating Video w/ AVMutableVideoCompositionLayerInstruction
AVMutableVideoComposition rotated video captured in portrait mode
iOS AVFoundation: Setting Orientation of Video
I copied the Apple AVSimpleEditor sample project, but unfortunately all that ever happens is that, upon creating an AVAssetExportSession and calling exportAsynchronouslyWithCompletionHandler, no rotation is performed, and what's worse, the rotation metadata is stripped out of the resulting file.
Here is the code that runs the export:
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:[_mutableComposition copy] presetName:AVAssetExportPresetPassthrough];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileType3GPP;
exportSession.shouldOptimizeForNetworkUse = YES;
exportSession.videoComposition = _mutableVideoComposition;
[exportSession exportAsynchronouslyWithCompletionHandler:^(void)
{
NSLog(#"Status is %d %#", exportSession.status, exportSession.error);
handler(exportSession);
[exportSession release];
}];
The values _mutableComposition and _mutableVideoComposition are initialized by this method here:
- (void) getVideoComposition:(AVAsset*)asset
{
AVMutableComposition *mutableComposition = nil;
AVMutableVideoComposition *mutableVideoComposition = nil;
AVMutableVideoCompositionInstruction *instruction = nil;
AVMutableVideoCompositionLayerInstruction *layerInstruction = nil;
CGAffineTransform t1;
CGAffineTransform t2;
AVAssetTrack *assetVideoTrack = nil;
AVAssetTrack *assetAudioTrack = nil;
// Check if the asset contains video and audio tracks
if ([[asset tracksWithMediaType:AVMediaTypeVideo] count] != 0) {
assetVideoTrack = [asset tracksWithMediaType:AVMediaTypeVideo][0];
}
if ([[asset tracksWithMediaType:AVMediaTypeAudio] count] != 0) {
assetAudioTrack = [asset tracksWithMediaType:AVMediaTypeAudio][0];
}
CMTime insertionPoint = kCMTimeZero;
NSError *error = nil;
// Step 1
// Create a composition with the given asset and insert audio and video tracks into it from the asset
// Check whether a composition has already been created, i.e, some other tool has already been applied
// Create a new composition
mutableComposition = [AVMutableComposition composition];
// Insert the video and audio tracks from AVAsset
if (assetVideoTrack != nil) {
AVMutableCompositionTrack *compositionVideoTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetVideoTrack atTime:insertionPoint error:&error];
}
if (assetAudioTrack != nil) {
AVMutableCompositionTrack *compositionAudioTrack = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, [asset duration]) ofTrack:assetAudioTrack atTime:insertionPoint error:&error];
}
// Step 2
// Translate the composition to compensate the movement caused by rotation (since rotation would cause it to move out of frame)
t1 = CGAffineTransformMakeTranslation(assetVideoTrack.naturalSize.height, 0.0);
// Rotate transformation
t2 = CGAffineTransformRotate(t1, degreesToRadians(90.0));
// Step 3
// Set the appropriate render sizes and rotational transforms
// Create a new video composition
mutableVideoComposition = [AVMutableVideoComposition videoComposition];
mutableVideoComposition.renderSize = CGSizeMake(assetVideoTrack.naturalSize.height,assetVideoTrack.naturalSize.width);
mutableVideoComposition.frameDuration = CMTimeMake(1, 30);
// The rotate transform is set on a layer instruction
instruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
instruction.timeRange = CMTimeRangeMake(kCMTimeZero, [mutableComposition duration]);
layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:(mutableComposition.tracks)[0]];
[layerInstruction setTransform:t2 atTime:kCMTimeZero];
// Step 4
// Add the transform instructions to the video composition
instruction.layerInstructions = @[layerInstruction];
mutableVideoComposition.instructions = @[instruction];
TT_RELEASE_SAFELY(_mutableComposition);
_mutableComposition = [mutableComposition retain];
TT_RELEASE_SAFELY(_mutableVideoComposition);
_mutableVideoComposition = [mutableVideoComposition retain];
}
I pulled this method from AVSERotateCommand from here. Can anyone suggest why this method would not successfully rotate my video by the necessary 90 degrees?
Because you are using AVAssetExportPresetPassthrough, the AVAssetExportSession will ignore the videoComposition. Use any other preset.
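For example, keeping the rest of the code the same, something like:
// Any re-encoding preset will honor the videoComposition; passthrough will not.
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:[_mutableComposition copy] presetName:AVAssetExportPresetHighestQuality];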
I'm trying to merge (append) 3 videos using AVAssetExportSession, but I keep getting this error. Weirdly, it worked for 1 or 2 videos.
Error Domain=AVFoundationErrorDomain Code=-11820 "Cannot Complete Export" UserInfo=0x458120 {NSLocalizedRecoverySuggestion=Try exporting again., NSLocalizedDescription=Cannot Complete Export}
I even tried to redo the function on error, but all I got was an endless stream of error messages. This is a snippet of my code.
AVMutableComposition *mixComposition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
NSError * error = nil;
NSMutableArray * timeRanges = [NSMutableArray arrayWithCapacity:arrayMovieUrl.count];
NSMutableArray * tracks = [NSMutableArray arrayWithCapacity:arrayMovieUrl.count];
for (int i=0; i<[arrayMovieUrl count]; i++) {
AVURLAsset *assetClip = [arrayMovieUrl objectAtIndex:i];
AVAssetTrack *clipVideoTrackB = [[assetClip tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
[timeRanges addObject:[NSValue valueWithCMTimeRange:CMTimeRangeMake(kCMTimeZero, assetClip.duration)]];
[tracks addObject:clipVideoTrackB];
}
[compositionTrack insertTimeRanges:timeRanges ofTracks:tracks atTime:kCMTimeZero error:&error];
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:mixComposition presetName:AVAssetExportPreset1280x720];
NSParameterAssert(exporter != nil);
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.outputURL = outputUrl;
[exporter exportAsynchronouslyWithCompletionHandler:^{
switch ([exporter status]) {
case AVAssetExportSessionStatusFailed:
NSLog(#"Export failed: %#", [exporter error]);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export canceled");
break;
case AVAssetExportSessionStatusCompleted:
NSLog(#"Export successfully");
break;
default:
break;
}
if (exporter.status != AVAssetExportSessionStatusCompleted){
NSLog(#"Retry export");
[self renderMovie];
}
}];
Is there something wrong with my code, or does iOS 5 have a bug?
I've found the problem. It was because I use an AVPlayerLayer to display each video in preview mode simultaneously. Referring to the question "AVPlayerItem fails with AVStatusFailed and error code 'Cannot Decode'", there is an undocumented limit of at most 4 simultaneous AVPlayers, and this limit somehow prevents AVAssetExportSession from working while 4 AVPlayer instances exist at that moment.
The solution is to release the AVPlayers before exporting, or not to use AVPlayer at all.
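A minimal sketch of the first option, assuming the preview players live in a hypothetical previewPlayers array:
// Tear down the preview players so the export session can grab a decode slot.
for (AVPlayer *player in self.previewPlayers)
{
    [player pause];
    [player replaceCurrentItemWithPlayerItem:nil]; // frees the decoder resources held by this player
}
self.previewPlayers = nil;
[self renderMovie]; // now start the export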