AVAssetWriter fails on appendSampleBuffer when converting an AAC 5.1 audio track from an AVAsset - iOS

I am trying to extract the audio track from an .mp4 video file and convert it to an .m4a audio file using the AVAssetWriter class with these outputSettings:
AudioChannelLayout channelLayout;
memset(&channelLayout, 0, sizeof(AudioChannelLayout));
channelLayout.mChannelLayoutTag = kAudioChannelLayoutTag_Stereo;
NSDictionary *outputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt: kAudioFormatMPEG4AAC], AVFormatIDKey,
[NSNumber numberWithFloat:44100.0], AVSampleRateKey,
[NSNumber numberWithInt:2], AVNumberOfChannelsKey,
[NSNumber numberWithInt:128000], AVEncoderBitRateKey, // 128 kbps
[NSData dataWithBytes:&channelLayout length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
nil];
Case 1: .mp4 video file with these audio track params (printed from the CMFormatDescriptionRef):
mediaType:'soun'
mediaSubType:'aac '
mSampleRate: 44100.000000
mFormatID: 'aac '
mChannelsPerFrame: 2
ACL: {Stereo (L R)}
result: the .m4a output file is created successfully with the defined output params
Case 2: .mp4 video file with these audio track params (printed from the CMFormatDescriptionRef):
mediaType:'soun'
mediaSubType:'aac '
mSampleRate: 48000.000000
mFormatID: 'aac '
mChannelsPerFrame: 6
ACL: {5.1 (C L R Ls Rs LFE)}
result: the conversion fails when appending a sample buffer via [AVAssetWriter appendSampleBuffer:...], with an unknown error:
Error Domain: NSOSStatusErrorDomain
code: -12780
description: The operation could not be completed
To convert video to audio I use the same algorithm as described here: https://github.com/rs/SDAVAssetExportSession/blob/master/SDAVAssetExportSession.m
I also tried setting channelLayout.mChannelLayoutTag to kAudioChannelLayoutTag_MPEG_5_1_D and updating AVNumberOfChannelsKey to 6, but that doesn't work for me either.
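For reference, here is a sketch of the 6-channel settings I tried (the same structure as above, with the 5.1 layout tag and the source's 48 kHz sample rate; the bitrate is just carried over from the stereo settings):
AudioChannelLayout channelLayout51;
memset(&channelLayout51, 0, sizeof(AudioChannelLayout));
channelLayout51.mChannelLayoutTag = kAudioChannelLayoutTag_MPEG_5_1_D;
NSDictionary *outputSettings51 = [NSDictionary dictionaryWithObjectsAndKeys:
[NSNumber numberWithInt:kAudioFormatMPEG4AAC], AVFormatIDKey,
[NSNumber numberWithFloat:48000.0], AVSampleRateKey,
[NSNumber numberWithInt:6], AVNumberOfChannelsKey,
[NSNumber numberWithInt:128000], AVEncoderBitRateKey,
[NSData dataWithBytes:&channelLayout51 length:sizeof(AudioChannelLayout)], AVChannelLayoutKey,
nil];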
Can anyone help me understand what I am doing wrong? Maybe there is no way to perform this task using only the iOS AVFoundation framework? Should I use different output settings for an audio track with 6-channel 5.1 AAC?

I haven't tried this with 5.1 audio, but when extracting audio from video I like to use a passthrough AVAssetExportSession on an audio-only AVMutableComposition, as this avoids transcoding, which would be slow and would throw away audio quality. Something like:
AVMutableComposition* newAudioAsset = [AVMutableComposition composition];
AVMutableCompositionTrack* dstCompositionTrack;
dstCompositionTrack = [newAudioAsset addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAsset* srcAsset = [AVURLAsset URLAssetWithURL:srcURL options:nil];
AVAssetTrack* srcTrack = [[srcAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
CMTimeRange timeRange = srcTrack.timeRange;//CMTimeRangeMake(kCMTimeZero, srcAsset.duration);
NSError* error;
if(NO == [dstCompositionTrack insertTimeRange:timeRange ofTrack:srcTrack atTime:kCMTimeZero error:&error]) {
NSLog(#"track insert failed: %#\n", error);
return 1;
}
__block AVAssetExportSession* exportSesh = [[AVAssetExportSession alloc] initWithAsset:newAudioAsset presetName:AVAssetExportPresetPassthrough];
exportSesh.outputFileType = AVFileTypeAppleM4A;
exportSesh.outputURL = dstURL;
[exportSesh exportAsynchronouslyWithCompletionHandler:^{
AVAssetExportSessionStatus status = exportSesh.status;
NSLog(#"exportAsynchronouslyWithCompletionHandler: %i\n", status);
if(AVAssetExportSessionStatusFailed == status) {
NSLog(#"FAILURE: %#\n", exportSesh.error);
} else if(AVAssetExportSessionStatusCompleted == status) {
NSLog(#"SUCCESS!\n");
}
}];
I don't have any 5.1 files on hand, so if that doesn't work you may need to look more closely at the line
AVAssetTrack* srcTrack = [[srcAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
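For example, dumping the sample rate and channel count of every audio track would show whether index 0 is really the 5.1 track you want. A minimal sketch, assuming srcAsset as above (CoreMedia must be imported):
for (AVAssetTrack *track in [srcAsset tracksWithMediaType:AVMediaTypeAudio]) {
for (id desc in track.formatDescriptions) {
CMAudioFormatDescriptionRef fmt = (__bridge CMAudioFormatDescriptionRef)desc;
const AudioStreamBasicDescription *asbd = CMAudioFormatDescriptionGetStreamBasicDescription(fmt);
if (asbd) {
// trackID, sample rate and channel count for each format in the track
NSLog(@"trackID %d: %.0f Hz, %u channels", (int)track.trackID, asbd->mSampleRate, (unsigned)asbd->mChannelsPerFrame);
}
}
}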
p.s. this code from 2012 "just worked", which was nice.

Related

iOS Convert AVI to MP4

(Please note I've already looked at this other SO post.)
The Problem
I'm trying to convert an AVI video to MP4 so that I can play it natively in an iOS app using Objective-C.
What I've Tried
I'm trying the following code to do that conversion:
- (void)convertVideoToLowQuailtyWithInputURL:(NSURL*)inputURL outputURL:(NSURL*)outputURL handler:(void (^)(AVAssetExportSession*))handler {
[[NSFileManager defaultManager] removeItemAtURL:outputURL error:nil];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetHighestQuality];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeMPEG4;
[exportSession exportAsynchronouslyWithCompletionHandler:^(void) {
handler(exportSession);
}];
}
The error returned from the exportSession is "Cannot Open".
Extra Information
When I run the video that I'm trying to convert through Mediainfo I get the following for the video:
7 332 kb/s, 1920*1080 (16:9), at 25.000 FPS, AVC (Baseline@L3.1) (CABAC / 1 Ref Frames)
And this for the audio:
128 kb/s, 8 000 Hz, 16 bits, 1 channel, PCM (Little / Signed)
I also used the exportPresetsCompatibleWithAsset: method on AVAssetExportSession and got the following results:
AVAssetExportPreset1920x1080,
AVAssetExportPresetLowQuality,
AVAssetExportPresetAppleM4A,
AVAssetExportPresetHEVCHighestQuality,
AVAssetExportPreset640x480,
AVAssetExportPreset3840x2160,
AVAssetExportPresetHEVC3840x2160,
AVAssetExportPresetHighestQuality,
AVAssetExportPreset1280x720,
AVAssetExportPresetMediumQuality,
AVAssetExportPreset960x540,
AVAssetExportPresetHEVC1920x1080
Another thing to note is that when playing with the preset and the output type, I managed to get an audio-only file that was basically white noise. This was using the preset AVAssetExportPresetAppleM4A.
I hope that I've jotted down enough information.
Update
Using the comment by Ashley, I've created a function to return the export settings compatible with the asset.
- (void)determineCompatibleExportForAsset:(AVURLAsset *)asset completion:(void(^)(NSArray<ExportSettings *> *exports))handler {
NSArray<NSString *> *presets = @[
AVAssetExportPresetLowQuality,
AVAssetExportPresetMediumQuality,
AVAssetExportPresetHighestQuality,
AVAssetExportPresetHEVCHighestQuality,
AVAssetExportPreset640x480,
AVAssetExportPreset960x540,
AVAssetExportPreset1280x720,
AVAssetExportPreset1920x1080,
AVAssetExportPreset3840x2160,
AVAssetExportPresetHEVC1920x1080,
AVAssetExportPresetHEVC3840x2160,
AVAssetExportPresetAppleM4A,
AVAssetExportPresetPassthrough
];
NSArray<NSString *> *outputs = @[
AVFileTypeQuickTimeMovie,
AVFileTypeMPEG4,
AVFileTypeAppleM4V,
AVFileTypeAppleM4A,
AVFileType3GPP,
AVFileType3GPP2,
AVFileTypeCoreAudioFormat,
AVFileTypeWAVE,
AVFileTypeAIFF,
AVFileTypeAIFC,
AVFileTypeAMR,
AVFileTypeMPEGLayer3,
AVFileTypeSunAU,
AVFileTypeAC3,
AVFileTypeEnhancedAC3,
AVFileTypeJPEG,
AVFileTypeDNG,
AVFileTypeHEIC,
AVFileTypeAVCI,
AVFileTypeHEIF,
AVFileTypeTIFF
];
__block int counter = 0;
int totalCount = (int)presets.count * (int)outputs.count;
NSMutableArray<ExportSettings *> *exportSettingsArray = [@[] mutableCopy];
for (NSString *preset in presets) {
for (NSString *output in outputs) {
[AVAssetExportSession determineCompatibilityOfExportPreset:preset withAsset:asset outputFileType:output completionHandler:^(BOOL compatible) {
if (compatible) {
ExportSettings *exportSettings = [[ExportSettings alloc] initWithPreset:preset outputType:output];
[exportSettingsArray addObject:exportSettings];
}
counter++;
if (counter == totalCount) {
if (handler) {
handler([exportSettingsArray copy]);
}
}
}];
}
}
}
The results of this are as follows:
"Preset: AVAssetExportPresetAppleM4A Output: com.apple.m4a-audio",
"Preset: AVAssetExportPresetPassthrough Output: com.microsoft.waveform-audio",
"Preset: AVAssetExportPresetPassthrough Output: public.aifc-audio",
"Preset: AVAssetExportPresetPassthrough Output: public.aiff-audio",
"Preset: AVAssetExportPresetPassthrough Output: com.apple.coreaudio-format",
"Preset: AVAssetExportPresetPassthrough Output: com.apple.quicktime-movie"
From this I deduced that using the preset AVAssetExportPresetPassthrough and output type AVFileTypeQuickTimeMovie would be compatible.
However, when running the following code (I've tried .mp4, .mov and .qt for the file type):
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *filePath = [documentsDirectory stringByAppendingPathComponent:@"MyVideo.mov"];
NSURL *outputURL = [NSURL fileURLWithPath:filePath];
NSURL *localURL = [NSBundle URLForResource:@"20180626_145233-v" withExtension:@"avi" subdirectory:nil inBundleWithURL:[NSBundle mainBundle].bundleURL];
[self convertVideoToLowQuailtyWithInputURL:localURL outputURL:outputURL handler:^(AVAssetExportSession *exportSession) {
switch ([exportSession status]) {
case AVAssetExportSessionStatusFailed:
NSLog(#"Export failed: %#", [exportSession error]);
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export canceled");
break;
case AVAssetExportSessionStatusCompleted:
NSLog(#"Successfully");
NSLog(#"OutputURL: %#", outputURL.absoluteString);
break;
default:
break;
}
}];
Which calls:
- (void)convertVideoToLowQuailtyWithInputURL:(NSURL*)inputURL outputURL:(NSURL*)outputURL handler:(void (^)(AVAssetExportSession*))handler {
[[NSFileManager defaultManager] removeItemAtURL:outputURL error:nil];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:inputURL options:nil];
AVAssetExportSession *exportSession = [[AVAssetExportSession alloc] initWithAsset:asset presetName:AVAssetExportPresetPassthrough];
exportSession.outputURL = outputURL;
exportSession.outputFileType = AVFileTypeQuickTimeMovie;
[exportSession exportAsynchronouslyWithCompletionHandler:^(void) {
handler(exportSession);
}];
}
I get this error:
Export failed: Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could not be completed" UserInfo={NSLocalizedFailureReason=An unknown error occurred (-12842), NSLocalizedDescription=The operation could not be completed, NSUnderlyingError=0x60400024def0 {Error Domain=NSOSStatusErrorDomain Code=-12842 "(null)"}}
The reason you cannot open the file is that iOS does not support AVI files.
Here is the link to the Apple documentation on supported file types within AVFoundation.
You can also inspect the definition of AVFileType from within Xcode to see this list for yourself.
For what it is worth, a cursory Google search on AVI indicates that there are limitations with the AVI container that have been rectified in newer formats supported by iOS, so there is likely little incentive to add support at a later date.
Another potential issue is that Microsoft created AVI; I cannot locate any restrictive licensing that would prohibit someone from using it, but IANAL.
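If you want to detect this situation up front, one way is to compare the file's UTI against the types AVFoundation says it can read; AVI ("public.avi") is not among them. A sketch, assuming inputURL is the URL you were trying to export (MobileCoreServices and AVFoundation imports assumed):
NSString *ext = inputURL.pathExtension;
CFStringRef uti = UTTypeCreatePreferredIdentifierForTag(kUTTagClassFilenameExtension, (__bridge CFStringRef)ext, NULL);
// +audiovisualTypes lists the UTIs AVURLAsset understands
BOOL readable = uti && [[AVURLAsset audiovisualTypes] containsObject:(__bridge NSString *)uti];
if (uti) CFRelease(uti);
NSLog(@"AVFoundation can read .%@: %@", ext, readable ? @"YES" : @"NO");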
You should use the FFmpeg library for this kind of purpose.
Here is an open-source video-player based on FFmpeg: https://github.com/kolyvan/kxmovie
I don't know if this player still works on the latest iOS, but in any case you can discover many interesting things in its source code that can help you with your problem (such as how to use the FFmpeg library).
Very easy to do with MobileFFMpeg
https://stackoverflow.com/a/59325680/1466453
Once you have MobileFFMpeg, make a call as follows:
[MobileFFMpeg execute:@"-i infile.avi youroutput.mp4"];
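execute: returns a status code, so a slightly fuller sketch might look like this (the paths are placeholders, and 0 is assumed to mean success, as in the linked answer):
NSString *cmd = [NSString stringWithFormat:@"-i %@ %@", inputPath, outputPath];
int rc = [MobileFFMpeg execute:cmd];
if (rc == 0) {
NSLog(@"AVI -> MP4 conversion succeeded");
} else {
NSLog(@"conversion failed with return code %d", rc);
}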

Compositing 2 videos with half opacity using AVFoundation without CALayer becomes appending videos?

What I wanted: insert multiple video layers, each with some opacity, ALL at time 0:00 of an AVMutableCompositionTrack.
I carefully read the official AVFoundation documents and also many WWDC discussions on this topic, but I couldn't understand why the result does NOT follow what the API states.
I can achieve the overlay result with 2 AVPlayerLayers during playback. That should also mean I can use AVVideoCompositionCoreAnimationTool to achieve something similar during export.
But I would rather reserve CALayer for subtitle/image overlays or animations.
What I tried, for each AVAsset being inserted:
- (void)addVideo:(AVAsset *)asset_in withOpacity:(float)opacity
{
// This is a demo of compositing videos with opacity, so we insert every video at time 0:00
[_videoCompositionTrack insertTimeRange:CMTimeRangeMake( kCMTimeZero, asset_in.duration )
ofTrack:[ [asset_in tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ]
atTime:kCMTimeZero error:nil ];
AVMutableVideoCompositionInstruction *mutableVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
AVAssetTrack *assettrack_in = [ [asset_in tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ];
mutableVideoCompositionInstruction.timeRange = CMTimeRangeMake( kCMTimeZero, assettrack_in.timeRange.duration );
AVMutableVideoCompositionLayerInstruction *videoCompositionLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:_videoCompositionTrack];
[videoCompositionLayerInstruction setTransform:assettrack_in.preferredTransform atTime:kCMTimeZero];
[videoCompositionLayerInstruction setOpacity:opacity atTime:kCMTimeZero];
mutableVideoCompositionInstruction.layerInstructions = @[videoCompositionLayerInstruction];
[_arrayVideoCompositionInstructions addObject:mutableVideoCompositionInstruction];
}
Please be aware that insertTimeRange is called with atTime:kCMTimeZero, so I expect the videos to be placed at the beginning of the video composition.
What I tried for exporting:
- (IBAction)ExportAndPlay:(id)sender
{
_mutableVideoComposition.instructions = [_arrayVideoCompositionInstructions copy];
// Create a static date formatter so we only have to initialize it once.
static NSDateFormatter *kDateFormatter;
if (!kDateFormatter) {
kDateFormatter = [[NSDateFormatter alloc] init];
kDateFormatter.dateStyle = NSDateFormatterMediumStyle;
kDateFormatter.timeStyle = NSDateFormatterShortStyle;
}
// Create the export session with the composition and set the preset to the highest quality.
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:_mutableComposition presetName:AVAssetExportPresetHighestQuality];
// Set the desired output URL for the file created by the export process.
exporter.outputURL = [[[[NSFileManager defaultManager] URLForDirectory:NSDocumentDirectory inDomain:NSUserDomainMask appropriateForURL:nil create:YES error:nil] URLByAppendingPathComponent:[kDateFormatter stringFromDate:[NSDate date]]] URLByAppendingPathExtension:CFBridgingRelease(UTTypeCopyPreferredTagWithClass((CFStringRef)AVFileTypeQuickTimeMovie, kUTTagClassFilenameExtension))];
// Set the output file type to be a QuickTime movie.
exporter.outputFileType = AVFileTypeQuickTimeMovie;
exporter.shouldOptimizeForNetworkUse = YES;
exporter.videoComposition = _mutableVideoComposition;
_mutableVideoComposition.instructions = [_arrayVideoCompositionInstructions copy];
// Asynchronously export the composition to a video file and save this file to the camera roll once export completes.
[exporter exportAsynchronouslyWithCompletionHandler:^{
dispatch_async(dispatch_get_main_queue(), ^{
switch ([exporter status]) {
case AVAssetExportSessionStatusFailed:
{
NSLog(@"Export failed: %@ %@", [[exporter error] localizedDescription], [[exporter error] debugDescription]);
break;
}
case AVAssetExportSessionStatusCancelled:
{
NSLog(@"Export canceled");
break;
}
case AVAssetExportSessionStatusCompleted:
{
NSLog(@"Export complete!");
NSLog(@"Export URL = %@", [exporter.outputURL absoluteString]);
[self altPlayWithUrl:exporter.outputURL];
break;
}
default:
{
NSLog(@"default");
break;
}
}
} );
}];
}
What turns out: it exports a video with the second video appended after the first, when I select 2 video clips.
This is not the behaviour I expected from what I read about AVMutableCompositionTrack.
Can anyone shed some light for this helpless lamb?
Edit: Is there any detail missing that prevents anyone from lending me a hand? If so, please leave a comment so I can fill it in.
Okay, sorry; this turned out to be my own misunderstanding of the AVMutableCompositionTrack API.
If you want to blend 2 videos as 2 overlays, as I do, you're going to need 2 AVMutableCompositionTrack instances, both created from the same AVMutableComposition, like this:
// 0. Setup AVMutableCompositionTracks <= FOR EACH AVAssets !!!
AVMutableCompositionTrack *mutableCompositionVideoTrack1 = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *mutableCompositionVideoTrack2 = [mutableComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
And insert each AVAsset you want into ITS OWN AVMutableCompositionTrack:
AVAssetTrack *videoAssetTrack1 = [ [ [_arrayVideoAssets firstObject] tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ];
AVAssetTrack *videoAssetTrack2 = [ [ [_arrayVideoAssets lastObject] tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0 ];
[ mutableCompositionVideoTrack1 insertTimeRange:CMTimeRangeMake( kCMTimeZero, videoAssetTrack1.timeRange.duration ) ofTrack:videoAssetTrack1 atTime:kCMTimeZero error:nil ];
[ mutableCompositionVideoTrack2 insertTimeRange:CMTimeRangeMake( kCMTimeZero, videoAssetTrack2.timeRange.duration ) ofTrack:videoAssetTrack2 atTime:kCMTimeZero error:nil ];
Then set up the AVMutableVideoComposition with a layer instruction for each of the AVMutableCompositionTracks:
AVMutableVideoCompositionInstruction *compInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
compInstruction.timeRange = CMTimeRangeMake( kCMTimeZero, videoAssetTrack1.timeRange.duration );
AVMutableVideoCompositionLayerInstruction *layerInstruction1 = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack1];
[layerInstruction1 setOpacity:0.5f atTime:kCMTimeZero];
AVMutableVideoCompositionLayerInstruction *layerInstruction2 = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableCompositionVideoTrack2];
[layerInstruction2 setOpacity:0.8f atTime:kCMTimeZero];
CGAffineTransform transformScale = CGAffineTransformMakeScale( 0.5f, 0.5f );
CGAffineTransform transformTransition = CGAffineTransformMakeTranslation( videoComposition.renderSize.width / 2, videoComposition.renderSize.height / 2 );
[ layerInstruction2 setTransform:CGAffineTransformConcat(transformScale, transformTransition) atTime:kCMTimeZero ];
compInstruction.layerInstructions = @[ layerInstruction1, layerInstruction2 ];
videoComposition.instructions = @[ compInstruction ];
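For completeness: the videoComposition referenced above was created earlier, and it needs its render size and frame duration set before exporting. A minimal sketch (the size and frame rate are placeholder values):
AVMutableVideoComposition *videoComposition = [AVMutableVideoComposition videoComposition];
videoComposition.renderSize = CGSizeMake(1280.0f, 720.0f); // placeholder: match your sources
videoComposition.frameDuration = CMTimeMake(1, 30); // placeholder: 30 fps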
Finally, it should be fine during exporting.
Sorry for the bother, to anyone who did take a look.

Memory leak in CVPixelBufferPoolCreatePixelBuffer

At the moment I am debugging some code. It reads a movie file into a frames array, applies some transformations to the frames, and compiles everything back into a video file. I have fixed all the memory leaks that were caused by my own code; the remaining one is pretty serious. It leaves almost 400 MB of memory allocated after the process. Here is the screenshot of the leak.
As you can see, the higher-level call is only in the VideoToolbox library. However, I do not even include this library in my project. I don't believe that this leak comes from Apple's library and that there is nothing I can do about it.
Here is the only code that uses anything related to H.264 and decoding, which were mentioned in the call tree.
-(void)writeImageAsMovie:(NSArray *)array toPath:(NSString*)path size:(CGSize)size
{
if ([[NSFileManager defaultManager] fileExistsAtPath:path]) {
[[NSFileManager defaultManager] removeItemAtPath:path error:nil];
}
NSError *error = nil;
// FIRST, start up an AVAssetWriter instance to write your video
// Give it a destination path (for us: tmp/temp.mov)
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:[NSURL fileURLWithPath:path]
fileType:AVFileTypeQuickTimeMovie
error:&error];
NSParameterAssert(videoWriter);
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a SESSION of writing.
// After you start a session, you will keep adding image frames
// until you are complete - then you will tell it you are done.
[videoWriter startWriting];
// This starts your video at time = 0
[videoWriter startSessionAtSourceTime:kCMTimeZero];
CVPixelBufferRef buffer = NULL;
//int frameDuration = (1/fps)*600.0;
//NSLog(#"FRAME DURATION: %d",frameDuration);
int i = 0;
while (1)
{
// Check if the writer is ready for more data, if not, just wait
if(writerInput.readyForMoreMediaData){
CMTime frameTime = CMTimeMake(1, fps);
// CMTime = Value and Timescale.
// Timescale = the number of tics per second you want
// Value is the number of tics
// For us - each frame we add will be 1/4th of a second
// Apple recommend 600 tics per second for video because it is a
// multiple of the standard video rates 24, 30, 60 fps etc.
CMTime lastTime=CMTimeMake(i, fps);
CMTime presentTime=CMTimeAdd(lastTime, frameTime);
if (i == 0) {presentTime = CMTimeMake(0, 600);}
// This ensures the first frame starts at 0.
if (i >= [array count])
{
buffer = NULL;
}
else
{
// This command grabs the next UIImage and converts it to a CGImage
CVPixelBufferRef tempBuffer = buffer;
CVPixelBufferRelease(tempBuffer);
buffer = [self pixelBufferFromCGImage:[[array objectAtIndex:i] CGImage] andSize:size];
}
if (buffer)
{
// Give the CGImage to the AVAssetWriter to add to your video
[adaptor appendPixelBuffer:buffer withPresentationTime:presentTime];
i++;
}
else
{
//Finish the session:
// This is important to be done exactly in this order
[writerInput markAsFinished];
// WARNING: finishWriting in the solution above is deprecated.
// You now need to give a completion handler.
[videoWriter finishWritingWithCompletionHandler:^{
NSLog(#"Finished writing...checking completion status...");
if (videoWriter.status != AVAssetWriterStatusFailed && videoWriter.status == AVAssetWriterStatusCompleted)
{
NSLog(#"Video writing succeeded.");
// Move video to camera roll
// NOTE: You cannot write directly to the camera roll.
// You must first write to an iOS director then move it!
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0]; // Get documents folder
NSString *audioTemp = [documentsDirectory stringByAppendingPathComponent:@"/temp-audio.m4a"];
NSString *finalTemp = [documentsDirectory stringByAppendingPathComponent:@"/temp.mov"];
[self mergeVideoFromPath:[NSURL fileURLWithPath:path] withAudioFromPath:[NSURL fileURLWithPath:audioTemp] atPath:[NSURL fileURLWithPath:finalTemp]];
} else
{
NSLog(#"Video writing failed: %#", videoWriter.error);
}
}]; // end videoWriter finishWriting Block
CVPixelBufferRelease(buffer);
CVPixelBufferPoolRelease(adaptor.pixelBufferPool);
NSLog (#"Done");
break;
}
}
}
}
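For context, the pixelBufferFromCGImage:andSize: helper is not shown in the question. A typical implementation of that pattern looks like the sketch below (not the asker's actual code). The buffers it returns are owned by the caller, so every buffer appended above must eventually be balanced by a CVPixelBufferRelease, and forgetting the CGContext/CGColorSpace releases inside the helper is another classic source of exactly this kind of leak.
- (CVPixelBufferRef)pixelBufferFromCGImage:(CGImageRef)image andSize:(CGSize)size
{
NSDictionary *options = @{ (NSString *)kCVPixelBufferCGImageCompatibilityKey : @YES,
(NSString *)kCVPixelBufferCGBitmapContextCompatibilityKey : @YES };
CVPixelBufferRef pxbuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, size.width, size.height,
kCVPixelFormatType_32ARGB, (__bridge CFDictionaryRef)options, &pxbuffer);
if (status != kCVReturnSuccess || pxbuffer == NULL) return NULL;
CVPixelBufferLockBaseAddress(pxbuffer, 0);
void *pxdata = CVPixelBufferGetBaseAddress(pxbuffer);
CGColorSpaceRef rgbColorSpace = CGColorSpaceCreateDeviceRGB();
CGContextRef context = CGBitmapContextCreate(pxdata, size.width, size.height, 8,
CVPixelBufferGetBytesPerRow(pxbuffer), rgbColorSpace,
(CGBitmapInfo)kCGImageAlphaNoneSkipFirst);
CGContextDrawImage(context, CGRectMake(0, 0, size.width, size.height), image);
// These two releases are easy to forget and then leak once per frame:
CGContextRelease(context);
CGColorSpaceRelease(rgbColorSpace);
CVPixelBufferUnlockBaseAddress(pxbuffer, 0);
return pxbuffer; // +1 retain: the caller must CVPixelBufferRelease this
}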

AVAssetExportSession combine video files and freeze frame between videos

I have an app which combines video files together to make a long video. There could be a delay between videos (e.g. V1 starts at t=0s and runs for 5 seconds, V2 starts at t=10s). In this case, I want the video to freeze the last frame of V1 until V2 starts.
I'm using the code below, but between videos, the whole video goes white.
Any ideas how I can get the effect I'm looking for?
Thanks!
@interface VideoJoins : NSObject
-(instancetype)initWithURL:(NSURL*)url
andDelay:(NSTimeInterval)delay;
@property (nonatomic, strong) NSURL* url;
@property (nonatomic) NSTimeInterval delay;
@end
and
+(void)joinVideosSequentially:(NSArray*)videoJoins
withFileType:(NSString*)fileType
toOutput:(NSURL*)outputVideoURL
onCompletion:(dispatch_block_t) onCompletion
onError:(ErrorBlock) onError
onCancel:(dispatch_block_t) onCancel
{
//From original question on http://stackoverflow.com/questions/6575128/how-to-combine-video-clips-with-different-orientation-using-avfoundation
// Didn't add support for portrait+landscape.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime startTime = kCMTimeZero;
/*videoClipPaths is a array of paths of the video clips recorded*/
//for loop to combine clips into a single video
for (NSInteger i=0; i < [videoJoins count]; i++)
{
VideoJoins* vj = videoJoins[i];
NSURL *url = vj.url;
NSTimeInterval nextDelayTI = 0;
if(i+1 < [videoJoins count])
{
VideoJoins* vjNext = videoJoins[i+1];
nextDelayTI = vjNext.delay;
}
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
CMTime assetDuration = [asset duration];
CMTime assetDurationWithNextDelay = assetDuration;
if(nextDelayTI != 0)
{
CMTime nextDelay = CMTimeMakeWithSeconds(nextDelayTI, 1000000);
assetDurationWithNextDelay = CMTimeAdd(assetDuration, nextDelay);
}
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
//set the orientation
if(i == 0)
{
[compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
}
BOOL ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDurationWithNextDelay) ofTrack:videoTrack atTime:startTime error:nil];
ok = [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDuration) ofTrack:audioTrack atTime:startTime error:nil];
startTime = CMTimeAdd(startTime, assetDurationWithNextDelay);
}
//Delete output video if it exists
NSString* outputVideoString = [outputVideoURL path]; // use the filesystem path, not the URL string
if ([[NSFileManager defaultManager] fileExistsAtPath:outputVideoString])
{
[[NSFileManager defaultManager] removeItemAtPath:outputVideoString error:nil];
}
//export the combined video
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = outputVideoURL;
exporter.outputFileType = fileType;
exporter.shouldOptimizeForNetworkUse = YES;
[exporter exportAsynchronouslyWithCompletionHandler:^(void)
{
switch (exporter.status)
{
case AVAssetExportSessionStatusCompleted: {
onCompletion();
break;
}
case AVAssetExportSessionStatusFailed:
{
NSLog(#"Export Failed");
NSError* err = exporter.error;
NSLog(#"ExportSessionError: %#", [err localizedDescription]);
onError(err);
break;
}
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export Cancelled");
NSLog(#"ExportSessionError: %#", [exporter.error localizedDescription]);
onCancel();
break;
}
}];
}
EDIT: Got it working. Here is how I extract the images and generate the videos from those images:
+ (void)writeImageAsMovie:(UIImage*)image
toPath:(NSURL*)url
fileType:(NSString*)fileType
duration:(NSTimeInterval)duration
completion:(VoidBlock)completion
{
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:url
fileType:fileType
error:&error];
NSParameterAssert(videoWriter);
CGSize size = image.size;
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
//Write samples:
CMTime halfTime = CMTimeMakeWithSeconds(duration/2, 100000);
CMTime endTime = CMTimeMakeWithSeconds(duration, 100000);
CVPixelBufferRef buffer = [VideoCreator pixelBufferFromCGImage:image.CGImage];
[adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
[adaptor appendPixelBuffer:buffer withPresentationTime:halfTime];
[adaptor appendPixelBuffer:buffer withPresentationTime:endTime];
CVPixelBufferRelease(buffer); // the buffer is owned by this code and must be released
//Finish the session:
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:endTime];
[videoWriter finishWritingWithCompletionHandler:^{
if(videoWriter.error)
{
NSLog(#"Error:%#", [error localizedDescription]);
}
if(completion)
{
completion();
}
}];
}
+(void)generateVideoImageFromURL:(NSURL*)url
atTime:(CMTime)thumbTime
withMaxSize:(CGSize)maxSize
completion:(ImageBlock)handler
{
AVURLAsset *asset=[[AVURLAsset alloc] initWithURL:url options:nil];
if(!asset)
{
if(handler)
{
handler(nil);
}
return;
}
if(CMTIME_IS_POSITIVE_INFINITY(thumbTime))
{
thumbTime = asset.duration;
}
else if(CMTIME_IS_NEGATIVE_INFINITY(thumbTime) || CMTIME_IS_INVALID(thumbTime) || CMTIME_IS_INDEFINITE(thumbTime))
{
thumbTime = CMTimeMake(0, 30);
}
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform=TRUE;
generator.maximumSize = maxSize;
CMTime actualTime;
NSError* error;
CGImageRef image = [generator copyCGImageAtTime:thumbTime actualTime:&actualTime error:&error];
UIImage *thumb = [[UIImage alloc] initWithCGImage:image];
CGImageRelease(image);
if(handler)
{
handler(thumb);
}
}
AVMutableComposition can only stitch videos together. I did it by doing two things:
Extracting the last frame of the first video as an image.
Making a video using this image (the duration depends on your requirement).
Then you can compose these three videos (V1, V2 and your single-image video). Both tasks are very easy to do.
For extracting the image out of the video, look at this link. If you don't want to use MPMoviePlayerController, which is used by the accepted answer, then look at the other answer by Steve.
For making a video using the image, check out this link. That question is about an audio issue, but I don't think you need audio, so just look at the method mentioned in the question itself.
UPDATE:
There is an easier way, but it comes with a disadvantage. You can have two AVPlayers. The first one plays your video, which has white frames in between. The other one sits behind it, paused at the last frame of video 1. So when the middle part comes, you will see the second AVPlayer showing that last frame, and as a whole it will look like video 1 is paused. And trust me, the naked eye can't tell when the players were swapped. But the obvious disadvantage is that your exported video will still contain the blank frames. So if you are only going to play it back inside your app, you can go with this approach.
The first frame of a video asset is always black or white, so skip one frame ahead when inserting each clip:
CMTime delta = CMTimeMake(1, 25); //1 frame (if fps = 25)
CMTimeRange timeRangeInVideoAsset = CMTimeRangeMake(delta,clipVideoTrack.timeRange.duration);
nextVideoClipStartTime = CMTimeAdd(nextVideoClipStartTime, timeRangeInVideoAsset.duration);
I merged more than 400 short videos into one this way.

iOS video to audio file conversion [duplicate]

This question already has an answer here:
iPhone - Separating audio from a video file and saving it to a separate file
(1 answer)
Closed 9 years ago.
I managed to download a YouTube video using NSURLConnection and save it to the device. Now I want to convert this (I guess .mp4) file to an .mp3 audio file. Does anyone know a solution to this problem? Maybe there's a way to download only the audio from the video? That would save a lot of time.
First of all, you don't want to convert anything; that's slow. Instead you want to extract the audio stream from the mp4 file. You can do this by creating an AVMutableComposition containing only the audio track of the original file and then exporting the composition with an AVAssetExportSession. This code is currently m4a-centric; if you want to handle both m4a and mp3 output, check the audio track type, make sure to set the right file extension, and choose between AVFileTypeMPEGLayer3 and AVFileTypeAppleM4A in the export session.
NSURL* dstURL = [NSURL fileURLWithPath:dstPath];
[[NSFileManager defaultManager] removeItemAtURL:dstURL error:nil];
AVMutableComposition* newAudioAsset = [AVMutableComposition composition];
AVMutableCompositionTrack* dstCompositionTrack;
dstCompositionTrack = [newAudioAsset addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
AVAsset* srcAsset = [AVURLAsset URLAssetWithURL:srcURL options:nil];
AVAssetTrack* srcTrack = [[srcAsset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
CMTimeRange timeRange = srcTrack.timeRange;
NSError* error;
if(NO == [dstCompositionTrack insertTimeRange:timeRange ofTrack:srcTrack atTime:kCMTimeZero error:&error]) {
NSLog(#"track insert failed: %#\n", error);
return;
}
AVAssetExportSession* exportSesh = [[AVAssetExportSession alloc] initWithAsset:newAudioAsset presetName:AVAssetExportPresetPassthrough];
exportSesh.outputFileType = AVFileTypeAppleM4A;
exportSesh.outputURL = dstURL;
[exportSesh exportAsynchronouslyWithCompletionHandler:^{
AVAssetExportSessionStatus status = exportSesh.status;
NSLog(#"exportAsynchronouslyWithCompletionHandler: %i\n", status);
if(AVAssetExportSessionStatusFailed == status) {
NSLog(#"FAILURE: %#\n", exportSesh.error);
} else if(AVAssetExportSessionStatusCompleted == status) {
NSLog(#"SUCCESS!\n");
}
}];
