I'm trying to use AVMutableComposition to play a sequence of sound files at precise times.
When the view loads, I create the composition with the intent of playing 4 sounds evenly spaced over 1 second. It shouldn't matter how long or short the sounds are, I just want to fire them at exactly 0, 0.25, 0.5 and 0.75 seconds:
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
for (NSInteger i = 0; i < 4; i++)
{
AVMutableCompositionTrack* track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
NSURL *url = [[NSBundle mainBundle] URLForResource:[NSString stringWithFormat:@"sound_file_%i", i] withExtension:@"caf"];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:options];
AVAssetTrack *assetTrack = [asset tracksWithMediaType:AVMediaTypeAudio].firstObject;
CMTimeRange timeRange = [assetTrack timeRange];
Float64 t = i * 0.25;
NSError *error;
BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMakeWithSeconds(t, 1) error:&error];
if (!success)
{
NSLog(#"unsuccesful creation of composition");
}
if (error)
{
NSLog(#"composition creation error: %#", error);
}
}
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];
The composition is created successfully with no errors. Later, when I want to play the sequence I do this:
[self.avPlayer seekToTime:CMTimeMakeWithSeconds(0, 1)];
[self.avPlayer play];
For some reason, the sounds are not evenly spaced at all - but play almost all at once. I tried the same thing spaced over 4 seconds, replacing the time calculation like this:
Float64 t = i * 1.0;
And this plays perfectly. Any time interval under 1 second seems to generate unexpected results. What am I missing? Are AVCompositions not supposed to be used for time intervals under 1 second? Or perhaps I'm misunderstanding the time intervals?
Your CMTimeMakeWithSeconds(t, 1) works in whole-second 'slices' because your timescale is set to 1. Whatever fraction t is, the atTime: value gets snapped to a whole second, so your sounds pile up at the start instead of landing 0.25 seconds apart. This is why it works when you increase the spacing to 1 second (t = i * 1.0).
You need to set the timescale to 4 to get your desired 0.25 second slices. Since the CMTime is now in 0.25 second slices, you won't need the i * 0.25 calculation. Just use i directly: atTime:CMTimeMake(i, 4)
If you might need to get more precise in the future, you should account for it now so you won't have to adjust your code later. Apple recommends using a timescale of 600, as it is a multiple of the common video framerates (24, 25, and 30 FPS), but it works fine for audio-only compositions too. At that timescale, 0.25 seconds is 150 slices, so for your situation you would use atTime:CMTimeMake(i * 150, 600).
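Applied to the loop from the question, only the atTime: argument changes. A minimal sketch:
// 0.25 second slices at timescale 4; CMTimeMake(i * 150, 600) is equivalent
BOOL success = [track insertTimeRange:timeRange
                              ofTrack:assetTrack
                               atTime:CMTimeMake(i, 4)
                                error:&error];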
As for your issue of all 4 sounds playing almost all at once, be aware of this unanswered SO question where it only happens on the first play. Even with the changes above, you might still run into this issue.
Unless each track is exactly 0.25 seconds long, this is your problem:
Float64 t = i * 0.25;
NSError *error;
BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:CMTimeMakeWithSeconds(t, 1) error:&error];
You need to keep track of the cumulative time range added so far, and insert the next track at that time:
CMTime currentTime = kCMTimeZero;
for (NSInteger i = 0; i < 4; i++) {
/* Code to create track for insertion */
CMTimeRange trackTimeRange = [assetTrack timeRange];
BOOL success = [track insertTimeRange:trackTimeRange
ofTrack:assetTrack
atTime:currentTime
error:&error];
/* Error checking code */
//Update time range for insertion
currentTime = CMTimeAdd(currentTime,trackTimeRange.duration);
}
I changed your code a bit; sorry, I had no time to test it.
AVMutableComposition *composition = [AVMutableComposition composition];
NSDictionary *options = @{AVURLAssetPreferPreciseDurationAndTimingKey : @YES};
CMTime totalDuration = kCMTimeZero;
for (NSInteger i = 0; i < 4; i++)
{
AVMutableCompositionTrack* track = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:[NSString stringWithFormat:@"Record_%i", i] ofType:@"caf"]];
AVURLAsset *asset = [[AVURLAsset alloc] initWithURL:url options:options];
AVAssetTrack *assetTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
CMTimeRange timeRange = [assetTrack timeRange];
NSError *error;
BOOL success = [track insertTimeRange:timeRange ofTrack:assetTrack atTime:totalDuration error:&error]; // insert at the running total; the 0.25 s gap is added when totalDuration advances below
if (!success)
{
NSLog(#"unsuccesful creation of composition");
}
if (error)
{
NSLog(#"composition creation error: %#", error);
}
totalDuration = CMTimeAdd(CMTimeAdd(totalDuration, asset.duration), CMTimeMake(1, 4)); // the clip's duration plus a 0.25 s gap before the next clip
}
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
self.avPlayer = [[AVPlayer alloc] initWithPlayerItem:playerItem];
P.S. use kCMTimeZero instead of CMTimeMakeWithSeconds(0, 1).
[Edit: I was able to figure out a workaround for this, see below.]
I'm trying to stream multiple remote MP4 clips from S3, playing them in a sequence as one continuous video (to enable scrubbing within and between clips) with no stuttering, without explicitly downloading them to the device first. However, I find that the clips buffer very slowly (even on a fast network connection) and have been unable to find an adequate way to address that.
I've been trying to use AVPlayer for this, since AVPlayer with AVMutableComposition plays the supplied video tracks as one continuous track (unlike AVQueuePlayer, which I gather plays each video separately and thus doesn't support continuous scrubbing between the clips).
When I stick one of the assets directly into an AVPlayerItem and play that (with no AVMutableComposition), it buffers fast. But using AVMutableComposition, the video starts stuttering very badly on the second clip (my test case has 6 clips, each around 6 seconds), while the audio keeps going. After it plays through once, it plays perfectly smoothly if I rewind to the beginning, so I assume the problem lies in the buffering.
My current attempt to fix this problem feels convoluted, given that this seems like a rather basic use-case for AVPlayer - I do hope there's a simpler solution to all this that works properly. Somehow I doubt that the buffering player I use below is really necessary, but I'm running out of ideas.
Here's the main code that sets up the AVMutableComposition:
// Build an AVAsset for each of the source URIs
- (void)prepareAssetsForSources:(NSArray *)sources
{
NSMutableArray *assets = [[NSMutableArray alloc] init]; // the assets to be used in the AVMutableComposition
NSMutableArray *offsets = [[NSMutableArray alloc] init]; // for tracking buffering progress
CMTime currentOffset = kCMTimeZero;
for (NSDictionary* source in sources) {
bool isNetwork = [RCTConvert BOOL:[source objectForKey:@"isNetwork"]];
bool isAsset = [RCTConvert BOOL:[source objectForKey:@"isAsset"]];
NSString *uri = [source objectForKey:@"uri"];
NSString *type = [source objectForKey:@"type"];
NSURL *url = isNetwork ?
[NSURL URLWithString:uri] :
[[NSURL alloc] initFileURLWithPath:[[NSBundle mainBundle] pathForResource:uri ofType:type]];
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
currentOffset = CMTimeAdd(currentOffset, asset.duration);
[assets addObject:asset];
[offsets addObject:[NSNumber numberWithFloat:CMTimeGetSeconds(currentOffset)]];
}
_clipAssets = assets;
_clipEndOffsets = offsets;
}
// Called with _clipAssets
- (AVPlayerItem*)playerItemForAssets:(NSMutableArray *)assets
{
AVMutableComposition* composition = [AVMutableComposition composition];
for (AVAsset* asset in assets) {
CMTimeRange editRange = CMTimeRangeMake(CMTimeMake(0, 600), asset.duration);
NSError *editError;
[composition insertTimeRange:editRange
ofAsset:asset
atTime:composition.duration
error:&editError];
}
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
return playerItem; // this is used to initialize the main player
}
My initial thought was: Since it buffers fast with a vanilla AVPlayerItem, why not maintain a separate buffering player that's loaded with each asset in turn (with no AVMutableComposition) to buffer the assets for the main player?
- (void)startBufferingClips
{
_bufferingPlayerItem = [AVPlayerItem playerItemWithAsset:_clipAssets[0]
automaticallyLoadedAssetKeys:@[@"tracks"]];
_bufferingPlayer = [AVPlayer playerWithPlayerItem:_bufferingPlayerItem];
_currentlyBufferingIndex = 0;
}
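For reference, the periodic observer that drives the update method below was registered roughly like this (a sketch; it assumes _player is the main player, and the token returned by the API should be retained if you ever need to remove the observer):
__weak typeof(self) weakSelf = self;
[_player addPeriodicTimeObserverForInterval:CMTimeMake(1, 4) // 250 msecs
                                      queue:dispatch_get_main_queue()
                                 usingBlock:^(CMTime time) {
    [weakSelf updateBufferingProgress];
}];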
// called every 250 msecs via an addPeriodicTimeObserverForInterval on the main player
- (void)updateBufferingProgress
{
// If the playable (loaded) range is within 100 milliseconds of the clip
// currently being buffered, load the next clip into the buffering player.
float playableDuration = [[self calculateBufferedDuration] floatValue];
CMTime totalDurationTime = [self playerItemDuration :_bufferingPlayer];
Float64 totalDurationSeconds = CMTimeGetSeconds(totalDurationTime);
bool bufferingComplete = totalDurationSeconds - playableDuration < 0.1;
float bufferedSeconds = [self bufferedSeconds :playableDuration];
float playerTimeSeconds = CMTimeGetSeconds([_player currentTime]);
__block NSUInteger playingClipIndex = 0;
// find the index of _player's currently playing clip
[_clipEndOffsets enumerateObjectsUsingBlock:^(id offset, NSUInteger idx, BOOL *stop) {
if (playerTimeSeconds < [offset floatValue]) {
playingClipIndex = idx;
*stop = YES;
}
}];
// TODO: if bufferedSeconds - playerTimeSeconds <= 0, pause the main player
if (bufferingComplete && _currentlyBufferingIndex < [_clipAssets count] - 1) {
// We're done buffering this clip, load the buffering player with the next asset
_currentlyBufferingIndex += 1;
_bufferingPlayerItem = [AVPlayerItem playerItemWithAsset:_clipAssets[_currentlyBufferingIndex]
automaticallyLoadedAssetKeys:@[@"tracks"]];
_bufferingPlayer = [AVPlayer playerWithPlayerItem:_bufferingPlayerItem];
}
}
- (float)bufferedSeconds:(float)playableDuration {
__block float seconds = 0.0; // total duration of clips already buffered
if (_currentlyBufferingIndex > 0) {
[_clipEndOffsets enumerateObjectsUsingBlock:^(id offset, NSUInteger idx, BOOL *stop) {
if (idx + 1 >= _currentlyBufferingIndex) {
seconds = [offset floatValue];
*stop = YES;
}
}];
}
return seconds + playableDuration;
}
- (NSNumber *)calculateBufferedDuration {
AVPlayerItem *video = _bufferingPlayer.currentItem;
if (video.status == AVPlayerItemStatusReadyToPlay) {
__block float longestPlayableRangeSeconds = 0.0; // must be initialized; reading an uninitialized __block float is undefined
[video.loadedTimeRanges enumerateObjectsUsingBlock:^(id obj, NSUInteger idx, BOOL *stop) {
CMTimeRange timeRange = [obj CMTimeRangeValue];
float seconds = CMTimeGetSeconds(CMTimeRangeGetEnd(timeRange));
if (seconds > 0.1 && seconds > longestPlayableRangeSeconds) {
longestPlayableRangeSeconds = seconds;
}
}];
if (longestPlayableRangeSeconds > 0) {
return [NSNumber numberWithFloat:longestPlayableRangeSeconds];
}
}
return [NSNumber numberWithInteger:0];
}
It initially seemed that this worked like a charm, but then I switched to another set of test clips and then the buffering was very slow again (the buffering player helped, but not enough). It seems like the loadedTimeRanges for the assets as loaded into the buffering player didn't match the loadedTimeRanges for the same assets inside the AVMutableComposition: Even after the loadedTimeRanges for each item loaded into the buffering player indicated that the whole asset had been buffered, the main player's video continued stuttering (while the audio played seamlessly to the end). Again, the playback was seamless after rewinding once the main player had run through all the clips once.
I hope the answer to this, whatever it is, will prove useful as a starting point for other iOS developers trying to implement this basic use-case. Thanks!
Edit: Since I posted this question, I made the following workaround for this. Hopefully this will save whoever runs into this some headache.
What I ended up doing was maintaining two buffering players (both AVPlayers) that started buffering the first two clips, moving on to the lowest-indexed unbuffered clip after their loadedTimeRanges indicated that buffering for their current clip was done. I made the logic pause/unpause playback based on the clips currently buffered, and the loadedTimeRanges of the buffering players, plus a small margin. This needed a few bookkeeping variables, but wasn't too complicated.
This is how the buffering players were initialized (I'm omitting the bookkeeping logic here):
- (void)startBufferingClips
{
_bufferingPlayerItemA = [AVPlayerItem playerItemWithAsset:_clipAssets[0]
automaticallyLoadedAssetKeys:@[@"tracks"]];
_bufferingPlayerA = [AVPlayer playerWithPlayerItem:_bufferingPlayerItemA];
_currentlyBufferingIndexA = [NSNumber numberWithInt:0];
if ([_clipAssets count] > 1) {
_bufferingPlayerItemB = [AVPlayerItem playerItemWithAsset:_clipAssets[1]
automaticallyLoadedAssetKeys:@[@"tracks"]];
_bufferingPlayerB = [AVPlayer playerWithPlayerItem:_bufferingPlayerItemB];
_currentlyBufferingIndexB = [NSNumber numberWithInt:1];
_nextIndexToBuffer = [NSNumber numberWithInt:2];
} else {
_nextIndexToBuffer = [NSNumber numberWithInt:1];
}
}
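The pause/unpause bookkeeping I mentioned looked roughly like this (a simplified sketch run from the periodic observer; _bufferedUpToSeconds and _waitingForBuffer are hypothetical variables combining the end offsets of fully buffered clips with the buffering players' loadedTimeRanges):
float playerTimeSeconds = CMTimeGetSeconds([_player currentTime]);
float margin = 2.0; // small safety margin, in seconds
if (_bufferedUpToSeconds - playerTimeSeconds < margin && _player.rate != 0.0) {
    [_player pause]; // the playhead is about to outrun the buffered range
    _waitingForBuffer = YES;
} else if (_waitingForBuffer && _bufferedUpToSeconds - playerTimeSeconds >= margin) {
    _waitingForBuffer = NO;
    [_player play];
}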
In addition, I needed to make sure that the video and audio tracks weren't being merged as they were added to the AVMutableComposition, as this apparently interfered with the buffering (perhaps they didn't register as the same video/audio tracks as those the buffering players were loading, and thus didn't receive the new data). Here's the code where the AVMutableComposition is built from an array of AVAssets:
- (AVPlayerItem*)playerItemForAssets:(NSMutableArray *)assets
{
AVMutableComposition* composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo
preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *compAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio
preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime timeOffset = kCMTimeZero;
for (AVAsset* asset in assets) {
CMTimeRange editRange = CMTimeRangeMake(CMTimeMake(0, 600), asset.duration);
NSError *editError;
NSArray *videoTracks = [asset tracksWithMediaType:AVMediaTypeVideo];
NSArray *audioTracks = [asset tracksWithMediaType:AVMediaTypeAudio];
if ([videoTracks count] > 0) {
AVAssetTrack *videoTrack = [videoTracks objectAtIndex:0];
[compVideoTrack insertTimeRange:editRange
ofTrack:videoTrack
atTime:timeOffset
error:&editError];
}
if ([audioTracks count] > 0) {
AVAssetTrack *audioTrack = [audioTracks objectAtIndex:0];
[compAudioTrack insertTimeRange:editRange
ofTrack:audioTrack
atTime:timeOffset
error:&editError];
}
if ([videoTracks count] > 0 || [audioTracks count] > 0) {
timeOffset = CMTimeAdd(timeOffset, asset.duration);
}
}
AVPlayerItem* playerItem = [AVPlayerItem playerItemWithAsset:composition];
return playerItem;
}
With this approach, buffering while using AVMutableComposition for the main player works nice and fast, at least in my setup.
I have a set of video clips that I would like to merge together and then put a watermark on it.
I am able to do both functions individually; however, problems arise when performing them together.
All clips that will be merged are either 1920x1080 or 960x540.
For some reason, AVAssetExportSession does not display them well together.
Here are the 2 bugs based on 3 different scenarios:
This image is a result of:
Merging Clips together
As you can see, there is nothing wrong here, the output video produces the desired effect.
However, when I then try to add a watermark, it creates the following issue:
This image is a result of:
Merging Clips together
Putting a watermark on it
BUG 1: Some clips in the video get resized for whatever reason while other clips do not.
This image is a result of:
Merging Clips together
Resizing clips that are 960x540 to 1920x1080
Putting a watermark on it
Bug 2: Now the clips that need to be resized get resized; however, the old unresized clip is still there.
Merging/Resizing Code:
-(void) mergeClips{
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
AVMutableComposition *mixComposition = [[AVMutableComposition alloc] init];
AVMutableCompositionTrack *mutableVideoTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *mutableAudioTrack = [mixComposition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
// loop through the list of videos and add them to the track
CMTime currentTime = kCMTimeZero;
NSMutableArray* instructionArray = [[NSMutableArray alloc] init];
if (_clipsArray){
for (int i = 0; i < (int)[_clipsArray count]; i++){
NSURL* url = [_clipsArray objectAtIndex:i];
AVAsset *asset = [AVAsset assetWithURL:url];
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
CGSize size = videoTrack.naturalSize;
CGFloat widthScale = 1920.0f/size.width;
CGFloat heightScale = 1080.0f/size.height;
// lines that performs resizing
AVMutableVideoCompositionLayerInstruction *layerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack:mutableVideoTrack];
CGAffineTransform scale = CGAffineTransformMakeScale(widthScale,heightScale);
CGAffineTransform move = CGAffineTransformMakeTranslation(0,0);
[layerInstruction setTransform:CGAffineTransformConcat(scale, move) atTime:currentTime];
[instructionArray addObject:layerInstruction];
[mutableVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
ofTrack:videoTrack
atTime:currentTime error:nil];
[mutableAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
ofTrack:audioTrack
atTime:currentTime error:nil];
currentTime = CMTimeAdd(currentTime, asset.duration); // simpler, and avoids the float round-trip through seconds
}
}
AVMutableVideoCompositionInstruction * mainInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, currentTime);
mainInstruction.layerInstructions = instructionArray;
// 4 - Get path
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *lastPostedDayPath = [documentsDirectory stringByAppendingPathComponent:@"lastPostedDay"];
//Check if folder exists, if not create folder
if (![[NSFileManager defaultManager] fileExistsAtPath:lastPostedDayPath]){
[[NSFileManager defaultManager] createDirectoryAtPath:lastPostedDayPath withIntermediateDirectories:NO attributes:nil error:nil];
}
NSString *fileName = [NSString stringWithFormat:@"%li_%li_%li.mov", (long)_month, (long)_day, (long)_year];
NSString *finalDayPath = [lastPostedDayPath stringByAppendingPathComponent:fileName];
NSURL *url = [NSURL fileURLWithPath:finalDayPath];
BOOL fileExists = [[NSFileManager defaultManager] fileExistsAtPath:finalDayPath];
if (fileExists){
NSLog(#"file exists");
[[NSFileManager defaultManager] removeItemAtURL:url error:nil];
}
AVMutableVideoComposition *mainComposition = [AVMutableVideoComposition videoComposition];
mainComposition.instructions = [NSArray arrayWithObject:mainInstruction];
mainComposition.frameDuration = CMTimeMake(1, 30);
mainComposition.renderSize = CGSizeMake(1920.0f, 1080.0f);
// 5 - Create exporter
_exportSession = [[AVAssetExportSession alloc] initWithAsset:mixComposition
presetName:AVAssetExportPresetHighestQuality];
_exportSession.outputURL=url;
_exportSession.outputFileType = AVFileTypeQuickTimeMovie;
_exportSession.shouldOptimizeForNetworkUse = YES;
_exportSession.videoComposition = mainComposition;
[_exportSession exportAsynchronouslyWithCompletionHandler:^{
[merge_timer invalidate];
merge_timer = nil;
switch (_exportSession.status) {
case AVAssetExportSessionStatusFailed:
NSLog(#"Export failed -> Reason: %#, User Info: %#",
_exportSession.error.localizedDescription,
_exportSession.error.userInfo.description);
[self showSavingFailedDialog];
break;
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export cancelled");
[self showSavingFailedDialog];
break;
case AVAssetExportSessionStatusCompleted:
NSLog(#"Export finished");
[self addWatermarkToExportSession:_exportSession];
break;
default:
break;
}
}];
});
}
Once it finishes this, I run it through a different export session that simply adds a watermark.
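For context, that second pass looks roughly like this (a hypothetical sketch, not my exact code: mergedAsset is the first export's output loaded as an AVAsset, and watermarkImage is a UIImage):
AVMutableVideoComposition *watermarkComposition =
    [AVMutableVideoComposition videoCompositionWithPropertiesOfAsset:mergedAsset];
CALayer *watermarkLayer = [CALayer layer];
watermarkLayer.contents = (id)watermarkImage.CGImage;
watermarkLayer.frame = CGRectMake(20, 20, 320, 120); // arbitrary placement
CALayer *videoLayer = [CALayer layer];
CALayer *parentLayer = [CALayer layer];
parentLayer.frame = CGRectMake(0, 0, 1920.0f, 1080.0f);
videoLayer.frame = parentLayer.frame;
[parentLayer addSublayer:videoLayer];
[parentLayer addSublayer:watermarkLayer]; // watermark drawn on top of the video
watermarkComposition.animationTool = [AVVideoCompositionCoreAnimationTool
    videoCompositionCoreAnimationToolWithPostProcessingAsVideoLayer:videoLayer
                                                             inLayer:parentLayer];
// then: secondExportSession.videoComposition = watermarkComposition;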
Is there something I am doing wrong in my code or process?
Is there an easier way for achieving this?
Thank you for your time!
I was able to solve my issue.
For some reason, AVAssetExportSession will not actually create a 'flat' video file of the merged clips, so it still recognized the lower-resolution clips and their locations when adding the watermark, which caused them to resize.
What I did to solve this was, first use AVAssetWriter to merge my clips and create one 'flat' file. I then could add a watermark without having a resizing issue.
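A rough sketch of that flattening pass (untested, error handling omitted; flatURL is a placeholder output location, mixComposition and mainComposition come from the merge code above, and audio would be copied the same way with a second output/input pair):
NSError *error = nil;
AVAssetReader *reader = [AVAssetReader assetReaderWithAsset:mixComposition error:&error];
AVAssetReaderVideoCompositionOutput *videoOutput = [AVAssetReaderVideoCompositionOutput
    assetReaderVideoCompositionOutputWithVideoTracks:[mixComposition tracksWithMediaType:AVMediaTypeVideo]
                                       videoSettings:nil];
videoOutput.videoComposition = mainComposition; // bakes the resize into the decoded frames
[reader addOutput:videoOutput];
AVAssetWriter *writer = [AVAssetWriter assetWriterWithURL:flatURL fileType:AVFileTypeQuickTimeMovie error:&error];
AVAssetWriterInput *videoInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo
    outputSettings:@{AVVideoCodecKey : AVVideoCodecH264,
                     AVVideoWidthKey : @1920,
                     AVVideoHeightKey : @1080}];
[writer addInput:videoInput];
[reader startReading];
[writer startWriting];
[writer startSessionAtSourceTime:kCMTimeZero];
dispatch_queue_t queue = dispatch_queue_create("flatten", NULL);
[videoInput requestMediaDataWhenReadyOnQueue:queue usingBlock:^{
    while (videoInput.isReadyForMoreMediaData) {
        CMSampleBufferRef sample = [videoOutput copyNextSampleBuffer];
        if (!sample) {
            [videoInput markAsFinished];
            [writer finishWritingWithCompletionHandler:^{
                // run the watermark export on the flat file here
            }];
            break;
        }
        [videoInput appendSampleBuffer:sample];
        CFRelease(sample);
    }
}];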
Hope this helps anyone who may come across this problem in the future!
I also encountered the same problem. You can set the opacity to 0.0 once a video ends, like this:
[layerInstruction setOpacity:0.0 atTime:duration];
I have an app which combines video files together to make a long video. There could be a delay between videos (e.g. V1 starts at t=0s and runs for 5 seconds, V2 starts at t=10s). In this case, I want the video to freeze the last frame of V1 until V2 starts.
I'm using the code below, but between videos, the whole video goes white.
Any ideas how I can get the effect I'm looking for?
Thanks!
@interface VideoJoins : NSObject
-(instancetype)initWithURL:(NSURL*)url
andDelay:(NSTimeInterval)delay;
@property (nonatomic, strong) NSURL* url;
@property (nonatomic) NSTimeInterval delay;
@end
and
+(void)joinVideosSequentially:(NSArray*)videoJoins
withFileType:(NSString*)fileType
toOutput:(NSURL*)outputVideoURL
onCompletion:(dispatch_block_t) onCompletion
onError:(ErrorBlock) onError
onCancel:(dispatch_block_t) onCancel
{
//From original question on http://stackoverflow.com/questions/6575128/how-to-combine-video-clips-with-different-orientation-using-avfoundation
// Didn't add support for portrait+landscape.
AVMutableComposition *composition = [AVMutableComposition composition];
AVMutableCompositionTrack *compositionVideoTrack = [composition addMutableTrackWithMediaType:AVMediaTypeVideo preferredTrackID:kCMPersistentTrackID_Invalid];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
CMTime startTime = kCMTimeZero;
/*videoClipPaths is a array of paths of the video clips recorded*/
//for loop to combine clips into a single video
for (NSInteger i=0; i < [videoJoins count]; i++)
{
VideoJoins* vj = videoJoins[i];
NSURL *url = vj.url;
NSTimeInterval nextDelayTI = 0;
if(i+1 < [videoJoins count])
{
VideoJoins* vjNext = videoJoins[i+1];
nextDelayTI = vjNext.delay;
}
AVURLAsset *asset = [AVURLAsset URLAssetWithURL:url options:nil];
CMTime assetDuration = [asset duration];
CMTime assetDurationWithNextDelay = assetDuration;
if(nextDelayTI != 0)
{
CMTime nextDelay = CMTimeMakeWithSeconds(nextDelayTI, 1000000);
assetDurationWithNextDelay = CMTimeAdd(assetDuration, nextDelay);
}
AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
AVAssetTrack *audioTrack = [[asset tracksWithMediaType:AVMediaTypeAudio] objectAtIndex:0];
//set the orientation
if(i == 0)
{
[compositionVideoTrack setPreferredTransform:videoTrack.preferredTransform];
}
BOOL ok = [compositionVideoTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDurationWithNextDelay) ofTrack:videoTrack atTime:startTime error:nil];
ok = [compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, assetDuration) ofTrack:audioTrack atTime:startTime error:nil];
startTime = CMTimeAdd(startTime, assetDurationWithNextDelay);
}
//Delete output video if it exists
NSString* outputVideoPath = [outputVideoURL path]; // NSFileManager wants a file-system path, not absoluteString
if ([[NSFileManager defaultManager] fileExistsAtPath:outputVideoPath])
{
[[NSFileManager defaultManager] removeItemAtPath:outputVideoPath error:nil];
}
//export the combined video
AVAssetExportSession *exporter = [[AVAssetExportSession alloc] initWithAsset:composition
presetName:AVAssetExportPresetHighestQuality];
exporter.outputURL = outputVideoURL;
exporter.outputFileType = fileType;
exporter.shouldOptimizeForNetworkUse = YES;
[exporter exportAsynchronouslyWithCompletionHandler:^(void)
{
switch (exporter.status)
{
case AVAssetExportSessionStatusCompleted: {
onCompletion();
break;
}
case AVAssetExportSessionStatusFailed:
{
NSLog(#"Export Failed");
NSError* err = exporter.error;
NSLog(#"ExportSessionError: %#", [err localizedDescription]);
onError(err);
break;
}
case AVAssetExportSessionStatusCancelled:
NSLog(#"Export Cancelled");
NSLog(#"ExportSessionError: %#", [exporter.error localizedDescription]);
onCancel();
break;
}
}];
}
EDIT: Got it working. Here is how I extract the images and generate the videos from those images:
+ (void)writeImageAsMovie:(UIImage*)image
toPath:(NSURL*)url
fileType:(NSString*)fileType
duration:(NSTimeInterval)duration
completion:(VoidBlock)completion
{
NSError *error = nil;
AVAssetWriter *videoWriter = [[AVAssetWriter alloc] initWithURL:url
fileType:fileType
error:&error];
NSParameterAssert(videoWriter);
CGSize size = image.size;
NSDictionary *videoSettings = [NSDictionary dictionaryWithObjectsAndKeys:
AVVideoCodecH264, AVVideoCodecKey,
[NSNumber numberWithInt:size.width], AVVideoWidthKey,
[NSNumber numberWithInt:size.height], AVVideoHeightKey,
nil];
AVAssetWriterInput* writerInput = [AVAssetWriterInput
assetWriterInputWithMediaType:AVMediaTypeVideo
outputSettings:videoSettings];
AVAssetWriterInputPixelBufferAdaptor *adaptor = [AVAssetWriterInputPixelBufferAdaptor
assetWriterInputPixelBufferAdaptorWithAssetWriterInput:writerInput
sourcePixelBufferAttributes:nil];
NSParameterAssert(writerInput);
NSParameterAssert([videoWriter canAddInput:writerInput]);
[videoWriter addInput:writerInput];
//Start a session:
[videoWriter startWriting];
[videoWriter startSessionAtSourceTime:kCMTimeZero];
//Write samples:
CMTime halfTime = CMTimeMakeWithSeconds(duration/2, 100000);
CMTime endTime = CMTimeMakeWithSeconds(duration, 100000);
CVPixelBufferRef buffer = [VideoCreator pixelBufferFromCGImage:image.CGImage];
[adaptor appendPixelBuffer:buffer withPresentationTime:kCMTimeZero];
[adaptor appendPixelBuffer:buffer withPresentationTime:halfTime];
[adaptor appendPixelBuffer:buffer withPresentationTime:endTime];
CVPixelBufferRelease(buffer); // assuming pixelBufferFromCGImage returns a retained (+1) buffer
//Finish the session:
[writerInput markAsFinished];
[videoWriter endSessionAtSourceTime:endTime];
[videoWriter finishWritingWithCompletionHandler:^{
if(videoWriter.error)
{
NSLog(#"Error:%#", [error localizedDescription]);
}
if(completion)
{
completion();
}
}];
}
+(void)generateVideoImageFromURL:(NSURL*)url
atTime:(CMTime)thumbTime
withMaxSize:(CGSize)maxSize
completion:(ImageBlock)handler
{
AVURLAsset *asset=[[AVURLAsset alloc] initWithURL:url options:nil];
if(!asset)
{
if(handler)
{
handler(nil);
}
return; // bail out either way; the code below needs a valid asset
}
if(CMTIME_IS_POSITIVE_INFINITY(thumbTime))
{
thumbTime = asset.duration;
}
else if(CMTIME_IS_NEGATIVE_INFINITY(thumbTime) || CMTIME_IS_INVALID(thumbTime) || CMTIME_IS_INDEFINITE(thumbTime))
{
thumbTime = CMTimeMake(0, 30);
}
AVAssetImageGenerator *generator = [[AVAssetImageGenerator alloc] initWithAsset:asset];
generator.appliesPreferredTrackTransform=TRUE;
generator.maximumSize = maxSize;
CMTime actualTime;
NSError* error;
CGImageRef image = [generator copyCGImageAtTime:thumbTime actualTime:&actualTime error:&error];
UIImage *thumb = [[UIImage alloc] initWithCGImage:image];
CGImageRelease(image);
if(handler)
{
handler(thumb);
}
}
AVMutableComposition can only stitch videos together. I did it by doing two things:
Extracting the last frame of the first video as an image.
Making a video using this image (the duration depends on your requirement).
Then you can compose these three videos (V1, V2 and your single-image video). Both tasks are very easy to do.
For extracting the image out of the video, look at this link. If you don't want to use MPMoviePlayerController, which is used by the accepted answer, then look at the other answer by Steve.
For making a video using the image, check out this link. That question is about an audio issue, but I don't think you need audio, so just look at the method mentioned in the question itself.
UPDATE:
There is an easier way, but it comes with a disadvantage. You can have two AVPlayers. The first one plays your video, which has the white frames in between. The other one sits behind it, paused at the last frame of video 1. So when the middle part comes, you will see the second AVPlayer showing the last frame, and as a whole it will look like video 1 is paused. Trust me, the naked eye can't tell when the player is swapped. But the obvious disadvantage is that your exported video will still contain the blank frames. So if you are only going to play it back inside your app, you can go with this approach.
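A minimal sketch of that trick (hypothetical names: backupPlayerLayer shows the paused second player, and gapStart is when the blank section begins):
// backupPlayer is preloaded with video 1, seeked to its last frame, and paused.
// When the playhead reaches the gap, bring its layer to the front:
[player addBoundaryTimeObserverForTimes:@[[NSValue valueWithCMTime:gapStart]]
                                  queue:dispatch_get_main_queue()
                             usingBlock:^{
    backupPlayerLayer.hidden = NO; // the paused last frame now covers the blank frames
}];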
The first frame of a video asset is always black or white, so skip one frame at the start of each clip when inserting it:
CMTime delta = CMTimeMake(1, 25); //1 frame (if fps = 25)
CMTimeRange timeRangeInVideoAsset = CMTimeRangeMake(delta, clipVideoTrack.timeRange.duration);
nextVideoClipStartTime = CMTimeAdd(nextVideoClipStartTime, timeRangeInVideoAsset.duration);
Merged more than 400 short videos into one this way.
I have 3 videos that I am sequencing using an AVMutableComposition, then playing the video using an AVPlayer and grabbing the frames using an AVPlayerItemVideoOutput. The video sequence is as follows:
[Logo Video - n seconds][Main video - m seconds][Logo Video - l seconds]
The code looks like this:
// Build the composition.
pComposition = [AVMutableComposition composition];
// Fill in the assets that make up the composition
AVMutableCompositionTrack* pCompositionVideoTrack = [pComposition addMutableTrackWithMediaType: AVMediaTypeVideo preferredTrackID: 1];
AVMutableCompositionTrack* pCompositionAudioTrack = [pComposition addMutableTrackWithMediaType: AVMediaTypeAudio preferredTrackID: 2];
CMTime time = kCMTimeZero;
CMTimeRange keyTimeRange = kCMTimeRangeZero;
for( AVAsset* pAssetsAsset in pAssets )
{
AVAssetTrack* pAssetsAssetVideoTrack = [pAssetsAsset tracksWithMediaType: AVMediaTypeVideo].firstObject;
AVAssetTrack* pAssetsAssetAudioTrack = [pAssetsAsset tracksWithMediaType: AVMediaTypeAudio].firstObject;
CMTimeRange timeRange = CMTimeRangeMake( kCMTimeZero, pAssetsAsset.duration );
NSError* pVideoError = nil;
NSError* pAudioError = nil;
if ( pAssetsAssetVideoTrack )
{
[pCompositionVideoTrack insertTimeRange: timeRange ofTrack: pAssetsAssetVideoTrack atTime: time error: &pVideoError];
}
if ( pAssetsAssetAudioTrack )
{
[pCompositionAudioTrack insertTimeRange: timeRange ofTrack: pAssetsAssetAudioTrack atTime: time error: &pAudioError];
}
if ( pAssetsAsset == pKeyAsset )
{
keyTimeRange = CMTimeRangeMake( time, timeRange.duration );
}
NSLog( #"%#", [pVideoError description] );
NSLog( #"%#", [pAudioError description] );
time = CMTimeAdd( time, pAssetsAsset.duration );
}
The logo videos are silent and merely display my logo. I manually create these videos, so everything is perfect here. The "Main Video", however, can end up with the wrong orientation. To combat this, an AVMutableVideoComposition looks like the perfect way forward, so I set up a simple video composition that applies a setTransform as follows:
pAsset = pComposition;
pPlayerItem = [AVPlayerItem playerItemWithAsset: pAsset];
pPlayer = [AVPlayer playerWithPlayerItem: pPlayerItem];
NSArray* pPlayerTracks = [pAsset tracksWithMediaType: AVMediaTypeVideo];
AVAssetTrack* pPlayerTrack = pPlayerTracks[0];
pVideoCompositionLayerInstruction = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstruction];
[pVideoCompositionLayerInstruction setTransform: [[pKeyAsset tracksWithMediaType: AVMediaTypeVideo].firstObject preferredTransform] atTime: kCMTimeZero];
pVideoCompositionInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
pVideoCompositionInstruction.backgroundColor = [[UIColor blackColor] CGColor];
pVideoCompositionInstruction.timeRange = keyTimeRange;
pVideoCompositionInstruction.layerInstructions = @[ pVideoCompositionLayerInstruction ];
pVideoComposition = [AVMutableVideoComposition videoComposition];
pVideoComposition.renderSize = [[pKeyAsset tracksWithMediaType: AVMediaTypeVideo].firstObject naturalSize];
pVideoComposition.frameDuration = [[pKeyAsset tracksWithMediaType: AVMediaTypeVideo].firstObject minFrameDuration];
pVideoComposition.instructions = @[ pVideoCompositionInstruction ];
pPlayerItem.videoComposition = pVideoComposition;
However, when I come to play the video sequence, I get no output returned. AVPlayerItemVideoOutput's hasNewPixelBufferForItemTime: always returns NO. If I comment out the last line in the code above (i.e. setting the videoComposition), then everything works as before (with some videos in the wrong orientation). Does anybody know what I'm doing wrong? Any thoughts much appreciated!
The issue here is that keyTimeRange may not start at time zero if your Logo video has nonzero duration. pVideoCompositionInstruction will start at keyTimeRange.start, rather than kCMTimeZero (where the AVMutableComposition starts), which violates the rules for AVVideoCompositionInstructions:
"For the first instruction in the array, timeRange.start must be less than or equal to the earliest time for which playback or other processing will be attempted (typically kCMTimeZero)", according to the docs.
To solve this, set pVideoComposition.instructions to an array containing three AVMutableVideoCompositionInstruction objects, each with their own AVMutableVideoCompositionLayerInstruction according to each AVAsset's transform. The time range for each of the three instructions should be the times at which these assets appear in the composition track. Make sure they line up exactly.
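A sketch along those lines, reusing pAssets and pCompositionVideoTrack from the question (and assuming the assets sit back to back in the composition starting at zero):
NSMutableArray* pInstructions = [NSMutableArray array];
CMTime cursor = kCMTimeZero;
for( AVAsset* pAsset in pAssets )
{
    CMTimeRange range = CMTimeRangeMake( cursor, pAsset.duration );
    AVMutableVideoCompositionLayerInstruction* pLayer = [AVMutableVideoCompositionLayerInstruction videoCompositionLayerInstructionWithAssetTrack: pCompositionVideoTrack];
    [pLayer setTransform: [[pAsset tracksWithMediaType: AVMediaTypeVideo].firstObject preferredTransform] atTime: range.start];
    AVMutableVideoCompositionInstruction* pInstruction = [AVMutableVideoCompositionInstruction videoCompositionInstruction];
    pInstruction.timeRange = range; // the three ranges must be contiguous and cover the whole composition
    pInstruction.layerInstructions = @[ pLayer ];
    [pInstructions addObject: pInstruction];
    cursor = CMTimeAdd( cursor, pAsset.duration );
}
pVideoComposition.instructions = pInstructions;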
I'm developing an iOS audio app in Xcode, and I'm trying to take 2 audio files I have recorded, which play at the same time, and export them as one audio file.
All I have managed to do is merge the 2 audio files into one, but the 2 audios play one after another rather than in sync at the same time.
Does anyone have a clue how I can sort it out?
Thanks
You should take a look at this for AAC conversion (http://atastypixel.com/blog/easy-aac-compressed-audio-conversion-on-ios/). It's super useful.
Another thing you might want to consider... combining two audio signals is as easy as adding the samples together. So what you could do is:
Open both recordings and get an array for each of the recordings that holds the audio samples.
Make a for() loop that adds each sample and puts it in an output array
for(int i = 0; i<numberOfSamples; i++) {
exportBuffer[i] = firstTrack[i] + secondTrack[i];
}
and then write the exportBuffer to an m4a file.
This code will only work if the two files are exactly the same length, so adjust it to your needs: add a conditional that fires once you've reached the end of one of the arrays, and from that point just add 0's.
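If the lengths differ, a slightly fuller sketch (assuming 16-bit PCM samples; firstCount and secondCount are the sample counts of each array) pads with silence and clamps the sum so it can't wrap around:
NSUInteger n = MAX(firstCount, secondCount);
SInt16 *exportBuffer = malloc(n * sizeof(SInt16));
for (NSUInteger i = 0; i < n; i++) {
    SInt32 a = (i < firstCount) ? firstTrack[i] : 0;   // pad the shorter track with 0's
    SInt32 b = (i < secondCount) ? secondTrack[i] : 0;
    SInt32 sum = a + b;
    if (sum > INT16_MAX) sum = INT16_MAX; // clamp instead of overflowing
    if (sum < INT16_MIN) sum = INT16_MIN;
    exportBuffer[i] = (SInt16)sum;
}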
Try Apple's MixerHost sample app.
/* Implement this method if you have already saved your recorded audio file */
-(void)mixAudio{
AVMutableComposition *composition = [[AVMutableComposition alloc] init];
AVMutableCompositionTrack *compositionAudioTrack = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack setPreferredVolume:0.8];
NSString *soundOne = [[NSBundle mainBundle] pathForResource:@"RecordAudio1" ofType:@"wav"];
NSURL *url = [NSURL fileURLWithPath:soundOne];
AVAsset *avAsset = [AVURLAsset URLAssetWithURL:url options:nil];
NSArray *tracks = [avAsset tracksWithMediaType:AVMediaTypeAudio];
AVAssetTrack *clipAudioTrack = [tracks objectAtIndex:0];
[compositionAudioTrack insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset.duration) ofTrack:clipAudioTrack atTime:kCMTimeZero error:nil];
AVMutableCompositionTrack *compositionAudioTrack1 = [composition addMutableTrackWithMediaType:AVMediaTypeAudio preferredTrackID:kCMPersistentTrackID_Invalid];
[compositionAudioTrack1 setPreferredVolume:0.8]; // the second track, not the first
NSString *soundOne1 = [[NSBundle mainBundle] pathForResource:@"RecordAudio2" ofType:@"wav"];
NSURL *url1 = [NSURL fileURLWithPath:soundOne1];
AVAsset *avAsset1 = [AVURLAsset URLAssetWithURL:url1 options:nil];
NSArray *tracks1 = [avAsset1 tracksWithMediaType:AVMediaTypeAudio];
AVAssetTrack *clipAudioTrack1 = [tracks1 objectAtIndex:0];
[compositionAudioTrack1 insertTimeRange:CMTimeRangeMake(kCMTimeZero, avAsset1.duration) ofTrack:clipAudioTrack1 atTime: kCMTimeZero error:nil];
AVAssetExportSession *exportSession = [AVAssetExportSession
exportSessionWithAsset:composition
presetName:AVAssetExportPresetAppleM4A];
if (nil == exportSession) return; // this method returns void, so no value here
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0]; // documentsDirectory was not defined in this method
NSString *soundOneNew = [documentsDirectory stringByAppendingPathComponent:@"combined10.m4a"];
//NSLog(#"Output file path - %#",soundOneNew);
// configure export session output with all our parameters
exportSession.outputURL = [NSURL fileURLWithPath:soundOneNew]; // output path
exportSession.outputFileType = AVFileTypeAppleM4A; // output file type
// perform the export
[exportSession exportAsynchronouslyWithCompletionHandler:^{
if (AVAssetExportSessionStatusCompleted == exportSession.status) {
NSLog(#"AVAssetExportSessionStatusCompleted");
} else if (AVAssetExportSessionStatusFailed == exportSession.status) {
// a failure may happen because of an event out of your control
// for example, an interruption like a phone call coming in
// make sure and handle this case appropriately
NSLog(#"AVAssetExportSessionStatusFailed");
} else {
NSLog(#"Export Session Status: %d", exportSession.status);
}
}];
}