Merging two or more videos in iOS

How to combine video clips with different orientation using AVFoundation
I went with the answer above and it works well, but I'm facing a problem: the audio of the videos is being removed. All of my videos have sound, yet after merging the exported video is mute. Can anyone help? Thanks in advance.

I was facing the same problem, and I found a solution. Swift 4.2 version:
// Merge all videos.
func mergeAllVideos(completionHandler: @escaping (Bool) -> Void) {
    mixComposition = AVMutableComposition()
    // Track that will hold the video.
    let compositionVideoTrack = mixComposition?.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
    // Track that will hold the audio.
    let compositionAudioTrack = mixComposition?.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
    var nextClipStartTime: CMTime = CMTime.zero
    // Iterate over the video array.
    for fileURL in Constants.videoFileNameArr {
        let videoAsset = AVURLAsset(url: fileURL)
        let timeRangeInAsset = CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration)
        do {
            // Merge video.
            try compositionVideoTrack?.insertTimeRange(CMTimeRange(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .video)[0], at: nextClipStartTime)
            // Merge audio.
            try compositionAudioTrack?.insertTimeRange(CMTimeRange(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .audio)[0], at: nextClipStartTime)
        } catch {
            print(error)
        }
        // Advance the time at which the next clip is inserted.
        nextClipStartTime = CMTimeAdd(nextClipStartTime, timeRangeInAsset.duration)
    }
    // Rotate to portrait.
    let rotationTransform = CGAffineTransform(rotationAngle: CGFloat(Double.pi / 2))
    compositionVideoTrack!.preferredTransform = rotationTransform
    // Save the final file.
    self.saveFinalFile(mixComposition!) { isDone in
        completionHandler(true)
    }
}
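The saveFinalFile(_:) helper isn't shown in the answer. A minimal sketch of what it might look like, assuming it simply exports the composition with AVAssetExportSession; the temporary-directory output path here is illustrative, not from the original:
func saveFinalFile(_ composition: AVMutableComposition, completionHandler: @escaping (Bool) -> Void) {
    // Illustrative output location; substitute your own path handling.
    let outputURL = URL(fileURLWithPath: NSTemporaryDirectory()).appendingPathComponent("merged.mp4")
    try? FileManager.default.removeItem(at: outputURL)
    guard let exporter = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality) else {
        completionHandler(false)
        return
    }
    exporter.outputURL = outputURL
    exporter.outputFileType = .mp4
    exporter.shouldOptimizeForNetworkUse = true
    exporter.exportAsynchronously {
        DispatchQueue.main.async {
            completionHandler(exporter.status == .completed)
        }
    }
}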

Related

Merge two videos with audio and video together in iOS

I am trying to merge two videos together in AVFoundation.
I am using AVMutableComposition and add both tracks to the composition, which results in a final video with the first clip and its audio, followed by the second clip's audio but no video.
How can I get the audio and video of both tracks?
Thank you
let composition = AVMutableComposition()
let audioTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
let videoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!
let audioTrack2: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
let videoTrack2: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!
var outputURL = documentDirectory.appendingPathComponent("output-temp")
do {
    try audioTrack.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: endTime), of: asset.tracks(withMediaType: AVMediaType.audio)[0], at: CMTime.zero)
    try videoTrack.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: endTime), of: asset.tracks(withMediaType: AVMediaType.video)[0], at: CMTime.zero)
    try audioTrack2.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: asset2.duration), of: asset2.tracks(withMediaType: AVMediaType.audio)[0], at: CMTime.invalid)
    try videoTrack2.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: asset2.duration), of: asset2.tracks(withMediaType: AVMediaType.video)[0], at: CMTime.invalid)
    try manager.createDirectory(at: outputURL, withIntermediateDirectories: true, attributes: nil)
    let id = "id-\(Int.random(in: 0...199))"
    let mediaType = "mp4"
    outputURL = outputURL.appendingPathComponent("preVideo-\(id).\(mediaType)")
} catch let error {
    print(error)
}
The problem is that you are adding a second video track to the composition. You need to insert both videos into the same video track. Just delete your let videoTrack2 and go from there.
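A minimal sketch of that fix, reusing the asset and asset2 names from the question (and, for brevity, ignoring the startTime/endTime trimming): both clips go back to back into the same pair of tracks.
do {
    let composition = AVMutableComposition()
    let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!
    let audioTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
    var insertTime = CMTime.zero
    // Insert both clips, one after the other, into the SAME video and audio tracks.
    for sourceAsset in [asset, asset2] {
        let range = CMTimeRange(start: .zero, duration: sourceAsset.duration)
        try videoTrack.insertTimeRange(range, of: sourceAsset.tracks(withMediaType: .video)[0], at: insertTime)
        try audioTrack.insertTimeRange(range, of: sourceAsset.tracks(withMediaType: .audio)[0], at: insertTime)
        insertTime = CMTimeAdd(insertTime, sourceAsset.duration)
    }
} catch {
    print(error)
}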

How to merge video clips using AVFoundation?

I have successfully merged the video clips into a single video, but the final merged video shows a white frame after the end of every video clip. I have tried a lot to remove this, without success. Please review my code below.
func merge(arrayVideos: [AVAsset], completion: @escaping (_ exporter: AVAssetExportSession) -> ()) -> Void {
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
    compositionVideoTrack?.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2)
    let soundtrackTrack = mainComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
    var time: Double = 0.0
    for (index, videoAsset) in arrayVideos.enumerated() {
        let atTime = CMTime(seconds: time, preferredTimescale: 1)
        try! compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .video)[0], at: atTime)
        try! soundtrackTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .audio)[0], at: atTime)
        time += videoAsset.duration.seconds
    }
    let outputFileURL = URL(fileURLWithPath: NSTemporaryDirectory() + "merge.mp4")
    print("final URL: \(outputFileURL)")
    let fileManager = FileManager()
    do {
        try fileManager.removeItem(at: outputFileURL)
    } catch let error as NSError {
        print("Error: \(error.domain)")
    }
    let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter?.outputURL = outputFileURL
    exporter?.outputFileType = AVFileType.mp4
    exporter?.shouldOptimizeForNetworkUse = true
    exporter?.exportAsynchronously {
        DispatchQueue.main.async {
            completion(exporter!)
        }
    }
}
Don't use a Double to track the insertion time; this can result in gaps due to rounding errors. And don't use a preferredTimescale of 1 when converting seconds, as this effectively rounds everything to whole seconds (1000 would be a more common timescale for this).
Instead, track the insertion time with a CMTime initialized to kCMTimeZero, and use CMTimeAdd to advance it.
And one more thing: video and audio tracks can have different durations, particularly when recorded. So to keep things in sync, you may want to use CMTimeRangeGetIntersection to get the common time range of audio and video in the asset, and then use the result for the insertion into the composition.
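A sketch of the loop with those changes applied, assuming the same compositionVideoTrack and soundtrackTrack as in the question; intersection(_:) is the Swift spelling of CMTimeRangeGetIntersection.
do {
    var insertTime = CMTime.zero
    for videoAsset in arrayVideos {
        guard let videoTrack = videoAsset.tracks(withMediaType: .video).first,
              let audioTrack = videoAsset.tracks(withMediaType: .audio).first else { continue }
        // Insert only the range where audio and video overlap, to keep them in sync.
        let commonRange = videoTrack.timeRange.intersection(audioTrack.timeRange)
        try compositionVideoTrack?.insertTimeRange(commonRange, of: videoTrack, at: insertTime)
        try soundtrackTrack?.insertTimeRange(commonRange, of: audioTrack, at: insertTime)
        // Advance the insertion point with CMTimeAdd instead of Double arithmetic.
        insertTime = CMTimeAdd(insertTime, commonRange.duration)
    }
} catch {
    print(error)
}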

Swift AVFoundation stitching multiple videos together and keep preferred transform

I'm trying to stitch multiple video clips together. Stitching each AVAsset into one AVMutableCompositionTrack works, but appending a clip recorded with the front-facing camera's mirroring mode enabled loses the transformation of the first asset. Can I somehow use multiple video AVMutableCompositionTracks in one AVMutableComposition?
// create mix composition
let mixComposition = AVMutableComposition()
// insert video track
let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
// keep track of total duration
var totalDuration = kCMTimeZero
// for each video clip, add it to the mutable composition and transform each video layer
for (index, videoClip) in videoClips.enumerated() {
    if let videoAsset = videoClip.asset, let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video).first {
        // insert the current video track into the composition
        try videoCompositionTrack!.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), of: videoAssetTrack, at: totalDuration)
        videoCompositionTrack?.preferredTransform = videoAssetTrack.preferredTransform
        // shift duration to the next clip
        totalDuration = CMTimeAdd(totalDuration, videoAsset.duration)
    }
}
// use AVAssetExportSession to export the video
let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPreset1920x1080)
assetExport?.outputFileType = AVFileType.mp4
// get the recommended save URL
let movieDestinationUrl = self.getRecommendedSaveUrl()
// setting up the asset export session
assetExport?.outputURL = movieDestinationUrl
assetExport?.shouldOptimizeForNetworkUse = true
// export the video to the file system asynchronously
assetExport?.exportAsynchronously(completionHandler: {
    assetExport?.cancelExport()
    switch assetExport!.status {
    case AVAssetExportSessionStatus.failed:
        break
    case AVAssetExportSessionStatus.cancelled:
        break
    default:
        DispatchQueue.main.async {
            completion?(movieDestinationUrl, nil)
        }
    }
    if assetExport?.error != nil {
        AppDelegate.logger.error("Could not create user video: \((assetExport?.error)!)")
        DispatchQueue.main.async {
            completion?(nil, assetExport?.error)
        }
    }
})
I'm trying to use something like the following, with multiple AVMutableCompositionTracks and different CGAffineTransform objects.
// create mix composition
let mixComposition = AVMutableComposition()
// keep track of total duration
var totalDuration = kCMTimeZero
// for each video clip, add it to the mutable composition and transform each video layer
for (index, videoClip) in videoClips.enumerated() {
    if let videoAsset = videoClip.asset, let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video).first {
        // insert a separate video track per clip
        let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(index))
        // insert the current video track into the composition
        try videoCompositionTrack!.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), of: videoAssetTrack, at: totalDuration)
        videoCompositionTrack?.preferredTransform = videoAssetTrack.preferredTransform
        // shift duration to the next clip
        totalDuration = CMTimeAdd(totalDuration, videoAsset.duration)
    }
}
// use AVAssetExportSession to export the video
let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPreset1920x1080)
assetExport?.outputFileType = AVFileType.mp4
// get the recommended save URL
let movieDestinationUrl = self.getRecommendedSaveUrl()
// setting up the asset export session
assetExport?.outputURL = movieDestinationUrl
assetExport?.shouldOptimizeForNetworkUse = true
// export the video to the file system asynchronously
assetExport?.exportAsynchronously(completionHandler: {
    assetExport?.cancelExport()
    switch assetExport!.status {
    case AVAssetExportSessionStatus.failed:
        break
    case AVAssetExportSessionStatus.cancelled:
        break
    default:
        DispatchQueue.main.async {
            completion?(movieDestinationUrl, nil)
        }
    }
    if assetExport?.error != nil {
        AppDelegate.logger.error("Could not create user video: \((assetExport?.error)!)")
        DispatchQueue.main.async {
            completion?(nil, assetExport?.error)
        }
    }
})
In the case above I'm not able to get any usable video; it is much shorter than it should be. I'm trying to avoid AVMutableVideoCompositionInstruction because it takes too long to process, but it would still be an option if it worked for any resolution, and especially with mirroring support.
// create mix composition
let mixComposition = AVMutableComposition()
// keep track of total duration
var totalDuration = kCMTimeZero
// keeps all layer transformations for each video asset
var videoCompositionLayerInstructions = [AVMutableVideoCompositionLayerInstruction]()
// for each video clip, add it to the mutable composition and transform each video layer
for (index, videoClip) in videoClips.enumerated() {
    if let videoAsset = videoClip.asset {
        // use the first video asset track for settings like height and width
        let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video).first!
        // insert video track
        let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(index))
        // insert the current video track into the composition
        try videoCompositionTrack!.insertTimeRange(CMTimeRangeMake(totalDuration, videoAssetTrack.timeRange.duration), of: videoAssetTrack, at: totalDuration)
        videoCompositionTrack?.preferredTransform = videoAssetTrack.preferredTransform
        let videoCompositionLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoAssetTrack)
        videoCompositionLayerInstruction.setTransform((videoCompositionTrack?.preferredTransform)!, at: totalDuration)
        videoCompositionLayerInstruction.setOpacity(0.0, at: videoAsset.duration)
        // apply instruction
        videoCompositionLayerInstructions.append(videoCompositionLayerInstruction)
        // shift duration to the next clip
        totalDuration = CMTimeAdd(totalDuration, videoAsset.duration)
    }
}
let videoCompositionInstruction = AVMutableVideoCompositionInstruction()
videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, totalDuration)
videoCompositionInstruction.layerInstructions = videoCompositionLayerInstructions
let mainComposition = AVMutableVideoComposition()
mainComposition.renderSize = CGSize(width: 1080, height: 1920)
mainComposition.frameDuration = CMTimeMake(1, 30)
mainComposition.instructions = [videoCompositionInstruction]
// use AVAssetExportSession to export the video
let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPreset1920x1080)
assetExport?.outputFileType = AVFileType.mp4
// get the recommended save URL
let movieDestinationUrl = self.getRecommendedSaveUrl()
// setting up the asset export session
assetExport?.outputURL = movieDestinationUrl
assetExport?.shouldOptimizeForNetworkUse = true
assetExport?.videoComposition = mainComposition
Does anybody have an idea how to implement this functionality?
Note: I don't need to care about audio at all.

AVMutableAudioMix multiple volume changes to single track

I'm working on an app that merges multiple video clips into one final video. I would like to give users the ability to mute individual clips if desired (so, only parts of the final merged video would be muted). I have wrapped the AVAssets in a class called "Video" that has a "shouldMute" property.
My problem is, when I set the volume of one of the AVAssetTracks to zero, it stays muted for the remainder of the final video. Here is my code:
var completeDuration: CMTime = CMTimeMake(0, 1)
var insertTime = kCMTimeZero
var layerInstructions = [AVVideoCompositionLayerInstruction]()
let mixComposition = AVMutableComposition()
let audioMix = AVMutableAudioMix()
let videoTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
let audioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
// iterate through video assets and merge them together
for (i, video) in clips.enumerated() {
    let videoAsset = video.asset
    var clipDuration = videoAsset.duration
    do {
        if video == clips.first {
            insertTime = kCMTimeZero
        } else {
            insertTime = completeDuration
        }
        if let videoAssetTrack = videoAsset.tracks(withMediaType: .video).first {
            try videoTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: videoAssetTrack, at: insertTime)
            completeDuration = CMTimeAdd(completeDuration, clipDuration)
        }
        if let audioAssetTrack = videoAsset.tracks(withMediaType: .audio).first {
            try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: audioAssetTrack, at: insertTime)
            if video.shouldMute {
                let audioMixInputParams = AVMutableAudioMixInputParameters()
                audioMixInputParams.trackID = audioTrack!.trackID
                audioMixInputParams.setVolume(0.0, at: insertTime)
                audioMix.inputParameters.append(audioMixInputParams)
            }
        }
    } catch let error as NSError {
        print("error: \(error)")
    }
    let videoInstruction = videoCompositionInstructionForTrack(track: videoTrack!, video: video)
    if video != clips.last {
        videoInstruction.setOpacity(0.0, at: completeDuration)
    }
    layerInstructions.append(videoInstruction)
} // end of video asset iteration
If I add another setVolume:atTime instruction to increase the volume back to 1.0 at the end of the clip, then the first volume instruction is completely ignored and the whole video plays at full volume.
In other words, this isn't working:
if video.shouldMute {
    let audioMixInputParams = AVMutableAudioMixInputParameters()
    audioMixInputParams.trackID = audioTrack!.trackID
    audioMixInputParams.setVolume(0.0, at: insertTime)
    audioMixInputParams.setVolume(1.0, at: completeDuration)
    audioMix.inputParameters.append(audioMixInputParams)
}
I have set the audioMix on both my AVPlayerItem and AVAssetExportSession. What am I doing wrong? What can I do to allow users to mute the time ranges of individual clips before merging into the final video?
Apparently I was going about this wrong. As you can see above, my composition has two AVMutableCompositionTracks: a video track, and an audio track. Even though I inserted the time ranges of a series of other tracks into those two tracks, there's still ultimately only two tracks. So, I only needed one AVMutableAudioMixInputParameters object to associate with my one audio track.
I initialized a single AVMutableAudioMixInputParameters object and then, after I inserted the time range of each clip, I'd check to see whether it should be muted and set a volume ramp for the clip's time range (the time range in relation to the entire audio track). Here's what that looks like, inside my clip iteration:
if let audioAssetTrack = videoAsset.tracks(withMediaType: .audio).first {
    try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: audioAssetTrack, at: insertTime)
    if video.shouldMute {
        audioMixInputParams.setVolumeRamp(fromStartVolume: 0.0, toEndVolume: 0.0, timeRange: CMTimeRangeMake(insertTime, clipDuration))
    } else {
        audioMixInputParams.setVolumeRamp(fromStartVolume: 1.0, toEndVolume: 1.0, timeRange: CMTimeRangeMake(insertTime, clipDuration))
    }
}
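For completeness, a sketch of the pieces around that loop, assuming the mixComposition, audioTrack, and audioMix from the question: the single parameters object is created once for the composition's audio track, and the finished mix has to be attached to whatever plays or exports the composition.
// Before the clip iteration: one params object for the single composition audio track.
let audioMixInputParams = AVMutableAudioMixInputParameters(track: audioTrack)

// After the iteration: hand the parameters to the mix, then attach the mix.
audioMix.inputParameters = [audioMixInputParams]

// For preview playback:
let playerItem = AVPlayerItem(asset: mixComposition)
playerItem.audioMix = audioMix

// For export:
let exporter = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
exporter?.audioMix = audioMix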

Merge videos & images in AVMutableComposition using AVMutableCompositionTrack, not AVVideoCompositionCoreAnimationTool?

The code below exports a video using AVMutableComposition. But in the exported video, if you want an image to display for 3 seconds after the source video finishes, is there a way to do that with AVMutableCompositionTrack or do you need to add an image layer and animate its appearance after the video ends?
Eventually, the goal is to merge an arbitrary number of images and videos into one master video.
Unfortunately, during testing it seems like AVVideoCompositionCoreAnimationTool severely slows down the export process (from < 1 second to 10-20 seconds), so the goal is to avoid AVVideoCompositionCoreAnimationTool if possible.
// Create composition object
let composition = AVMutableComposition()
let compositionVideoTrack = composition.addMutableTrackWithMediaType(AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
let compositionAudioTrack = composition.addMutableTrackWithMediaType(AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID(kCMPersistentTrackID_Invalid))
var insertTime = kCMTimeZero
// Extract tracks from slice video
let videoURL = NSURL(fileURLWithPath: videoPath)
let videoAsset = AVURLAsset(URL: videoURL, options: nil)
let sourceVideoTrack = videoAsset.tracksWithMediaType(AVMediaTypeVideo)[0]
let sourceAudioTrack = videoAsset.tracksWithMediaType(AVMediaTypeAudio)[0]
do {
    try compositionVideoTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceVideoTrack, atTime: kCMTimeZero)
    try compositionAudioTrack.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), ofTrack: sourceAudioTrack, atTime: kCMTimeZero)
} catch {
    print("Error with insertTimeRange while exporting video: \(error)")
}
// Export composition to video
let outputURL = getFilePath(getUniqueFilename(gMP4File))
let exporter = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
exporter!.outputURL = NSURL(fileURLWithPath: outputURL)
exporter!.outputFileType = AVFileTypeMPEG4
exporter!.exportAsynchronouslyWithCompletionHandler({
    self.exportDidFinish(exporter!)
})
After consulting others on SO and doing more research, it seems this is not possible. Merging an image with a video into a master video that is playable outside the app seems to require AVVideoCompositionCoreAnimationTool.
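For reference, a hedged sketch of the usual shape of the AVVideoCompositionCoreAnimationTool approach that answer points to, in current Swift syntax. The image variable is a placeholder UIImage, composition/exporter/videoAsset are from the question, and the 3-second hold assumes the composition has been extended past the video (for example with insertEmptyTimeRange on the video track).
let videoComposition = AVMutableVideoComposition(propertiesOf: composition)

// Layer hierarchy: the video is rendered into videoLayer; the image sits above it.
let parentLayer = CALayer()
let videoLayer = CALayer()
parentLayer.frame = CGRect(origin: .zero, size: videoComposition.renderSize)
videoLayer.frame = parentLayer.frame
parentLayer.addSublayer(videoLayer)

let imageLayer = CALayer()
imageLayer.contents = image.cgImage // placeholder UIImage
imageLayer.frame = parentLayer.frame
imageLayer.opacity = 0.0
parentLayer.addSublayer(imageLayer)

// Show the image once the source video ends. Note: a beginTime of exactly 0 is
// interpreted as "now"; use AVCoreAnimationBeginTimeAtZero in that case.
let showImage = CABasicAnimation(keyPath: "opacity")
showImage.fromValue = 1.0
showImage.toValue = 1.0
showImage.beginTime = videoAsset.duration.seconds
showImage.duration = 3.0
showImage.isRemovedOnCompletion = false
imageLayer.add(showImage, forKey: "showImage")

videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: parentLayer)
exporter!.videoComposition = videoComposition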
