I am trying to merge two videos together in AVFoundation.
I am using AVMutableComposition and adding both assets' tracks to the composition, but the resulting video contains the first video with its audio, followed by the second video's audio with no picture.
How can I get the audio and video of both tracks?
Thank you
let composition = AVMutableComposition()
let audioTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
let videoTrack: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!
let audioTrack2: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
let videoTrack2: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!
var outputURL = documentDirectory.appendingPathComponent("output-temp")
do {
    // First asset: insert at the start of the composition.
    try audioTrack.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: endTime), of: asset.tracks(withMediaType: AVMediaType.audio)[0], at: CMTime.zero)
    try videoTrack.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: endTime), of: asset.tracks(withMediaType: AVMediaType.video)[0], at: CMTime.zero)
    // Second asset: CMTime.invalid appends after the last time range already in the track.
    try audioTrack2.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: asset2.duration), of: asset2.tracks(withMediaType: AVMediaType.audio)[0], at: CMTime.invalid)
    try videoTrack2.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: asset2.duration), of: asset2.tracks(withMediaType: AVMediaType.video)[0], at: CMTime.invalid)
    try manager.createDirectory(at: outputURL, withIntermediateDirectories: true, attributes: nil)
    let id = "id-\(Int.random(in: 0...199))"
    let mediaType = "mp4"
    outputURL = outputURL.appendingPathComponent("preVideo-\(id).\(mediaType)")
} catch let error {
    print(error)
}
The problem is that you are adding a second video track to the composition. You need to insert both videos into the same video track. Just delete your let videoTrack2 and go from there.
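For illustration, a minimal sketch of that fix, reusing asset, asset2, startTime, and endTime from the question: both clips go into a single video track and a single audio track, and passing CMTime.invalid as the insertion time appends the second clip right after the first.

let composition = AVMutableComposition()
let videoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!
let audioTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)!

do {
    // First clip at the start of the composition.
    try videoTrack.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: endTime), of: asset.tracks(withMediaType: .video)[0], at: .zero)
    try audioTrack.insertTimeRange(CMTimeRangeFromTimeToTime(start: startTime, end: endTime), of: asset.tracks(withMediaType: .audio)[0], at: .zero)
    // Second clip appended to the same tracks; .invalid means "after the last inserted range".
    try videoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: asset2.duration), of: asset2.tracks(withMediaType: .video)[0], at: .invalid)
    try audioTrack.insertTimeRange(CMTimeRange(start: .zero, duration: asset2.duration), of: asset2.tracks(withMediaType: .audio)[0], at: .invalid)
} catch {
    print(error)
}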
Related
I have successfully merged the video clips into a single video, but the final merged video shows a white frame after the end of every clip. I have tried a lot to remove this, without success. Please review my code below.
func merge(arrayVideos: [AVAsset], completion: @escaping (_ exporter: AVAssetExportSession) -> ()) -> Void {
    let mainComposition = AVMutableComposition()
    let compositionVideoTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
    compositionVideoTrack?.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2)
    let soundtrackTrack = mainComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
    var time: Double = 0.0
    for videoAsset in arrayVideos {
        let atTime = CMTime(seconds: time, preferredTimescale: 1)
        try! compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .video)[0], at: atTime)
        try! soundtrackTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .audio)[0], at: atTime)
        time += videoAsset.duration.seconds
    }
    let outputFileURL = URL(fileURLWithPath: NSTemporaryDirectory() + "merge.mp4")
    print("final URL: \(outputFileURL)")
    let fileManager = FileManager()
    do {
        try fileManager.removeItem(at: outputFileURL)
    } catch let error as NSError {
        print("Error: \(error.domain)")
    }
    let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)
    exporter?.outputURL = outputFileURL
    exporter?.outputFileType = AVFileType.mp4
    exporter?.shouldOptimizeForNetworkUse = true
    exporter?.exportAsynchronously {
        DispatchQueue.main.async {
            completion(exporter!)
        }
    }
}
Don't use a Double to track the insertion time; this can result in gaps due to rounding errors. And don't use a preferredTimescale of 1 when converting seconds; that effectively rounds everything to whole seconds (1000 would be a more common timescale for this).
Instead, track the insertion time with a CMTime initialized to kCMTimeZero, and advance it with CMTimeAdd.
And one more thing: video and audio tracks can have different durations, particularly when recorded. To keep things in sync, you may want to use CMTimeRangeGetIntersection to get the common time range of audio and video in the asset, and then use the result for insertion into the composition.
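Putting those three points together, a minimal sketch of the corrected loop (reusing compositionVideoTrack and soundtrackTrack from the question's merge function, and assuming each asset has exactly one video and one audio track):

var insertTime = CMTime.zero
do {
    for videoAsset in arrayVideos {
        let videoTrack = videoAsset.tracks(withMediaType: .video)[0]
        let audioTrack = videoAsset.tracks(withMediaType: .audio)[0]
        // Insert only the time range that video and audio have in common.
        let commonRange = CMTimeRangeGetIntersection(videoTrack.timeRange, otherRange: audioTrack.timeRange)
        try compositionVideoTrack?.insertTimeRange(commonRange, of: videoTrack, at: insertTime)
        try soundtrackTrack?.insertTimeRange(commonRange, of: audioTrack, at: insertTime)
        // Advance with CMTime arithmetic instead of Double seconds.
        insertTime = CMTimeAdd(insertTime, commonRange.duration)
    }
} catch {
    print("insertTimeRange failed: \(error)")
}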
I am working on an Apple TV application. I have one URL for a video and one URL for its subtitles.
I want to play the video with the subtitles. Does anybody have a solution for this problem?
If you do, please let me know.
There is a Video URL: "https://sample/hls_playlist.m3u8"
There is a Subtitle URL: "https://samplesubtitle=AKIAJS5BLYTDP3J5UOFQ&Expires=1507918258&Signature=r%2FcV7UVSejOBYDKFHwMYtrvXUmM%3D"
This is my code:
func playVideo(_ videoURL: String) {
    // 1 - Load video asset
    let videoAsset = AVURLAsset(url: URL(string: videoURL)!)
    // 2 - Create AVMutableComposition object. This object will hold your AVMutableCompositionTrack instances.
    let mixComposition = AVMutableComposition()
    // 3 - Video track
    let videoTrack: AVMutableCompositionTrack? = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
    try? videoTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .video)[0], at: .zero)
    // 4 - Subtitle track
    let subtitleAsset = AVURLAsset(url: URL(string: self.subtitle!)!)
    let subtitleTrack: AVMutableCompositionTrack? = mixComposition.addMutableTrack(withMediaType: .text, preferredTrackID: kCMPersistentTrackID_Invalid)
    try? subtitleTrack?.insertTimeRange(CMTimeRange(start: .zero, duration: videoAsset.duration), of: subtitleAsset.tracks(withMediaType: .text)[0], at: .zero)
    self.playerItem = AVPlayerItem(asset: mixComposition)
    self.playerObj = AVPlayer(playerItem: self.playerItem)
    self.playerController?.player = self.playerObj
    self.playerItem.addObserver(self, forKeyPath: #keyPath(AVPlayerItem.status), options: .new, context: nil)
    self.playerController?.player!.play()
    NotificationCenter.default.addObserver(self, selector: #selector(PlayerViewController.playerItemDidReachEnd(_:)), name: .AVPlayerItemDidPlayToEndTime, object: self.playerObj.currentItem)
}
I am following this code to merge audio and video files, and it works great. Suppose the audio is 5 seconds long and the video is 20 seconds long. When I merge these files, the exported video has no sound after 5 seconds. That is expected, since my audio is only 5 seconds long. But is it possible to loop the audio throughout the full duration of the video?
var finalVideoURL: URL!
var finalVideoName = String()
func mergeAudioAndVideoFiles(videoUrl: URL, audioUrl: URL) {
    let mixComposition = AVMutableComposition()
    var mutableCompositionVideoTrack: [AVMutableCompositionTrack] = []
    var mutableCompositionAudioTrack: [AVMutableCompositionTrack] = []
    let totalVideoCompositionInstruction = AVMutableVideoCompositionInstruction()
    // Start merge.
    let aVideoAsset = AVAsset(url: videoUrl)
    let aAudioAsset = AVAsset(url: audioUrl)
    mutableCompositionVideoTrack.append(mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!)
    mutableCompositionAudioTrack.append(mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)!)
    let aVideoAssetTrack: AVAssetTrack = aVideoAsset.tracks(withMediaType: .video)[0]
    let aAudioAssetTrack: AVAssetTrack = aAudioAsset.tracks(withMediaType: .audio)[0]
    do {
        try mutableCompositionVideoTrack[0].insertTimeRange(CMTimeRangeMake(start: .zero, duration: aVideoAssetTrack.timeRange.duration), of: aVideoAssetTrack, at: .zero)
        try mutableCompositionAudioTrack[0].insertTimeRange(CMTimeRangeMake(start: .zero, duration: aVideoAssetTrack.timeRange.duration), of: aAudioAssetTrack, at: .zero)
    } catch {
        print(error)
    }
    totalVideoCompositionInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: aVideoAssetTrack.timeRange.duration)
    let mutableVideoComposition = AVMutableVideoComposition()
    mutableVideoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
    mutableVideoComposition.renderSize = CGSize(width: 1280, height: 720)
    finalVideoURL = URL(fileURLWithPath: NSHomeDirectory() + "/Documents/FinalVideo.mp4")
    finalVideoName = "FinalVideo.mp4"
    let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)!
    assetExport.outputFileType = AVFileType.mp4
    assetExport.outputURL = finalVideoURL
    assetExport.shouldOptimizeForNetworkUse = true
    assetExport.exportAsynchronously { () -> Void in
        switch assetExport.status {
        case .completed:
            print("Export movie to document directory from trimmed audio and mutable video complete :)")
        case .failed:
            print("failed \(String(describing: assetExport.error))")
        case .cancelled:
            print("cancelled \(String(describing: assetExport.error))")
        default:
            print("complete")
        }
    }
}
This was asked a long time ago, but it may still help someone in the future:
you can do it by inserting the time range of the AVAssetTrack multiple times:
let vDuration = aVideoAsset.duration
let audioAssetTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
do {
    var currentDuration = CMTime.zero
    while currentDuration < vDuration {
        let restTime = CMTimeSubtract(vDuration, currentDuration)
        // Clamp the last pass so the audio never runs past the end of the video.
        let maxTime = CMTimeMinimum(aAudioAsset.duration, restTime)
        try audioAssetTrack.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: maxTime),
                                            of: aAudioAsset.tracks(withMediaType: .audio)[0],
                                            at: currentDuration)
        currentDuration = CMTimeAdd(currentDuration, aAudioAsset.duration)
    }
} catch {
    print("Failed to load audio track: \(error)")
}
The code below exports a video using AVMutableComposition. But if you want an image to display for 3 seconds after the source video finishes in the exported video, is there a way to do that with AVMutableCompositionTrack, or do you need to add an image layer and animate its appearance after the video ends?
Eventually, the goal is to merge an arbitrary number of images and videos into one master video.
Unfortunately, during testing it seems like AVVideoCompositionCoreAnimationTool severely slows down the export process (from < 1 second to 10-20 seconds), so the goal is to avoid AVVideoCompositionCoreAnimationTool if possible.
// Create composition object
let composition = AVMutableComposition()
let compositionVideoTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)!
let compositionAudioTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
var insertTime = CMTime.zero
// Extract tracks from slice video
let videoURL = URL(fileURLWithPath: videoPath)
let videoAsset = AVURLAsset(url: videoURL, options: nil)
let sourceVideoTrack = videoAsset.tracks(withMediaType: .video)[0]
let sourceAudioTrack = videoAsset.tracks(withMediaType: .audio)[0]
do {
    try compositionVideoTrack.insertTimeRange(CMTimeRange(start: .zero, duration: videoAsset.duration), of: sourceVideoTrack, at: .zero)
    try compositionAudioTrack.insertTimeRange(CMTimeRange(start: .zero, duration: videoAsset.duration), of: sourceAudioTrack, at: .zero)
} catch {
    print("Error with insertTimeRange while exporting video: \(error)")
}
// Export composition to video
let outputURL = getFilePath(getUniqueFilename(gMP4File))
let exporter = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)
exporter!.outputURL = URL(fileURLWithPath: outputURL)
exporter!.outputFileType = AVFileType.mp4
exporter!.exportAsynchronously {
    self.exportDidFinish(exporter!)
}
After consulting others on SO and performing more web research, it seems like this is not possible. Merging an image with a video into a master video that is playable outside an app seems to require AVVideoCompositionCoreAnimationTool.
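For anyone who accepts the slower export, here is a minimal sketch of that AVVideoCompositionCoreAnimationTool route. It reuses composition, videoAsset, and exporter from the question; the image name "endCard" and the fade-in trick are assumptions for illustration, and it assumes the composition has already been padded to cover the extra 3 seconds (for example with a freeze-frame at the end).

import UIKit
import AVFoundation

// Build a video composition whose instructions mirror the existing composition.
let videoComposition = AVMutableVideoComposition(propertiesOf: composition)

// Layer tree: the rendered video plus an image layer that appears at the end.
let videoSize = videoComposition.renderSize
let videoLayer = CALayer()
videoLayer.frame = CGRect(origin: .zero, size: videoSize)
let imageLayer = CALayer()
imageLayer.frame = CGRect(origin: .zero, size: videoSize)
imageLayer.contents = UIImage(named: "endCard")?.cgImage  // hypothetical asset
imageLayer.opacity = 0.0

// Make the image appear once the source video has finished.
let showImage = CABasicAnimation(keyPath: "opacity")
showImage.fromValue = 0.0
showImage.toValue = 1.0
showImage.duration = 0.01
showImage.beginTime = videoAsset.duration.seconds
showImage.isRemovedOnCompletion = false
showImage.fillMode = .forwards
imageLayer.add(showImage, forKey: "showEndCard")

let parentLayer = CALayer()
parentLayer.frame = CGRect(origin: .zero, size: videoSize)
parentLayer.addSublayer(videoLayer)
parentLayer.addSublayer(imageLayer)

videoComposition.animationTool = AVVideoCompositionCoreAnimationTool(
    postProcessingAsVideoLayer: videoLayer, in: parentLayer)
// Attach to the export session before exporting.
exporter!.videoComposition = videoComposition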
How to combine video clips with different orientation using AVFoundation
I have gone with the above answer and it works well. But I am facing a problem: the audio of the video is being removed. All of my videos have sound, yet after merging, the exported video is mute. Can anyone help? Thanks in advance.
I was also facing the same problem, but I found a solution.
Swift 4.2 version.
// Merge All videos.
func mergeAllVideos(completionHandler: @escaping (Bool) -> Void) {
    mixComposition = AVMutableComposition()
    // To hold video.
    let compositionVideoTrack = mixComposition?.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
    // To hold audio.
    let compositionAudioTrack = mixComposition?.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
    var nextClipStartTime: CMTime = CMTime.zero
    // Iterate over the video array.
    for fileURL in Constants.videoFileNameArr {
        // Do merging here.
        let videoAsset = AVURLAsset(url: fileURL)
        let timeRangeInAsset = CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration)
        do {
            // Merge video.
            try compositionVideoTrack?.insertTimeRange(CMTimeRange(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .video)[0], at: nextClipStartTime)
            // Merge audio.
            try compositionAudioTrack?.insertTimeRange(CMTimeRange(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .audio)[0], at: nextClipStartTime)
        } catch {
            print(error)
        }
        // Advance the time at which the next clip is inserted.
        nextClipStartTime = CMTimeAdd(nextClipStartTime, timeRangeInAsset.duration)
    }
    // Add rotation to make it portrait.
    let rotationTransform = CGAffineTransform(rotationAngle: CGFloat(Double.pi / 2))
    compositionVideoTrack!.preferredTransform = rotationTransform
    // Save the final file.
    self.saveFinalFile(mixComposition!) { isDone in
        completionHandler(true)
    }
}