compute times for cross-fade between videos - ios

I need to apply an opacity ramp to a video, starting one second before the end of the video. I am using "firstInstruction" to get the total duration of the video; however, when I call the "firstInstruction.setOpacityRamp" method I can't work out how to subtract that one second.
let mainInstruction = AVMutableVideoCompositionInstruction()
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, CMTimeAdd(firstAsset.duration, secondAsset.duration))
let firstInstruction = VideoHelper.videoCompositionInstruction(firstTrack, asset: firstAsset)
firstInstruction.setOpacityRamp(fromStartOpacity: 1, toEndOpacity: 0.1, timeRange: mainInstruction.timeRange)
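If the goal is just the subtraction, CMTimeSubtract handles it directly. A minimal sketch, reusing the names from the snippet above and assuming the ramp should cover the last second of the first asset:
let oneSecond = CMTimeMake(1, 1)
let rampStart = CMTimeSubtract(firstAsset.duration, oneSecond)
firstInstruction.setOpacityRamp(fromStartOpacity: 1, toEndOpacity: 0.1, timeRange: CMTimeRangeMake(rampStart, oneSecond))
The answer below builds a full cross-fade with separate instructions instead.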

I would use three instructions to apply the cross-fade:
A β€œpass-through” instruction that shows only the first video track, until one second before the end of the first asset.
A cross-fade instruction that simultaneously shows the last second of the first video track and the first second of the second video track, with opacity ramps.
A β€œpass-through” instruction that shows only the second video track, starting from one second into the second video track.
So, first, let's get the tracks:
import AVFoundation
import CoreVideo
func crossFade(asset0: AVAsset, asset1: AVAsset, crossFadeDuration: CMTime, to outputUrl: URL) throws {
guard
    let asset0Track = asset0.tracks(withMediaType: .video).first,
    let asset1Track = asset1.tracks(withMediaType: .video).first,
    case let composition = AVMutableComposition(),
    case let compositionTrack0Id = composition.unusedTrackID(),
    let compositionTrack0 = composition.addMutableTrack(
        withMediaType: .video, preferredTrackID: compositionTrack0Id),
    case let compositionTrack1Id = composition.unusedTrackID(),
    let compositionTrack1 = composition.addMutableTrack(
        withMediaType: .video, preferredTrackID: compositionTrack1Id)
else { return }
Now let's compute all of the times we need. First, the entire range of asset0Track in the composition, including both the pass-through and cross-fade periods:
// When does asset0Track start, in the composition?
let asset0TrackStartTime = CMTime.zero
// When does asset0Track end, in the composition?
let asset0TrackEndTime = asset0TrackStartTime + asset0Track.timeRange.duration
Next, the cross-fade's time range:
// When does the cross-fade end, in the composition?
// It should end exactly at the end of asset0's video track.
let crossFadeEndTime = asset0TrackEndTime
// When does the cross-fade start, in the composition?
let crossFadeStartTime = crossFadeEndTime - crossFadeDuration
// What is the entire time range of the cross-fade, in the composition?
let crossFadeTimeRange = CMTimeRangeMake(
start: crossFadeStartTime,
duration: crossFadeDuration)
Next, the entire range of asset1Track in the composition, including both the cross-fade and pass-through periods:
// When does asset1Track start, in the composition?
// It should start exactly at the start of the cross-fade.
let asset1TrackStartTime = crossFadeStartTime
// When does asset1Track end, in the composition?
let asset1TrackEndTime = asset1TrackStartTime + asset1Track.timeRange.duration
And finally, the two pass-through time ranges:
// What is the time range during which only asset0 is visible, in the composition?
let compositionTrack0PassThroughTimeRange = CMTimeRangeMake(
start: asset0TrackStartTime,
duration: crossFadeStartTime - asset0TrackStartTime)
// What is the time range during which only asset1 is visible, in the composition?
let compositionTrack1PassThroughTimeRange = CMTimeRangeMake(
start: crossFadeEndTime,
duration: asset1TrackEndTime - crossFadeEndTime)
Now we can insert the input tracks into the composition's tracks:
// Put asset0Track into compositionTrack0.
try compositionTrack0.insertTimeRange(
asset0Track.timeRange, of: asset0Track, at: asset0TrackStartTime)
// Put asset1Track into compositionTrack1.
try compositionTrack1.insertTimeRange(
asset1Track.timeRange, of: asset1Track, at: asset1TrackStartTime)
That is all we need to do for the AVMutableComposition. But we also need to make an AVMutableVideoComposition:
let videoComposition = AVMutableVideoComposition()
videoComposition.frameDuration =
min(asset0Track.minFrameDuration, asset1Track.minFrameDuration)
videoComposition.renderSize = CGSize(
width: max(asset0Track.naturalSize.width, asset1Track.naturalSize.width),
height: max(asset0Track.naturalSize.height, asset1Track.naturalSize.height))
We need to set the video composition's instructions. The first instruction is to pass through just compositionTrack0 for the appropriate time range:
// I'm using a helper function defined below.
let compositionTrack0PassThroughInstruction = AVMutableVideoCompositionInstruction.passThrough(
trackId: compositionTrack0Id, timeRange: compositionTrack0PassThroughTimeRange)
The second instruction is for the cross-fade, so it's more complicated. It needs two child instructions, one for each layer in the cross-fade. Each layer instruction, and the overall cross-fade instruction, use the same time range:
let crossFadeLayer0Instruction = AVMutableVideoCompositionLayerInstruction()
crossFadeLayer0Instruction.trackID = compositionTrack0Id
crossFadeLayer0Instruction.setOpacityRamp(fromStartOpacity: 1, toEndOpacity: 0, timeRange: crossFadeTimeRange)
let crossFadeLayer1Instruction = AVMutableVideoCompositionLayerInstruction()
crossFadeLayer1Instruction.trackID = compositionTrack1Id
crossFadeLayer1Instruction.setOpacityRamp(fromStartOpacity: 0, toEndOpacity: 1, timeRange: crossFadeTimeRange)
let crossFadeInstruction = AVMutableVideoCompositionInstruction()
crossFadeInstruction.timeRange = crossFadeTimeRange
crossFadeInstruction.layerInstructions = [crossFadeLayer0Instruction, crossFadeLayer1Instruction]
The third instruction is to pass through just compositionTrack1 for the appropriate time range:
let compositionTrack1PassThroughInstruction = AVMutableVideoCompositionInstruction.passThrough(
trackId: compositionTrack1Id, timeRange: compositionTrack1PassThroughTimeRange)
Now that we have all three instructions, we can give them to the video composition:
videoComposition.instructions = [compositionTrack0PassThroughInstruction, crossFadeInstruction, compositionTrack1PassThroughInstruction]
And now we can use composition and videoComposition together, for example to export a new movie file:
let export = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetMediumQuality)!
export.outputURL = outputUrl
export.videoComposition = videoComposition
export.exportAsynchronously {
exit(0)
}
}
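In a real app you would probably check the export's outcome in the completion handler instead of exiting unconditionally. A sketch of that check, using the same export and outputUrl as above:
export.exportAsynchronously {
    switch export.status {
    case .completed:
        print("Exported to \(outputUrl)")
    case .failed, .cancelled:
        print("Export failed: \(String(describing: export.error))")
    default:
        break
    }
    exit(0) // only because this is a command-line test harness
}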
Here's the helper I used to create the pass-through instructions:
extension AVMutableVideoCompositionInstruction {
    static func passThrough(trackId: CMPersistentTrackID, timeRange: CMTimeRange) -> AVMutableVideoCompositionInstruction {
        let layerInstruction = AVMutableVideoCompositionLayerInstruction()
        layerInstruction.trackID = trackId
        let instruction = AVMutableVideoCompositionInstruction()
        instruction.timeRange = timeRange
        instruction.layerInstructions = [layerInstruction]
        return instruction
    }
}
And here's my test code. I used a macOS command-line app for testing:
let asset0 = AVURLAsset(url: URL(fileURLWithPath: "/tmp/asset0.mp4"))
let asset1 = AVURLAsset(url: URL(fileURLWithPath: "/tmp/asset1.mp4"))
let outputUrl = URL(fileURLWithPath: "/tmp/output.mp4")
try! crossFade(asset0: asset0, asset1: asset1, crossFadeDuration: CMTimeMake(value: 1, timescale: 1), to: outputUrl)
dispatchMain()
Result:
Note that I had to make the animation tiny and low color because of Stack Overflow's limit on image file size.
Input videos courtesy of Jeffrey Beach.

Related

Trouble applying scaleTimeRange on multiple videos in a AVMutableComposition video

I am attempting to merge videos with scaleTimeRanges (to make them slo-mo or sped-up); however, it is not working as desired. Only the first video has the time-range effect... not all of them.
The work is done in the merge videos function; it is pretty simple... however I am not sure why the scaling of the time range is working only for the first video and not the next ones...
This is a test project to test with, it has my current code: https://github.com/meyesyesme/creationMergeProj
This is the merge function I use, with the time range scaling currently commented out (you can uncomment to see it not working):
func mergeVideosTestSQ(arrayVideos: [VideoSegment], completion: @escaping (URL?, Error?) -> ()) {
let mixComposition = AVMutableComposition()
var instructions: [AVMutableVideoCompositionLayerInstruction] = []
var insertTime = CMTime(seconds: 0, preferredTimescale: 1)
print(arrayVideos, "<- arrayVideos")
/// for each URL add the video and audio tracks and their duration to the composition
for videoSegment in arrayVideos {
let sourceAsset = AVAsset(url: videoSegment.videoURL!)
let frameRange = CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 1), duration: sourceAsset.duration)
guard
let nthVideoTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)),
let nthAudioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)), //0 used to be kCMPersistentTrackID_Invalid
let assetVideoTrack = sourceAsset.tracks(withMediaType: .video).first
else {
print("didnt work")
return
}
var assetAudioTrack: AVAssetTrack?
assetAudioTrack = sourceAsset.tracks(withMediaType: .audio).first
print(assetAudioTrack, ",-- assetAudioTrack???", assetAudioTrack?.asset, "<-- hes", sourceAsset)
do {
try nthVideoTrack.insertTimeRange(frameRange, of: assetVideoTrack, at: insertTime)
try nthAudioTrack.insertTimeRange(frameRange, of: assetAudioTrack!, at: insertTime)
//MY CURRENT SPEED ATTEMPT:
let newDuration = CMTimeMultiplyByFloat64(frameRange.duration, multiplier: videoSegment.videoSpeed)
nthVideoTrack.scaleTimeRange(frameRange, toDuration: newDuration)
nthAudioTrack.scaleTimeRange(frameRange, toDuration: newDuration)
print(insertTime.value, "<-- fiji, newdur --->", newDuration.value, "sourceasset duration--->", sourceAsset.duration.value, "frameRange.duration -->", frameRange.duration.value)
//instructions:
let nthInstruction = ViewController.videoCompositionInstruction(nthVideoTrack, asset: sourceAsset)
nthInstruction.setOpacity(0.0, at: CMTimeAdd(insertTime, newDuration)) //sourceasset.duration
instructions.append(nthInstruction)
insertTime = insertTime + newDuration //sourceAsset.duration
} catch {
DispatchQueue.main.async {
print("didnt wor2k")
}
}
}
let mainInstruction = AVMutableVideoCompositionInstruction()
mainInstruction.timeRange = CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 1), duration: insertTime)
mainInstruction.layerInstructions = instructions
let mainComposition = AVMutableVideoComposition()
mainComposition.instructions = [mainInstruction]
mainComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
mainComposition.renderSize = CGSize(width: 1080, height: 1920)
let outputFileURL = URL(fileURLWithPath: NSTemporaryDirectory() + "merge.mp4")
//below to clear the video form docuent folder for new vid...
let fileManager = FileManager()
try? fileManager.removeItem(at: outputFileURL)
print("<now will export: πŸ”₯ πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯πŸ”₯")
/// try to start an export session and set the path and file type
if let exportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) { //DOES NOT WORK WITH AVAssetExportPresetPassthrough
exportSession.outputFileType = .mov
exportSession.outputURL = outputFileURL
exportSession.videoComposition = mainComposition
exportSession.shouldOptimizeForNetworkUse = true
/// try to export the file and handle the status cases
exportSession.exportAsynchronously {
if let url = exportSession.outputURL{
completion(url, nil)
}
if let error = exportSession.error {
completion(nil, error)
}
}
}
}
You'll see this behavior: the first video works well, but the following ones do not, and their opacity changes happen at the wrong times, etc... I have tried different combinations and this is the closest one yet.
I've been stuck on this for a while!
After you scale the video, the duration of the composition gets recalculated, so you need to append the next part according to this change. Replace
insertTime = insertTime + duration
with
insertTime = insertTime + newDuration
You also need to update the time passed to setOpacity. I'd advise you to move that line after the insertTime update and use the new value, to avoid duplicating the calculation.
When you apply the scale, it's applied to the new composition, so you need to use the range relative to the composition:
let currentRange = CMTimeRange(start: insertTime, duration: frameRange.duration)
nthVideoTrack.scaleTimeRange(currentRange, toDuration: newDuration)
nthAudioTrack.scaleTimeRange(currentRange, toDuration: newDuration)
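Putting those three changes together, the body of the do block in mergeVideosTestSQ would look roughly like this (a sketch reusing the question's names, not a drop-in tested replacement):
try nthVideoTrack.insertTimeRange(frameRange, of: assetVideoTrack, at: insertTime)
try nthAudioTrack.insertTimeRange(frameRange, of: assetAudioTrack!, at: insertTime)
// Scale the range as it sits in the composition, i.e. starting at insertTime.
let newDuration = CMTimeMultiplyByFloat64(frameRange.duration, multiplier: videoSegment.videoSpeed)
let currentRange = CMTimeRange(start: insertTime, duration: frameRange.duration)
nthVideoTrack.scaleTimeRange(currentRange, toDuration: newDuration)
nthAudioTrack.scaleTimeRange(currentRange, toDuration: newDuration)
// Advance by the scaled duration first, then hide this track at its new end time.
insertTime = insertTime + newDuration
let nthInstruction = ViewController.videoCompositionInstruction(nthVideoTrack, asset: sourceAsset)
nthInstruction.setOpacity(0.0, at: insertTime)
instructions.append(nthInstruction)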

how to merge video clips using avfoundation?

I have successfully merged the video clips into a single video, but I am having a problem with the final merged video: it shows a white frame after the end of every video clip. I have tried a lot to remove this but couldn't find success. Please review my code below.
func merge(arrayVideos: [AVAsset], completion: @escaping (_ exporter: AVAssetExportSession) -> ()) -> Void {
let mainComposition = AVMutableComposition()
let compositionVideoTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
compositionVideoTrack?.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2)
let soundtrackTrack = mainComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
var time:Double = 0.0
for (index, videoAsset) in arrayVideos.enumerated() {
let atTime = CMTime(seconds: time, preferredTimescale: 1)
try! compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .video)[0], at: atTime)
try! soundtrackTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .audio)[0], at: atTime)
time += videoAsset.duration.seconds
}
let outputFileURL = URL(fileURLWithPath: NSTemporaryDirectory() + "merge.mp4")
print("final URL:\(outputFileURL)")
let fileManager = FileManager()
do {
try fileManager.removeItem(at: outputFileURL)
} catch let error as NSError {
print("Error: \(error.domain)")
}
let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)
exporter?.outputURL = outputFileURL
exporter?.outputFileType = AVFileType.mp4
exporter?.shouldOptimizeForNetworkUse = true
exporter?.exportAsynchronously {
DispatchQueue.main.async {
completion(exporter!)
}
}
}
Don't use a Double to track the insertion time; this can result in gaps due to rounding errors. And don't use a preferredTimescale of 1 when converting seconds: this effectively rounds everything to whole seconds (1000 would be a more common timescale for this).
Instead, to track the insertion time, use a CMTime initialized to kCMTimeZero, and use CMTimeAdd to advance it.
And one more thing: video and audio tracks can have different durations, particularly when recorded. So to keep things in sync, you may want to use CMTimeRangeGetIntersection to get the common time range of audio and video in the asset, and then use the result for insertion into the composition.
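A sketch of those suggestions applied to the insertion loop above, reusing the question's track names and assuming every asset has both a video and an audio track:
var insertTime = CMTime.zero  // the CMTime the answer suggests instead of a Double
for videoAsset in arrayVideos {
    guard let assetVideoTrack = videoAsset.tracks(withMediaType: .video).first,
          let assetAudioTrack = videoAsset.tracks(withMediaType: .audio).first else { continue }
    // Insert only the time range that both tracks actually cover.
    let commonRange = CMTimeRangeGetIntersection(assetVideoTrack.timeRange, otherRange: assetAudioTrack.timeRange)
    try! compositionVideoTrack?.insertTimeRange(commonRange, of: assetVideoTrack, at: insertTime)
    try! soundtrackTrack?.insertTimeRange(commonRange, of: assetAudioTrack, at: insertTime)
    // Advance with CMTime arithmetic instead of Double seconds.
    insertTime = CMTimeAdd(insertTime, commonRange.duration)
}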

Swift AVFoundation stitching multiple videos together and keep preferred transform

I'm trying to stitch multiple video clips together. If I stitch each AVAsset into one AVMutableCompositionTrack it works, but appending another asset recorded with mirroring enabled (front-facing camera) loses the transformation of the first asset. Can I somehow use multiple AVMutableCompositionTracks of type video in one AVMutableComposition?
// create mix composition
let mixComposition = AVMutableComposition()
// insert video track
let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)
// keep track of total duration
var totalDuration = kCMTimeZero
// for each video clip add to mutable composition and transform each video layer
for (index, videoClip) in videoClips.enumerated() {
if let videoAsset = videoClip.asset, let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video).first {
// insert current video track to composition
try videoCompositionTrack!.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), of: videoAssetTrack, at: totalDuration)
videoCompositionTrack?.preferredTransform = videoAssetTrack.preferredTransform
// shift duration to next
totalDuration = CMTimeAdd(totalDuration, videoAsset.duration)
}
}
// Use AVAssetExportSession to export video
let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPreset1920x1080)
assetExport?.outputFileType = AVFileType.mp4
// get needed save url to save the video to recommended url
let movieDestinationUrl = self.getRecommendedSaveUrl()
// seting up asset export session
assetExport?.outputURL = movieDestinationUrl
assetExport?.shouldOptimizeForNetworkUse = true
// export video to file system asyc
assetExport?.exportAsynchronously(completionHandler: {
assetExport?.cancelExport()
switch assetExport!.status {
case AVAssetExportSessionStatus.failed:
break
case AVAssetExportSessionStatus.cancelled:
break
default:
DispatchQueue.main.async {
completion?(movieDestinationUrl, nil)
}
}
if ((assetExport?.error) != nil) {
AppDelegate.logger.error("Could not create user video: \((assetExport?.error)!)")
DispatchQueue.main.async {
completion?(nil, assetExport?.error)
}
}
})
I'm trying to use something like this and multiple AVMutableCompositionTrack's with different CGAffineTransform objects.
// create mix composition
let mixComposition = AVMutableComposition()
// keep track of total duration
var totalDuration = kCMTimeZero
// for each video clip add to mutable composition and transform each video layer
for (index, videoClip) in videoClips.enumerated() {
if let videoAsset = videoClip.asset, let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video).first {
// insert video track
let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(index))
// insert current video track to composition
try videoCompositionTrack!.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoAsset.duration), of: videoAssetTrack, at: totalDuration)
videoCompositionTrack?.preferredTransform = videoAssetTrack.preferredTransform
// shift duration to next
totalDuration = CMTimeAdd(totalDuration, videoAsset.duration)
}
}
// Use AVAssetExportSession to export video
let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPreset1920x1080)
assetExport?.outputFileType = AVFileType.mp4
// get needed save url to save the video to recommended url
let movieDestinationUrl = self.getRecommendedSaveUrl()
// seting up asset export session
assetExport?.outputURL = movieDestinationUrl
assetExport?.shouldOptimizeForNetworkUse = true
// export video to file system asyc
assetExport?.exportAsynchronously(completionHandler: {
assetExport?.cancelExport()
switch assetExport!.status {
case AVAssetExportSessionStatus.failed:
break
case AVAssetExportSessionStatus.cancelled:
break
default:
DispatchQueue.main.async {
completion?(movieDestinationUrl, nil)
}
}
if ((assetExport?.error) != nil) {
AppDelegate.logger.error("Could not create user video: \((assetExport?.error)!)")
DispatchQueue.main.async {
completion?(nil, assetExport?.error)
}
}
})
In the case above I'm not able to get any usable video: it is much shorter than it should be. I'm trying to avoid using any AVMutableVideoCompositionInstruction because it takes too long to process, but it would still be an option if it worked for any resolution and especially with mirroring support.
// create mix composition
let mixComposition = AVMutableComposition()
// keep track of total duration
var totalDuration = kCMTimeZero
// keeps all layer transformations for each video asset
var videoCompositionLayerInstructions = [AVMutableVideoCompositionLayerInstruction]()
// for each video clip add to mutable composition and transform each video layer
for (index, videoClip) in videoClips.enumerated() {
if let videoAsset = videoClip.asset {
// use first video asset track for setting like height and width
let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaType.video).first!
// insert video track
let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID(index))
// insert current video track to composition
try videoCompositionTrack!.insertTimeRange(CMTimeRangeMake(totalDuration, videoAssetTrack.timeRange.duration), of: videoAssetTrack, at: totalDuration)
videoCompositionTrack?.preferredTransform = videoAssetTrack.preferredTransform
let videoCompositionLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoAssetTrack)
videoCompositionLayerInstruction.setTransform((videoCompositionTrack?.preferredTransform)!, at: totalDuration)
videoCompositionLayerInstruction.setOpacity(0.0, at: videoAsset.duration)
// apply instruction
videoCompositionLayerInstructions.append(videoCompositionLayerInstruction)
// shift duration to next
totalDuration = CMTimeAdd(totalDuration, videoAsset.duration)
}
}
let videoCompositionInstruction = AVMutableVideoCompositionInstruction()
videoCompositionInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, totalDuration)
videoCompositionInstruction.layerInstructions = videoCompositionLayerInstructions
let mainComposition = AVMutableVideoComposition()
mainComposition.renderSize = CGSize(width: 1080, height: 1920)
mainComposition.frameDuration = CMTimeMake(1, 30)
mainComposition.instructions = [videoCompositionInstruction]
// Use AVAssetExportSession to export video
let assetExport = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPreset1920x1080)
assetExport?.outputFileType = AVFileType.mp4
// get needed save url to save the video to recommended url
let movieDestinationUrl = self.getRecommendedSaveUrl()
// seting up asset export session
assetExport?.outputURL = movieDestinationUrl
assetExport?.shouldOptimizeForNetworkUse = true
assetExport?.videoComposition = mainComposition
Does anybody have an idea how to implement this functionality?
Note: I don't need to care about audio at all.

AVMutableAudioMix multiple volume changes to single track

I'm working on an app that merges multiple video clips into one final video. I would like to give users the ability to mute individual clips if desired (so, only parts of the final merged video would be muted). I have wrapped the AVAssets in a class called "Video" that has a "shouldMute" property.
My problem is, when I set the volume of one of the AVAssetTracks to zero, it stays muted for the remainder of the final video. Here is my code:
var completeDuration : CMTime = CMTimeMake(0, 1)
var insertTime = kCMTimeZero
var layerInstructions = [AVVideoCompositionLayerInstruction]()
let mixComposition = AVMutableComposition()
let audioMix = AVMutableAudioMix()
let videoTrack =
mixComposition.addMutableTrack(withMediaType: AVMediaType.video,
preferredTrackID: kCMPersistentTrackID_Invalid)
let audioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
// iterate through video assets and merge together
for (i, video) in clips.enumerated() {
let videoAsset = video.asset
var clipDuration = videoAsset.duration
do {
if video == clips.first {
insertTime = kCMTimeZero
} else {
insertTime = completeDuration
}
if let videoAssetTrack = videoAsset.tracks(withMediaType: .video).first {
try videoTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: videoAssetTrack, at: insertTime)
completeDuration = CMTimeAdd(completeDuration, clipDuration)
}
if let audioAssetTrack = videoAsset.tracks(withMediaType: .audio).first {
try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: audioAssetTrack, at: insertTime)
if video.shouldMute {
let audioMixInputParams = AVMutableAudioMixInputParameters()
audioMixInputParams.trackID = audioTrack!.trackID
audioMixInputParams.setVolume(0.0, at: insertTime)
audioMix.inputParameters.append(audioMixInputParams)
}
}
} catch let error as NSError {
print("error: \(error)")
}
let videoInstruction = videoCompositionInstructionForTrack(track: videoTrack!, video: video)
if video != clips.last{
videoInstruction.setOpacity(0.0, at: completeDuration)
}
layerInstructions.append(videoInstruction)
} // end of video asset iteration
If I add another setVolume:atTime instruction to increase the volume back to 1.0 at the end of the clip, then the first volume instruction is completely ignored and the whole video plays at full volume.
In other words, this isn't working:
if video.shouldMute {
let audioMixInputParams = AVMutableAudioMixInputParameters()
audioMixInputParams.trackID = audioTrack!.trackID
audioMixInputParams.setVolume(0.0, at: insertTime)
audioMixInputParams.setVolume(1.0, at: completeDuration)
audioMix.inputParameters.append(audioMixInputParams)
}
I have set the audioMix on both my AVPlayerItem and AVAssetExportSession. What am I doing wrong? What can I do to allow users to mute the time ranges of individual clips before merging into the final video?
Apparently I was going about this wrong. As you can see above, my composition has two AVMutableCompositionTracks: a video track, and an audio track. Even though I inserted the time ranges of a series of other tracks into those two tracks, there's still ultimately only two tracks. So, I only needed one AVMutableAudioMixInputParameters object to associate with my one audio track.
I initialized a single AVMutableAudioMixInputParameters object and then, after I inserted the time range of each clip, I'd check to see whether it should be muted and set a volume ramp for the clip's time range (the time range in relation to the entire audio track). Here's what that looks like, inside my clip iteration:
if let audioAssetTrack = videoAsset.tracks(withMediaType: .audio).first {
try audioTrack?.insertTimeRange(CMTimeRangeMake(kCMTimeZero, clipDuration), of: audioAssetTrack, at: insertTime)
if video.shouldMute {
audioMixInputParams.setVolumeRamp(fromStartVolume: 0.0, toEndVolume: 0.0, timeRange: CMTimeRangeMake(insertTime, clipDuration))
} else {
audioMixInputParams.setVolumeRamp(fromStartVolume: 1.0, toEndVolume: 1.0, timeRange: CMTimeRangeMake(insertTime, clipDuration))
}
}
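For context, a sketch of the pieces around that loop: the parameters object is created once for the single composition audio track, and the finished mix is handed to whatever consumes the composition (playerItem and exportSession are placeholder names, not from the question):
// Before the clip loop: one parameters object for the one audio track.
let audioMixInputParams = AVMutableAudioMixInputParameters(track: audioTrack)
// ... insert each clip and call setVolumeRamp as shown above ...
// After the loop: wrap it in the mix and attach it.
let audioMix = AVMutableAudioMix()
audioMix.inputParameters = [audioMixInputParams]
playerItem.audioMix = audioMix       // placeholder AVPlayerItem, for preview
exportSession.audioMix = audioMix    // placeholder AVAssetExportSession, for export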

AVAudioMix audio ducking not working

I'm trying to do audio editing on an AVMutableComposition that I have built.
var commentaryTimeRange = CMTimeRange(start: commentaryItem.startTimeInTimeline, duration: commentaryItem.timeRange.duration)
if CMTimeCompare(CMTimeRangeGetEnd(commentaryTimeRange), composition.duration) == 1 {
commentaryTimeRange.duration = CMTimeSubtract(composition.duration, commentaryTimeRange.start);
commentaryItem.timeRange = commentaryTimeRange
}
// Add the commentary track
let compositionCommentaryTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)
let track = commentaryItem.asset.tracks(withMediaType: AVMediaTypeAudio).first!
try! compositionCommentaryTrack.insertTimeRange(CMTimeRange(start: kCMTimeZero, duration:commentaryTimeRange.duration), of: track, at: commentaryTimeRange.start)
let tracksToDuck = composition.tracks(withMediaType: AVMediaTypeAudio)
var trackMixArray = [AVMutableAudioMixInputParameters]()
let rampDuration = CMTime(seconds: 1, preferredTimescale: 2)
for track in tracksToDuck {
let trackMix = AVMutableAudioMixInputParameters(track: track)
trackMix.setVolumeRamp(fromStartVolume: 1.0, toEndVolume: 0.2, timeRange: CMTimeRange(start: CMTimeSubtract(commentaryTimeRange.start, rampDuration), duration: CMTimeSubtract(commentaryTimeRange.duration, rampDuration)))
trackMix.setVolumeRamp(fromStartVolume: 0.2, toEndVolume: 1.0, timeRange: CMTimeRange(start: CMTimeRangeGetEnd(commentaryTimeRange), duration: rampDuration))
trackMixArray.append(trackMix)
}
let audioMix = AVMutableAudioMix()
audioMix.inputParameters = trackMixArray
Basically I'm trying to add a commentary over a video track by ducking the original volume.
The audio is correctly mixed in the output, but the audio instructions seem to be ignored.
Of course the audio mix is passed to the AVPlayerItem; from debugging I can see that all the instructions are there and correctly passed to it.
func makePlayable() -> AVPlayerItem {
let playerItem = AVPlayerItem(asset: composition.copy() as! AVAsset, automaticallyLoadedAssetKeys: NewsPlayerViewController.assetKeysRequiredToPlay)
playerItem.videoComposition = videoComposition
playerItem.audioMix = audioMix?.copy() as! AVAudioMix?
if let overlayLayer = overlayLayer {
let syncLayer = AVSynchronizedLayer(playerItem: playerItem)
syncLayer.addSublayer(overlayLayer)
playerItem.syncLayer = syncLayer
}
return playerItem
}
I've found some answers that point to missing track identifiers, or to a mismatch between a composition that has one and a track that doesn't.
My composition doesn't use any track IDs, and the AVEdit sample code from Apple doesn't use them either, yet it works.
The solution was simply to fetch the tracks to duck BEFORE adding the commentary track.
let tracksToDuck = composition.tracks(withMediaType: AVMediaTypeAudio)// <- MOVE HERE, AT THE TOP
var commentaryTimeRange = CMTimeRange(start: commentaryItem.startTimeInTimeline, duration: commentaryItem.timeRange.duration)
if CMTimeCompare(CMTimeRangeGetEnd(commentaryTimeRange), composition.duration) == 1 {
commentaryTimeRange.duration = CMTimeSubtract(composition.duration, commentaryTimeRange.start);
commentaryItem.timeRange = commentaryTimeRange
}
// Add the commentary track
let compositionCommentaryTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)
let track = commentaryItem.asset.tracks(withMediaType: AVMediaTypeAudio).first!
try! compositionCommentaryTrack.insertTimeRange(CMTimeRange(start: kCMTimeZero, duration:commentaryTimeRange.duration), of: track, at: commentaryTimeRange.start)
var trackMixArray = [AVMutableAudioMixInputParameters]()
let rampDuration = CMTime(seconds: 1, preferredTimescale: 2)
for track in tracksToDuck {
let trackMix = AVMutableAudioMixInputParameters(track: track)
trackMix.setVolumeRamp(fromStartVolume: 1.0, toEndVolume: 0.2, timeRange: CMTimeRange(start: CMTimeSubtract(commentaryTimeRange.start, rampDuration), duration: CMTimeSubtract(commentaryTimeRange.duration, rampDuration)))
trackMix.setVolumeRamp(fromStartVolume: 0.2, toEndVolume: 1.0, timeRange: CMTimeRange(start: CMTimeRangeGetEnd(commentaryTimeRange), duration: rampDuration))
trackMixArray.append(trackMix)
}
let audioMix = AVMutableAudioMix()
audioMix.inputParameters = trackMixArray
