This is sort of an extension of this question of mine, but I think it is different enough to merit its own question:
I am filtering videos of various sizes, scales, etc. by feeding them into an AVMutableVideoComposition.
This is the code that I currently have:
private func filterVideo(with filter: Filter?) {
    if let player = playerLayer?.player, let playerItem = player.currentItem {
        let composition = AVMutableComposition()
        let videoAssetTrack = playerItem.asset.tracks(withMediaType: .video).first
        let videoCompositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
        try? videoCompositionTrack?.insertTimeRange(CMTimeRange(start: kCMTimeZero, duration: playerItem.asset.duration), of: videoAssetTrack!, at: kCMTimeZero)

        let videoComposition = AVMutableVideoComposition(asset: composition, applyingCIFiltersWithHandler: { (request) in
            print(request.sourceImage.pixelBuffer) // Sometimes => nil

            if let filter = filter {
                if let filteredImage = filter.filterImage(request.sourceImage) {
                    request.finish(with: filteredImage, context: nil)
                } else {
                    request.finish(with: RenderError.couldNotFilter)
                }
            } else {
                request.finish(with: request.sourceImage, context: nil)
            }
        })

        playerItem.videoComposition = videoComposition
    }
}
filter is an instance of my custom Filter class, which has functions to filter a UIImage or CIImage.
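For reference, here is a minimal sketch of what such a Filter class might look like; the filterImage(_:) name matches the call above, but the CIFilter-backed implementation is an assumption, not the actual class:

import CoreImage

// Hypothetical sketch of the custom Filter class, not the original implementation.
class Filter {
    private let ciFilter: CIFilter

    init?(name: String) {
        guard let ciFilter = CIFilter(name: name) else { return nil }
        self.ciFilter = ciFilter
    }

    // Returns nil when the underlying CIFilter produces no output for this input.
    func filterImage(_ image: CIImage) -> CIImage? {
        ciFilter.setValue(image, forKey: kCIInputImageKey)
        return ciFilter.outputImage
    }
}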
The problem is that some videos come out messed up, and these are exactly the videos for which filteredImage ends up being nil. This suggests that some source images are empty: their pixelBuffers are nil. Note that the pixelBuffer is already nil before I even feed the image into the filter.
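As a diagnostic sketch (my addition, assuming only the handler shown above), one way to see which frames arrive without a backing buffer is to log them inside the handler:

let videoComposition = AVMutableVideoComposition(asset: composition, applyingCIFiltersWithHandler: { request in
    // Log frames whose source CIImage is not backed by a CVPixelBuffer,
    // together with their presentation time and extent.
    if request.sourceImage.pixelBuffer == nil {
        print("Empty source frame at \(CMTimeGetSeconds(request.compositionTime))s, extent: \(request.sourceImage.extent)")
    }
    request.finish(with: request.sourceImage, context: nil)
})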
Why is this happening, and how can I fix it?
Related
I am working on a video-based application in Swift. As per the requirement, I have to select multiple videos from the device gallery, set a different CIFilter effect and volume for each video asset, and then merge all the videos and save the final video. When I play the final video, the sound volume should change accordingly for each clip.
I have already merged all the selected video assets into one with the different CIFilter effects, but my problem is that when I try to set the volume for each video clip, it doesn't work: I get the default volume in my final video. Here is my code:
func addFilerEffectAndVolumeToIndividualVideoClip(_ assetURL: URL, video: VideoFileModel, completion : ((_ session: AVAssetExportSession?, _ outputURL : URL?) -> ())?){

    let videoFilteredAsset = AVAsset(url: assetURL)
    print(videoFilteredAsset)
    createVideoComposition(myAsset: videoFilteredAsset, videos: video)

    let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let url = URL(fileURLWithPath: documentDirectory).appendingPathComponent("\(video.fileID)_\("FilterVideo").mov")
    let filePath = url.path
    let fileManager = FileManager.default

    do {
        if fileManager.fileExists(atPath: filePath) {
            print("FILE AVAILABLE")
            try fileManager.removeItem(atPath: filePath)
        } else {
            print("FILE NOT AVAILABLE")
        }
    } catch _ {
    }

    let composition: AVMutableComposition = AVMutableComposition()
    let compositionVideo: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
    let compositionAudioVideo: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())

    //Add video to the final record
    do {
        try compositionVideo.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoFilteredAsset.duration), of: videoFilteredAsset.tracks(withMediaType: AVMediaTypeVideo)[0], at: kCMTimeZero)
    } catch _ {
    }

    //Extract audio from the video and the music
    let audioMix: AVMutableAudioMix = AVMutableAudioMix()
    var audioMixParam: [AVMutableAudioMixInputParameters] = []

    let assetVideoTrack: AVAssetTrack = videoFilteredAsset.tracks(withMediaType: AVMediaTypeAudio)[0]

    let videoParam: AVMutableAudioMixInputParameters = AVMutableAudioMixInputParameters(track: assetVideoTrack)
    videoParam.trackID = compositionAudioVideo.trackID

    //Set final volume of the audio record and the music
    videoParam.setVolume(video.videoClipVolume, at: kCMTimeZero)

    //Add setting
    audioMixParam.append(videoParam)

    //Add audio on final record
    do {
        try compositionAudioVideo.insertTimeRange(CMTimeRangeMake(kCMTimeZero, videoFilteredAsset.duration), of: assetVideoTrack, at: kCMTimeZero)
    } catch _ {
        assertionFailure()
    }

    //Fading volume out for background music
    let durationInSeconds = CMTimeGetSeconds(videoFilteredAsset.duration)
    let firstSecond = CMTimeRangeMake(CMTimeMakeWithSeconds(0, 1), CMTimeMakeWithSeconds(1, 1))
    let lastSecond = CMTimeRangeMake(CMTimeMakeWithSeconds(durationInSeconds-1, 1), CMTimeMakeWithSeconds(1, 1))

    videoParam.setVolumeRamp(fromStartVolume: 0, toEndVolume: video.videoClipVolume, timeRange: firstSecond)
    videoParam.setVolumeRamp(fromStartVolume: video.videoClipVolume, toEndVolume: 0, timeRange: lastSecond)

    //Add parameter
    audioMix.inputParameters = audioMixParam

    // Export part, left for facility
    let exporter = AVAssetExportSession(asset: videoFilteredAsset, presetName: AVAssetExportPresetHighestQuality)!
    exporter.videoComposition = videoFilterComposition
    exporter.outputURL = url
    exporter.outputFileType = AVFileTypeQuickTimeMovie
    exporter.audioMix = audioMix
    exporter.exportAsynchronously(completionHandler: { () -> Void in
        completion!(exporter, url)
    })
}
After that I merge all the video clips in another method using AVAssetExportSession; there I am not setting any AudioMixInputParameters.
Note: when I set the volume in that final merging method using AVAssetExportSession's AudioMixInputParameters, the volume changes for the whole video.
My question: is it possible to set a different volume for each video clip? Please suggest. Thank you!
Here is the working solution for my question:
func addVolumeToIndividualVideoClip(_ assetURL: URL, video: VideoFileModel, completion : ((_ session: AVAssetExportSession?, _ outputURL : URL?) -> ())?){

    //Create Asset from Url
    let filteredVideoAsset: AVAsset = AVAsset(url: assetURL)
    video.fileID = String(video.videoID)

    //Get the path of App Document Directory
    let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
    let url = URL(fileURLWithPath: documentDirectory).appendingPathComponent("\(video.fileID)_\("FilterVideo").mov")
    let filePath = url.path
    let fileManager = FileManager.default

    do {
        if fileManager.fileExists(atPath: filePath) {
            print("FILE AVAILABLE")
            try fileManager.removeItem(atPath: filePath)
        } else {
            print("FILE NOT AVAILABLE")
        }
    } catch _ {
    }

    let composition: AVMutableComposition = AVMutableComposition()
    let compositionVideo: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: CMPersistentTrackID())
    let compositionAudioVideo: AVMutableCompositionTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: CMPersistentTrackID())

    //Add video to the final record
    do {
        try compositionVideo.insertTimeRange(CMTimeRangeMake(kCMTimeZero, filteredVideoAsset.duration), of: filteredVideoAsset.tracks(withMediaType: AVMediaTypeVideo)[0], at: kCMTimeZero)
    } catch _ {
    }

    //Extract audio from the video and the music
    let audioMix: AVMutableAudioMix = AVMutableAudioMix()
    var audioMixParam: [AVMutableAudioMixInputParameters] = []

    let assetVideoTrack: AVAssetTrack = filteredVideoAsset.tracks(withMediaType: AVMediaTypeAudio)[0]

    let videoParam: AVMutableAudioMixInputParameters = AVMutableAudioMixInputParameters(track: assetVideoTrack)
    videoParam.trackID = compositionAudioVideo.trackID

    //Set final volume of the audio record and the music
    videoParam.setVolume(video.videoVolume, at: kCMTimeZero)

    //Add setting
    audioMixParam.append(videoParam)

    //Add audio on final record
    //First: the audio of the record and Second: the music
    do {
        try compositionAudioVideo.insertTimeRange(CMTimeRangeMake(kCMTimeZero, filteredVideoAsset.duration), of: assetVideoTrack, at: kCMTimeZero)
    } catch _ {
        assertionFailure()
    }

    //Fading volume out for background music
    let durationInSeconds = CMTimeGetSeconds(filteredVideoAsset.duration)
    let firstSecond = CMTimeRangeMake(CMTimeMakeWithSeconds(0, 1), CMTimeMakeWithSeconds(1, 1))
    let lastSecond = CMTimeRangeMake(CMTimeMakeWithSeconds(durationInSeconds-1, 1), CMTimeMakeWithSeconds(1, 1))

    videoParam.setVolumeRamp(fromStartVolume: 0, toEndVolume: video.videoVolume, timeRange: firstSecond)
    videoParam.setVolumeRamp(fromStartVolume: video.videoVolume, toEndVolume: 0, timeRange: lastSecond)

    //Add parameter
    audioMix.inputParameters = audioMixParam

    //Remove the previous temp video if it exists
    let filemgr = FileManager.default
    do {
        if filemgr.fileExists(atPath: "\(video.fileID)_\("FilterVideo").mov") {
            try filemgr.removeItem(atPath: "\(video.fileID)_\("FilterVideo").mov")
        } else {
        }
    } catch _ {
    }

    //Export the final record
    let exporter: AVAssetExportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    exporter.outputURL = url
    exporter.outputFileType = AVFileTypeMPEG4
    exporter.audioMix = audioMix
    exporter.exportAsynchronously(completionHandler: { () -> Void in
        completion!(exporter, url)
        // self.saveVideoToLibrary(from: filePath)
    })
}
I found that exporting an asset with the AVAssetExportPresetPassthrough preset doesn't have an impact on output volume. When I tried AVAssetExportPresetLowQuality, the volume change was applied successfully.
I wish this were better documented somewhere :(
The working code:
// Assume we have:
let composition: AVMutableComposition
var inputParameters = [AVAudioMixInputParameters]()
// We add a track
let trackComposition = composition.addMutableTrack(...)
// Configure volume for this track
let inputParameter = AVMutableAudioMixInputParameters(track: trackComposition)
inputParameter.setVolume(desiredVolume, at: startTime)
// It works even without setting the `trackID`
// inputParameter.trackID = trackComposition.trackID
inputParameters.append(inputParameter)
// Apply gathered `inputParameters` before exporting
let audioMix = AVMutableAudioMix()
audioMix.inputParameters = inputParameters
// I found it's not working, if using `AVAssetExportPresetPassthrough`,
// so try `AVAssetExportPresetLowQuality` first
let export = AVAssetExportSession(..., presetName: AVAssetExportPresetLowQuality)
export.audioMix = audioMix
Tested this with multiple assetTrack insertions into the same compositionTrack, setting a different volume for each insertion. It seems to be working; a sketch of that setup follows.
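A rough sketch of that multi-insertion idea (the clip list and volume values are placeholders, not taken from the answer above): several clips are appended to one composition audio track, and a single AVMutableAudioMixInputParameters keyed to that track changes its volume at the time each insertion starts.

import AVFoundation

// Sketch: append each clip's audio to one composition track and change the mix
// volume at the point where that clip begins. Clips and volumes are illustrative.
func makeComposition(from clips: [(asset: AVAsset, volume: Float)]) -> (AVMutableComposition, AVMutableAudioMix) {
    let composition = AVMutableComposition()
    let audioTrack = composition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)!

    let params = AVMutableAudioMixInputParameters(track: audioTrack)
    var cursor = CMTime.zero

    for clip in clips {
        if let sourceAudio = clip.asset.tracks(withMediaType: .audio).first {
            try? audioTrack.insertTimeRange(CMTimeRange(start: .zero, duration: clip.asset.duration), of: sourceAudio, at: cursor)
            params.setVolume(clip.volume, at: cursor) // volume applies from this point onwards
        }
        cursor = cursor + clip.asset.duration
    }

    let audioMix = AVMutableAudioMix()
    audioMix.inputParameters = [params]
    return (composition, audioMix)
}

As in the answer above, the returned audioMix would then be assigned to an AVAssetExportSession created with a re-encoding preset rather than AVAssetExportPresetPassthrough.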
I want to combine multiple videos and their audio into one video frame; for that I am using the AVFoundation framework.
I have created a method that accepts an array of assets, and as of now I am passing in three different videos' assets.
So far I have combined their audio, but the problem is with the video frame: only the first asset's video appears, repeated in every frame.
I am using the code below to combine the videos. It combines all three videos' audio perfectly, but the first video in the input array is repeated three times, which is the main issue:
I want all three different videos in the frame.
func merge(Videos aArrAssets: [AVAsset]){

    let mixComposition = AVMutableComposition()

    func setup(asset aAsset: AVAsset, WithComposition aComposition: AVMutableComposition) -> AVAssetTrack {
        let aMutableCompositionVideoTrack = aComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
        let aMutableCompositionAudioTrack = aComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)

        let aVideoAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .video)[0]
        let aAudioAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .audio)[0]

        do{
            try aMutableCompositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aVideoAssetTrack, at: .zero)
            try aMutableCompositionAudioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aAudioAssetTrack, at: .zero)
        }catch{}

        return aVideoAssetTrack
    }

    let aArrVideoTracks = aArrAssets.map { setup(asset: $0, WithComposition: mixComposition) }
    var aArrLayerInstructions : [AVMutableVideoCompositionLayerInstruction] = []

    //Transform every video
    var aNewHeight : CGFloat = 0
    for (aIndex,aTrack) in aArrVideoTracks.enumerated(){
        aNewHeight += aIndex > 0 ? aArrVideoTracks[aIndex - 1].naturalSize.height : 0
        let aLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: aTrack)
        let aFristTransform = CGAffineTransform(translationX: 0, y: aNewHeight)
        aLayerInstruction.setTransform(aFristTransform, at: .zero)
        aArrLayerInstructions.append(aLayerInstruction)
    }

    let aTotalTime = aArrVideoTracks.map { $0.timeRange.duration }.max()
    let aInstruction = AVMutableVideoCompositionInstruction()
    aInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: aTotalTime!)
    aInstruction.layerInstructions = aArrLayerInstructions

    let aVideoComposition = AVMutableVideoComposition()
    aVideoComposition.instructions = [aInstruction]
    aVideoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)

    let aTotalWidth = aArrVideoTracks.map { $0.naturalSize.width }.max()!
    let aTotalHeight = aArrVideoTracks.map { $0.naturalSize.height }.reduce(0){ $0 + $1 }
    aVideoComposition.renderSize = CGSize(width: aTotalWidth, height: aTotalHeight)

    saveVideo(WithAsset: mixComposition, videoComp : aVideoComposition) { (aError, aUrl) in
        print("Location : \(String(describing: aUrl))")
    }
}
private func saveVideo(WithAsset aAsset : AVAsset, videoComp : AVVideoComposition, completion: @escaping (_ error: Error?, _ url: URL?) -> Void){

    let dateFormatter = DateFormatter()
    dateFormatter.dateFormat = "ddMMyyyy_HHmm"
    let date = dateFormatter.string(from: NSDate() as Date)

    // Exporting
    let savePathUrl: URL = URL(fileURLWithPath: NSHomeDirectory() + "/Documents/newVideo_\(date).mov")
    do { // delete old video
        try FileManager.default.removeItem(at: savePathUrl)
    } catch { print(error.localizedDescription) }

    let assetExport: AVAssetExportSession = AVAssetExportSession(asset: aAsset, presetName: AVAssetExportPresetMediumQuality)!
    assetExport.outputFileType = .mov
    assetExport.outputURL = savePathUrl
    // assetExport.shouldOptimizeForNetworkUse = true
    assetExport.videoComposition = videoComp

    assetExport.exportAsynchronously { () -> Void in
        switch assetExport.status {
        case .completed:
            print("success")
            completion(nil, savePathUrl)
        case .failed:
            print("failed \(assetExport.error?.localizedDescription ?? "error nil")")
            completion(assetExport.error, nil)
        case .cancelled:
            print("cancelled \(assetExport.error?.localizedDescription ?? "error nil")")
            completion(assetExport.error, nil)
        default:
            print("complete")
            completion(assetExport.error, nil)
        }
    }
}
I know I am doing something wrong in the code but couldn't figure out where, so I need some help finding it.
Thanks in advance.
Your issue is that when you construct your AVMutableVideoCompositionLayerInstruction, the aTrack reference points to the track of the original asset, which you set with
let aVideoAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .video)[0]
Its trackID is 1, because it is the first track in its source AVAsset. Accordingly, when you inspect your aArrLayerInstructions you will see that the trackIDs of your instructions are all 1, which is why you're getting the first video three times:
(lldb) p aArrLayerInstructions[0].trackID
(CMPersistentTrackID) $R8 = 1
(lldb) p aArrLayerInstructions[1].trackID
(CMPersistentTrackID) $R10 = 1
...
The solution is not to enumerate your source tracks but the tracks of your composition when constructing the composition layer instructions.
let tracks = mixComposition.tracks(withMediaType: .video)
for (aIndex,aTrack) in tracks.enumerated(){
...
If you do it like that you will get the correct trackIDs for your layer instructions
(lldb) p aArrLayerInstructions[0].trackID
(CMPersistentTrackID) $R2 = 1
(lldb) p aArrLayerInstructions[1].trackID
(CMPersistentTrackID) $R4 = 3
...
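Put together, the adjusted loop could look like the following sketch (based on the snippet above and reusing the question's variable names; the source tracks in aArrVideoTracks are kept only for the height calculation):

//Transform every video, enumerating the composition's own tracks
var aNewHeight : CGFloat = 0
let aCompositionTracks = mixComposition.tracks(withMediaType: .video)

for (aIndex, aTrack) in aCompositionTracks.enumerated() {
    aNewHeight += aIndex > 0 ? aArrVideoTracks[aIndex - 1].naturalSize.height : 0

    // aTrack is now a composition track, so the layer instruction picks up the composition's trackID
    let aLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: aTrack)
    aLayerInstruction.setTransform(CGAffineTransform(translationX: 0, y: aNewHeight), at: .zero)
    aArrLayerInstructions.append(aLayerInstruction)
}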
This question is different from the one in "Ios Xcode Message from debugger: Terminated due to memory issue": I am using a different device, my app is being killed in the foreground, and besides that I cannot use Instruments to see allocations.
I am trying to merge short intervals of many AVAssets into one video file. I need to apply additional filters and transformations to them.
I implemented classes which can take one asset and do everything exactly as I want, but now, when I try to do the same thing with many shorter assets (about 7 assets is still ok, and the total duration can even be shorter than with the single asset), the application crashes and I only get the "Message from debugger: Terminated due to memory issue" log.
I cannot even use most of the Instruments tools, because the application crashes immediately with them. I have tried many things to solve this, but I was unsuccessful and I would really appreciate some help.
Thank you
Relevant code snippets are here:
Creation of composition:
func export(toURL url: URL, callback: @escaping (_ url: URL?) -> Void){
    var lastTime = kCMTimeZero
    var instructions : [VideoFilterCompositionInstruction] = []
    let composition = AVMutableComposition()
    composition.naturalSize = CGSize(width: 1080, height: 1920)

    for (index, assetURL) in assets.enumerated() {
        let asset : AVURLAsset? = AVURLAsset(url: assetURL)
        guard let track: AVAssetTrack = asset!.tracks(withMediaType: AVMediaType.video).first else{callback(nil); return}

        let range = CMTimeRange(start: CMTime(seconds: ranges[index].lowerBound, preferredTimescale: 1000),
                                end: CMTime(seconds: ranges[index].upperBound, preferredTimescale: 1000))

        let videoTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!
        let audioTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)!

        do{try videoTrack.insertTimeRange(range, of: track, at: lastTime)}
        catch _{callback(nil); return}

        if let audio = asset!.tracks(withMediaType: AVMediaType.audio).first{
            do{try audioTrack.insertTimeRange(range, of: audio, at: lastTime)}
            catch _{callback(nil); return}
        }

        let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
        layerInstruction.trackID = videoTrack.trackID

        let instruction = VideoFilterCompositionInstruction(trackID: videoTrack.trackID,
                                                            filters: self.filters,
                                                            context: self.context,
                                                            preferredTransform: track.preferredTransform,
                                                            rotate : false)
        instruction.timeRange = CMTimeRange(start: lastTime, duration: range.duration)
        instruction.layerInstructions = [layerInstruction]
        instructions.append(instruction)

        lastTime = lastTime + range.duration
    }

    let videoComposition = AVMutableVideoComposition()
    videoComposition.customVideoCompositorClass = VideoFilterCompositor.self
    videoComposition.frameDuration = CMTimeMake(1, 30)
    videoComposition.renderSize = CGSize(width: 1080, height: 1920)
    videoComposition.instructions = instructions

    let session: AVAssetExportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
    session.videoComposition = videoComposition
    session.outputURL = url
    session.outputFileType = AVFileType.mp4

    session.exportAsynchronously(){
        DispatchQueue.main.async{
            callback(url)
        }
    }
}
and part of AVVideoCompositing class:
func startRequest(_ request: AVAsynchronousVideoCompositionRequest){
    autoreleasepool() {
        self.getDispatchQueue().sync{
            guard let instruction = request.videoCompositionInstruction as? VideoFilterCompositionInstruction else{
                request.finish(with: NSError(domain: "jojodmo.com", code: 760, userInfo: nil))
                return
            }

            guard let pixels = request.sourceFrame(byTrackID: instruction.trackID) else{
                request.finish(with: NSError(domain: "jojodmo.com", code: 761, userInfo: nil))
                return
            }

            var image : CIImage? = CIImage(cvPixelBuffer: pixels)

            for filter in instruction.filters{
                filter.setValue(image, forKey: kCIInputImageKey)
                image = filter.outputImage ?? image
            }

            let newBuffer: CVPixelBuffer? = self.renderContext.newPixelBuffer()

            if let buffer = newBuffer{
                instruction.context.render(image!, to: buffer)
                request.finish(withComposedVideoFrame: buffer)
            }
            else{
                request.finish(withComposedVideoFrame: pixels)
            }
        }
    }
}
There are some workarounds for when the app raises a memory warning. This usually happens when we try to process a large amount of data.
To avoid this kind of memory warning, we should make a habit of using an [unowned self] capture list in closures.
If we don't use [unowned self] in a closure, we can get a memory leak, and at some point the app will crash.
You can find more on [unowned self] at the link below:
Shall we always use [unowned self] inside closure in Swift
After adding [unowned self], add a deinit { } to your class and release or nil out unwanted data there.
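As a generic illustration of that advice (this sketch is my own, not code from the question):

import AVFoundation

final class ExportCoordinator {
    private var session: AVAssetExportSession?

    func export(_ session: AVAssetExportSession) {
        self.session = session
        // Capture self without creating a strong reference cycle.
        // [weak self] is the safer choice if self might be deallocated before the export finishes.
        session.exportAsynchronously { [unowned self] in
            self.session = nil
        }
    }

    deinit {
        // Release anything that is no longer needed.
        session = nil
    }
}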
I recorded a 240 fps video after changing the AVCaptureDeviceFormat. If I save that video to the photo library, the slow-mo effect is there. But if I play that file from the documents directory using an AVPlayer, I can't see the slow-mo effect.
Code to play the video:
AVPlayerItem *playerItem = [AVPlayerItem playerItemWithAsset:[AVAsset assetWithURL:[NSURL fileURLWithPath:fullPath]]];
AVPlayer *feedVideoPlayer = [AVPlayer playerWithPlayerItem:playerItem];
AVPlayerViewController *playerController = [[AVPlayerViewController alloc] init];
playerController.view.frame = CGRectMake(0, 0, videoPreviewView.frame.size.width, videoPreviewView.frame.size.height);
playerController.player = feedVideoPlayer;
It's a bit annoying, but I believe you'll need to re-create the video in an AVComposition if you don't want to lose quality. I'd love to know if there is another way, but this is what I've come up with. You can technically export the video via AVAssetExportSession, but using a passthrough quality will result in the same video file, which won't be slow motion; you'll need to transcode it, which loses quality (AFAIK; see Issue playing slow-mo AVAsset in AVPlayer for that solution).
The first thing you'll need to do is grab the source media's original time mapping objects. You can do that like so:
let options = PHVideoRequestOptions()
options.version = PHVideoRequestOptionsVersion.current
options.deliveryMode = .highQualityFormat

PHImageManager().requestAVAsset(forVideo: phAsset, options: options, resultHandler: { (avAsset, mix, info) in
    guard let avAsset = avAsset else { return }
    let originalTimeMaps = avAsset.tracks(withMediaType: AVMediaTypeVideo)
        .first?
        .segments
        .flatMap { $0.timeMapping } ?? []
})
Once you have timeMappings of the original media (the one sitting in your documents directory), you can pass in the URL of that media and the original CMTimeMapping objects that you would like to recreate. Then create a new AVComposition that is ready to play in an AVPlayer. You'll need a class similar to this:
class CompositionMapper {

    let url: URL
    let timeMappings: [CMTimeMapping]

    init(for url: URL, with timeMappings: [CMTimeMapping]) {
        self.url = url
        self.timeMappings = timeMappings
    }

    init(with asset: AVAsset, and timeMappings: [CMTimeMapping]) {
        guard let asset = asset as? AVURLAsset else {
            print("cannot get a base URL from this asset.")
            fatalError()
        }
        self.timeMappings = timeMappings
        self.url = asset.url
    }

    func compose() -> AVComposition {
        let composition = AVMutableComposition(urlAssetInitializationOptions: [AVURLAssetPreferPreciseDurationAndTimingKey: true])

        let emptyTrack = composition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
        let audioTrack = composition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)

        let asset = AVAsset(url: url)
        guard let videoAssetTrack = asset.tracks(withMediaType: AVMediaTypeVideo).first else { return composition }

        var segments: [AVCompositionTrackSegment] = []
        for map in timeMappings {
            let segment = AVCompositionTrackSegment(url: url, trackID: kCMPersistentTrackID_Invalid, sourceTimeRange: map.source, targetTimeRange: map.target)
            segments.append(segment)
        }

        emptyTrack.preferredTransform = videoAssetTrack.preferredTransform
        emptyTrack.segments = segments

        if let _ = asset.tracks(withMediaType: AVMediaTypeVideo).first {
            audioTrack.segments = segments
        }

        return composition.copy() as! AVComposition
    }
}
You can then use the compose() function of your CompositionMapper class to give you an AVComposition that is ready to play in an AVPlayer, which should respect the CMTimeMapping objects that you've passed in.
let compositionMapper = CompositionMapper(for: someAVAssetURL, with: originalTimeMaps)
let mappedComposition = compositionMapper.compose()
let playerItem = AVPlayerItem(asset: mappedComposition)
let player = AVPlayer(playerItem: playerItem)
playerItem.audioTimePitchAlgorithm = AVAudioTimePitchAlgorithmVarispeed
Let me know if you need help converting this to Objective-C, but it should be relatively straightforward.
I'm using AVVideoComposition in order to apply filters to a video. It works fine, but it takes a very long time for the video to be generated. On the other hand, I notice that exporting the video without applying filters is very fast so I know that the process of applying the filter is the issue here.
This is some of the code I'm using to add the filters to the video:
This code generates the composition for the video.
public func generateComposition() -> AVVideoComposition {
    let composition = AVVideoComposition(asset: asset) { request in
        let source = request.sourceImage.clampingToExtent()
        self.zoomBlurFilter.setValue(source, forKey: kCIInputImageKey)

        let currentTime = CMTimeGetSeconds(request.compositionTime)
        let totalTime = CMTimeGetSeconds(self.asset.duration)
        let timePercentage = currentTime / totalTime
        self.zoomBlurFilter.setValue(timePercentage * Float64(self.blurAmount), forKey: "inputAmount")

        let imageCenter = self.centerOfFirstFace(for: request.sourceImage) ?? CIVector(x: request.sourceImage.extent.midX, y: request.sourceImage.extent.midY)
        self.zoomBlurFilter.setValue(imageCenter, forKey: "inputCenter")

        guard let blurFilterOutput = self.zoomBlurFilter.outputImage else { request.finish(with: self.compositionError); return }

        self.vignetteFilter.setValue(blurFilterOutput, forKey: kCIInputImageKey)
        guard let vignetteFilterOutput = self.vignetteFilter.outputImage?.cropping(to: request.sourceImage.extent) else { request.finish(with: self.compositionError); return }

        request.finish(with: vignetteFilterOutput, context: self.context)
    }
    return composition
}
This code exports the video to an output file.
public func exportFilteredVideo(to outputURL: URL, completion: @escaping (Void) -> Void) {
    if FileManager.default.fileExists(atPath: outputURL.path) {
        try? FileManager.default.removeItem(at: outputURL)
    }

    let export = AVAssetExportSession(asset: asset, presetName: AVAssetExportPresetMediumQuality)!
    export.outputFileType = AVFileTypeQuickTimeMovie
    export.outputURL = outputURL
    export.videoComposition = generateComposition() // Commenting out this line makes it go very fast, but obviously there is no filter if I comment this line out.
    export.exportAsynchronously(completionHandler: completion)
}
Here's the context I'm using:
let context = CIContext(options: [kCIContextCacheIntermediates: false])
Are there any steps I should take in order to maximize the speed and performance of the CIFilters? The video itself is only three seconds so I see no reason it should be taking this long :(.
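Not part of the original post, but one commonly suggested tweak for per-frame CIFilter work is to create a single GPU-backed CIContext up front and reuse it for every request, for example:

import CoreImage
import Metal

// Sketch only: a Metal-backed CIContext created once and reused for every request,
// instead of relying on a default context being created per render.
let metalDevice = MTLCreateSystemDefaultDevice()!
let sharedContext = CIContext(mtlDevice: metalDevice, options: [
    kCIContextWorkingColorSpace: NSNull(),   // skip color management if exact color matching isn't required
    kCIContextCacheIntermediates: false
])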