This question is different from iOS Xcode Message from debugger: Terminated due to memory issue. I am using a different device, my app is being killed in the foreground, and besides that I cannot use Instruments to see allocations.
I am trying to merge short intervals of many AVAssets into one video file. I need to apply additional filters and transformations to them.
I implemented classes which can take one asset and do everything exactly as I want, but now, when I try to do the same thing with many shorter assets (around 7 assets is still OK, and the complete duration can even be shorter than with one asset), the application crashes and I only get the "Message from debugger: Terminated due to memory issue" log.
I cannot even use most of the Instruments tools, because the application crashes immediately with them. I tried many things to solve it, but I was unsuccessful and I would really appreciate some help.
Thank you
Relevant code snippets are here:
Creation of composition:
func export(toURL url: URL, callback: @escaping (_ url: URL?) -> Void){
var lastTime = kCMTimeZero
var instructions : [VideoFilterCompositionInstruction] = []
let composition = AVMutableComposition()
composition.naturalSize = CGSize(width: 1080, height: 1920)
for (index, assetURL) in assets.enumerated() {
let asset : AVURLAsset? = AVURLAsset(url: assetURL)
guard let track: AVAssetTrack = asset!.tracks(withMediaType: AVMediaType.video).first else{callback(nil); return}
let range = CMTimeRange(start: CMTime(seconds: ranges[index].lowerBound, preferredTimescale: 1000),
end: CMTime(seconds: ranges[index].upperBound, preferredTimescale: 1000))
let videoTrack = composition.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: kCMPersistentTrackID_Invalid)!
let audioTrack = composition.addMutableTrack(withMediaType: AVMediaType.audio, preferredTrackID: kCMPersistentTrackID_Invalid)!
do{try videoTrack.insertTimeRange(range, of: track, at: lastTime)}
catch _{callback(nil); return}
if let audio = asset!.tracks(withMediaType: AVMediaType.audio).first{
do{try audioTrack.insertTimeRange(range, of: audio, at: lastTime)}
catch _{callback(nil); return}
}
let layerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: videoTrack)
layerInstruction.trackID = videoTrack.trackID
let instruction = VideoFilterCompositionInstruction(trackID: videoTrack.trackID,
filters: self.filters,
context: self.context,
preferredTransform: track.preferredTransform,
rotate : false)
instruction.timeRange = CMTimeRange(start: lastTime, duration: range.duration)
instruction.layerInstructions = [layerInstruction]
instructions.append(instruction)
lastTime = lastTime + range.duration
}
let videoComposition = AVMutableVideoComposition()
videoComposition.customVideoCompositorClass = VideoFilterCompositor.self
videoComposition.frameDuration = CMTimeMake(1, 30)
videoComposition.renderSize = CGSize(width: 1080, height: 1920)
videoComposition.instructions = instructions
let session: AVAssetExportSession = AVAssetExportSession(asset: composition, presetName: AVAssetExportPresetHighestQuality)!
session.videoComposition = videoComposition
session.outputURL = url
session.outputFileType = AVFileType.mp4
session.exportAsynchronously(){
DispatchQueue.main.async{
callback(url)
}
}
}
and part of the AVVideoCompositing class:
func startRequest(_ request: AVAsynchronousVideoCompositionRequest){
autoreleasepool() {
self.getDispatchQueue().sync{
guard let instruction = request.videoCompositionInstruction as? VideoFilterCompositionInstruction else{
request.finish(with: NSError(domain: "jojodmo.com", code: 760, userInfo: nil))
return
}
guard let pixels = request.sourceFrame(byTrackID: instruction.trackID) else{
request.finish(with: NSError(domain: "jojodmo.com", code: 761, userInfo: nil))
return
}
var image : CIImage? = CIImage(cvPixelBuffer: pixels)
for filter in instruction.filters{
filter.setValue(image, forKey: kCIInputImageKey)
image = filter.outputImage ?? image
}
let newBuffer: CVPixelBuffer? = self.renderContext.newPixelBuffer()
if let buffer = newBuffer{
instruction.context.render(image!, to: buffer)
request.finish(withComposedVideoFrame: buffer)
}
else{
request.finish(withComposedVideoFrame: pixels)
}
}
}
}
There are a few things to watch for when the app receives a memory warning, which happens when we are trying to process a large amount of data.
To avoid this kind of memory warning, make a habit of using an [unowned self] capture list in your closures.
If you do not use [unowned self] in a closure, the closure can keep self alive, causing a memory leak, and at some stage the app will crash.
You can find more on [unowned self] at the link below:
Shall we always use [unowned self] inside closure in Swift
After adding [unowned self], add a deinit { } method to your class and release or nil out unwanted data.
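A minimal sketch of the pattern this answer describes (the Exporter class and its session property are hypothetical; they only show where the capture list and deinit go):

import AVFoundation

final class Exporter {
    private var session: AVAssetExportSession?   // hypothetical property holding heavy state

    func export(toURL url: URL, callback: @escaping (URL?) -> Void) {
        // [unowned self] stops the closure from retaining the exporter; use [weak self]
        // instead if self could be deallocated before the export finishes.
        session?.exportAsynchronously { [unowned self] in
            self.session = nil                    // release heavy objects as soon as possible
            DispatchQueue.main.async { callback(url) }
        }
    }

    deinit {
        // Nil out any remaining unwanted data here, as the answer suggests.
        session = nil
    }
}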
I am attempting to merge videos with scaleTimeRange (to make them slow-motion or sped-up); however, it is not working as desired. Only the first video has the time-range effect, not all of them.
The work is done in the merge videos function; it is pretty simple. However, I am not sure why the time-range scaling only works for the first video and not the next ones.
This is a test project to test with, it has my current code: https://github.com/meyesyesme/creationMergeProj
This is the merge function I use, with the time range scaling currently commented out (you can uncomment to see it not working):
func mergeVideosTestSQ(arrayVideos:[VideoSegment], completion:@escaping (URL?, Error?) -> ()) {
let mixComposition = AVMutableComposition()
var instructions: [AVMutableVideoCompositionLayerInstruction] = []
var insertTime = CMTime(seconds: 0, preferredTimescale: 1)
print(arrayVideos, "<- arrayVideos")
/// for each URL add the video and audio tracks and their duration to the composition
for videoSegment in arrayVideos {
let sourceAsset = AVAsset(url: videoSegment.videoURL!)
let frameRange = CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 1), duration: sourceAsset.duration)
guard
let nthVideoTrack = mixComposition.addMutableTrack(withMediaType: .video, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)),
let nthAudioTrack = mixComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid)), //0 used to be kCMPersistentTrackID_Invalid
let assetVideoTrack = sourceAsset.tracks(withMediaType: .video).first
else {
print("didnt work")
return
}
var assetAudioTrack: AVAssetTrack?
assetAudioTrack = sourceAsset.tracks(withMediaType: .audio).first
print(assetAudioTrack, ",-- assetAudioTrack???", assetAudioTrack?.asset, "<-- hes", sourceAsset)
do {
try nthVideoTrack.insertTimeRange(frameRange, of: assetVideoTrack, at: insertTime)
try nthAudioTrack.insertTimeRange(frameRange, of: assetAudioTrack!, at: insertTime)
//MY CURRENT SPEED ATTEMPT:
let newDuration = CMTimeMultiplyByFloat64(frameRange.duration, multiplier: videoSegment.videoSpeed)
nthVideoTrack.scaleTimeRange(frameRange, toDuration: newDuration)
nthAudioTrack.scaleTimeRange(frameRange, toDuration: newDuration)
print(insertTime.value, "<-- fiji, newdur --->", newDuration.value, "sourceasset duration--->", sourceAsset.duration.value, "frameRange.duration -->", frameRange.duration.value)
//instructions:
let nthInstruction = ViewController.videoCompositionInstruction(nthVideoTrack, asset: sourceAsset)
nthInstruction.setOpacity(0.0, at: CMTimeAdd(insertTime, newDuration)) //sourceasset.duration
instructions.append(nthInstruction)
insertTime = insertTime + newDuration //sourceAsset.duration
} catch {
DispatchQueue.main.async {
print("didnt wor2k")
}
}
}
let mainInstruction = AVMutableVideoCompositionInstruction()
mainInstruction.timeRange = CMTimeRange(start: CMTime(seconds: 0, preferredTimescale: 1), duration: insertTime)
mainInstruction.layerInstructions = instructions
let mainComposition = AVMutableVideoComposition()
mainComposition.instructions = [mainInstruction]
mainComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
mainComposition.renderSize = CGSize(width: 1080, height: 1920)
let outputFileURL = URL(fileURLWithPath: NSTemporaryDirectory() + "merge.mp4")
//below to clear the video form docuent folder for new vid...
let fileManager = FileManager()
try? fileManager.removeItem(at: outputFileURL)
print("<now will export: 🔥 🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥🔥")
/// try to start an export session and set the path and file type
if let exportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality) { //DOES NOT WORK WITH AVAssetExportPresetPassthrough
exportSession.outputFileType = .mov
exportSession.outputURL = outputFileURL
exportSession.videoComposition = mainComposition
exportSession.shouldOptimizeForNetworkUse = true
/// try to export the file and handle the status cases
exportSession.exportAsynchronously {
if let url = exportSession.outputURL{
completion(url, nil)
}
if let error = exportSession.error {
completion(nil, error)
}
}
}
}
You'll see this behavior: the first one works well, but the next videos do not, and they have issues with when their opacity is set, etc. I have tried different combinations and this is the closest one yet.
I've been stuck on this for a while!
After you scale the video, the duration of the composition gets recalculated, so you need to append the next part according to this change. Replace
insertTime = insertTime + duration
with
insertTime = insertTime + newDuration
You also need to update the time value passed to setOpacity(_:at:); I'd advise you to move that line after the insertTime update and use the new value, to avoid the duplicate work here.
When you apply the scale, it is applied to the new composition, so you need to use a range relative to the composition's timeline:
let currentRange = CMTimeRange(start: insertTime, duration: frameRange.duration)
nthVideoTrack.scaleTimeRange(currentRange, toDuration: newDuration)
nthAudioTrack.scaleTimeRange(currentRange, toDuration: newDuration)
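Putting both fixes together, the body of the question's insertion loop would look roughly like this (a sketch using the question's variable names, assumed to sit inside the existing do/catch):

try nthVideoTrack.insertTimeRange(frameRange, of: assetVideoTrack, at: insertTime)
try nthAudioTrack.insertTimeRange(frameRange, of: assetAudioTrack!, at: insertTime)

// Scale the part that was just inserted, addressed in the composition's timeline.
let newDuration = CMTimeMultiplyByFloat64(frameRange.duration, multiplier: videoSegment.videoSpeed)
let currentRange = CMTimeRange(start: insertTime, duration: frameRange.duration)
nthVideoTrack.scaleTimeRange(currentRange, toDuration: newDuration)
nthAudioTrack.scaleTimeRange(currentRange, toDuration: newDuration)

// Advance the insertion time by the scaled duration first...
insertTime = insertTime + newDuration

// ...then hide this track at its new end time, reusing the already-updated insertTime.
let nthInstruction = ViewController.videoCompositionInstruction(nthVideoTrack, asset: sourceAsset)
nthInstruction.setOpacity(0.0, at: insertTime)
instructions.append(nthInstruction)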
I have successfully merged the video clips into a single video, but I am having a problem with the final merged video: it shows a white frame after the end of every video clip. I have tried a lot to remove this but couldn't find success. Please review my code below.
func merge(arrayVideos:[AVAsset], completion:@escaping (_ exporter: AVAssetExportSession) -> ()) -> Void {
let mainComposition = AVMutableComposition()
let compositionVideoTrack = mainComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
compositionVideoTrack?.preferredTransform = CGAffineTransform(rotationAngle: .pi / 2)
let soundtrackTrack = mainComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
var time:Double = 0.0
for (index, videoAsset) in arrayVideos.enumerated() {
let atTime = CMTime(seconds: time, preferredTimescale: 1)
try! compositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .video)[0], at: atTime)
try! soundtrackTrack?.insertTimeRange(CMTimeRangeMake(start: CMTime.zero, duration: videoAsset.duration), of: videoAsset.tracks(withMediaType: .audio)[0], at: atTime)
time += videoAsset.duration.seconds
}
let outputFileURL = URL(fileURLWithPath: NSTemporaryDirectory() + "merge.mp4")
print("final URL:\(outputFileURL)")
let fileManager = FileManager()
do {
try fileManager.removeItem(at: outputFileURL)
} catch let error as NSError {
print("Error: \(error.domain)")
}
let exporter = AVAssetExportSession(asset: mainComposition, presetName: AVAssetExportPresetHighestQuality)
exporter?.outputURL = outputFileURL
exporter?.outputFileType = AVFileType.mp4
exporter?.shouldOptimizeForNetworkUse = true
exporter?.exportAsynchronously {
DispatchQueue.main.async {
completion(exporter!)
}
}
}
Don't use a Double to track the insertion time; this can result in gaps due to rounding errors. And don't use a preferredTimescale of 1 when converting seconds; this will effectively round everything to whole seconds (1000 would be a more common timescale for this).
Instead, track the insertion time with a CMTime initialized to kCMTimeZero, and use CMTimeAdd to advance it.
And one more thing: video and audio tracks can have different durations, particularly when recorded. So to keep things in sync, you may want to use CMTimeRangeGetIntersection to get the common time range of the audio and video in the asset, and then use the result for insertion into the composition.
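Applied to the loop in the question, that might look something like the sketch below (using the question's variable names; kCMTimeZero and CMTimeRangeGetIntersection appear in their newer Swift spellings, .zero and intersection(_:)):

var insertTime = CMTime.zero

for videoAsset in arrayVideos {
    let videoTrack = videoAsset.tracks(withMediaType: .video)[0]
    let audioTrack = videoAsset.tracks(withMediaType: .audio)[0]

    // Insert only the time range that both tracks cover, so audio and video stay in sync.
    let commonRange = videoTrack.timeRange.intersection(audioTrack.timeRange)
    try! compositionVideoTrack?.insertTimeRange(commonRange, of: videoTrack, at: insertTime)
    try! soundtrackTrack?.insertTimeRange(commonRange, of: audioTrack, at: insertTime)

    // Advance with CMTimeAdd instead of accumulating Double seconds.
    insertTime = CMTimeAdd(insertTime, commonRange.duration)
}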
I want to combine multiple videos and their audio into one video frame; for that I am using the AVFoundation framework.
For that, I have created a method which accepts an array of assets, and as of now I am passing three different video assets.
So far I have combined their audio, but the problem is with the video frame, in which only the first asset's video is repeated.
I am using the code below to combine the videos. It combines all three videos' audio perfectly, but the first video in the input array is repeated three times, which is the main issue:
I want all three different videos in the frame.
func merge(Videos aArrAssets: [AVAsset]){
let mixComposition = AVMutableComposition()
func setup(asset aAsset: AVAsset, WithComposition aComposition: AVMutableComposition) -> AVAssetTrack{
let aMutableCompositionVideoTrack = aComposition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
let aMutableCompositionAudioTrack = aComposition.addMutableTrack(withMediaType: .audio, preferredTrackID: kCMPersistentTrackID_Invalid)
let aVideoAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .video)[0]
let aAudioAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .audio)[0]
do{
try aMutableCompositionVideoTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aVideoAssetTrack, at: .zero)
try aMutableCompositionAudioTrack?.insertTimeRange(CMTimeRangeMake(start: .zero, duration: aAsset.duration), of: aAudioAssetTrack, at: .zero)
}catch{}
return aVideoAssetTrack
}
let aArrVideoTracks = aArrAssets.map { setup(asset: $0, WithComposition: mixComposition) }
var aArrLayerInstructions : [AVMutableVideoCompositionLayerInstruction] = []
//Transform every video
var aNewHeight : CGFloat = 0
for (aIndex,aTrack) in aArrVideoTracks.enumerated(){
aNewHeight += aIndex > 0 ? aArrVideoTracks[aIndex - 1].naturalSize.height : 0
let aLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: aTrack)
let aFristTransform = CGAffineTransform(translationX: 0, y: aNewHeight)
aLayerInstruction.setTransform(aFristTransform, at: .zero)
aArrLayerInstructions.append(aLayerInstruction)
}
let aTotalTime = aArrVideoTracks.map { $0.timeRange.duration }.max()
let aInstruction = AVMutableVideoCompositionInstruction()
aInstruction.timeRange = CMTimeRangeMake(start: .zero, duration: aTotalTime!)
aInstruction.layerInstructions = aArrLayerInstructions
let aVideoComposition = AVMutableVideoComposition()
aVideoComposition.instructions = [aInstruction]
aVideoComposition.frameDuration = CMTimeMake(value: 1, timescale: 30)
let aTotalWidth = aArrVideoTracks.map { $0.naturalSize.width }.max()!
let aTotalHeight = aArrVideoTracks.map { $0.naturalSize.height }.reduce(0){ $0 + $1 }
aVideoComposition.renderSize = CGSize(width: aTotalWidth, height: aTotalHeight)
saveVideo(WithAsset: mixComposition, videoComp : aVideoComposition) { (aError, aUrl) in
print("Location : \(String(describing: aUrl))")
}
}
private func saveVideo(WithAsset aAsset : AVAsset, videoComp : AVVideoComposition, completion: @escaping (_ error: Error?, _ url: URL?) -> Void){
let dateFormatter = DateFormatter()
dateFormatter.dateFormat = "ddMMyyyy_HHmm"
let date = dateFormatter.string(from: NSDate() as Date)
// Exporting
let savePathUrl: URL = URL(fileURLWithPath: NSHomeDirectory() + "/Documents/newVideo_\(date).mov")
do { // delete old video
try FileManager.default.removeItem(at: savePathUrl)
} catch { print(error.localizedDescription) }
let assetExport: AVAssetExportSession = AVAssetExportSession(asset: aAsset, presetName: AVAssetExportPresetMediumQuality)!
assetExport.outputFileType = .mov
assetExport.outputURL = savePathUrl
// assetExport.shouldOptimizeForNetworkUse = true
assetExport.videoComposition = videoComp
assetExport.exportAsynchronously { () -> Void in
switch assetExport.status {
case .completed:
print("success")
completion(nil, savePathUrl)
case .failed:
print("failed \(assetExport.error?.localizedDescription ?? "error nil")")
completion(assetExport.error, nil)
case .cancelled:
print("cancelled \(assetExport.error?.localizedDescription ?? "error nil")")
completion(assetExport.error, nil)
default:
print("complete")
completion(assetExport.error, nil)
}
}
}
I know I am doing something wrong in the code but couldn't figure out where, so I need some help to find it.
Thanks in advance.
Your issue is that when you're constructing your AVMutableVideoCompositionLayerInstruction, the aTrack reference is a reference to the track of the original asset, which you are setting with
let aVideoAssetTrack: AVAssetTrack = aAsset.tracks(withMediaType: .video)[0]
Its trackID is 1, because it is the first track in its source AVAsset. Accordingly, when you inspect your aArrLayerInstructions you will see that the trackIDs of your instructions are all 1, which is why you're getting the first video three times:
(lldb) p aArrLayerInstructions[0].trackID
(CMPersistentTrackID) $R8 = 1
(lldb) p aArrLayerInstructions[1].trackID
(CMPersistentTrackID) $R10 = 1
...
The solution is not to enumerate your source tracks but the tracks of your composition when constructing the composition layer instructions.
let tracks = mixComposition.tracks(withMediaType: .video)
for (aIndex,aTrack) in tracks.enumerated(){
...
If you do it like that, you will get the correct trackIDs for your layer instructions:
(lldb) p aArrLayerInstructions[0].trackID
(CMPersistentTrackID) $R2 = 1
(lldb) p aArrLayerInstructions[1].trackID
(CMPersistentTrackID) $R4 = 3
...
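In other words, build the layer instructions from the composition's tracks instead of the source assets' tracks. A sketch of the adjusted loop, reusing the question's names (the stacking math is unchanged and still reads the sizes from aArrVideoTracks):

let aCompositionTracks = mixComposition.tracks(withMediaType: .video)
var aNewHeight: CGFloat = 0
for (aIndex, aTrack) in aCompositionTracks.enumerated() {
    aNewHeight += aIndex > 0 ? aArrVideoTracks[aIndex - 1].naturalSize.height : 0
    // aTrack is now a composition track, so the instruction gets a unique trackID (1, 3, 5, ...).
    let aLayerInstruction = AVMutableVideoCompositionLayerInstruction(assetTrack: aTrack)
    aLayerInstruction.setTransform(CGAffineTransform(translationX: 0, y: aNewHeight), at: .zero)
    aArrLayerInstructions.append(aLayerInstruction)
}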
I am working on a video application in Swift 3 for iOS. Basically, I have to merge the video assets and audio into one with a fade effect and save this to the iPhone gallery. To achieve this, I am using the method below:
private func doMerge(arrayVideos:[AVAsset], arrayAudios:[AVAsset], animation:Bool, completion:@escaping Completion) -> Void {
var insertTime = kCMTimeZero
var audioInsertTime = kCMTimeZero
var arrayLayerInstructions:[AVMutableVideoCompositionLayerInstruction] = []
var outputSize = CGSize.init(width: 0, height: 0)
// Determine video output size
for videoAsset in arrayVideos {
let videoTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0]
let assetInfo = orientationFromTransform(transform: videoTrack.preferredTransform)
var videoSize = videoTrack.naturalSize
if assetInfo.isPortrait == true {
videoSize.width = videoTrack.naturalSize.height
videoSize.height = videoTrack.naturalSize.width
}
outputSize = videoSize
}
// Init composition
let mixComposition = AVMutableComposition.init()
for index in 0..<arrayVideos.count {
// Get video track
guard let videoTrack = arrayVideos[index].tracks(withMediaType: AVMediaTypeVideo).first else { continue }
// Get audio track
var audioTrack:AVAssetTrack?
if index < arrayAudios.count {
if arrayAudios[index].tracks(withMediaType: AVMediaTypeAudio).count > 0 {
audioTrack = arrayAudios[index].tracks(withMediaType: AVMediaTypeAudio).first
}
}
// Init video & audio composition track
let videoCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
let audioCompositionTrack = mixComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: Int32(kCMPersistentTrackID_Invalid))
do {
let startTime = kCMTimeZero
let duration = arrayVideos[index].duration
// Add video track to video composition at specific time
try videoCompositionTrack.insertTimeRange(CMTimeRangeMake(startTime, duration), of: videoTrack, at: insertTime)
// Add audio track to audio composition at specific time
var audioDuration = kCMTimeZero
if index < arrayAudios.count {
audioDuration = arrayAudios[index].duration
}
if let audioTrack = audioTrack {
do {
try audioCompositionTrack.insertTimeRange(CMTimeRangeMake(startTime, audioDuration), of: audioTrack, at: audioInsertTime)
}
catch {
print(error.localizedDescription)
}
}
// Add instruction for video track
let layerInstruction = videoCompositionInstructionForTrack(track: videoCompositionTrack, asset: arrayVideos[index], standardSize: outputSize, atTime: insertTime)
// Hide video track before changing to new track
let endTime = CMTimeAdd(insertTime, duration)
if animation {
let timeScale = arrayVideos[index].duration.timescale
let durationAnimation = CMTime.init(seconds: 1, preferredTimescale: timeScale)
layerInstruction.setOpacityRamp (fromStartOpacity: 1.0, toEndOpacity: 0.0, timeRange: CMTimeRange.init(start: endTime, duration: durationAnimation))
}
else {
layerInstruction.setOpacity(0, at: endTime)
}
arrayLayerInstructions.append(layerInstruction)
// Increase the insert time
audioInsertTime = CMTimeAdd(audioInsertTime, audioDuration)
insertTime = CMTimeAdd(insertTime, duration)
}
catch {
print("Load track error")
}
}
// Main video composition instruction
let mainInstruction = AVMutableVideoCompositionInstruction()
mainInstruction.timeRange = CMTimeRangeMake(kCMTimeZero, insertTime)
mainInstruction.layerInstructions = arrayLayerInstructions
// Main video composition
let mainComposition = AVMutableVideoComposition()
mainComposition.instructions = [mainInstruction]
mainComposition.frameDuration = CMTimeMake(1, 30)
mainComposition.renderSize = outputSize
// Export to file
let path = NSTemporaryDirectory().appending("mergedVideo.mp4")
let exportURL = URL.init(fileURLWithPath: path)
// Remove file if existed
FileManager.default.removeItemIfExisted(exportURL)
// Init exporter
let exporter = AVAssetExportSession.init(asset: mixComposition, presetName: AVAssetExportPresetHighestQuality)
exporter?.outputURL = exportURL
exporter?.outputFileType = AVFileTypeQuickTimeMovie//AVFileType.mp4
exporter?.shouldOptimizeForNetworkUse = false //true
exporter?.videoComposition = mainComposition
// Do export
exporter?.exportAsynchronously(completionHandler: {
DispatchQueue.main.async {
self.exportDidFinish(exporter: exporter, videoURL: exportURL, completion: completion)
}
})
}
fileprivate func exportDidFinish(exporter:AVAssetExportSession?, videoURL:URL, completion:@escaping Completion) -> Void {
if exporter?.status == AVAssetExportSessionStatus.completed {
print("Exported file: \(videoURL.absoluteString)")
completion(videoURL,nil)
}
else if exporter?.status == AVAssetExportSessionStatus.failed {
completion(videoURL,exporter?.error)
print(exporter?.error as Any)
}
}
Problem: In my exportDidFinish method, the AVAssetExportSessionStatus is failed, with the error message below:
Error Domain=AVFoundationErrorDomain Code=-11800 "The operation could
not be completed" UserInfo={NSLocalizedFailureReason=An unknown error
occurred (-16976), NSLocalizedDescription=The operation could not be
completed, NSUnderlyingError=0x1c065fb30 {Error
Domain=NSOSStatusErrorDomain Code=-16976 "(null)"}}
Can anyone advise me on this?
I had the exact same error, and only on the iPhone 5S simulator running iOS 11. I fixed it by changing the quality setting on the export operation from "Highest" (AVAssetExportPresetHighestQuality) to "Passthrough" (AVAssetExportPresetPassthrough), which keeps the original quality:
/// try to start an export session and set the path and file type
if let exportSession = AVAssetExportSession(asset: mixComposition, presetName: AVAssetExportPresetPassthrough) { /* AVAssetExportPresetHighestQuality */
exportSession.outputURL = videoOutputURL
exportSession.outputFileType = AVFileType.mp4
exportSession.shouldOptimizeForNetworkUse = true
exportSession.exportAsynchronously(completionHandler: {
switch exportSession.status {
case .failed:
if let _error = exportSession.error {
// !!!used to fail over here with 11800, -16976 codes, if using AVAssetExportPresetHighestQuality. But works fine when using: AVAssetExportPresetPassthrough
failure(_error)
}
....
Hope this helps someone, because that error code and message don't provide any information; it's just an "unknown error". Besides changing the quality setting, I would try changing other settings and stripping down the export operation to identify the specific component of that operation that may be failing (some specific image, audio, or video asset). When you have such a general error message, it's good to use the process of elimination, cutting the code in half each time, to get to the problem in logarithmic time.
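A stripped-down export along those lines might look like the sketch below: Passthrough preset, no video composition, and a plain QuickTime output, so only the raw composition is exercised. If this succeeds, reintroduce the other pieces one at a time (the function and its names are hypothetical):

func exportStripped(_ asset: AVAsset, to url: URL, completion: @escaping (Error?) -> Void) {
    guard let session = AVAssetExportSession(asset: asset,
                                             presetName: AVAssetExportPresetPassthrough) else {
        completion(NSError(domain: "ExportDebug", code: -1, userInfo: nil)) // hypothetical "no session" error
        return
    }
    session.outputURL = url
    session.outputFileType = .mov   // Passthrough may not support .mp4 for every source; .mov is a safer default
    session.exportAsynchronously {
        completion(session.error)   // nil means the bare export worked; add components back one by one
    }
}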
This is sort of an extension of this question of mine, but I think it is different enough to merit its own question:
I am filtering videos of various sizes, scales, etc. by feeding them into an AVMutableVideoComposition.
This is the code that I currently have:
private func filterVideo(with filter: Filter?) {
if let player = playerLayer?.player, let playerItem = player.currentItem {
let composition = AVMutableComposition()
let videoAssetTrack = playerItem.asset.tracks(withMediaType: .video).first
let videoCompositionTrack = composition.addMutableTrack(withMediaType: .video, preferredTrackID: kCMPersistentTrackID_Invalid)
try? videoCompositionTrack?.insertTimeRange(CMTimeRange(start: kCMTimeZero, duration: playerItem.asset.duration), of: videoAssetTrack!, at: kCMTimeZero)
let videoComposition = AVMutableVideoComposition(asset: composition, applyingCIFiltersWithHandler: { (request) in
print(request.sourceImage.pixelBuffer) // Sometimes => nil
if let filter = filter {
if let filteredImage = filter.filterImage(request.sourceImage) {
request.finish(with: filteredImage, context: nil)
} else {
request.finish(with: RenderError.couldNotFilter)
}
} else {
request.finish(with: request.sourceImage, context: nil)
}
})
playerItem.videoComposition = videoComposition
}
}
filter is an instance of my custom Filter class, which has functions to filter a UIImage or CIImage.
The problem is that some videos get messed up. This happens only for the problematic videos, for which filteredImage is nil as well. This suggests that some images are empty: their pixelBuffers are nil. By the way, the pixelBuffer is nil before I even feed the image into the filter.
Why is this happening, and how can I fix it?