How to add Image to video using GPUImage in Swift - ios

I have some code that adds an image to a video using GPUImage, but it doesn't work as I expected. I have searched for a solution, but I still have this problem.
I'm trying to merge an image into the video when a recording is done. Right now the output is just the video, without the image.
Here's my code.
override func viewDidLoad() {
    super.viewDidLoad()
    setupCamera()
}

func setupCamera() {
    let myBoundSize: CGSize = UIScreen.mainScreen().bounds.size
    cameraSubPreview = GPUImageView(frame: CGRectMake(0, 0, myBoundSize.width, myBoundSize.height))
    cameraInput = GPUImageVideoCamera(sessionPreset: AVCaptureSessionPreset1280x720, cameraPosition: .Front)
    cameraInput.horizontallyMirrorFrontFacingCamera = true
    cameraInput.outputImageOrientation = .Portrait
    cameraInput.addTarget(mainVideoFilter)
    mainVideoFilter.addTarget(cameraSubPreview)
    cameraPreview.addSubview(cameraSubPreview)
    cameraInput.startCameraCapture()
}
func setupFilter() {
    let logoImageForGPU = GPUImagePicture(image: logoImageView.image)
    logoImageForGPU.addTarget(transformFilter)
    logoImageForGPU.processImage()
    // Apply transform to filter.
    let tx: CGFloat = logoImageView.frame.size.width
    let ty: CGFloat = logoImageView.frame.size.height
    print("tx and ty: \(tx), \(ty)")
    let t: CGAffineTransform = CGAffineTransformMakeScale(tx, ty)
    transformFilter.affineTransform = t
    transformFilter.addTarget(mainVideoFilter, atTextureLocation: 1)
}
func startRecordingVideo(sender: AnyObject) {
    if sender.state == UIGestureRecognizerState.Began {
        if !self.isRecording {
            setupFilter()
            let ud = NSUserDefaults.standardUserDefaults()
            var IntForFilePath: Int? = ud.objectForKey("IntForFilePath") as? Int
            if IntForFilePath == nil {
                IntForFilePath = 10000
            }
            let paths = NSSearchPathForDirectoriesInDomains(.DocumentDirectory, .UserDomainMask, true)
            let documentsDirectory = paths[0]
            IntForFilePath = IntForFilePath! + 1
            let filePath: String? = "\(documentsDirectory)/temp\(IntForFilePath!).mp4"
            ud.setObject(IntForFilePath, forKey: "IntForFilePath")
            // URL.
            let fileURL: NSURL = NSURL(fileURLWithPath: filePath!)
            self.delegate.filePathForVideo = filePath!
            movieWriter = GPUImageMovieWriter(movieURL: fileURL, size: CGSize(width: 480, height: 800))
            movieWriter.shouldPassthroughAudio = true
            //cameraInput.addTarget(stillImageFilter)
            stillImageFilter.addTarget(movieWriter)
            cameraInput.audioEncodingTarget = movieWriter
            // Start recording.
            movieWriter.startRecording()
            circle.hidden = false
            drawCircleAnimation("strokeEnd", animeName: "updateGageAnimation", fromValue: 0.0, toValue: 1.0, duration: 10.0, repeat: 1, flag: false)
            self.isRecording = true
        }
    } else if sender.state == UIGestureRecognizerState.Ended {
        if self.isRecording {
            // Stop recording.
            movieWriter.finishRecording()
            stopCircleAnimation()
            self.isRecording = false
            self.moveToVideoSaveView()
        }
    }
}
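As an aside, the per-recording path bookkeeping in startRecordingVideo can be pulled out into a small pure-Foundation helper, which makes it easy to verify in isolation. This is a sketch with a hypothetical function name; the UserDefaults key "IntForFilePath", the starting value 10000, and the temp<N>.mp4 naming pattern come from the snippet above.

```swift
import Foundation

// Hypothetical helper restating the sequential file naming above.
// Reads the stored counter (defaulting to 10000 on first use, as in the
// question), increments it, persists it, and returns the next file path.
func nextRecordingPath(defaults: UserDefaults, documentsDirectory: String) -> String {
    let current = defaults.object(forKey: "IntForFilePath") as? Int ?? 10000
    let next = current + 1
    defaults.set(next, forKey: "IntForFilePath")
    return "\(documentsDirectory)/temp\(next).mp4"
}
```

Keeping this logic in one place also makes it obvious that each recording gets a fresh file name, so GPUImageMovieWriter never tries to write over an existing file.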
If you have any ideas that could solve this, please let me know.
Thanks in advance.

Related

How to sync AVPlayer and MTKView

I have a project where users can take a video and later add filters to it or change basic settings like brightness and contrast. To accomplish this, I use BBMetalImage, which basically returns the video in a MTKView (named a BBMetalView in the project).
Everything works great - I can play the video, add filters and the desired effects - but there is no audio. I asked the author about this, who recommended using an AVPlayer (or AVAudioPlayer) for the audio. So I did. However, the video and audio are out of sync, possibly because of different bitrates in the first place; the author of the library also mentioned the frame rate can differ because of the filter process (the time this consumes is variable):
The render view FPS is not exactly the same to the actual rate.
Because the video source output frame is processed by filters and the
filter process time is variable.
First, I crop my video to the desired aspect ratio (4:5). I save this file (480x600) locally, using AVVideoProfileLevelH264HighAutoLevel as AVVideoProfileLevelKey. My audio configuration, using NextLevelSessionExporter, has the following setup: AVEncoderBitRateKey: 128000, AVNumberOfChannelsKey: 2, AVSampleRateKey: 44100.
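For reference, the audio configuration described above can be written out as an AVFoundation settings dictionary. This is only a sketch: the dictionary name is mine, and the AAC format ID is an assumption (the question states only the bitrate, channel count, and sample rate).

```swift
import AVFoundation

// Audio settings from the question, as an encoder settings dictionary.
// kAudioFormatMPEG4AAC is an assumption - the question doesn't name the codec.
let audioOutputSettings: [String: Any] = [
    AVFormatIDKey: kAudioFormatMPEG4AAC,
    AVEncoderBitRateKey: 128_000,
    AVNumberOfChannelsKey: 2,
    AVSampleRateKey: 44_100
]
```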
Then, the BBMetalImage library takes this saved file and provides a MTKView (BBMetalView) to display the video, allowing me to add filters and effects in real time. The setup kind of looks like this:
self.metalView = BBMetalView(frame: CGRect(x: 0, y: self.view.center.y - ((UIScreen.main.bounds.width * 1.25) / 2), width: UIScreen.main.bounds.width, height: UIScreen.main.bounds.width * 1.25))
self.view.addSubview(self.metalView)
self.videoSource = BBMetalVideoSource(url: outputURL)
self.videoSource.playWithVideoRate = true
self.videoSource.audioConsumer = self.metalAudio
self.videoSource.add(consumer: self.metalView)
self.videoSource.add(consumer: self.videoWriter)
self.audioItem = AVPlayerItem(url: outputURL)
self.audioPlayer = AVPlayer(playerItem: self.audioItem)
self.playerLayer = AVPlayerLayer(player: self.audioPlayer)
self.videoPreview.layer.addSublayer(self.playerLayer!)
self.playerLayer?.frame = CGRect(x: 0, y: 0, width: 0, height: 0)
self.playerLayer?.backgroundColor = UIColor.black.cgColor
self.startVideo()
And startVideo() goes like this:
audioPlayer.seek(to: .zero)
audioPlayer.play()
videoSource.start(progress: { (frameTime) in
    print(frameTime)
}) { [weak self] (finish) in
    guard let self = self else { return }
    self.startVideo()
}
This is all probably pretty vague because of the external library/libraries. However, my question is pretty simple: is there any way I can sync the MTKView with my AVPlayer? It would help me a lot and I'm sure Silence-GitHub would also implement this feature into the library to help a lot of other users. Any ideas on how to approach this are welcome!
I customized BBMetalVideoSource as follows, and then it worked:
Create a delegate in BBMetalVideoSource to get the current time of the audio player we want to sync with.
In private func processAsset(progress:completion:), I replaced the block of code if useVideoRate { //... } with:
if useVideoRate {
    if let playerTime = delegate.getAudioPlayerCurrentTime() {
        let diff = CMTimeGetSeconds(sampleFrameTime) - playerTime
        if diff > 0.0 {
            sleepTime = diff
            if sleepTime > 1.0 {
                sleepTime = 0.0
            }
            usleep(UInt32(1000000 * sleepTime))
        } else {
            sleepTime = 0
        }
    }
}
This code helps resolve both problems: 1. no audio when previewing the video effect, and 2. syncing the audio with the video.
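The clamping in that block can be restated as a pure function, which makes the behavior easy to verify: wait out a positive video-ahead-of-audio difference, but treat a gap of more than one second as a glitch (e.g. after a seek) and don't sleep at all. This is a restatement of the snippet above, not part of BBMetalImage's API; the function name is mine.

```swift
// Restates the sleep-time clamping from the snippet above.
// Returns how many seconds to sleep before presenting a frame whose
// timestamp is `frameSeconds`, given the audio player's `audioSeconds`.
func syncSleepTime(frameSeconds: Double, audioSeconds: Double) -> Double {
    let diff = frameSeconds - audioSeconds
    guard diff > 0.0 else { return 0 }   // video is behind audio: don't wait
    return diff > 1.0 ? 0.0 : diff       // a gap over 1s is treated as a glitch
}
```

Usage in the processing loop would then be a single usleep(UInt32(1_000_000 * syncSleepTime(...))) call.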
Given your circumstances, it seems you need to try one of two things:
1) Try to apply some sort of overlay that has the desired effect for your video. I could attempt something like this, but I have personally not done it.
2) This takes a little more time up front, in the sense that the program has to take a few moments (time varies depending on your filtering) to recreate a new video with the desired effects. You can try this out and see if it works for you.
I have made my own VideoCreator using some source code from Stack Overflow.
//Recreates a new video with applied filter
public static func createFilteredVideo(asset: AVAsset, completionHandler: @escaping (_ asset: AVAsset) -> Void) {
    let url = (asset as! AVURLAsset).url
    let snapshot = url.videoSnapshot()
    guard let image = snapshot else { return }
    let fps = Int32(asset.tracks(withMediaType: .video)[0].nominalFrameRate)
    let writer = VideoCreator(fps: fps, width: image.size.width, height: image.size.height, audioSettings: nil)
    let timeScale = asset.duration.timescale
    let timeValue = asset.duration.value
    let frameTime = 1 / Double(fps) * Double(timeScale)
    let numberOfImages = Int(Double(timeValue) / frameTime)
    let queue = DispatchQueue(label: "com.queue.queue", qos: .utility)
    let composition = AVVideoComposition(asset: asset) { (request) in
        let source = request.sourceImage.clampedToExtent()
        // This is where you create your filter and get your filtered result.
        // Here is an example (maskImage and regCIImage stand in for your own inputs):
        let filter = CIFilter(name: "CIBlendWithMask")
        filter!.setValue(maskImage, forKey: "inputMaskImage")
        filter!.setValue(regCIImage, forKey: "inputImage")
        let filteredImage = filter!.outputImage!.clamped(to: source.extent)
        request.finish(with: filteredImage, context: nil)
    }
    var i = 0
    getAudioFromURL(url: url) { (buffer) in
        writer.addAudio(audio: buffer, time: .zero)
        i == 0 ? writer.startCreatingVideo(initialBuffer: buffer, completion: {}) : nil
        i += 1
    }
    let group = DispatchGroup()
    for i in 0..<numberOfImages {
        group.enter()
        autoreleasepool {
            let time = CMTime(seconds: Double(i) * frameTime / Double(timeScale), preferredTimescale: timeScale)
            let image = url.videoSnapshot(time: time, composition: composition)
            queue.async {
                writer.addImageAndAudio(image: image!, audio: nil, time: time.seconds)
                group.leave()
            }
        }
    }
    group.notify(queue: queue) {
        writer.finishWriting()
        let url = writer.getURL()
        // Now create an exporter to add audio, then call the completion handler.
        completionHandler(AVAsset(url: url))
    }
}
static func getAudioFromURL(url: URL, completionHandlerPerBuffer: @escaping ((_ buffer: CMSampleBuffer) -> Void)) {
    let asset = AVURLAsset(url: url, options: [AVURLAssetPreferPreciseDurationAndTimingKey: NSNumber(value: true as Bool)])
    guard let assetTrack = asset.tracks(withMediaType: AVMediaType.audio).first else {
        fatalError("Couldn't load AVAssetTrack")
    }
    guard let reader = try? AVAssetReader(asset: asset) else {
        fatalError("Couldn't initialize the AVAssetReader")
    }
    reader.timeRange = CMTimeRange(start: .zero, duration: asset.duration)
    let outputSettingsDict: [String: Any] = [
        AVFormatIDKey: Int(kAudioFormatLinearPCM),
        AVLinearPCMBitDepthKey: 16,
        AVLinearPCMIsBigEndianKey: false,
        AVLinearPCMIsFloatKey: false,
        AVLinearPCMIsNonInterleaved: false
    ]
    let readerOutput = AVAssetReaderTrackOutput(track: assetTrack, outputSettings: outputSettingsDict)
    readerOutput.alwaysCopiesSampleData = false
    reader.add(readerOutput)
    // Without this call the reader's status stays .unknown and the loop below never runs.
    reader.startReading()
    while reader.status == .reading {
        guard let readSampleBuffer = readerOutput.copyNextSampleBuffer() else { break }
        completionHandlerPerBuffer(readSampleBuffer)
    }
}
extension URL {
    func videoSnapshot(time: CMTime? = nil, composition: AVVideoComposition? = nil) -> UIImage? {
        let asset = AVURLAsset(url: self)
        let generator = AVAssetImageGenerator(asset: asset)
        generator.appliesPreferredTrackTransform = true
        generator.requestedTimeToleranceBefore = .zero
        generator.requestedTimeToleranceAfter = .zero
        generator.videoComposition = composition
        let timestamp = time ?? CMTime(seconds: 1, preferredTimescale: 60)
        do {
            let imageRef = try generator.copyCGImage(at: timestamp, actualTime: nil)
            return UIImage(cgImage: imageRef)
        } catch let error as NSError {
            print("Image generation failed with error \(error)")
            return nil
        }
    }
}
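One detail in createFilteredVideo worth spelling out: frameTime is a single frame's duration expressed in the asset's timescale units, so the frame count is the duration value divided by it, and frame i lands at i * frameTime / timeScale seconds. A pure-Swift restatement of that arithmetic (the function names are mine, and the value/timescale pair mirrors CMTime):

```swift
// Frame timing arithmetic from createFilteredVideo, restated.
// `durationValue` / `timescale` mirror CMTime's value and timescale.

// Number of frames in the asset at the given fps.
func frameCount(durationValue: Int64, timescale: Int32, fps: Int32) -> Int {
    // Duration of one frame, in timescale units (600/30 = 20 units at 30 fps).
    let frameDuration = Double(timescale) / Double(fps)
    return Int(Double(durationValue) / frameDuration)
}

// Presentation time of frame `i`, in seconds.
func presentationSeconds(frame i: Int, timescale: Int32, fps: Int32) -> Double {
    let frameDuration = Double(timescale) / Double(fps)
    return Double(i) * frameDuration / Double(timescale)
}
```

For a 10-second asset at timescale 600 and 30 fps this yields 300 frames, with frame 30 at exactly 1.0 seconds - the same numbers the loop in createFilteredVideo produces.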
Below is the VideoCreator
//
//  VideoCreator.swift
//  AKPickerView-Swift
//
//  Created by Impression7vx on 7/16/19.
//

import UIKit
import AVFoundation
import Photos

@available(iOS 11.0, *)
public class VideoCreator: NSObject {

    private var settings: RenderSettings!
    private var imageAnimator: ImageAnimator!

    public override init() {
        self.settings = RenderSettings()
        self.imageAnimator = ImageAnimator(renderSettings: self.settings)
    }

    public convenience init(fps: Int32, width: CGFloat, height: CGFloat, audioSettings: [String: Any]?) {
        self.init()
        self.settings = RenderSettings(fps: fps, width: width, height: height)
        self.imageAnimator = ImageAnimator(renderSettings: self.settings, audioSettings: audioSettings)
    }

    public convenience init(width: CGFloat, height: CGFloat) {
        self.init()
        self.settings = RenderSettings(width: width, height: height)
        self.imageAnimator = ImageAnimator(renderSettings: self.settings)
    }

    func startCreatingVideo(initialBuffer: CMSampleBuffer?, completion: @escaping (() -> Void)) {
        self.imageAnimator.render(initialBuffer: initialBuffer) {
            completion()
        }
    }

    func finishWriting() {
        self.imageAnimator.isDone = true
    }

    func addImageAndAudio(image: UIImage, audio: CMSampleBuffer?, time: CFAbsoluteTime) {
        self.imageAnimator.addImageAndAudio(image: image, audio: audio, time: time)
    }

    func getURL() -> URL {
        return settings!.outputURL
    }

    func addAudio(audio: CMSampleBuffer, time: CMTime) {
        self.imageAnimator.videoWriter.addAudio(buffer: audio, time: time)
    }
}
@available(iOS 11.0, *)
public struct RenderSettings {

    var width: CGFloat = 1280
    var height: CGFloat = 720
    var fps: Int32 = 2   // 2 frames per second
    var avCodecKey = AVVideoCodecType.h264
    var videoFilename = "video"
    var videoFilenameExt = "mov"

    init() { }

    init(width: CGFloat, height: CGFloat) {
        self.width = width
        self.height = height
    }

    init(fps: Int32) {
        self.fps = fps
    }

    init(fps: Int32, width: CGFloat, height: CGFloat) {
        self.fps = fps
        self.width = width
        self.height = height
    }

    var size: CGSize {
        return CGSize(width: width, height: height)
    }

    var outputURL: URL {
        // Use the CachesDirectory so the rendered video file sticks around as long as we need it to.
        // Using the CachesDirectory ensures the file won't be included in a backup of the app.
        let fileManager = FileManager.default
        if let tmpDirURL = try? fileManager.url(for: .cachesDirectory, in: .userDomainMask, appropriateFor: nil, create: true) {
            return tmpDirURL.appendingPathComponent(videoFilename).appendingPathExtension(videoFilenameExt)
        }
        fatalError("URLForDirectory() failed")
    }
}
@available(iOS 11.0, *)
public class ImageAnimator {

    // Apple suggests a timescale of 600 because it's a multiple of standard video rates 24, 25, 30, 60 fps etc.
    static let kTimescale: Int32 = 600

    let settings: RenderSettings
    let videoWriter: VideoWriter
    var imagesAndAudio: SynchronizedArray<(UIImage, CMSampleBuffer?, CFAbsoluteTime)> = SynchronizedArray<(UIImage, CMSampleBuffer?, CFAbsoluteTime)>()
    var isDone: Bool = false
    let semaphore = DispatchSemaphore(value: 1)
    var frameNum = 0

    class func removeFileAtURL(fileURL: URL) {
        do {
            try FileManager.default.removeItem(atPath: fileURL.path)
        } catch _ as NSError {
            // Assume the file doesn't exist.
        }
    }

    init(renderSettings: RenderSettings, audioSettings: [String: Any]? = nil) {
        settings = renderSettings
        videoWriter = VideoWriter(renderSettings: settings, audioSettings: audioSettings)
    }

    func addImageAndAudio(image: UIImage, audio: CMSampleBuffer?, time: CFAbsoluteTime) {
        self.imagesAndAudio.append((image, audio, time))
    }

    func render(initialBuffer: CMSampleBuffer?, completion: @escaping () -> Void) {
        // The VideoWriter will fail if a file exists at the URL, so clear it out first.
        ImageAnimator.removeFileAtURL(fileURL: settings.outputURL)
        videoWriter.start(initialBuffer: initialBuffer)
        videoWriter.render(appendPixelBuffers: appendPixelBuffers) {
            completion()
        }
    }

    // This is the callback function for VideoWriter.render().
    func appendPixelBuffers(writer: VideoWriter) -> Bool {
        // Keep going while there are images left, or while more may still arrive.
        while !imagesAndAudio.isEmpty || !isDone {
            if !imagesAndAudio.isEmpty {
                if writer.isReadyForVideoData == false {
                    // Inform the writer we have more buffers to write.
                    return false
                }
                autoreleasepool {
                    // The synchronized array helps, but doesn't suffice on its own -
                    // the semaphore guards the read-then-remove sequence.
                    if !imagesAndAudio.isEmpty {
                        semaphore.wait()   // requesting resource
                        let imageAndAudio = imagesAndAudio.first()!
                        let image = imageAndAudio.0
                        // let audio = imageAndAudio.1
                        let time = imageAndAudio.2
                        self.imagesAndAudio.removeAtIndex(index: 0)
                        semaphore.signal() // releasing resource
                        let presentationTime = CMTime(seconds: time, preferredTimescale: ImageAnimator.kTimescale)
                        // if audio != nil { videoWriter.addAudio(buffer: audio!) }
                        let success = videoWriter.addImage(image: image, withPresentationTime: presentationTime)
                        if success == false {
                            fatalError("addImage() failed")
                        }
                        frameNum += 1
                    }
                }
            }
        }
        print("Done writing")
        // Inform the writer all buffers have been written.
        return true
    }
}
@available(iOS 11.0, *)
public class VideoWriter {

    let renderSettings: RenderSettings
    var audioSettings: [String: Any]?
    var videoWriter: AVAssetWriter!
    var videoWriterInput: AVAssetWriterInput!
    var pixelBufferAdaptor: AVAssetWriterInputPixelBufferAdaptor!
    var audioWriterInput: AVAssetWriterInput!
    static var ci: Int = 0
    var initialTime: CMTime!

    var isReadyForVideoData: Bool {
        return videoWriterInput == nil ? false : videoWriterInput!.isReadyForMoreMediaData
    }

    var isReadyForAudioData: Bool {
        return audioWriterInput == nil ? false : audioWriterInput!.isReadyForMoreMediaData
    }

    class func pixelBufferFromImage(image: UIImage, pixelBufferPool: CVPixelBufferPool, size: CGSize, alpha: CGImageAlphaInfo) -> CVPixelBuffer? {
        var pixelBufferOut: CVPixelBuffer?
        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferPool, &pixelBufferOut)
        if status != kCVReturnSuccess {
            fatalError("CVPixelBufferPoolCreatePixelBuffer() failed")
        }
        let pixelBuffer = pixelBufferOut!
        CVPixelBufferLockBaseAddress(pixelBuffer, [])
        let data = CVPixelBufferGetBaseAddress(pixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: data, width: Int(size.width), height: Int(size.height),
                                bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer),
                                space: rgbColorSpace, bitmapInfo: alpha.rawValue)
        context!.clear(CGRect(x: 0, y: 0, width: size.width, height: size.height))
        let horizontalRatio = size.width / image.size.width
        let verticalRatio = size.height / image.size.height
        //let aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
        let aspectRatio = min(horizontalRatio, verticalRatio)   // ScaleAspectFit
        let newSize = CGSize(width: image.size.width * aspectRatio, height: image.size.height * aspectRatio)
        let x = newSize.width < size.width ? (size.width - newSize.width) / 2 : 0
        let y = newSize.height < size.height ? (size.height - newSize.height) / 2 : 0
        let cgImage = image.cgImage != nil ? image.cgImage! : image.ciImage!.convertCIImageToCGImage()
        context!.draw(cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))
        CVPixelBufferUnlockBaseAddress(pixelBuffer, [])
        return pixelBuffer
    }
    init(renderSettings: RenderSettings, audioSettings: [String: Any]? = nil) {
        self.renderSettings = renderSettings
        self.audioSettings = audioSettings
    }

    func start(initialBuffer: CMSampleBuffer?) {
        let avOutputSettings: [String: AnyObject] = [
            AVVideoCodecKey: renderSettings.avCodecKey as AnyObject,
            AVVideoWidthKey: NSNumber(value: Float(renderSettings.width)),
            AVVideoHeightKey: NSNumber(value: Float(renderSettings.height))
        ]

        func createPixelBufferAdaptor() {
            let sourcePixelBufferAttributesDictionary = [
                kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB),
                kCVPixelBufferWidthKey as String: NSNumber(value: Float(renderSettings.width)),
                kCVPixelBufferHeightKey as String: NSNumber(value: Float(renderSettings.height))
            ]
            pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput,
                                                                      sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
        }

        func createAssetWriter(outputURL: URL) -> AVAssetWriter {
            guard let assetWriter = try? AVAssetWriter(outputURL: outputURL, fileType: AVFileType.mov) else {
                fatalError("AVAssetWriter() failed")
            }
            guard assetWriter.canApply(outputSettings: avOutputSettings, forMediaType: AVMediaType.video) else {
                fatalError("canApplyOutputSettings() failed")
            }
            return assetWriter
        }

        videoWriter = createAssetWriter(outputURL: renderSettings.outputURL)
        videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: avOutputSettings)

        audioWriterInput = AVAssetWriterInput(mediaType: .audio, outputSettings: nil)
        audioWriterInput.expectsMediaDataInRealTime = true

        if videoWriter.canAdd(videoWriterInput) {
            videoWriter.add(videoWriterInput)
        } else {
            fatalError("canAddInput() returned false")
        }

        if videoWriter.canAdd(audioWriterInput) {
            videoWriter.add(audioWriterInput)
        } else {
            fatalError("canAddInput() returned false")
        }

        // The pixel buffer adaptor must be created before we start writing.
        createPixelBufferAdaptor()

        if videoWriter.startWriting() == false {
            fatalError("startWriting() failed")
        }

        self.initialTime = initialBuffer != nil ? CMSampleBufferGetPresentationTimeStamp(initialBuffer!) : CMTime.zero
        videoWriter.startSession(atSourceTime: self.initialTime)
        precondition(pixelBufferAdaptor.pixelBufferPool != nil, "nil pixelBufferPool")
    }
    func render(appendPixelBuffers: @escaping (VideoWriter) -> Bool, completion: @escaping () -> Void) {
        precondition(videoWriter != nil, "Call start() to initialize the writer")
        let queue = DispatchQueue(label: "mediaInputQueue")
        videoWriterInput.requestMediaDataWhenReady(on: queue) {
            let isFinished = appendPixelBuffers(self)
            if isFinished {
                self.videoWriterInput.markAsFinished()
                self.videoWriter.finishWriting() {
                    DispatchQueue.main.async {
                        print("Done Creating Video")
                        completion()
                    }
                }
            } else {
                // Fall through. The closure will be called again when the writer is ready.
            }
        }
    }
    func addAudio(buffer: CMSampleBuffer, time: CMTime) {
        if isReadyForAudioData {
            print("Writing audio \(VideoWriter.ci) with a time of \(CMSampleBufferGetPresentationTimeStamp(buffer))")
            let duration = CMSampleBufferGetDuration(buffer)
            let offsetBuffer = CMSampleBuffer.createSampleBuffer(fromSampleBuffer: buffer, withTimeOffset: time, duration: duration)
            if offsetBuffer != nil {
                print("Added audio")
                self.audioWriterInput.append(offsetBuffer!)
            } else {
                print("Not adding audio")
            }
        }
        VideoWriter.ci += 1
    }

    func addImage(image: UIImage, withPresentationTime presentationTime: CMTime) -> Bool {
        precondition(pixelBufferAdaptor != nil, "Call start() to initialize the writer")
        let pixelBuffer = VideoWriter.pixelBufferFromImage(image: image, pixelBufferPool: pixelBufferAdaptor.pixelBufferPool!, size: renderSettings.size, alpha: CGImageAlphaInfo.premultipliedFirst)!
        return pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime + self.initialTime)
    }
}
I was looking a little further into this - and while I could have updated my answer, I'd rather open this tangent in a new area to separate these ideas. Apple states that to use a created video composition for playback, you should "create an AVPlayerItem object from the same asset used as the composition's source, then assign the composition to the player item's videoComposition property. To export the composition to a new movie file, create an AVAssetExportSession object from the same source asset, then assign the composition to the export session's videoComposition property."
https://developer.apple.com/documentation/avfoundation/avasynchronousciimagefilteringrequest
So, what you COULD try is using the AVPlayer for the ORIGINAL URL. Then try applying your filter.
let asset = AVAsset(url: originalURL)
let filter = CIFilter(name: "CIGaussianBlur")!
let composition = AVVideoComposition(asset: asset, applyingCIFiltersWithHandler: { request in
    // Clamp to avoid blurring transparent pixels at the image edges.
    let source = request.sourceImage.clampedToExtent()
    filter.setValue(source, forKey: kCIInputImageKey)
    // Vary filter parameters based on video timing.
    let seconds = CMTimeGetSeconds(request.compositionTime)
    filter.setValue(seconds * 10.0, forKey: kCIInputRadiusKey)
    // Crop the blurred output to the bounds of the original image.
    let output = filter.outputImage!.cropped(to: request.sourceImage.extent)
    // Provide the filter output to the composition.
    request.finish(with: output, context: nil)
})
let item = AVPlayerItem(asset: asset)
item.videoComposition = composition
let player = AVPlayer(playerItem: item)
I'm sure you know what to do from here. This may allow you to do "real-time" filtering. One potential issue is that this runs into the same problem as your original approach: it still takes a set time to process each frame, leading to a delay between the audio and video. However, this may not happen. If you do get this working, once the user selects their filter, you can use AVAssetExportSession to export the specific videoComposition.
More here if you need help!

iOS Swift Implementation of Instagram Story Editor

I'm working on a project similar to editing photos/videos on Instagram Story (with the functionality of adding stickers, etc). My initial approach was to use
videoCompositionInstructions!.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: containerLayer)
but I realized that there are many challenges with this method. First, if the input is a landscape video, I cannot recover the background gradient color - it becomes all black (https://imgur.com/a/wYpknE4). Not to mention the cropping issues - if the user moves the video out of bounds, the video should be clipped, but with my current approach, this would be difficult. Also, if I add stickers, I have to scale the x and y to fit the render size of the video.
What would really be the best approach to this? Surely there is an easier way? Intuitively, it would make sense to start with a container view the user can add stickers, video, etc. to, and it would be easiest to simply export that container view with clipsToBounds = true (no need to scale x/y, crop the video, handle landscape issues, etc.).
If anyone has worked on a similar project, or has any inputs, it would be appreciated.
class AVFoundationClient {

    var selectedVideoURL: URL?
    var mutableComposition: AVMutableComposition?
    var videoCompositionInstructions: AVMutableVideoComposition?
    var videoTrack: AVMutableCompositionTrack?
    var sourceAsset: AVURLAsset?
    var insertTime = CMTime.zero
    var sourceVideoAsset: AVAsset?
    var sourceVideoTrack: AVAssetTrack?
    var sourceRange: CMTimeRange?
    var renderWidth: CGFloat?
    var renderHeight: CGFloat?
    var endTime: CMTime?
    var videoBounds: CGRect?
    var stickerLayers = [CALayer]()

    func exportVideoFileFromStickersAndOriginalVideo(_ stickers: [Int: Sticker], sourceURL: URL) {
        createNewMutableCompositionAndTrack()
        getSourceAssetFromURL(sourceURL)
        getVideoParamsAndAppendTracks()
        createVideoCompositionInstructions()
        for (_, sticker) in stickers {
            createStickerLayer(sticker.image!, x: sticker.x!, y: sticker.y!, width: sticker.width!, height: sticker.height!, scale: sticker.scale!)
        }
        mergeStickerLayersAndFinalizeInstructions()
        export(mutableComposition!)
    }
    func createStickerLayer(_ image: UIImage, x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat, scale: CGFloat) {
        let scaleRatio = renderWidth! / UIScreen.main.bounds.width
        let stickerX = x * scaleRatio
        let stickerY = y * scaleRatio
        let imageLayer = CALayer()
        imageLayer.frame = CGRect(x: stickerX, y: stickerY, width: width * scaleRatio, height: height * scaleRatio)
        imageLayer.contents = image.cgImage
        imageLayer.contentsGravity = CALayerContentsGravity.resize
        imageLayer.masksToBounds = true
        stickerLayers.append(imageLayer)
    }

    func mergeStickerLayersAndFinalizeInstructions() {
        let videoLayer = CALayer()
        videoLayer.frame = CGRect(x: 0, y: 0, width: renderWidth!, height: renderWidth! * 16 / 9)
        videoLayer.contentsGravity = .resizeAspectFill
        let containerLayer = CALayer()
        containerLayer.backgroundColor = UIColor.mainBlue().cgColor
        containerLayer.isGeometryFlipped = true
        containerLayer.frame = CGRect(x: 0, y: 0, width: renderWidth!, height: renderWidth! * 16 / 9)
        containerLayer.addSublayer(videoLayer)
        for stickerLayer in stickerLayers {
            containerLayer.addSublayer(stickerLayer)
        }
        videoCompositionInstructions!.animationTool = AVVideoCompositionCoreAnimationTool(postProcessingAsVideoLayer: videoLayer, in: containerLayer)
    }
    func createNewMutableCompositionAndTrack() {
        mutableComposition = AVMutableComposition()
        videoTrack = mutableComposition!.addMutableTrack(withMediaType: AVMediaType.video, preferredTrackID: CMPersistentTrackID())
    }

    func getSourceAssetFromURL(_ fileURL: URL) {
        sourceAsset = AVURLAsset(url: fileURL, options: nil)
    }

    func getVideoParamsAndAppendTracks() {
        let sourceDuration = CMTimeRangeMake(start: CMTime.zero, duration: sourceAsset!.duration)
        sourceVideoTrack = sourceAsset!.tracks(withMediaType: AVMediaType.video)[0]
        renderWidth = sourceVideoTrack!.renderSize().width
        renderHeight = sourceVideoTrack!.renderSize().height
        endTime = sourceAsset!.duration
        sourceRange = sourceDuration
        do {
            try videoTrack!.insertTimeRange(sourceDuration, of: sourceVideoTrack!, at: insertTime)
        } catch {
            print("error inserting time range")
        }
    }

    func createVideoCompositionInstructions() {
        let mainInstruction = AVMutableVideoCompositionInstruction()
        mainInstruction.timeRange = sourceRange!
        let videolayerInstruction = videoCompositionInstruction(videoTrack!, asset: sourceAsset!)
        videolayerInstruction.setOpacity(0.0, at: endTime!)
        // Add instructions.
        mainInstruction.layerInstructions = [videolayerInstruction]
        videoCompositionInstructions = AVMutableVideoComposition()
        videoCompositionInstructions!.renderScale = 1.0
        videoCompositionInstructions!.renderSize = CGSize(width: renderWidth!, height: renderWidth! * 16 / 9)
        videoCompositionInstructions!.frameDuration = CMTimeMake(value: 1, timescale: 30)
        videoCompositionInstructions!.instructions = [mainInstruction]
    }

    func videoCompositionInstruction(_ track: AVCompositionTrack, asset: AVAsset) -> AVMutableVideoCompositionLayerInstruction {
        let instruction = AVMutableVideoCompositionLayerInstruction(assetTrack: track)
        let assetTrack = asset.tracks(withMediaType: .video)[0]
        instruction.setTransform(assetTrack.preferredTransform.concatenating(CGAffineTransform(translationX: 0, y: -(renderHeight! - renderWidth! * 16 / 9) / 2)), at: CMTime.zero)
        return instruction
    }
}
extension AVFoundationClient {

    // Export the AVMutableComposition.
    func export(_ mutableComposition: AVMutableComposition) {
        // Set up the exporter.
        guard let exporter = AVAssetExportSession(asset: mutableComposition, presetName: AVAssetExportPreset1920x1080) else { return }
        exporter.outputURL = generateExportUrl()
        exporter.outputFileType = AVFileType.mov
        exporter.shouldOptimizeForNetworkUse = false
        exporter.videoComposition = videoCompositionInstructions
        exporter.exportAsynchronously() {
            DispatchQueue.main.async {
                self.exportDidComplete(exportURL: exporter.outputURL!, doneEditing: false)
            }
        }
    }

    func generateExportUrl() -> URL {
        // Create a custom URL using the current date-time to prevent conflicting URLs in the future.
        let documentDirectory = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0]
        let dateFormat = DateFormatter()
        dateFormat.dateStyle = .long
        dateFormat.timeStyle = .short
        let dateString = dateFormat.string(from: Date())
        let exportPath = (documentDirectory as NSString).strings(byAppendingPaths: ["edited-video-\(dateString).mp4"])[0]
        // Erase any old file at the same path.
        let fileManager = FileManager.default
        do {
            try fileManager.removeItem(at: URL(fileURLWithPath: exportPath))
        } catch {
            print("Unable to remove item at \(URL(fileURLWithPath: exportPath))")
        }
        return URL(fileURLWithPath: exportPath)
    }

    // Export finish handler.
    func exportDidComplete(exportURL: URL, doneEditing: Bool) {
        PHPhotoLibrary.shared().performChanges({
            PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: exportURL)
        }) { saved, error in
            if saved { print("successful saving") }
            else { print("error saving") }
        }
    }
}
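Two pieces of arithmetic in the code above carry most of the coordinate logic, and both can be restated as pure functions (function names are mine, not from the project): createStickerLayer maps on-screen sticker coordinates into the render space by the ratio renderWidth / screenWidth, and videoCompositionInstruction vertically centers the source track inside the 9:16 render canvas of size renderWidth x renderWidth * 16/9.

```swift
// Maps a sticker's on-screen frame into the video's render coordinate space,
// as createStickerLayer does: every component scales by renderWidth / screenWidth.
func stickerRenderFrame(x: Double, y: Double, width: Double, height: Double,
                        screenWidth: Double, renderWidth: Double)
    -> (x: Double, y: Double, width: Double, height: Double) {
    let ratio = renderWidth / screenWidth
    return (x * ratio, y * ratio, width * ratio, height * ratio)
}

// The translation applied in videoCompositionInstruction: shifts a source
// track of height renderHeight so it is centered in the 9:16 canvas.
func verticalCenteringOffset(renderWidth: Double, renderHeight: Double) -> Double {
    let canvasHeight = renderWidth * 16 / 9
    return -(renderHeight - canvasHeight) / 2
}
```

For example, with a 375-point-wide screen and a 750-pixel-wide render, every sticker coordinate simply doubles; and a 1080x2000 source in a 1080x1920 canvas is shifted up by 40 pixels.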

Converting PDF to Image - Swift iOS

I am trying to convert a PDF file and all its pages to png images.
I have put together the code below following the example in this thread:
How to convert PDF to PNG efficiently?
When I run the code, it crashes on the PDF file source (sourceURL), even though there is definitely a file there; when I print sourceURL, it prints the URL to the file.
The crash says it found nil. My understanding is that this means it could not find the file - even though I can physically see and open the file, and can print the URL to it.
Can someone point out what I'm doing wrong?
Code:
func convertPDFtoPNG() {
let sourceURL = pptURL
print("pptURL:", pptURL!)
let destinationURL = pngURL
let urls = try? convertPDF(at: sourceURL!, to: destinationURL!, fileType: .png, dpi: 200)
}
func convertPDF(at sourceURL: URL, to destinationURL: URL, fileType: ImageFileType, dpi: CGFloat = 200) throws -> [URL] {
let pdfDocument: CGPDFDocument! = CGPDFDocument(sourceURL as CFURL)! //Thread 1: Fatal error: Unexpectedly found nil while unwrapping an Optional value
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGImageAlphaInfo.noneSkipLast.rawValue
var urls = [URL](repeating: URL(fileURLWithPath : "/"), count: pdfDocument.numberOfPages)
DispatchQueue.concurrentPerform(iterations: pdfDocument.numberOfPages) { i in
let pdfPage = pdfDocument.page(at: i + 1)!
let mediaBoxRect = pdfPage.getBoxRect(.mediaBox)
let scale = dpi / 72.0
let width = Int(mediaBoxRect.width * scale)
let height = Int(mediaBoxRect.height * scale)
let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace, bitmapInfo: bitmapInfo)!
context.interpolationQuality = .high
context.fill(CGRect(x: 0, y: 0, width: width, height: height))
context.scaleBy(x: scale, y: scale)
context.drawPDFPage(pdfPage)
let image = context.makeImage()!
let imageName = sourceURL.deletingPathExtension().lastPathComponent
let imageURL = destinationURL.appendingPathComponent("\(imageName)-Page\(i+1).\(fileType.fileExtention)")
let imageDestination = CGImageDestinationCreateWithURL(imageURL as CFURL, fileType.uti, 1, nil)!
CGImageDestinationAddImage(imageDestination, image, nil)
CGImageDestinationFinalize(imageDestination)
urls[i] = imageURL
}
return urls
}
import Foundation
import Photos
// 1: Currently used mainly to convert PDFs to images
// 2: And to save images into a custom photo album
struct HBPhotosAlbumHelperUtil {
static let shared = HBPhotosAlbumHelperUtil()
// Renders the PDF at the given URL into an image
// pageNumber: which page of the PDF to render; defaults to the first page
func drawToImagePDFFromURL(pdfurl url: String?, pageNumber index: Int = 1, scaleX scalex: CGFloat = 1.0, scaleY scaley: CGFloat = -1.0) -> UIImage? {
guard let pdfUrl = url, pdfUrl.count > 0, let formatterUrl = pdfUrl.urlValue else {
return nil
}
guard let document = CGPDFDocument(formatterUrl as CFURL) else {
return nil
}
guard let page = document.page(at: index) else {
return nil
}
let pageRect = page.getBoxRect(.mediaBox)
if #available(iOS 10.0, *) {
let renderGraph = UIGraphicsImageRenderer(size: pageRect.size)
let drawImage = renderGraph.image { context in
UIColor.white.set()
context.fill(pageRect)
context.cgContext.translateBy(x: 0.0, y: pageRect.size.height)
context.cgContext.scaleBy(x: scalex, y: scaley)
context.cgContext.drawPDFPage(page)
}
return drawImage
} else {
UIGraphicsBeginImageContextWithOptions(pageRect.size, false, 1.0)
let context = UIGraphicsGetCurrentContext()
context?.setFillColor(UIColor.white.cgColor)
context?.fill(pageRect)
context?.translateBy(x: 0.0, y: pageRect.size.height)
context?.scaleBy(x: scalex, y: scaley)
context?.drawPDFPage(page)
let pdfImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return pdfImage
}
}
}
// Represents the result of saving an image to a custom or system photo album
enum HBPhotosAlbumUtilResult {
case success, error, denied
}
extension HBPhotosAlbumHelperUtil {
// Request authorization to access the system photo library
// Returns true if access has already been granted
static var photoAlbumAuthorized: Bool {
return PHPhotoLibrary.authorizationStatus() == .authorized || PHPhotoLibrary.authorizationStatus() == .notDetermined
}
// Save an image to a custom album
func saveImageToCustomAlbum(saveImage markImage: UIImage, customAlbumName albumName: String = "丰巢管家电子发票", completion: ((_ result: HBPhotosAlbumUtilResult) -> Void)?) {
guard HBPhotosAlbumHelperUtil.photoAlbumAuthorized else {
completion?(.denied)
return
}
var assetAlbum: PHAssetCollection?
// If the album name is empty, save the image to the system album by default
if albumName.isEmpty {
let assetCollection = PHAssetCollection.fetchAssetCollections(with: .smartAlbum, subtype: .smartAlbumUserLibrary,
options: nil)
assetAlbum = assetCollection.firstObject
} else {
// Check whether the specified album already exists
let assetList = PHAssetCollection.fetchAssetCollections(with: .album, subtype: .any, options: nil)
assetList.enumerateObjects { albumOption, _, stop in
let assetCollection = albumOption
if albumName == assetCollection.localizedTitle {
assetAlbum = assetCollection
stop.initialize(to: true)
}
}
// Create the custom album if it does not exist
if assetAlbum == nil {
PHPhotoLibrary.shared().performChanges({
PHAssetCollectionChangeRequest.creationRequestForAssetCollection(withTitle: albumName)
}) { _, _ in
self.saveImageToCustomAlbum(saveImage: markImage, customAlbumName: albumName, completion: completion)
}
}
}
// Save the image
PHPhotoLibrary.shared().performChanges({
let result = PHAssetChangeRequest.creationRequestForAsset(from: markImage)
if !albumName.isEmpty {
if let assetPlaceHolder = result.placeholderForCreatedAsset,
let lastAssetAlbum = assetAlbum,
let albumChangeRequset = PHAssetCollectionChangeRequest(for:
lastAssetAlbum) {
albumChangeRequset.addAssets([assetPlaceHolder] as NSArray)
}
}
}) { isSuccess, _ in
guard isSuccess else {
completion?(.error)
return
}
completion?(.success)
}
}
}
extension String {
/// URL legalization
public var urlValue: URL? {
if let url = URL(string: self) {
return url
}
var set = CharacterSet()
set.formUnion(.urlHostAllowed)
set.formUnion(.urlPathAllowed)
set.formUnion(.urlQueryAllowed)
set.formUnion(.urlFragmentAllowed)
return self.addingPercentEncoding(withAllowedCharacters: set).flatMap { URL(string: $0) }
}
}
You can use the API like this:
// Convert a PDF to an image
HBPhotosAlbumHelperUtil.shared.drawToImagePDFFromURL(pdfurl: "link to pdf file")
// Save an image to the custom (or system) album
HBPhotosAlbumHelperUtil.shared.saveImageToCustomAlbum(saveImage: UIImage()) { (result) in
}
Make sure that your pptURL is a file URL.
URL(string: "path/to/pdf") and URL(fileURLWithPath: "path/to/pdf") are different things, and you must use the latter when creating your URL.
The output should start with the "file:///" prefix, e.g.
file:///Users/dev/Library/Developer/CoreSimulator/Devices/4FF18699-D82F-4308-88D6-44E3C11C955A/data/Containers/Bundle/Application/8F230041-AC15-45D9-863F-5778B565B12F/myApp.app/example.pdf
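A minimal illustration of the difference (the path here is hypothetical):

```swift
import Foundation

let path = "/tmp/example.pdf"             // hypothetical path
let stringURL = URL(string: path)         // no "file" scheme: CGPDFDocument(stringURL! as CFURL) returns nil
let fileURL = URL(fileURLWithPath: path)  // file:///tmp/example.pdf — this one works
```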

Compressing Video Error: Terminated due to memory issue

I want to first trim a video chosen from the photo library, and then compress the video file to a custom size and bitrate. I'm using PryntTrimmerView to trim the video, and then compress the trimmed result.
Below is my code for trimming and compressing the video file.
I successfully export the trimmed asset and then get the compressed file. When I choose a short video from the gallery there is no problem, but when I choose a large video, I get this error in the console after compressing:
Message from debugger: Terminated due to memory issue
func prepareAssetComposition() throws {
topActivity.isHidden = false
topActivity.startAnimating()
confirmButton.isUserInteractionEnabled = false
//get asset and track
guard let asset = trimmerView.asset, let videoTrack = asset.tracks(withMediaType: AVMediaTypeVideo).first else {
return
}
let assetComposition = AVMutableComposition()
let start = trimmerView.startTime?.seconds
let end = trimmerView.endTime?.seconds
let startTime = CMTime(seconds: Double(start ?? 0), preferredTimescale: 1000)
let endTime = CMTime(seconds: Double(end ?? 0), preferredTimescale: 1000)
let trackTimeRange = CMTimeRange(start: startTime, end: endTime)
let videoCompositionTrack = assetComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
try videoCompositionTrack.insertTimeRange(trackTimeRange, of: videoTrack, at: kCMTimeZero)
if let audioTrack = asset.tracks(withMediaType: AVMediaTypeAudio).first {
let audioCompositionTrack = assetComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)
try audioCompositionTrack.insertTimeRange(trackTimeRange, of: audioTrack, at: kCMTimeZero)
}
//set video orientation to portrait
let size = videoTrack.naturalSize
let txf = videoTrack.preferredTransform
var recordType = ""
if (size.width == txf.tx && size.height == txf.ty){
recordType = "UIInterfaceOrientationLandscapeRight"
}else if (txf.tx == 0 && txf.ty == 0){
recordType = "UIInterfaceOrientationLandscapeLeft"
}else if (txf.tx == 0 && txf.ty == size.width){
recordType = "UIInterfaceOrientationPortraitUpsideDown"
}else{
recordType = "UIInterfaceOrientationPortrait"
}
if recordType == "UIInterfaceOrientationPortrait" {
let t1: CGAffineTransform = CGAffineTransform(translationX: videoTrack.naturalSize.height, y: -(videoTrack.naturalSize.width - videoTrack.naturalSize.height)/2)
let t2: CGAffineTransform = t1.rotated(by: CGFloat(Double.pi / 2))
let finalTransform: CGAffineTransform = t2
videoCompositionTrack.preferredTransform = finalTransform
}else if recordType == "UIInterfaceOrientationLandscapeRight" {
let t1: CGAffineTransform = CGAffineTransform(translationX: videoTrack.naturalSize.height, y: -(videoTrack.naturalSize.width - videoTrack.naturalSize.height)/2)
let t2: CGAffineTransform = t1.rotated(by: -CGFloat(Double.pi))
let finalTransform: CGAffineTransform = t2
videoCompositionTrack.preferredTransform = finalTransform
}else if recordType == "UIInterfaceOrientationPortraitUpsideDown" {
let t1: CGAffineTransform = CGAffineTransform(translationX: videoTrack.naturalSize.height, y: -(videoTrack.naturalSize.width - videoTrack.naturalSize.height)/2)
let t2: CGAffineTransform = t1.rotated(by: -CGFloat(Double.pi/2))
let finalTransform: CGAffineTransform = t2
videoCompositionTrack.preferredTransform = finalTransform
}
//start exporting video
var name = ""
var url: URL!
if self.state == .Left {
url = URL(fileURLWithPath: "\(NSTemporaryDirectory())TrimmedMovie1.mp4")
name = "TrimmedMovie1.mp4"
}else if state == .Right {
url = URL(fileURLWithPath: "\(NSTemporaryDirectory())TrimmedMovie3.mp4")
name = "TrimmedMovie3.mp4"
}else if state == .Center {
url = URL(fileURLWithPath: "\(NSTemporaryDirectory())TrimmedMovie2.mp4")
name = "TrimmedMovie2.mp4"
}
try? FileManager.default.removeItem(at: url)
let exportSession = AVAssetExportSession(asset: assetComposition, presetName: AVAssetExportPresetHighestQuality)
if UIDevice.current.userInterfaceIdiom == .phone {
exportSession?.outputFileType = AVFileTypeQuickTimeMovie
}else {
exportSession?.outputFileType = AVFileTypeQuickTimeMovie
}
exportSession?.shouldOptimizeForNetworkUse = true
exportSession?.outputURL = url
exportSession?.exportAsynchronously(completionHandler: {
DispatchQueue.main.async {
if let url = exportSession?.outputURL, exportSession?.status == .completed {
let asset = AVAsset(url: url)
print(asset.duration)
var thump: UIImage?
var vData: Data?
if let img = asset.videoThumbnail {
thump = img
if recordType == "UIInterfaceOrientationPortrait" {
if thump != nil {
let img = UIImage(cgImage: thump!.cgImage!, scale: CGFloat(1.0), orientation: .right)
thump = img
thump = thump?.fixedOrientation()
}
}else if recordType == "UIInterfaceOrientationLandscapeRight" {
if thump != nil {
let img = UIImage(cgImage: thump!.cgImage!, scale: CGFloat(1.0), orientation: .down)
thump = img
thump = thump?.fixedOrientation()
}
}else if recordType == "UIInterfaceOrientationPortraitUpsideDown" {
if thump != nil {
let img = UIImage(cgImage: thump!.cgImage!, scale: CGFloat(1.0), orientation: .left)
thump = img
thump = thump?.fixedOrientation()
}
}
}
if let videoData = NSData(contentsOf: url) {
vData = videoData as Data
}
if let delegate = self.delegate {
self.playbackTimeCheckerTimer?.invalidate()
self.playButton.setImage(#imageLiteral(resourceName: "play"), for: .normal)
self.playbackTimeCheckerTimer = nil
let size = CGSize(width: 1280, height: 720)
if let videoData = NSData(contentsOf: url) {
vData = videoData as Data
}
let directoryURL: URL = FileManager.default.urls(for: .documentDirectory, in: .userDomainMask)[0]
let folderPath: URL = directoryURL.appendingPathComponent(name, isDirectory: true)
do {
try vData?.write(to: folderPath, options: [])
}
catch {
print(error.localizedDescription)
}
self.compress(fileName:name,videoPath: folderPath.path, exportVideoPath: folderPath.path, renderSize: size, completion: {res in
if res {
OperationQueue.main.addOperation {
self.topActivity.isHidden = true
self.topActivity.stopAnimating()
self.confirmButton.isUserInteractionEnabled = true
delegate.setVideoFromPath(path: folderPath.path, thump: thump, videoData: vData)
self.dismiss(animated: true, completion: nil)
return
}
}else {
print("can not compress")
}
})
}
} else {
self.topActivity.isHidden = true
self.topActivity.stopAnimating()
self.confirmButton.isUserInteractionEnabled = true
let error = exportSession?.error
print("error exporting video \(String(describing: error))")
}
}
})
}
private func existsFileAtUrl(url:String,name:String) -> Bool {
let path = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)[0] as String
let url = URL(fileURLWithPath: path)
let filePath = url.appendingPathComponent(name).path
let fileManager = FileManager.default
if fileManager.fileExists(atPath: filePath) {
return true
} else {
return false
}
}
//MARK: Compress
func compress(fileName: String, videoPath: String, exportVideoPath: String, renderSize: CGSize, completion: @escaping (Bool) -> ()) {
let videoUrl = URL(fileURLWithPath: videoPath)
if (!existsFileAtUrl(url: videoUrl.absoluteString,name:fileName)) {
completion(false)
return
}
let videoAssetUrl = AVURLAsset(url: videoUrl)
let videoTrackArray = videoAssetUrl.tracks(withMediaType: AVMediaTypeVideo)
if videoTrackArray.count < 1 {
completion(false)
return
}
let videoAssetTrack = videoTrackArray[0]
let audioTrackArray = videoAssetUrl.tracks(withMediaType: AVMediaTypeAudio)
if audioTrackArray.count < 1 {
completion(false)
return
}
let audioAssetTrack = audioTrackArray[0]
let outputUrl = URL(fileURLWithPath: exportVideoPath)
var videoWriter = try? AVAssetWriter(url: outputUrl, fileType: AVFileTypeQuickTimeMovie)
videoWriter?.shouldOptimizeForNetworkUse = true
let vSetting = videoSettings(size: renderSize)
let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: vSetting)
videoWriterInput.expectsMediaDataInRealTime = false
videoWriterInput.transform = videoAssetTrack.preferredTransform
videoWriter?.add(videoWriterInput)
// output readers
let videoReaderSettings : [String : Int] = [kCVPixelBufferPixelFormatTypeKey as String : Int(kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange)]
let videoReaderOutput = AVAssetReaderTrackOutput(track: videoAssetTrack, outputSettings: videoReaderSettings)
let videoReader = try! AVAssetReader(asset: videoAssetUrl)
videoReader.add(videoReaderOutput)
videoWriter?.startWriting()
videoReader.startReading()
videoWriter?.startSession(atSourceTime: kCMTimeZero)
let processingVideoQueue = DispatchQueue(label: "processingVideoCompressionQueue")
videoWriterInput.requestMediaDataWhenReady(on: processingVideoQueue, using: {
while(videoWriterInput.isReadyForMoreMediaData){
let sampleVideoBuffer = videoReaderOutput.copyNextSampleBuffer()
if (videoReader.status == .reading && sampleVideoBuffer != nil) {
videoWriterInput.append(sampleVideoBuffer!)
}else {
videoWriterInput.markAsFinished()
if (videoReader.status == .completed) {
videoWriter?.finishWriting(completionHandler: {
videoWriter = nil
completion(true)
})
}
}
}
})
}
//MARK: Setting
func videoSettings(size : CGSize) -> [String : AnyObject] {
var compressionSettings = [String : AnyObject]()
compressionSettings[AVVideoAverageBitRateKey] = 5 as AnyObject
var settings = [String : AnyObject]()
settings[AVVideoCompressionPropertiesKey] = compressionSettings as AnyObject
settings[AVVideoCodecKey] = AVVideoCodecH264 as AnyObject?
settings[AVVideoHeightKey] = size.height as AnyObject?
settings[AVVideoWidthKey] = size.width as AnyObject?
return settings
}
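One likely bug worth flagging in the settings above: AVVideoAverageBitRateKey is expressed in bits per second, so the value 5 asks the encoder for an essentially zero bitrate. A sketch of a more plausible settings dictionary for a 1280x720 target (the bitrate value is only an illustrative starting point):

```swift
import AVFoundation

// Sketch: video settings with a realistic average bitrate for 720p H.264.
func videoSettings(size: CGSize) -> [String: Any] {
    return [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height,
        AVVideoCompressionPropertiesKey: [
            // ~2.5 Mbit/s is a common starting point for 1280x720.
            AVVideoAverageBitRateKey: 2_500_000
        ]
    ]
}
```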
I found the issue: the problem is the while statement. When I dismissed the view controller, this statement kept being called repeatedly and I got the error. Now I stop the while loop with a break when the view controller is dismissed, and everything works fine.
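For readers hitting the same crash, the fix amounts to leaving the requestMediaDataWhenReady loop once the input is marked finished (and when the screen is being dismissed). A sketch, reusing the names from the compress function and assuming a hypothetical self.isCancelled flag set on dismissal:

```swift
import AVFoundation

// Sketch: exit the loop instead of spinning on a finished input.
videoWriterInput.requestMediaDataWhenReady(on: processingVideoQueue) {
    while videoWriterInput.isReadyForMoreMediaData {
        // Stop pulling samples once the reader is done or the screen is dismissed.
        guard !self.isCancelled,
              videoReader.status == .reading,
              let sampleBuffer = videoReaderOutput.copyNextSampleBuffer() else {
            videoWriterInput.markAsFinished()
            if videoReader.status == .completed {
                videoWriter?.finishWriting {
                    videoWriter = nil
                    completion(true)
                }
            }
            break // leave the while loop; do not keep calling markAsFinished()
        }
        videoWriterInput.append(sampleBuffer)
    }
}
```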

Unable to save filter video using GPUImageMovieWriter in Swift

I have created a video filter demo using a GPUImage filter to apply a filter to a video. Below is my code for a brightness filter. The video is saved sometimes, but most of the time the movie writer fails to complete finishRecording or the completionBlock while the movie file finishes processing. In the end the app terminates due to high CPU and memory usage.
let finalpath = "\(FileManager.default.finalCompositions)/composition\(getTimeStamp).mp4"
let finalUrl = URL(fileURLWithPath: finalpath)
let asset: AVURLAsset = AVURLAsset(url: self.videoUrl!)
let asetTrack: AVAssetTrack = asset.tracks(withMediaType: AVMediaTypeVideo)[0]
self.exportedMovieFile = GPUImageMovie(url: self.videoUrl)
self.exportedMovieFile?.runBenchmark = true
self.exportedMovieFile?.playAtActualSpeed = false
var exportfilter = GPUImageBrightnessFilter()
exportfilter.brightness = 0.5
self.exportedMovieFile?.addTarget(exportfilter)
let videosize: CGSize = CGSize(width: asetTrack.naturalSize.width, height: asetTrack.naturalSize.height)
self.exportedMovieWritter = GPUImageMovieWriter(movieURL: finalUrl, size: videosize)
exportfilter.addTarget(self.exportedMovieWritter)
//Configure this for video from the movie file, where we want to preserve all video frames and audio samples
self.exportedMovieWritter?.shouldPassthroughAudio = true
if asset.tracks(withMediaType: AVMediaTypeAudio).count > 0 {
self.exportedMovieFile?.audioEncodingTarget = self.exportedMovieWritter
}
else
{
self.exportedMovieFile?.audioEncodingTarget = nil
}
self.exportedMovieFile?.enableSynchronizedEncoding(using: self.exportedMovieWritter)
self.exportedMovieWritter!.startRecording()
self.exportedMovieFile?.startProcessing()
DispatchQueue.main.async {
self.timerProgress = Timer.scheduledTimer(timeInterval: 0.3, target: self, selector:#selector(self.filterRetrievingProgress), userInfo: nil, repeats: true)
}
self.exportedMovieWritter?.failureBlock = {(err) in
loggingPrint("Error :: \(err?.localizedDescription)")
}
self.exportedMovieWritter?.completionBlock = {() -> Void in
exportfilter.removeTarget(self.exportedMovieWritter)
self.exportedMovieWritter?.finishRecording(completionHandler: {
self.timerProgress?.invalidate()
self.timerProgress = nil
self.exportedMovieFile?.removeAllTargets()
self.exportedMovieFile?.cancelProcessing()
self.exportedMovieFile = nil
DispatchQueue.main.async {
let uploadViewController = UploadViewController.loadController()
uploadViewController.isPhoto = false
uploadViewController.videoUrl = finalUrl
self.navigationController?.pushViewController(uploadViewController, animated: true)
}
})
}
@objc fileprivate func filterRetrievingProgress() {
loggingPrint("Progress :: \(self.exportedMovieFile?.progress)")
}
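For comparison, the canonical GPUImage movie-filtering recipe (the SimpleVideoFileFilter example, translated to Swift; API names may vary slightly between GPUImage versions) keeps the GPUImageMovie and GPUImageMovieWriter in strong properties until completionBlock fires and never calls cancelProcessing on a successful run. inputURL, outputURL and videoSize are placeholders:

```swift
import GPUImage

// Sketch of the standard GPUImage export chain. movieFile and movieWriter
// are strong properties so the chain stays alive until writing finishes.
movieFile = GPUImageMovie(url: inputURL)
movieFile.playAtActualSpeed = false

let filter = GPUImageBrightnessFilter()
filter.brightness = 0.5
movieFile.addTarget(filter)

movieWriter = GPUImageMovieWriter(movieURL: outputURL, size: videoSize)
filter.addTarget(movieWriter)

movieWriter.shouldPassthroughAudio = true
movieFile.audioEncodingTarget = movieWriter
movieFile.enableSynchronizedEncoding(using: movieWriter)

movieWriter.startRecording()
movieFile.startProcessing()

movieWriter.completionBlock = { [weak self] in
    guard let self = self else { return }
    // Detach the writer and finish; cancelProcessing is not called on success.
    filter.removeTarget(self.movieWriter)
    self.movieWriter.finishRecording()
    // Hand the finished file to the next screen on the main queue here.
}
```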
