How to set the framerate using iOS AVFoundation

The following code successfully converts an array of images into a video. However, the fps parameter has no effect on the framerate, so the framerate is always 1.
The AVVideoMaxKeyFrameIntervalKey and AVVideoMaxKeyFrameIntervalDurationKey settings also have no effect on the output.
How do I set a framerate of 30 fps?
func createVideo(images: [UIImage], durationPerImage: Double = 1, fps: Double = 30) {
    let filePath = URL(fileURLWithPath: NSTemporaryDirectory() + "video.mp4")
    let size = images[0].size
    let fileManager = FileManager.default
    if fileManager.fileExists(atPath: filePath.path) {
        try? fileManager.removeItem(at: filePath)
    }
    let bitRate = size.width * size.height * fps * 4
    let videoWriter = try? AVAssetWriter(outputURL: filePath, fileType: AVFileType.mp4)
    let videoSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: size.width,
        AVVideoHeightKey: size.height,
        AVVideoCompressionPropertiesKey: [
            AVVideoAverageBitRateKey: bitRate,
            AVVideoMaxKeyFrameIntervalKey: fps,
            AVVideoMaxKeyFrameIntervalDurationKey: (1 / fps)
        ],
    ]
    let bufferAttributes: [String: Any] = [
        kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB),
        kCVPixelBufferWidthKey as String: size.width,
        kCVPixelBufferHeightKey as String: size.height
    ]
    let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
    videoWriterInput.expectsMediaDataInRealTime = true
    videoWriter?.add(videoWriterInput)
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput, sourcePixelBufferAttributes: bufferAttributes)
    videoWriter?.startWriting()
    videoWriter?.startSession(atSourceTime: CMTime.zero)
    var time = CMTime.zero
    for image in images {
        if videoWriter?.status == .unknown {
            videoWriter?.startWriting()
        }
        if videoWriter?.status == .failed {
            print("Error: \(String(describing: videoWriter?.error))")
            return
        }
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdaptor.pixelBufferPool!, &pixelBuffer)
        if status != kCVReturnSuccess { return }
        CVPixelBufferLockBaseAddress(pixelBuffer!, [])
        let data = CVPixelBufferGetBaseAddress(pixelBuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: data, width: Int(size.width), height: Int(size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.translateBy(x: 0, y: size.height)
        context?.scaleBy(x: 1.0, y: -1.0)
        UIGraphicsPushContext(context!)
        image.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(pixelBuffer!, [])
        if !pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: time) { return }
        time = CMTimeAdd(time, CMTimeMakeWithSeconds(durationPerImage, preferredTimescale: Int32(fps)))
    }
    videoWriterInput.markAsFinished()
    videoWriter?.finishWriting {
        let tracks = AVAsset(url: filePath).tracks(withMediaType: .video)
        print("FPS", tracks.first!.nominalFrameRate, tracks.first!.naturalTimeScale)
    }
}
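For context: with AVAssetWriter, the frame rate of the written file is determined entirely by the presentation timestamps passed to the pixel buffer adaptor, not by the key-frame interval keys (those only control keyframe spacing). In the loop above, each frame advances time by durationPerImage seconds, i.e. one frame per second. A minimal sketch of the timing loop, reusing the names from the function above, that would yield 30 fps:
// Sketch (names reused from the function above): the output frame rate comes
// from the times passed to append(_:withPresentationTime:). Advancing by
// 1/fps per appended frame produces a 30 fps track.
let frameDuration = CMTime(value: 1, timescale: CMTimeScale(fps)) // 1/30 s when fps == 30
var presentationTime = CMTime.zero
for image in images {
    // ... render `image` into `pixelBuffer` exactly as above ...
    pixelBufferAdaptor.append(pixelBuffer!, withPresentationTime: presentationTime)
    presentationTime = CMTimeAdd(presentationTime, frameDuration)
}
// Since the frames are generated offline, expectsMediaDataInRealTime can also be false.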

Related

iOS app crashes when frequently drawing a CGImage (created from [UInt8] data) into a CGContext

I am currently developing a module that creates a video from an array of CGImages, and the app crashes at some point during processing; I am not able to figure out the exact reason for the crash.
Can anyone tell me whether I am going in the right direction? Should I convert [CGImage] to video, or do I need to choose another approach?
I also tried converting the CGImages to UIImages and creating the video from those, but I still hit the same crash.
I receive the image data as [UInt8], so what would be the correct approach for converting the image format and creating the video?
To create the video from [CGImage] I follow the approach below: I convert the [UInt8] data to a CGImage using CGDataProvider, convert the CGImage to a UIImage, collect the UIImages into an array, and then merge the images into a video.
Here is my code to create a CGImage from the data:
private(set) var data: [UInt8]

var cgImage: CGImage? {
    let colorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let bitsPerComponent = 8
    let bitsPerPixel = channels * bitsPerComponent
    let bytesPerRow = channels * width
    let totalBytes = height * bytesPerRow
    let bitmapInfo = CGBitmapInfo(rawValue: channels == 3 ? CGImageAlphaInfo.none.rawValue : CGImageAlphaInfo.last.rawValue)
    let provider = CGDataProvider(dataInfo: nil,
                                  data: data,
                                  size: totalBytes,
                                  releaseData: { _, _, _ in })!
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: bitsPerComponent,
                   bitsPerPixel: bitsPerPixel,
                   bytesPerRow: bytesPerRow,
                   space: colorSpaceRef,
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: CGColorRenderingIntent.perceptual)
}
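A detail worth flagging in this property (an observation, not a confirmed diagnosis of the crash): CGDataProvider(dataInfo:data:size:releaseData:) does not copy the bytes, and passing a Swift [UInt8] array where an UnsafeRawPointer is expected only pins the array for the duration of that one call, so the provider can later read memory that has been freed or moved. A sketch of a variant that gives the provider its own copy of the bytes:
// Hypothetical variant: back the provider with a copied Data value so the
// bytes are owned by the provider and outlive this computed property.
let provider = CGDataProvider(data: Data(data) as CFData)!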
My app crashes in this function when I start drawing images to the context frequently:
context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
If I use a number of images from the bundle and create the video with this code, it works fine. When I use CGImages created from [UInt8] data, it starts crashing after writing 3-4 images.
func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
    autoreleasepool {
        let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                      kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
        var pxbuffer: CVPixelBuffer?
        let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
        let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
        let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
        assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
        CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        assert(context != nil, "context is nil")
        context!.concatenate(CGAffineTransform.identity)
        context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
        CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pxbuffer
    }
}
Here is the code I am using to create the video from the array of images:
typealias CXEMovieMakerCompletion = (URL) -> Void
typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?

public class CXEImagesToVideo: NSObject {
    var assetWriter: AVAssetWriter!
    var writeInput: AVAssetWriterInput!
    var bufferAdapter: AVAssetWriterInputPixelBufferAdaptor!
    var videoSettings: [String: Any]!
    var frameTime: CMTime!
    var fileURL: URL!
    var completionBlock: CXEMovieMakerCompletion?
    var movieMakerUIImageExtractor: CXEMovieMakerUIImageExtractor?

    public class func videoSettings(codec: String, width: Int, height: Int) -> [String: Any] {
        if Int(width) % 16 != 0 {
            print("warning: video settings width must be divisible by 16")
        }
        let videoSettings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                            AVVideoWidthKey: width,
                                            AVVideoHeightKey: height]
        return videoSettings
    }

    public init(videoSettings: [String: Any], frameTime: CMTime) {
        super.init()
        self.frameTime = frameTime
        let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let tempPath = paths[0] + "/exprotvideo1.mp4"
        if FileManager.default.fileExists(atPath: tempPath) {
            guard (try? FileManager.default.removeItem(atPath: tempPath)) != nil else {
                print("remove path failed")
                return
            }
        }
        self.fileURL = URL(fileURLWithPath: tempPath)
        self.assetWriter = try! AVAssetWriter(url: self.fileURL, fileType: AVFileType.mp4)
        self.videoSettings = videoSettings
        self.writeInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
        assert(self.assetWriter.canAdd(self.writeInput), "add failed")
        self.assetWriter.add(self.writeInput)
        let bufferAttributes: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
        self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
        self.frameTime = CMTimeMake(value: 1, timescale: 10)
    }

    func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion) {
        self.createMovieFromSource(images: urls as [AnyObject], extractor: { (inputObject: AnyObject) -> UIImage? in
            return UIImage(data: try! Data(contentsOf: inputObject as! URL)) }, withCompletion: withCompletion)
    }

    func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion) {
        DispatchQueue.main.async {
            self.createMovieFromSource(images: images, extractor: { (inputObject: AnyObject) -> UIImage? in
                return inputObject as? UIImage }, withCompletion: withCompletion)
        }
    }

    func imageFromLayer(layer: CALayer) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(layer.frame.size, layer.isOpaque, 0)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return outputImage!
    }

    func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion) {
        self.completionBlock = withCompletion
        self.assetWriter.startWriting()
        self.assetWriter.startSession(atSourceTime: CMTime.zero)
        let mediaInputQueue = DispatchQueue.init(label: "Main") // DispatchQueue(label: "mediaInputQueue")
        var i = 0
        let frameNumber = images.count
        self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue) {
            while true {
                if i >= frameNumber {
                    break
                }
                if self.writeInput.isReadyForMoreMediaData {
                    var sampleBuffer: CVPixelBuffer?
                    autoreleasepool {
                        let temp = images[i]
                        let img = extractor(temp)
                        if img == nil {
                            i += 1
                            print("Warning: could not extract one of the frames")
                            //continue
                        }
                        sampleBuffer = self.newPixelBufferFrom(cgImage: temp.cgImage!)
                    }
                    if sampleBuffer != nil {
                        if i == 0 {
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: CMTime.zero)
                        } else {
                            let value = i - 1
                            let lastTime = CMTimeMake(value: Int64(value), timescale: self.frameTime.timescale)
                            let presentTime = CMTimeAdd(lastTime, self.frameTime)
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
                        }
                        i = i + 1
                    }
                }
            }
            self.writeInput.markAsFinished()
            self.assetWriter.finishWriting {
                DispatchQueue.main.sync {
                    self.completionBlock!(self.fileURL)
                }
            }
        }
    }

    func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
        autoreleasepool {
            let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                          kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
            var pxbuffer: CVPixelBuffer?
            let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
            let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
            let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
            assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
            CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
            let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
            let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
            // CGImageAlphaInfo.noneSkipFirst.rawValue
            assert(context != nil, "context is nil")
            // context?.clear(CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
            context!.concatenate(CGAffineTransform.identity)
            context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
            CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            return pxbuffer
        }
    }
}
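A side note on the writer path (an assumption about memory pressure, not a verified fix): newPixelBufferFrom allocates a brand-new CVPixelBuffer for every frame with CVPixelBufferCreate. Once startWriting() has succeeded, the adaptor exposes a recycling pool, and drawing into pooled buffers keeps per-frame allocations bounded. A sketch:
// Sketch: pull buffers from the adaptor's pool instead of CVPixelBufferCreate.
// bufferAdapter.pixelBufferPool is non-nil only after startWriting() succeeds.
var pooledBuffer: CVPixelBuffer?
if let pool = self.bufferAdapter.pixelBufferPool,
   CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pooledBuffer) == kCVReturnSuccess {
    // lock the buffer, draw the CGImage into it, unlock, then append as before
}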

Distorted video output from CGBitmapContext and CVPixelBuffer

I've been trying to debug (for days now) this code, which produces a video from CGImages. The CGImages are actually created from CGBitmapContexts that I draw into in the app code. I've simplified it here to just draw a diagonal yellow line and write a number of frames of that static image. Every frame of the video I find at the written path is the same, and it is distorted.
import Foundation
import CoreGraphics
import CoreMedia
import QuartzCore
import UIKit
import AVFoundation

func exportVideo(
    width: Int = 500,
    height: Int = 500,
    numberOfFrames: Int = 100
) {
    let vidURL = NSURL.fileURL(withPath: "/Users/me/Desktop/testVideo.mp4")
    try? FileManager.default.removeItem(at: vidURL)
    let settings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecType.h264,
        AVVideoWidthKey: width,
        AVVideoHeightKey: height
    ]
    let assetWriter = try! AVAssetWriter(url: vidURL, fileType: .m4v)
    let writerInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: settings)
    assetWriter.add(writerInput)
    let queue = DispatchQueue.global(qos: .background)
    writerInput.expectsMediaDataInRealTime = false
    let inputAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: writerInput, sourcePixelBufferAttributes: nil)
    assetWriter.startWriting()
    assetWriter.startSession(atSourceTime: CMTime.zero)
    writerInput.requestMediaDataWhenReady(on: queue) {
        for i in 0..<numberOfFrames where writerInput.isReadyForMoreMediaData {
            guard let buffer = newPixelBufferFrom(width: width, height: height) else {
                fatalError()
            }
            inputAdaptor.append(
                buffer,
                withPresentationTime: CMTime(seconds: Double(i), preferredTimescale: CMTimeScale(10))
            )
        }
        writerInput.markAsFinished()
        assetWriter.finishWriting { }
    }
}
private func newPixelBufferFrom(
    width: Int,
    height: Int
) -> CVPixelBuffer? {
    let options: [String: Any] = [
        kCVPixelBufferCGImageCompatibilityKey as String: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: true
    ]
    var pxbuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_32ARGB,
                                     options as CFDictionary?,
                                     &pxbuffer)
    assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
    CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    guard let context = CGContext(
        data: pxdata,
        width: width,
        height: height,
        bitsPerComponent: 8,
        bytesPerRow: 4 * width,
        space: rgbColorSpace,
        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
    ) else {
        fatalError()
    }
    context.setStrokeColor(UIColor.yellow.cgColor)
    context.setLineWidth(5)
    context.move(to: .init(x: 0, y: 0))
    context.addLine(to: .init(x: width, y: height))
    context.strokePath()
    CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
    return pxbuffer
}
So I stumbled across "the answer" to my problem here.
It turns out that the width and height need to be a multiple of 4. I'll only say that I hope this post helps a future poor soul, as the error codes, warnings, API members, and docs totally failed to help me.
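For context, a plausible mechanism behind the multiple-of-4 requirement (offered as an explanation, not verified against this exact code): CoreVideo pads each pixel-buffer row to an alignment boundary, so the buffer's real bytesPerRow can exceed 4 * width. A CGContext created with bytesPerRow: 4 * width then writes rows at a different stride than the encoder reads them, which shears the image in exactly this way. Querying the buffer for its actual stride avoids depending on the dimensions:
// Sketch: use the pixel buffer's reported (padded) row stride rather than
// assuming a packed 4 * width layout when creating the CGContext.
let context = CGContext(
    data: CVPixelBufferGetBaseAddress(pxbuffer!),
    width: width,
    height: height,
    bitsPerComponent: 8,
    bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), // padded stride, not 4 * width
    space: CGColorSpaceCreateDeviceRGB(),
    bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue
)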

Swift | Creating video out of image array

I am building an app where the user can make a video out of a few images.
I took the code from this post: Making video from UIImage array with different transition animations
Everything is working fine, but I have one big problem that I don't know how to solve: when the video is created, the image does not fill the whole screen, and sometimes it is rotated to one side or cut in half.
(The first image is the original; the second is a frame of the created video showing the problems.)
I really need some help.
var outputSize = CGSize(width: 1920 , height: 1280)
var imagesPerSecond: TimeInterval = 0
var fps: Int32 = 0
var selectedPhotosArray = [UIImage]()
var imageArrayToVideoURL = NSURL()
var asset: AVAsset!
var imageCount = 1
My code:
func buildVideoFromImageArray() {
    for image in arrayOfImages {
        selectedPhotosArray.append(image)
    }
    imageArrayToVideoURL = NSURL(fileURLWithPath: NSHomeDirectory() + "/Documents/video1.MP4")
    removeFileAtURLIfExists(url: imageArrayToVideoURL)
    guard let videoWriter = try? AVAssetWriter(outputURL: imageArrayToVideoURL as URL, fileType: AVFileType.mp4) else {
        fatalError("AVAssetWriter error")
    }
    let outputSettings = [AVVideoCodecKey: AVVideoCodecType.h264, AVVideoWidthKey: NSNumber(value: Float(outputSize.width)), AVVideoHeightKey: NSNumber(value: Float(outputSize.height))] as [String: Any]
    guard videoWriter.canApply(outputSettings: outputSettings, forMediaType: AVMediaType.video) else {
        fatalError("Negative: Can't apply the output settings...")
    }
    let videoWriterInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: outputSettings)
    let sourcePixelBufferAttributesDictionary = [kCVPixelBufferPixelFormatTypeKey as String: NSNumber(value: kCVPixelFormatType_32ARGB), kCVPixelBufferWidthKey as String: NSNumber(value: Float(outputSize.width)), kCVPixelBufferHeightKey as String: NSNumber(value: Float(outputSize.height))]
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: videoWriterInput, sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
    if videoWriter.canAdd(videoWriterInput) {
        videoWriter.add(videoWriterInput)
    }
    if videoWriter.startWriting() {
        let zeroTime = CMTimeMake(value: Int64(imagesPerSecond), timescale: self.fps)
        videoWriter.startSession(atSourceTime: zeroTime)
        assert(pixelBufferAdaptor.pixelBufferPool != nil)
        let media_queue = DispatchQueue(label: "mediaInputQueue")
        videoWriterInput.requestMediaDataWhenReady(on: media_queue, using: { () -> Void in
            //let fps: Int32 = 1
            let framePerSecond: Int64 = Int64(self.imagesPerSecond)
            let frameDuration = CMTimeMake(value: Int64(self.imagesPerSecond), timescale: self.fps)
            var frameCount: Int64 = 0
            var appendSucceeded = true
            while !self.selectedPhotosArray.isEmpty { // runs as long as there is something left in the array
                if videoWriterInput.isReadyForMoreMediaData {
                    let nextPhoto = self.selectedPhotosArray.remove(at: 0) // the photo is removed from selectedPhotosArray
                    let lastFrameTime = CMTimeMake(value: frameCount * framePerSecond, timescale: self.fps)
                    let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)
                    var pixelBuffer: CVPixelBuffer? = nil
                    let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdaptor.pixelBufferPool!, &pixelBuffer)
                    if let pixelBuffer = pixelBuffer, status == 0 {
                        let managedPixelBuffer = pixelBuffer
                        CVPixelBufferLockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                        let data = CVPixelBufferGetBaseAddress(managedPixelBuffer)
                        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
                        let context = CGContext(data: data, width: Int(self.outputSize.width), height: Int(self.outputSize.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(managedPixelBuffer), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
                        context!.clear(CGRect(x: 0, y: 0, width: CGFloat(self.outputSize.width), height: CGFloat(self.outputSize.height)))
                        let horizontalRatio = CGFloat(self.outputSize.width) / nextPhoto.size.width
                        let verticalRatio = CGFloat(self.outputSize.height) / nextPhoto.size.height
                        //let aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
                        let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
                        let newSize = CGSize(width: nextPhoto.size.width * aspectRatio, height: nextPhoto.size.height * aspectRatio)
                        let x = newSize.width < self.outputSize.width ? (self.outputSize.width - newSize.width) / 2 : 0
                        let y = newSize.height < self.outputSize.height ? (self.outputSize.height - newSize.height) / 2 : 0
                        context?.draw(nextPhoto.cgImage!, in: CGRect(x: x, y: y, width: newSize.width, height: newSize.height))
                        CVPixelBufferUnlockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                        appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
                    } else {
                        print("Failed to allocate pixel buffer")
                        appendSucceeded = false
                    }
                }
                if !appendSucceeded {
                    break
                }
                frameCount += 1
            }
            videoWriterInput.markAsFinished()
            videoWriter.finishWriting { () -> Void in
                print("-----video1 url = \(self.imageArrayToVideoURL)")
                //self.asset = AVAsset(url: self.imageArrayToVideoURL as URL)
                PHPhotoLibrary.shared().performChanges({
                    PHAssetChangeRequest.creationRequestForAssetFromVideo(atFileURL: self.imageArrayToVideoURL as URL)
                }) { saved, error in
                    if saved {
                        let fetchOptions = PHFetchOptions()
                        fetchOptions.sortDescriptors = [NSSortDescriptor(key: "creationDate", ascending: false)]
                        let fetchResult = PHAsset.fetchAssets(with: .video, options: fetchOptions).firstObject
                        // fetchResult is your latest video PHAsset
                        // To fetch the latest image, replace .video with .image
                    }
                }
            }
        })
    }
}

func removeFileAtURLIfExists(url: NSURL) {
    if let filePath = url.path {
        let fileManager = FileManager.default
        if fileManager.fileExists(atPath: filePath) {
            do {
                try fileManager.removeItem(atPath: filePath)
            } catch let error as NSError {
                print("Couldn't remove existing destination file: \(error)")
            }
        }
    }
}
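One likely contributor to the rotation and cropping (a hypothesis based on the code above, not a confirmed diagnosis): nextPhoto.cgImage discards the UIImage's imageOrientation, so camera photos whose pixels are stored rotated come out sideways, and the aspect-ratio math then runs on swapped width/height values. Rendering the UIImage first bakes the orientation into the pixels; a sketch using UIGraphicsImageRenderer:
// Sketch: normalize orientation by redrawing the UIImage before taking cgImage.
func normalizedImage(_ image: UIImage) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: image.size)) // applies imageOrientation
    }
}
// then: context?.draw(normalizedImage(nextPhoto).cgImage!, in: ...)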

What is wrong with my Swift VideoWriter code?

I want to generate an MPEG video from a few images (as frames).
After I finished writing the video file, I tried saving it to Photos, but iOS considers it incompatible:
let compatible = AVAsset(url: video_url).isCompatibleWithSavedPhotosAlbum
print("COMPATIBILITY", compatible) // false
Then I tried creating an AVPlayer to play the video, and it fails to play, so the video file must be corrupt somehow.
I reviewed my code closely but couldn't spot the problem. Please help.
Here is my code:
class VideoWriter {
    var url: URL?
    var assetWriter: AVAssetWriter?

    init(url: URL) {
        self.url = url
        do {
            try self.assetWriter = AVAssetWriter(url: self.url!, fileType: AVFileTypeMPEG4)
        } catch {
            print("Fail to create assetWriter")
        }
    }

    func writeFrames(frames: [UIImage], finishedHandler: @escaping () -> Void) {
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecH264,
            AVVideoWidthKey: 480, //CANVAS_SIZE * 4 / 3,
            AVVideoHeightKey: 360 //CANVAS_SIZE
        ]
        let assetWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: settings)
        self.assetWriter?.add(assetWriterInput)
        let bufferAttributes: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
        let bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: assetWriterInput, sourcePixelBufferAttributes: bufferAttributes)
        let frameTime = CMTimeMake(1, 30)
        self.assetWriter?.startWriting()
        self.assetWriter?.startSession(atSourceTime: kCMTimeZero)
        // write the frames here
        let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
        var i = 0
        let frameNumber = frames.count
        assetWriterInput.requestMediaDataWhenReady(on: mediaInputQueue) {
            while true {
                if i >= frameNumber {
                    break
                }
                if assetWriterInput.isReadyForMoreMediaData {
                    let image = frames[i]
                    print("writing frame ", i)
                    let pixelBuffer = self.newPixelBufferFrom(cgImage: image.cgImage!)
                    var time: CMTime
                    if i == 0 {
                        time = kCMTimeZero
                    } else {
                        let value = i - 1
                        let lastTime = CMTimeMake(Int64(value), frameTime.timescale)
                        time = CMTimeAdd(lastTime, frameTime)
                    }
                    bufferAdapter.append(pixelBuffer!, withPresentationTime: time)
                    i += 1
                }
            }
            assetWriterInput.markAsFinished()
            self.assetWriter?.finishWriting(completionHandler: {
                Thread.sleep(forTimeInterval: 0.5)
                DispatchQueue.main.sync {
                    print("Completed?", self.assetWriter?.status == AVAssetWriterStatus.completed)
                    finishedHandler()
                }
            })
        }
    }

    func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
        let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                      kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
        var pxbuffer: CVPixelBuffer?
        let frameWidth = 480 //CANVAS_SIZE
        let frameHeight = 360 //CANVAS_SIZE
        let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
        // TODO: throw exception in case of error, don't use assert
        assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
        CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        // TODO: throw exception in case of error, don't use assert
        assert(context != nil, "context is nil")
        context!.concatenate(CGAffineTransform.identity)
        context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
        CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pxbuffer
    }
}
By the way, I didn't add an audio input; is that necessary for an MPEG file?
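Two observations, offered as hypotheses rather than a confirmed diagnosis: first, an audio track is not required for a valid MP4, so the missing audio input is unlikely to be the cause. Second, the draw call in newPixelBufferFrom renders the CGImage at its own pixel size rather than at the 480x360 buffer size, so any frame whose dimensions differ leaves part of the buffer uninitialized. Drawing into the full frame rect removes that mismatch:
// Sketch: scale each source image to the buffer's full extent so the encoder
// never sees uninitialized pixels at the edges of the frame.
context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))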

Recording a video filtered with CIFilter is too slow

I am trying to crop and upscale a video while recording it.
I use AVAssetWriter to save the video and CIFilters for live filtering.
The problem is that recording is really slow.
Details (the first six steps are just boilerplate code, so feel free to skip them):
1. Set up the devices, e.g. the back camera and the mic.
2. Set up the CIContext for GPU usage, as described here:
func setupContext() {
    // setup the GLKView for video/image preview
    OGLContext = EAGLContext(api: EAGLRenderingAPI.openGLES2)
    ciContext = CIContext.init(eaglContext: OGLContext!)
}
3. Set up the GLKView:
func setUpGLPreview() {
    glView.context = OGLContext!
    glView.enableSetNeedsDisplay = false
    glView.transform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)
    glView.frame = CGRect(x: 0.0, y: 20.0, width: view.bounds.width, height: view.bounds.width * 9/16)
    glView.bounds = CGRect(x: 0.0, y: 0.0, width: view.bounds.width * 9/16, height: view.bounds.width)
    glView.contentMode = .scaleAspectFit
    view.updateConstraintsIfNeeded()
    glView.bindDrawable()
    previewBounds = CGRect(x: 0, y: 0, width: glView.drawableWidth, height: glView.drawableHeight)
}
4. Configure the AVCaptureSession.
5. Configure the inputs for the AssetWriter:
func configureVideoSessionWithAsset() {
    let compressionSettings: [String: Any] = [
        AVVideoAverageBitRateKey: NSNumber(value: 20000000),
        AVVideoMaxKeyFrameIntervalKey: NSNumber(value: 1),
        AVVideoProfileLevelKey: AVVideoProfileLevelH264Baseline41
    ]
    let videoSettings: [String: Any] = [
        AVVideoCodecKey: AVVideoCodecH264,
        AVVideoCompressionPropertiesKey: compressionSettings,
        AVVideoWidthKey: 1080,
        AVVideoHeightKey: 1920,
        AVVideoScalingModeKey: AVVideoScalingModeResizeAspectFill
    ]
    assetWriterInputCamera = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    let sourcePixelBufferAttributesDictionary: [String: Any] = [
        String(kCVPixelBufferPixelFormatTypeKey): kCVPixelFormatType_32BGRA,
        String(kCVPixelBufferWidthKey): 1080,
        String(kCVPixelBufferHeightKey): 1920,
        String(kCVPixelFormatOpenGLESCompatibility): kCFBooleanTrue
    ]
    bufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: assetWriterInputCamera!, sourcePixelBufferAttributes: sourcePixelBufferAttributesDictionary)
    assetWriterInputCamera?.expectsMediaDataInRealTime = true
    let transform = CGAffineTransform(rotationAngle: CGFloat.pi / 2)
    assetWriterInputCamera?.transform = transform
    let audioSettings: [String: Any] = [
        AVFormatIDKey: NSInteger(kAudioFormatMPEG4AAC),
        AVNumberOfChannelsKey: 2,
        AVSampleRateKey: NSNumber(value: 44100.0)
    ]
    assetWriterInputAudio = AVAssetWriterInput(mediaType: AVMediaTypeAudio, outputSettings: audioSettings)
    assetWriterInputAudio?.expectsMediaDataInRealTime = true
}
6. Create a new writer each time capturing starts:
func createNewWriter() {
    let outputPath = NSTemporaryDirectory() + Date().description + "output" + "\(state.currentFileFormat.rawValue)"
    let outputFileURL = URL(fileURLWithPath: outputPath)
    let fileType = (state.currentFileFormat == .mov) ? AVFileTypeQuickTimeMovie : AVFileTypeMPEG4
    self.assetWriter = try? AVAssetWriter(outputURL: outputFileURL, fileType: fileType)
    if let videoInput = assetWriterInputCamera {
        self.assetWriter?.add(videoInput)
    }
    if let audioInput = assetWriterInputAudio {
        self.assetWriter?.add(audioInput)
    }
}
7. Finally, record and filter:
var cropFilter = CIFilter(name: "CICrop")
var scaleFilter = CIFilter(name: "CILanczosScaleTransform")

func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    sessionQueue.async {
        guard let imgBuffer = CMSampleBufferGetImageBuffer(sampleBuffer), let previewBounds = self.previewBounds else {
            return
        }
        // recording audio
        if self.state.isRecording {
            if let audio = self.assetWriterInputAudio, connection.audioChannels.count > 0 && audio.isReadyForMoreMediaData {
                self.writerQueue.async {
                    audio.append(sampleBuffer)
                }
                return
            }
        }
        let sourceImage = CIImage(cvPixelBuffer: imgBuffer)
        // setting drawRect
        let sourceAspect = sourceImage.extent.size.width / sourceImage.extent.size.height
        let previewAspect = previewBounds.size.width / previewBounds.size.height
        var drawRect = sourceImage.extent
        if sourceAspect > previewAspect {
            drawRect.size.width = drawRect.size.height * previewAspect
        } else {
            drawRect.size.height = drawRect.size.width / previewAspect
        }
        // not recording, just showing the preview
        if self.assetWriter == nil || self.assetWriter?.status == AVAssetWriterStatus.completed {
            self.glView.bindDrawable()
            if self.OGLContext != EAGLContext.current() {
                EAGLContext.setCurrent(self.OGLContext)
            }
            glClearColor(0, 0, 0, 1.0)
            glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
            glEnable(GLenum(GL_BLEND))
            glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))
            self.ciContext?.draw(sourceImage, in: previewBounds, from: drawRect)
            self.glView.display()
        } else {
            if !self.state.isRecording {
                return
            }
            let outputResolution = (self.state.currentCamera == .back) ? CGRect(x: 0, y: 0, width: 604, height: 1080) : CGRect(x: 0, y: 0, width: 405, height: 720)
            cropFilter?.setValue(sourceImage, forKey: kCIInputImageKey)
            cropFilter?.setValue(CIVector(cgRect: CGRect(x: 0, y: 0, width: outputResolution.width, height: outputResolution.height)), forKey: "inputRectangle")
            var filteredImage: CIImage? = sourceImage.cropping(to: outputResolution)
            scaleFilter?.setValue(filteredImage, forKey: kCIInputImageKey)
            scaleFilter?.setValue(1080 / filteredImage!.extent.width, forKey: kCIInputScaleKey)
            filteredImage = scaleFilter?.outputImage
            // checking if we started writing
            if self.assetWriter?.status == AVAssetWriterStatus.unknown {
                if self.assetWriter?.startWriting != nil {
                    self.assetWriter?.startWriting()
                    self.assetWriter?.startSession(atSourceTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
                }
            }
            let time = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
            var newPixelBuffer: CVPixelBuffer? = nil
            guard let bufferPool = self.bufferAdaptor?.pixelBufferPool else {
                return
            }
            if let filteredImage = filteredImage {
                CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, bufferPool, &newPixelBuffer)
                self.ciContext?.render(filteredImage, to: newPixelBuffer!, bounds: filteredImage.extent, colorSpace: nil)
                self.glView.bindDrawable()
                glClearColor(0, 0, 0, 1.0)
                glClear(GLbitfield(GL_COLOR_BUFFER_BIT))
                glEnable(GLenum(GL_BLEND))
                glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))
                self.ciContext?.draw(filteredImage, in: previewBounds, from: filteredImage.extent)
                self.glView.display()
            }
            if let camera = self.assetWriterInputCamera, camera.isReadyForMoreMediaData {
                self.writerQueue.async {
                    self.bufferAdaptor?.append(newPixelBuffer!, withPresentationTime: time)
                }
            }
        }
    }
}
The preview works fine before capturing starts. During capturing, the frame rate drops dramatically, to an unusable level.
The question is: what am I doing wrong in the last step?
CIImage filtering is supposed to be very fast thanks to GPU usage, but instead everything freezes.
Please note:
I know about GPUImage, but I think it's overkill for this task;
I've checked out Apple's CIFunHouse and RosyWriter projects. The first helped me a lot in setting everything up; the second uses shaders to filter video, which is a bit complicated for me.
Any help and comments are appreciated!
If you have any questions about the code, please ask!
Thanks!
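For what it's worth, one pattern in the last step stands out (a hypothesis, not a confirmed diagnosis): every recorded frame is filtered and then rendered twice, once into the writer's pixel buffer and once into the GLKView, and the CICrop-plus-CILanczosScaleTransform chain is comparatively heavy. Cropping a CIImage is lazy and essentially free, and an affine scale is usually much cheaper than Lanczos at video sizes. A sketch of a lighter per-frame path (using the modern CIImage methods, so the names differ slightly from the Swift 3 code above):
// Sketch: crop lazily and scale with an affine transform instead of Lanczos,
// then render once into a pooled buffer that both the writer and preview use.
let cropped = sourceImage.cropped(to: outputResolution)      // lazy, no pixels touched yet
let scale = 1080 / cropped.extent.width
let scaled = cropped.transformed(by: CGAffineTransform(scaleX: scale, y: scale))

var outBuffer: CVPixelBuffer?
CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, bufferPool, &outBuffer)
ciContext?.render(scaled, to: outBuffer!, bounds: scaled.extent, colorSpace: nil)
// Append outBuffer to the adaptor, and draw this same buffer for the preview
// instead of filtering and rendering the image a second time.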
