How do I improve the performance of converting UIImage to CVPixelBuffer?

I have an array with a lot of UIImage objects, and I use the method mentioned in the link to export the image array to a video. Everything works, but the performance of converting the UIImage array to CVPixelBuffer is terrible:
private func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
    let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                  kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
    var pxbuffer: CVPixelBuffer?
    let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
    let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
    let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight,
                                     kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
    assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")

    CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight,
                            bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!),
                            space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    assert(context != nil, "context is nil")

    context!.concatenate(CGAffineTransform.identity)
    context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
    CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
    return pxbuffer
}
Could you give me some ideas?
Thanks!

I solved my problem.
My app is a video editing application like iMovie: I need to convert a single image to a video and make the image (now a video) movable.
The UIImage array is, in essence, derived from a single UIImage. Therefore, instead of calling newPixelBufferFrom repeatedly, I call it only once. The following code is faster:
var sampleBuffer: CVPixelBuffer?
var pxDataBuffer: CVPixelBuffer?

let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                              kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]

let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
let originHeight = frameWidth * img!.cgImage!.height / img!.cgImage!.width
let heightDifference = originHeight - frameHeight

let frameCounts = self.duration * Int(self.frameTime.timescale)
let spacingOfHeight = heightDifference / frameCounts

// Render the source image into a pixel buffer only once.
sampleBuffer = self.newPixelBufferFrom(cgImage: img!.cgImage!)
assert(sampleBuffer != nil)

var presentTime = CMTimeMake(1, self.frameTime.timescale)
var stepRows = 0

for i in 0..<frameCounts {
    CVPixelBufferLockBaseAddress(sampleBuffer!, CVPixelBufferLockFlags(rawValue: 0))

    let pointer = CVPixelBufferGetBaseAddress(sampleBuffer!)
    var pxData = pointer?.assumingMemoryBound(to: UInt8.self)
    // Each frame is a window into the big buffer, shifted down by stepRows rows.
    let bytes = CVPixelBufferGetBytesPerRow(sampleBuffer!) * stepRows
    pxData = pxData?.advanced(by: bytes)

    // Wrap the shifted bytes in a new pixel buffer without copying them.
    let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, frameWidth, frameHeight,
                                              kCVPixelFormatType_32ARGB, pxData!,
                                              CVPixelBufferGetBytesPerRow(sampleBuffer!),
                                              nil, nil, options as CFDictionary?, &pxDataBuffer)
    assert(status == kCVReturnSuccess && pxDataBuffer != nil, "newPixelBuffer failed")

    CVPixelBufferUnlockBaseAddress(sampleBuffer!, CVPixelBufferLockFlags(rawValue: 0))

    while !self.writeInput.isReadyForMoreMediaData {
        usleep(100)
    }

    if self.writeInput.isReadyForMoreMediaData {
        if i == 0 {
            // zeroTime is defined elsewhere in the class (presumably kCMTimeZero).
            self.bufferAdapter.append(pxDataBuffer!, withPresentationTime: zeroTime)
        } else {
            self.bufferAdapter.append(pxDataBuffer!, withPresentationTime: presentTime)
        }
        presentTime = CMTimeAdd(presentTime, self.frameTime)
    }

    stepRows += spacingOfHeight
}
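
A further optimization worth trying (my addition, not part of the original answer): once writing has started, the AVAssetWriterInputPixelBufferAdaptor exposes a pixelBufferPool that recycles buffers, avoiding a fresh CVPixelBufferCreate per frame. A sketch, assuming bufferAdapter was created with sourcePixelBufferAttributes as in the code above; newPixelBufferFromPool is a hypothetical helper name:

// A minimal sketch: draw each frame into a buffer recycled from the
// adaptor's pool instead of allocating a fresh CVPixelBuffer per frame.
// Assumes assetWriter.startWriting()/startSession(...) have already run,
// because pixelBufferPool is nil before writing starts.
private func newPixelBufferFromPool(cgImage: CGImage) -> CVPixelBuffer? {
    guard let pool = self.bufferAdapter.pixelBufferPool else { return nil }
    var pxbuffer: CVPixelBuffer?
    let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pxbuffer)
    guard status == kCVReturnSuccess, let buffer = pxbuffer else { return nil }

    CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    if let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                               width: CVPixelBufferGetWidth(buffer),
                               height: CVPixelBufferGetHeight(buffer),
                               bitsPerComponent: 8,
                               bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                               space: CGColorSpaceCreateDeviceRGB(),
                               bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) {
        context.draw(cgImage, in: CGRect(x: 0, y: 0,
                                         width: CVPixelBufferGetWidth(buffer),
                                         height: CVPixelBufferGetHeight(buffer)))
    }
    CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    return buffer
}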

Related

How to convert a UIImage to a CVPixelBuffer 32BGRA for mediapipe?

I am using mediapipe to develop an iOS application. I need to feed image data to mediapipe, but mediapipe only accepts 32BGRA CVPixelBuffer.
How can I convert a UIImage to a 32BGRA CVPixelBuffer?
I am using this code:
let frameSize = CGSize(width: self.cgImage!.width, height: self.cgImage!.height)
var pixelBuffer: CVPixelBuffer? = nil
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(frameSize.width), Int(frameSize.height),
                                 kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
if status != kCVReturnSuccess {
    return nil
}

CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
let data = CVPixelBufferGetBaseAddress(pixelBuffer!)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
let context = CGContext(data: data, width: Int(frameSize.width), height: Int(frameSize.height),
                        bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
                        space: rgbColorSpace, bitmapInfo: bitmapInfo.rawValue)
context?.draw(self.cgImage!, in: CGRect(x: 0, y: 0, width: self.cgImage!.width, height: self.cgImage!.height))
CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
return pixelBuffer
but mediapipe throws an error: mediapipe/0 (11): signal SIGABRT.
If I use AVCaptureVideoDataOutput, everything works fine.
By the way, I am using Swift.
Maybe you can try this. Also, I have a question for you: do you know how to use a static image for face recognition in mediapipe? If you know, please tell me. Thank you.
func pixelBufferFromCGImage(image: CGImage) -> CVPixelBuffer? {
    let options = [
        kCVPixelBufferCGImageCompatibilityKey as String: NSNumber(value: true),
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: NSNumber(value: true),
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ] as CFDictionary

    let size: CGSize = .init(width: image.width, height: image.height)
    var pxbuffer: CVPixelBuffer? = nil
    let status = CVPixelBufferCreate(
        kCFAllocatorDefault,
        Int(size.width),
        Int(size.height),
        kCVPixelFormatType_32BGRA,
        options,
        &pxbuffer)
    guard status == kCVReturnSuccess, let pxbuffer = pxbuffer else { return nil }

    CVPixelBufferLockBaseAddress(pxbuffer, [])
    guard let pxdata = CVPixelBufferGetBaseAddress(pxbuffer) else {
        CVPixelBufferUnlockBaseAddress(pxbuffer, [])
        return nil
    }

    // 32BGRA pixels are little-endian with alpha first, hence this bitmapInfo.
    let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
    guard let context = CGContext(data: pxdata, width: Int(size.width), height: Int(size.height),
                                  bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer),
                                  space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: bitmapInfo.rawValue) else {
        CVPixelBufferUnlockBaseAddress(pxbuffer, [])
        return nil
    }
    context.concatenate(CGAffineTransformIdentity)
    context.draw(image, in: .init(x: 0, y: 0, width: size.width, height: size.height))
    // CGContextRelease is unavailable in Swift: Core Foundation objects are
    // automatically memory managed, so no manual release is needed here.
    CVPixelBufferUnlockBaseAddress(pxbuffer, [])
    return pxbuffer
}
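
A small usage sketch (my addition; makeMediapipeInput is a hypothetical name): the conversion needs the UIImage's cgImage, which is nil for CIImage-backed images.

func makeMediapipeInput(from inputImage: UIImage) -> CVPixelBuffer? {
    // UIImage.cgImage is nil for CIImage-backed images; render through
    // UIGraphicsImageRenderer first in that case.
    guard let cg = inputImage.cgImage else { return nil }
    // The result is a kCVPixelFormatType_32BGRA buffer, as mediapipe expects.
    return pixelBufferFromCGImage(image: cg)
}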

iOS app crashes when frequently drawing a CGImage created from [UInt8] data into a CGContext

I am currently developing a module in which I need to create a video from an array of CGImage. My application crashes at some point during that processing, and I am not able to figure out the exact reason behind the crash.
Can anyone tell me whether I am going in the right direction? Should I convert [CGImage] to video, or do I need to choose another approach?
I also tried converting the CGImage to UIImage and creating the video from that, but I still face the same issue.
I receive the image data as [UInt8], so what would be the correct approach to convert the image format and create the video?
To create a video from [CGImage], I follow the approach below: I convert the [UInt8] data to a CGImage using CGDataProvider, convert the CGImage to a UIImage, collect the UIImages into an array, and then merge the images and create the video.
Here is my code to convert the data to a CGImage:
private(set) var data: [UInt8]

var cgImage: CGImage? {
    let colorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let bitsPerComponent = 8
    let bitsPerPixel = channels * bitsPerComponent
    let bytesPerRow = channels * width
    let totalBytes = height * bytesPerRow
    let bitmapInfo = CGBitmapInfo(rawValue: channels == 3 ? CGImageAlphaInfo.none.rawValue : CGImageAlphaInfo.last.rawValue)
    let provider = CGDataProvider(dataInfo: nil,
                                  data: data,
                                  size: totalBytes,
                                  releaseData: { _, _, _ in })!
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: bitsPerComponent,
                   bitsPerPixel: bitsPerPixel,
                   bytesPerRow: bytesPerRow,
                   space: colorSpaceRef,
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: CGColorRenderingIntent.perceptual)
}
My app crashes in newPixelBufferFrom (shown in the full class below) when I start frequently drawing images to the context:
context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
If I use a number of images from the bundle and create the video with this code, it works fine. When I use a CGImage created from [UInt8] data, it starts crashing after writing 3-4 images.
Here is the code I am using to create a video from an array of images:
typealias CXEMovieMakerCompletion = (URL) -> Void
typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?

public class CXEImagesToVideo: NSObject {
    var assetWriter: AVAssetWriter!
    var writeInput: AVAssetWriterInput!
    var bufferAdapter: AVAssetWriterInputPixelBufferAdaptor!
    var videoSettings: [String: Any]!
    var frameTime: CMTime!
    var fileURL: URL!
    var completionBlock: CXEMovieMakerCompletion?
    var movieMakerUIImageExtractor: CXEMovieMakerUIImageExtractor?

    public class func videoSettings(codec: String, width: Int, height: Int) -> [String: Any] {
        if Int(width) % 16 != 0 {
            print("warning: video settings width must be divisible by 16")
        }
        let videoSettings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                            AVVideoWidthKey: width,
                                            AVVideoHeightKey: height]
        return videoSettings
    }

    public init(videoSettings: [String: Any], frameTime: CMTime) {
        super.init()
        self.frameTime = frameTime
        let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let tempPath = paths[0] + "/exprotvideo1.mp4"
        if FileManager.default.fileExists(atPath: tempPath) {
            guard (try? FileManager.default.removeItem(atPath: tempPath)) != nil else {
                print("remove path failed")
                return
            }
        }
        self.fileURL = URL(fileURLWithPath: tempPath)
        self.assetWriter = try! AVAssetWriter(url: self.fileURL, fileType: AVFileType.mp4)
        self.videoSettings = videoSettings
        self.writeInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
        assert(self.assetWriter.canAdd(self.writeInput), "add failed")
        self.assetWriter.add(self.writeInput)
        let bufferAttributes: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
        self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
        self.frameTime = CMTimeMake(value: 1, timescale: 10)
    }

    func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion) {
        self.createMovieFromSource(images: urls as [AnyObject], extractor: { (inputObject: AnyObject) -> UIImage? in
            return UIImage(data: try! Data(contentsOf: inputObject as! URL))
        }, withCompletion: withCompletion)
    }

    func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion) {
        DispatchQueue.main.async {
            self.createMovieFromSource(images: images, extractor: { (inputObject: AnyObject) -> UIImage? in
                return inputObject as? UIImage
            }, withCompletion: withCompletion)
        }
    }

    func imageFromLayer(layer: CALayer) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(layer.frame.size, layer.isOpaque, 0)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return outputImage!
    }

    func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion) {
        self.completionBlock = withCompletion
        self.assetWriter.startWriting()
        self.assetWriter.startSession(atSourceTime: CMTime.zero)
        let mediaInputQueue = DispatchQueue.init(label: "Main") // DispatchQueue(label: "mediaInputQueue")
        var i = 0
        let frameNumber = images.count
        self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue) {
            while true {
                if i >= frameNumber {
                    break
                }
                if self.writeInput.isReadyForMoreMediaData {
                    var sampleBuffer: CVPixelBuffer?
                    autoreleasepool {
                        let temp = images[i]
                        let img = extractor(temp)
                        if img == nil {
                            i += 1
                            print("Warning: could not extract one of the frames")
                            //continue  // note: commented out, so execution falls through even when img is nil
                        }
                        sampleBuffer = self.newPixelBufferFrom(cgImage: temp.cgImage!) // note: uses temp rather than the extracted img
                    }
                    if sampleBuffer != nil {
                        if i == 0 {
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: CMTime.zero)
                        } else {
                            let value = i - 1
                            let lastTime = CMTimeMake(value: Int64(value), timescale: self.frameTime.timescale)
                            let presentTime = CMTimeAdd(lastTime, self.frameTime)
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
                        }
                        i = i + 1
                    }
                }
            }
            self.writeInput.markAsFinished()
            self.assetWriter.finishWriting {
                DispatchQueue.main.sync {
                    self.completionBlock!(self.fileURL)
                }
            }
        }
    }

    func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
        autoreleasepool {
            let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                          kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
            var pxbuffer: CVPixelBuffer?
            let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
            let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
            let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight,
                                             kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
            assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
            CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
            let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
            let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight,
                                    bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!),
                                    space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
            assert(context != nil, "context is nil")
            // context?.clear(CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
            context!.concatenate(CGAffineTransform.identity)
            context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
            CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            return pxbuffer
        }
    }
}
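
No answer is recorded here, but a likely cause of the crash (my assumption, hedged): CGDataProvider(dataInfo:data:size:releaseData:) does not copy the bytes, and passing the Swift array bridges a pointer that is only guaranteed to stay valid for the duration of that call, so the CGImage can end up reading freed memory when it is drawn a few frames later. A sketch of a safer variant that gives Core Graphics its own copy of the pixels (cgImageCopySafe is a hypothetical name; channels, width, height, and data are the properties from the code above):

// Sketch: build the CGImage from a CFData-backed provider so Core Graphics
// owns a copy of the pixel bytes instead of borrowing the array's storage.
var cgImageCopySafe: CGImage? {
    let bitsPerComponent = 8
    let bitsPerPixel = channels * bitsPerComponent
    let bytesPerRow = channels * width
    let bitmapInfo = CGBitmapInfo(rawValue: channels == 3 ? CGImageAlphaInfo.none.rawValue
                                                          : CGImageAlphaInfo.last.rawValue)
    // Data(...) copies the bytes; the provider retains that copy.
    guard let provider = CGDataProvider(data: Data(data) as CFData) else { return nil }
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: bitsPerComponent,
                   bitsPerPixel: bitsPerPixel,
                   bytesPerRow: bytesPerRow,
                   space: CGColorSpaceCreateDeviceRGB(),
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: .perceptual)
}

Separately, note the two flagged lines in the write loop above: when the extractor returns nil the code still falls through and force-unwraps, and it draws from temp rather than the extracted img.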

Video export produces incorrect video length in Swift

I have successfully created a video from an array of images with a frame duration of 1 second for every image, but I want to use a different duration for each image, e.g. 0.1 s or 0.2 s. It does not work correctly:
while !self.selectedPhotosArray.isEmpty {
    if videoWriterInput.isReadyForMoreMediaData {
        let nextDicData = self.selectedPhotosArray.remove(at: 0)
        if let nextImage = nextDicData["img"] as? UIImage {
            var frameDuration = CMTimeMake(Int64(0 * 10000), fps)
            if let timeVl = nextDicData["time"] as? Float {
                framePerSecond = Int64(timeVl * 10000)
            }
            frameDuration = CMTimeMake(framePerSecond, fps)
            let lastFrameTime = CMTimeMake(Int64(lastTimeVl), fps)
            let presentationTime = frameCount == 0 ? lastFrameTime : CMTimeAdd(lastFrameTime, frameDuration)

            var pixelBuffer: CVPixelBuffer? = nil
            let status: CVReturn = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pixelBufferAdaptor.pixelBufferPool!, &pixelBuffer)
            if let pixelBuffer = pixelBuffer, status == 0 {
                let managedPixelBuffer = pixelBuffer
                CVPixelBufferLockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                let data = CVPixelBufferGetBaseAddress(managedPixelBuffer)
                let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
                let context = CGContext(data: data, width: Int(self.outputSize.width), height: Int(self.outputSize.height),
                                        bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(managedPixelBuffer),
                                        space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
                context!.clear(CGRect(x: 0, y: 0, width: CGFloat(self.outputSize.width), height: CGFloat(self.outputSize.height)))

                let horizontalRatio = CGFloat(self.outputSize.width) / nextImage.size.width
                let verticalRatio = CGFloat(self.outputSize.height) / nextImage.size.height
                //let aspectRatio = max(horizontalRatio, verticalRatio) // ScaleAspectFill
                let aspectRatio = min(horizontalRatio, verticalRatio) // ScaleAspectFit
                let newSize: CGSize = CGSize(width: nextImage.size.width, height: nextImage.size.height)
                let x = newSize.width < self.outputSize.width ? (self.outputSize.width - newSize.width) / 2 : 0
                let y = newSize.height < self.outputSize.height ? (self.outputSize.height - newSize.height) / 2 : 0
                context?.draw(nextImage.cgImage!, in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))

                CVPixelBufferUnlockBaseAddress(managedPixelBuffer, CVPixelBufferLockFlags(rawValue: CVOptionFlags(0)))
                appendSucceeded = pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)
            } else {
                print("Failed to allocate pixel buffer")
                appendSucceeded = false
            }
        }
    }
    if !appendSucceeded {
        break
    }
    frameCount += 1
    lastTimeVl += framePerSecond
}
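
No accepted answer is shown here, but the timing arithmetic looks like the culprit (my reading, hedged): lastTimeVl is advanced by the current frame's framePerSecond after appending, yet presentationTime is also computed as lastFrameTime + frameDuration, so each frame gets shifted by its own duration instead of starting where the previous frame ended. Keeping a single running CMTime avoids the double bookkeeping. A sketch reusing fps and the img/time dictionary keys from the code above:

// Sketch: keep one running CMTime and advance it by each frame's own
// duration after appending. The first frame starts at zero.
var presentationTime = CMTimeMake(0, fps)

for dic in self.selectedPhotosArray {
    guard let image = dic["img"] as? UIImage else { continue }
    // Per-image duration in seconds, e.g. 0.1 -> 1000 units at a 10000 timescale.
    let seconds = (dic["time"] as? Float) ?? 1.0
    let frameDuration = CMTimeMake(Int64(seconds * Float(fps)), fps)

    // ... render `image` into a pixel buffer exactly as above, then:
    // pixelBufferAdaptor.append(pixelBuffer, withPresentationTime: presentationTime)

    presentationTime = CMTimeAdd(presentationTime, frameDuration)
}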

What is wrong with my Swift VideoWriter code?

I want to generate an MPEG video from a few images (used as frames).
After I finished writing the video file, I tried saving it to Photos, but iOS considers it incompatible:
let compatible = AVAsset(url: video_url).isCompatibleWithSavedPhotosAlbum
print("COMPATIBILITY", compatible) // false
I then tried creating an AVPlayer to play the video, and it fails to play, so the video file must be corrupt somehow.
I reviewed my code closely but couldn't spot the problem. Please help.
Here is my code:
class VideoWriter {
    var url: URL?
    var assetWriter: AVAssetWriter?

    init(url: URL) {
        self.url = url
        do {
            try self.assetWriter = AVAssetWriter(url: self.url!, fileType: AVFileTypeMPEG4)
        } catch {
            print("Fail to create assetWriter")
        }
    }

    func writeFrames(frames: [UIImage], finishedHandler: @escaping () -> Void) {
        let settings: [String: Any] = [
            AVVideoCodecKey: AVVideoCodecH264,
            AVVideoWidthKey: 480, //CANVAS_SIZE * 4 / 3,
            AVVideoHeightKey: 360 //CANVAS_SIZE
        ]
        let assetWriterInput = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: settings)
        self.assetWriter?.add(assetWriterInput)
        let bufferAttributes: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
        let bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: assetWriterInput, sourcePixelBufferAttributes: bufferAttributes)
        let frameTime = CMTimeMake(1, 30)

        self.assetWriter?.startWriting()
        self.assetWriter?.startSession(atSourceTime: kCMTimeZero)

        // write the frames here
        let mediaInputQueue = DispatchQueue(label: "mediaInputQueue")
        var i = 0
        let frameNumber = frames.count
        assetWriterInput.requestMediaDataWhenReady(on: mediaInputQueue) {
            while true {
                if i >= frameNumber {
                    break
                }
                if assetWriterInput.isReadyForMoreMediaData {
                    let image = frames[i]
                    print("writing frame ", i)
                    let pixelBuffer = self.newPixelBufferFrom(cgImage: image.cgImage!)
                    var time: CMTime
                    if i == 0 {
                        time = kCMTimeZero
                    } else {
                        let value = i - 1
                        let lastTime = CMTimeMake(Int64(value), frameTime.timescale)
                        time = CMTimeAdd(lastTime, frameTime)
                    }
                    bufferAdapter.append(pixelBuffer!, withPresentationTime: time)
                    i += 1
                }
            }
            assetWriterInput.markAsFinished()
            self.assetWriter?.finishWriting(completionHandler: {
                Thread.sleep(forTimeInterval: 0.5)
                DispatchQueue.main.sync {
                    print("Completed?", self.assetWriter?.status == AVAssetWriterStatus.completed)
                    finishedHandler()
                }
            })
        }
    }

    func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
        let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                      kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
        var pxbuffer: CVPixelBuffer?
        let frameWidth = 480 //CANVAS_SIZE
        let frameHeight = 360 //CANVAS_SIZE
        let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight,
                                         kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
        // TODO: throw exception in case of error, don't use assert
        assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
        CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight,
                                bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!),
                                space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        // TODO: throw exception in case of error, don't use assert
        assert(context != nil, "context is nil")
        context!.concatenate(CGAffineTransform.identity)
        context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
        CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pxbuffer
    }
}
By the way, I didn't add an audio input; is that necessary for an MPEG file?
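
No answer is recorded here, but two things stand out (my observations, not a confirmed fix): newPixelBufferFrom draws into a rect of cgImage.width by cgImage.height rather than the 480x360 buffer, so images of any other size leave part of the buffer uninitialized, and the result of bufferAdapter.append is never checked, so a failed append (which moves the writer to .failed) goes unnoticed until finishWriting. As for audio: an audio track is not required for a playable MP4. A sketch of both checks, with a hypothetical append helper whose parameters mirror the code above:

// Sketch: draw scaled to the buffer's own size and surface append failures
// instead of silently dropping them.
func append(_ cgImage: CGImage, at time: CMTime,
            to adapter: AVAssetWriterInputPixelBufferAdaptor,
            writer: AVAssetWriter, buffer: CVPixelBuffer) -> Bool {
    CVPixelBufferLockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    let width = CVPixelBufferGetWidth(buffer)   // 480, not cgImage.width
    let height = CVPixelBufferGetHeight(buffer) // 360, not cgImage.height
    if let context = CGContext(data: CVPixelBufferGetBaseAddress(buffer),
                               width: width, height: height, bitsPerComponent: 8,
                               bytesPerRow: CVPixelBufferGetBytesPerRow(buffer),
                               space: CGColorSpaceCreateDeviceRGB(),
                               bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) {
        // Fill the whole buffer so no row is left with uninitialized bytes.
        context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    }
    CVPixelBufferUnlockBaseAddress(buffer, CVPixelBufferLockFlags(rawValue: 0))
    if !adapter.append(buffer, withPresentationTime: time) {
        // On failure the writer's status becomes .failed and error says why.
        print("append failed:", writer.error ?? "unknown")
        return false
    }
    return true
}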

I want to release the CVPixelBufferRef in Swift

I want to create a video from images, so I used the code from the reference linked below.
Link: CVPixelBufferPool Error ( kCVReturnInvalidArgument/-6661)
func writeAnimationToMovie(path: String, size: CGSize, animation: Animation) -> Bool {
    var error: NSError?
    let writer = AVAssetWriter(URL: NSURL(fileURLWithPath: path), fileType: AVFileTypeQuickTimeMovie, error: &error)
    let videoSettings = [AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: size.width, AVVideoHeightKey: size.height]
    let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input, sourcePixelBufferAttributes: nil)
    input.expectsMediaDataInRealTime = true
    writer.addInput(input)
    writer.startWriting()
    writer.startSessionAtSourceTime(kCMTimeZero)

    var buffer: CVPixelBufferRef
    var frameCount = 0
    for frame in animation.frames {
        let rect = CGRectMake(0, 0, size.width, size.height)
        let rectPtr = UnsafeMutablePointer<CGRect>.alloc(1)
        rectPtr.memory = rect
        buffer = pixelBufferFromCGImage(frame.image.CGImageForProposedRect(rectPtr, context: nil, hints: nil).takeUnretainedValue(), size)
        var appendOk = false
        var j = 0
        while !appendOk && j < 30 {
            if pixelBufferAdaptor.assetWriterInput.readyForMoreMediaData {
                let frameTime = CMTimeMake(Int64(frameCount), 10)
                appendOk = pixelBufferAdaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                // appendOk will always be false
                NSThread.sleepForTimeInterval(0.05)
            } else {
                NSThread.sleepForTimeInterval(0.1)
            }
            j++
        }
        if !appendOk {
            println("Doh, frame \(frame) at offset \(frameCount) failed to append")
        }
    }
    input.markAsFinished()
    writer.finishWritingWithCompletionHandler({
        if writer.status == AVAssetWriterStatus.Failed {
            println("oh noes, an error: \(writer.error.description)")
        } else {
            println("hrmmm, there should be a movie?")
        }
    })
    return true
}
func pixelBufferFromCGImage(image: CGImageRef, size: CGSize) -> CVPixelBufferRef {
    let options = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true]
    var pixBufferPointer = UnsafeMutablePointer<Unmanaged<CVPixelBuffer>?>.alloc(1)
    let status = CVPixelBufferCreate(
        nil,
        UInt(size.width), UInt(size.height),
        OSType(kCVPixelFormatType_32ARGB),
        options,
        pixBufferPointer)
    CVPixelBufferLockBaseAddress(pixBufferPointer.memory?.takeUnretainedValue(), 0)

    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapinfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.NoneSkipFirst.toRaw())
    var pixBufferData: UnsafeMutablePointer<(Void)> = CVPixelBufferGetBaseAddress(pixBufferPointer.memory?.takeUnretainedValue())
    let context = CGBitmapContextCreate(
        pixBufferData,
        UInt(size.width), UInt(size.height),
        8, UInt(4 * size.width),
        rgbColorSpace, bitmapinfo!)
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0))
    CGContextDrawImage(
        context,
        CGRectMake(0, 0, CGFloat(CGImageGetWidth(image)), CGFloat(CGImageGetHeight(image))),
        image)
    CVPixelBufferUnlockBaseAddress(pixBufferPointer.memory?.takeUnretainedValue(), 0)
    return pixBufferPointer.memory!.takeUnretainedValue()
}
Even after the movie has been created, the images remain in memory.
I believe what is left over is the pixel buffers.
In Objective-C I could call CVPixelBufferRelease(buffer) to release a pixel buffer, but I can no longer use it in Swift. How do I release the pixel buffers?
If anyone could help, I'd really appreciate it.
When using CVPixelBufferCreate, the UnsafeMutablePointer has to be destroyed after retrieving its memory.
When I create a CVPixelBuffer, I do it like this:
func allocPixelBuffer() -> CVPixelBuffer {
    let pixelBufferAttributes: CFDictionary = [...]
    let pixelBufferOut = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
    _ = CVPixelBufferCreate(kCFAllocatorDefault,
                            Int(Width),
                            Int(Height),
                            OSType(kCVPixelFormatType_32ARGB),
                            pixelBufferAttributes,
                            pixelBufferOut)
    let pixelBuffer = pixelBufferOut.memory!
    pixelBufferOut.destroy()
    return pixelBuffer
}
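
For comparison, in Swift 3 and later the manual pointer allocation is unnecessary (a sketch of mine, with hypothetical width/height parameters): an inout optional serves as the out-parameter and ARC manages the buffer's lifetime.

// Sketch for Swift 3+: no manual alloc/destroy needed; ARC releases the
// buffer when the last reference to it goes away.
func allocPixelBuffer(width: Int, height: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                     width,
                                     height,
                                     kCVPixelFormatType_32ARGB,
                                     nil,
                                     &pixelBuffer)
    return status == kCVReturnSuccess ? pixelBuffer : nil
}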
I had the same problem, but I solved it. Use autoreleasepool:
var boolWhile = true
while boolWhile {
    autoreleasepool { () -> () in
        if input.readyForMoreMediaData {
            presentTime = CMTimeMake(Int64(ii), fps)
            if ii >= arrayImages.count {
                ...
Try changing
return pixBufferPointer.memory!.takeUnretainedValue()
to
return pixBufferPointer.memory!.takeRetainedValue()
to avoid leaking CVPixelBuffers
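
Why this works (background I am adding, based on Core Foundation's naming conventions): CVPixelBufferCreate follows the Create Rule, so the buffer is returned with a +1 retain count that the caller owns. takeRetainedValue() hands that +1 to ARC, which releases the buffer when the last Swift reference goes away; takeUnretainedValue() leaves the +1 unconsumed, so every buffer created this way leaks. In Swift 3 and later, CVPixelBuffer bridges as an ordinary ARC-managed object, so the Unmanaged wrapper, and the need for CVPixelBufferRelease, disappears entirely, as in the allocPixelBuffer sketch above.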
