CMSampleBuffer rotate from portrait to landscape in Swift 3 - ios

I'm working with ReplayKit 2 on iOS. For some reasons I need to rotate the CMSampleBuffer from portrait to landscape, but the result is not correct.
What am I missing?
This is the original sample buffer.
This is the actual output buffer.
width and height are the dimensions of the sampleBuffer.
func rotation(sampleBuffer: CMSampleBuffer, width: Int, height: Int) -> CMSampleBuffer {
    //create pixelbuffer from the delegate method samplebuffer
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

    //create CI image from the buffer
    let image = CIImage(cvImageBuffer: pixelBuffer)
    let extent = CGRect(x: 0, y: 0, width: width, height: height)
    var tx = CGAffineTransform(translationX: extent.midX, y: extent.midY)
    tx = tx.rotated(by: CGFloat(Double.pi / 2))
    tx = tx.translatedBy(x: -extent.midX, y: -extent.midY)
    let transformImage = CIFilter(
        name: "CIAffineTransform",
        withInputParameters: [
            kCIInputImageKey: image,
            kCIInputTransformKey: NSValue(cgAffineTransform: tx)])!.outputImage!

    //create empty pixelbuffer
    var newPixelBuffer: CVPixelBuffer? = nil
    CVPixelBufferCreate(kCFAllocatorDefault,
                        width,
                        height,
                        kCVPixelFormatType_32BGRA,
                        nil,
                        &newPixelBuffer)

    //render the context to the new pixelbuffer, context is a global
    //CIContext variable. creating a new one each frame is too CPU intensive
    self.ciContext.render(transformImage, to: newPixelBuffer!)
    //finally, write this to the pixelbufferadaptor
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

    var videoInfo: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, newPixelBuffer!, &videoInfo)
    var sampleTimingInfo = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sampleBuffer),
                                              presentationTimeStamp: CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
                                              decodeTimeStamp: CMSampleBufferGetDecodeTimeStamp(sampleBuffer))
    var newSampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, newPixelBuffer!, true, nil, nil, videoInfo!, &sampleTimingInfo, &newSampleBuffer)
    return newSampleBuffer!
}
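One thing worth noting (and what the answer below also does): when rotating by a quarter turn, the destination pixel buffer's dimensions generally have to be swapped, otherwise the rendered image gets clipped. A minimal sketch, reusing the same width/height parameters as above:

// Hedged sketch: for a 90 degree rotation the target buffer is height x width,
// not width x height, otherwise the rotated image no longer fits the buffer.
var rotatedBuffer: CVPixelBuffer? = nil
CVPixelBufferCreate(kCFAllocatorDefault,
                    height,   // swapped: the rotated image is `height` points wide
                    width,    // and `width` points tall
                    kCVPixelFormatType_32BGRA,
                    nil,
                    &rotatedBuffer)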

I just found a very sweet method in iOS 11!
/* Returns a new image representing the original image transformed for the given CGImagePropertyOrientation */
@available(iOS 11.0, *)
open func oriented(_ orientation: CGImagePropertyOrientation) -> CIImage

Maybe it will be useful:
func rotate(_ sampleBuffer: CMSampleBuffer) -> CVPixelBuffer? {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return nil
    }
    var newPixelBuffer: CVPixelBuffer?
    let error = CVPixelBufferCreate(kCFAllocatorDefault,
                                    CVPixelBufferGetHeight(pixelBuffer),
                                    CVPixelBufferGetWidth(pixelBuffer),
                                    kCVPixelFormatType_420YpCbCr8BiPlanarFullRange,
                                    nil,
                                    &newPixelBuffer)
    guard error == kCVReturnSuccess else {
        return nil
    }
    let ciImage = CIImage(cvPixelBuffer: pixelBuffer).oriented(.right)
    let context = CIContext(options: nil)
    context.render(ciImage, to: newPixelBuffer!)
    return newPixelBuffer
}
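If you still need a CMSampleBuffer afterwards (for example to hand it to an AVAssetWriterInput), the rotated pixel buffer can be re-wrapped with the original timing, as the question's own code does. A minimal sketch, assuming the rotate(_:) function above and the original sampleBuffer:

// Hedged sketch: wrap the rotated pixel buffer back into a CMSampleBuffer,
// copying the timing from the original buffer.
func rotatedSampleBuffer(from sampleBuffer: CMSampleBuffer) -> CMSampleBuffer? {
    guard let rotated = rotate(sampleBuffer) else { return nil }
    var formatDescription: CMVideoFormatDescription?
    CMVideoFormatDescriptionCreateForImageBuffer(kCFAllocatorDefault, rotated, &formatDescription)
    var timing = CMSampleTimingInfo(duration: CMSampleBufferGetDuration(sampleBuffer),
                                    presentationTimeStamp: CMSampleBufferGetPresentationTimeStamp(sampleBuffer),
                                    decodeTimeStamp: CMSampleBufferGetDecodeTimeStamp(sampleBuffer))
    var newSampleBuffer: CMSampleBuffer?
    CMSampleBufferCreateForImageBuffer(kCFAllocatorDefault, rotated, true, nil, nil,
                                       formatDescription!, &timing, &newSampleBuffer)
    return newSampleBuffer
}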

Related

FPS gradually goes down when editing is performed on CMSampleBuffer

I am doing a livestream where I need to send video from the camera plus an overlay from a UIView. It started working, but the FPS goes down after some seconds.
If I send the CMSampleBuffer directly the FPS is OK, but if I convert the sample buffer to an image and perform editing on it, the FPS decreases after some seconds.
I am attaching the code below, where I get the CMSampleBuffer from captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection).
I am also attaching the FPS graph from the Facebook livestream SDK.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    let pts: CMTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer) as CMTime
    let newPts = CMTimeMakeWithSeconds(CMTimeGetSeconds(pts) + 5, preferredTimescale: pts.timescale)
    let image = self.imageFromSampleBuffer(sampleBuffer: sampleBuffer)
    let drawableRect = self.window.size.width > 800
        ? CGRect(x: 73, y: 0, width: self.window.size.width - 147, height: self.window.size.height)
        : CGRect(x: 0, y: 0, width: self.window.size.width, height: self.window.size.height)
    let webViewImage: UIImage = self.webview.screenShotWithoutDrawHierarchy(drawableRect: drawableRect) ?? UIImage()
    let compositeImage = self.composite(image: image!, overlay: webViewImage, drawableRect: drawableRect)
    if #available(iOS 13, *) {
        let newSampleBuffer = compositeImage?.createCMSampleBuffer(presentationTimeStamp: newPts, duration: CMTime.invalid, decodeTimeStamp: sampleBuffer.decodeTimeStamp)
        self.rtmpStream.appendSampleBuffer(newSampleBuffer!, withType: .video)
    }
}

func composite(image: UIImage, overlay: UIImage, scaleOverlay: Bool = false, drawableRect: CGRect) -> UIImage? {
    UIGraphicsBeginImageContext(drawableRect.size)
    image.draw(in: drawableRect)
    UIColor(red: 0, green: 0, blue: 0, transparency: 0)?.setFill()
    overlay.draw(in: drawableRect)
    let newimage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return newimage
}

private func imageFromSampleBuffer(sampleBuffer: CMSampleBuffer) -> UIImage? {
    guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
    let ciImage = CIImage(cvPixelBuffer: imageBuffer)
    return self.convert(cmage: ciImage)
}

// Convert CIImage to UIImage
func convert(cmage: CIImage) -> UIImage {
    let context = CIContext(options: nil)
    let cgImage = context.createCGImage(cmage, from: cmage.extent)!
    let image = UIImage(cgImage: cgImage)
    return image
}

// clean up AVCapture
func stopCamera() {
    session.stopRunning()
}
}
extension UIImage {
    var cvPixelBuffer: CVPixelBuffer? {
        let attrs = [
            String(kCVPixelBufferCGImageCompatibilityKey): kCFBooleanTrue,
            String(kCVPixelBufferCGBitmapContextCompatibilityKey): kCFBooleanTrue
        ] as [String: Any]
        var buffer: CVPixelBuffer?
        let window = UIApplication.shared.keyWindow!
        let drawableRect = window.size.width > 800
            ? CGRect(x: 73, y: 0, width: window.size.width - 147, height: window.size.height)
            : CGRect(x: 0, y: 0, width: window.size.width, height: window.size.height)
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(drawableRect.size.width), Int(drawableRect.size.height), kCVPixelFormatType_32ARGB, attrs as CFDictionary, &buffer)
        guard status == kCVReturnSuccess else {
            return nil
        }
        CVPixelBufferLockBaseAddress(buffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(buffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pixelData, width: Int(window.size.width), height: Int(window.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(buffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.translateBy(x: 0, y: window.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)
        let newRect = CGRect(x: 0, y: 0, width: window.size.width, height: window.size.height)
        UIGraphicsPushContext(context!)
        UIColor.clear.setFill()
        UIRectFill(newRect)
        self.draw(in: newRect)
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(buffer!, CVPixelBufferLockFlags(rawValue: 0))
        return buffer
    }

    func createCMSampleBuffer(presentationTimeStamp: CMTime, duration: CMTime, decodeTimeStamp: CMTime) -> CMSampleBuffer? {
        let pixelBuffer = cvPixelBuffer
        var newSampleBuffer: CMSampleBuffer?
        var info = CMSampleTimingInfo()
        var videoInfo: CMVideoFormatDescription?
        info.presentationTimeStamp = presentationTimeStamp
        info.duration = duration
        info.decodeTimeStamp = CMTime.invalid
        CMVideoFormatDescriptionCreateForImageBuffer(allocator: nil, imageBuffer: pixelBuffer!, formatDescriptionOut: &videoInfo)
        CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                           imageBuffer: pixelBuffer!,
                                           dataReady: true,
                                           makeDataReadyCallback: nil,
                                           refcon: nil,
                                           formatDescription: videoInfo!,
                                           sampleTiming: &info,
                                           sampleBufferOut: &newSampleBuffer)
        return newSampleBuffer!
    }
}
Note: I am using HaishinKit, but with my own camera preview layer and without attaching it to MTHKView, because using both the view and the camera causes heat warnings with HaishinKit.
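The per-frame cost here most likely comes from the round trips through CIContext, CGImage and UIGraphics: a fresh CIContext is created in convert(cmage:) for every frame, which the other answers on this page also describe as too CPU intensive. A minimal sketch of the reuse idea, assuming a context property on the capture class (the class and property names are illustrative):

// Hedged sketch: create the CIContext once and reuse it for every frame,
// instead of allocating a new one inside the per-frame conversion.
final class FrameConverter {
    // One shared context for the whole capture session.
    private let ciContext = CIContext(options: nil)

    func uiImage(from sampleBuffer: CMSampleBuffer) -> UIImage? {
        guard let imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else { return nil }
        let ciImage = CIImage(cvPixelBuffer: imageBuffer)
        guard let cgImage = ciContext.createCGImage(ciImage, from: ciImage.extent) else { return nil }
        return UIImage(cgImage: cgImage)
    }
}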

iOS app crashes when frequently drawing to a CGContext using a CGImage created from [UInt8] data

Right now I am developing a module where I need to create a video from an array of CGImage, and while doing that processing my application crashes at some point. I am not able to figure out the exact reason behind the crash.
Can anyone please suggest whether I am going in the right direction or not? Should I convert [CGImage] to video, or do I need to choose another approach?
I also tried converting the CGImage to UIImage and creating the video from that, but I am still facing the same issue.
I am getting the image data as [UInt8], so what would be the correct approach for converting the image format and creating the video?
In order to create a video from [CGImage] I am following the approach below.
I convert the [UInt8] data to a CGImage using CGDataProvider, and convert the CGImage to a UIImage. I have an array of images; I collect the UIImages, then merge the images and create the video.
Here is my code to create a CGImage from the data.
private(set) var data: [UInt8]

var cgImage: CGImage? {
    let colorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let bitsPerComponent = 8
    let bitsPerPixel = channels * bitsPerComponent
    let bytesPerRow = channels * width
    let totalBytes = height * bytesPerRow
    let bitmapInfo = CGBitmapInfo(rawValue: channels == 3 ? CGImageAlphaInfo.none.rawValue : CGImageAlphaInfo.last.rawValue)
    let provider = CGDataProvider(dataInfo: nil,
                                  data: data,
                                  size: totalBytes,
                                  releaseData: { _, _, _ in })!
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: bitsPerComponent,
                   bitsPerPixel: bitsPerPixel,
                   bytesPerRow: bytesPerRow,
                   space: colorSpaceRef,
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: CGColorRenderingIntent.perceptual)
}
My app crashes in the function below, when I start frequently drawing images to the context:
context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
If I use images from the bundle and create the video with this code it works fine. When I use a CGImage created from the [UInt8] data, it starts crashing after writing 3-4 images.
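One likely culprit (an assumption, not something confirmed in the question): CGDataProvider(dataInfo:data:size:releaseData:) does not copy the bytes, and passing a Swift [UInt8] array there only gives Core Graphics a temporary pointer, so the CGImage can end up reading freed memory when it is drawn later. A sketch of a provider that owns a copy of the pixel data, reusing the same channels/width/height properties as above:

// Hedged sketch: back the CGImage with a Data copy so the pixels stay alive
// for as long as the image does.
var cgImageCopyingBytes: CGImage? {
    let bitsPerComponent = 8
    let bytesPerRow = channels * width
    let bitmapInfo = CGBitmapInfo(rawValue: channels == 3
        ? CGImageAlphaInfo.none.rawValue
        : CGImageAlphaInfo.last.rawValue)
    // CFData keeps its own copy of the bytes, unlike the raw-pointer initializer.
    guard let provider = CGDataProvider(data: Data(data) as CFData) else { return nil }
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: bitsPerComponent,
                   bitsPerPixel: channels * bitsPerComponent,
                   bytesPerRow: bytesPerRow,
                   space: CGColorSpaceCreateDeviceRGB(),
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: .perceptual)
}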
func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
    autoreleasepool {
        let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                      kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
        var pxbuffer: CVPixelBuffer?
        let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
        let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
        let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
        assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
        CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        assert(context != nil, "context is nil")
        context!.concatenate(CGAffineTransform.identity)
        context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
        CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pxbuffer
    }
}
Here is the code I am using to create the video from the array of images.
typealias CXEMovieMakerCompletion = (URL) -> Void
typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?

public class CXEImagesToVideo: NSObject {
    var assetWriter: AVAssetWriter!
    var writeInput: AVAssetWriterInput!
    var bufferAdapter: AVAssetWriterInputPixelBufferAdaptor!
    var videoSettings: [String: Any]!
    var frameTime: CMTime!
    var fileURL: URL!
    var completionBlock: CXEMovieMakerCompletion?
    var movieMakerUIImageExtractor: CXEMovieMakerUIImageExtractor?

    public class func videoSettings(codec: String, width: Int, height: Int) -> [String: Any] {
        if width % 16 != 0 {
            print("warning: video settings width must be divisible by 16")
        }
        let videoSettings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                            AVVideoWidthKey: width,
                                            AVVideoHeightKey: height]
        return videoSettings
    }

    public init(videoSettings: [String: Any], frameTime: CMTime) {
        super.init()
        self.frameTime = frameTime
        let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let tempPath = paths[0] + "/exprotvideo1.mp4"
        if FileManager.default.fileExists(atPath: tempPath) {
            guard (try? FileManager.default.removeItem(atPath: tempPath)) != nil else {
                print("remove path failed")
                return
            }
        }
        self.fileURL = URL(fileURLWithPath: tempPath)
        self.assetWriter = try! AVAssetWriter(url: self.fileURL, fileType: AVFileType.mp4)
        self.videoSettings = videoSettings
        self.writeInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
        assert(self.assetWriter.canAdd(self.writeInput), "add failed")
        self.assetWriter.add(self.writeInput)
        let bufferAttributes: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
        self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
        self.frameTime = CMTimeMake(value: 1, timescale: 10)
    }
    func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion) {
        self.createMovieFromSource(images: urls as [AnyObject], extractor: { (inputObject: AnyObject) -> UIImage? in
            return UIImage(data: try! Data(contentsOf: inputObject as! URL))
        }, withCompletion: withCompletion)
    }

    func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion) {
        DispatchQueue.main.async {
            self.createMovieFromSource(images: images, extractor: { (inputObject: AnyObject) -> UIImage? in
                return inputObject as? UIImage
            }, withCompletion: withCompletion)
        }
    }

    func imageFromLayer(layer: CALayer) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(layer.frame.size, layer.isOpaque, 0)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return outputImage!
    }
    func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion) {
        self.completionBlock = withCompletion
        self.assetWriter.startWriting()
        self.assetWriter.startSession(atSourceTime: CMTime.zero)
        let mediaInputQueue = DispatchQueue(label: "Main") // DispatchQueue(label: "mediaInputQueue")
        var i = 0
        let frameNumber = images.count
        self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue) {
            while true {
                if i >= frameNumber {
                    break
                }
                if self.writeInput.isReadyForMoreMediaData {
                    var sampleBuffer: CVPixelBuffer?
                    autoreleasepool {
                        let temp = images[i]
                        let img = extractor(temp)
                        if img == nil {
                            i += 1
                            print("Warning: could not extract one of the frames")
                            //continue
                        }
                        sampleBuffer = self.newPixelBufferFrom(cgImage: temp.cgImage!)
                    }
                    if sampleBuffer != nil {
                        if i == 0 {
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: CMTime.zero)
                        } else {
                            let value = i - 1
                            let lastTime = CMTimeMake(value: Int64(value), timescale: self.frameTime.timescale)
                            let presentTime = CMTimeAdd(lastTime, self.frameTime)
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
                        }
                        i = i + 1
                    }
                }
            }
            self.writeInput.markAsFinished()
            self.assetWriter.finishWriting {
                DispatchQueue.main.sync {
                    self.completionBlock!(self.fileURL)
                }
            }
        }
    }
    func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
        autoreleasepool {
            let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                          kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
            var pxbuffer: CVPixelBuffer?
            let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
            let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
            let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
            assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
            CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
            let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
            let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
            // CGImageAlphaInfo.noneSkipFirst.rawValue
            assert(context != nil, "context is nil")
            // context?.clear(CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
            context!.concatenate(CGAffineTransform.identity)
            context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
            CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            return pxbuffer
        }
    }
}
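For completeness, this is roughly how the class above is driven; only the API surface comes from the class itself, and the dimensions, frame rate and myImages ([UIImage]) are placeholder values:

// Hedged usage sketch for the CXEImagesToVideo class shown above.
let settings = CXEImagesToVideo.videoSettings(codec: AVVideoCodecType.h264.rawValue,
                                              width: 720,
                                              height: 1280)
let movieMaker = CXEImagesToVideo(videoSettings: settings,
                                  frameTime: CMTimeMake(value: 1, timescale: 10))
movieMaker.createMovieFrom(images: myImages) { url in
    // The finished .mp4 lands in the Documents directory.
    print("movie written to \(url)")
}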

CAMetalLayer drawable texture is weird on some devices

I am using the code below to get and append a pixel buffer from a Metal layer. On some devices (no specific models) the output looks like the image below, and the drawable texture's pixelFormat is .invalid.
static func make(with currentDrawable: CAMetalDrawable, usingBuffer pool: CVPixelBufferPool) -> (CVPixelBuffer?, UIImage) {
    let destinationTexture = currentDrawable.texture
    var pixelBuffer: CVPixelBuffer?
    _ = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pixelBuffer)
    if let pixelBuffer = pixelBuffer {
        CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        let region = MTLRegionMake2D(0, 0, Int(currentDrawable.layer.drawableSize.width), Int(currentDrawable.layer.drawableSize.height))
        let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
        let tempBuffer = CVPixelBufferGetBaseAddress(pixelBuffer)
        destinationTexture.getBytes(tempBuffer!, bytesPerRow: Int(bytesPerRow), from: region, mipmapLevel: 0)
        let image = imageFromCVPixelBuffer(buffer: pixelBuffer)
        CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return (pixelBuffer, image)
    }
    return (nil, UIImage())
}

static func imageFromCVPixelBuffer(buffer: CVPixelBuffer) -> UIImage {
    let ciImage = CIImage(cvPixelBuffer: buffer)
    let cgImage = context.createCGImage(ciImage, from: CGRect(x: 0, y: 0, width: CVPixelBufferGetWidth(buffer), height: CVPixelBufferGetHeight(buffer)))
    let uiImage = UIImage(cgImage: cgImage!)
    return uiImage
}
Does anybody have any idea why this happens and how to prevent it?
More feedback from people experiencing this issue can be found here: https://github.com/svtek/SceneKitVideoRecorder/issues/3
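One thing worth checking, as an assumption rather than a confirmed diagnosis: reading a drawable's texture with getBytes only works when the layer's framebufferOnly flag is turned off, because framebuffer-only textures are not CPU-readable. A minimal sketch (the metalLayer name is illustrative):

// Hedged sketch: allow reading back the drawable's texture.
// framebufferOnly defaults to true, which forbids getBytes on the texture.
metalLayer.framebufferOnly = false

// It can also help to verify the texture before copying from it:
let texture = currentDrawable.texture
assert(texture.pixelFormat != .invalid, "drawable texture has no valid pixel format")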

Rotate CMSampleBuffer by arbitrary angle and append to AVAssetWriterInput in swift 3

I convert the sample buffer to a CGContext. Then I apply a transformation to the context and create a CIImage from that, which in turn gets displayed in a UIImageView.
At the same time I want to append this to the AVAssetWriterInput to create a movie of these transformations.
So far the transformations I apply to the context have no effect whatsoever. When I display the supposedly transformed image in the image view, it looks exactly the same.
UPDATE:
I managed to record the sample buffer to a video file (it's still stretched because of the wrong orientation though). I've used this code as a base:
http://geek-is-stupid.github.io/blog/2017/04/13/how-to-record-detect-face-overlay-video-at-real-time-using-swift/
But I'm still struggling with applying the rotation to the CGContext. Basically everything I do to the context is completely ignored.
func captureOutput(_ captureOutput: AVCaptureOutput!, didOutputSampleBuffer sampleBuffer: CMSampleBuffer!, from connection: AVCaptureConnection!) {
    let writable = canWrite()
    if writable, sessionAtSourceTime == nil {
        print("starting session")
        sessionAtSourceTime = CMSampleBufferGetPresentationTimeStamp(sampleBuffer)
        assetWriter!.startSession(atSourceTime: sessionAtSourceTime!)
    }
    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    if writable {
        autoreleasepool {
            CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
            var renderedOutputPixelBuffer: CVPixelBuffer? = nil
            let options = [
                kCVPixelBufferCGImageCompatibilityKey as String: true,
                kCVPixelBufferCGBitmapContextCompatibilityKey as String: true,
            ] as CFDictionary
            let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                             CVPixelBufferGetWidth(pixelBuffer),
                                             CVPixelBufferGetHeight(pixelBuffer),
                                             kCVPixelFormatType_32BGRA, options,
                                             &renderedOutputPixelBuffer)
            guard status == kCVReturnSuccess else { return }
            CVPixelBufferLockBaseAddress(renderedOutputPixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
            let renderedOutputPixelBufferBaseAddress = CVPixelBufferGetBaseAddress(renderedOutputPixelBuffer!)
            memcpy(renderedOutputPixelBufferBaseAddress,
                   CVPixelBufferGetBaseAddress(pixelBuffer),
                   CVPixelBufferGetHeight(pixelBuffer) * CVPixelBufferGetBytesPerRow(pixelBuffer))
            let context = CGContext(data: renderedOutputPixelBufferBaseAddress,
                                    width: CVPixelBufferGetWidth(renderedOutputPixelBuffer!),
                                    height: CVPixelBufferGetHeight(renderedOutputPixelBuffer!),
                                    bitsPerComponent: 8,
                                    bytesPerRow: CVPixelBufferGetBytesPerRow(renderedOutputPixelBuffer!),
                                    space: CGColorSpaceCreateDeviceRGB(),
                                    bitmapInfo: bitmapInfo!)
            let radians: Float = atan2f(Float(boxView!.transform.b), Float(boxView!.transform.a))
            context!.translateBy(x: self.view.frame.size.width / 2, y: self.view.frame.size.height / 2)
            context!.rotate(by: CGFloat(radians))
            let image: CGImage = context!.makeImage()!
            self.imageView!.image = UIImage(cgImage: image)
            if (bufferAdaptor?.assetWriterInput.isReadyForMoreMediaData)!, canWrite() {
                bufferAdaptor?.append(renderedOutputPixelBuffer!, withPresentationTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
            }
            CVPixelBufferUnlockBaseAddress(renderedOutputPixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
            CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        }
    }
}
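A likely reason the transform appears to be ignored (my reading, not stated in the original post): translateBy and rotate(by:) only change the CTM for subsequent drawing commands; they never move pixels that are already in the bitmap, and the code above only memcpy's the pixels and then calls makeImage() without drawing anything after setting the transform. A self-contained sketch of the CGContext-only variant under that assumption:

import CoreGraphics

// Hedged sketch: rotate the pixels of `source` into a new same-sized context.
// The CTM has to be configured *before* draw(_:in:); makeImage() alone never re-draws.
func rotatedImage(from source: CGImage, by radians: CGFloat) -> CGImage? {
    let width = source.width
    let height = source.height
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else { return nil }
    context.translateBy(x: CGFloat(width) / 2, y: CGFloat(height) / 2)
    context.rotate(by: radians)
    context.translateBy(x: -CGFloat(width) / 2, y: -CGFloat(height) / 2)
    context.draw(source, in: CGRect(x: 0, y: 0, width: width, height: height))
    return context.makeImage()
}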
I found the solution. Below is the important part of the code.
//create pixelbuffer from the delegate method samplebuffer
let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))

//create CI image from the buffer
let ci = CIImage(cvPixelBuffer: pixelBuffer, options: options)

//create filter to rotate
let filter = CIFilter(name: "CIAffineTransform")

//create transform, move rotation point to center
var transform = CGAffineTransform(translationX: self.view.frame.midX, y: self.view.frame.midY)
//rotate it
transform = transform.rotated(by: CGFloat(radians))
// move the transform point back to the original
transform = transform.translatedBy(x: -self.view.frame.midX, y: -self.view.frame.midY)

filter!.setValue(transform, forKey: kCIInputTransformKey)
filter!.setValue(ci, forKey: kCIInputImageKey)

//take the output from the filter
let output = filter?.outputImage

//create empty pixelbuffer
var newPixelBuffer: CVPixelBuffer? = nil
CVPixelBufferCreate(kCFAllocatorDefault,
                    Int(self.view.frame.width),
                    Int(self.view.frame.height),
                    kCVPixelFormatType_32BGRA,
                    nil,
                    &newPixelBuffer)

//render the context to the new pixelbuffer, context is a global
//CIContext variable. creating a new one each frame is too CPU intensive
context.render(output!, to: newPixelBuffer!)

//finally, write this to the pixelbufferadaptor
if (bufferAdaptor?.assetWriterInput.isReadyForMoreMediaData)!, canWrite() {
    bufferAdaptor?.append(newPixelBuffer!,
                          withPresentationTime: CMSampleBufferGetPresentationTimeStamp(sampleBuffer))
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
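The comment above refers to a shared CIContext; presumably it is declared once on the recording class rather than per frame, something like this (assumed, since the original post doesn't show the declaration):

// Hedged sketch: the shared CIContext referenced by the comment above,
// created once and reused for every rendered frame.
let context = CIContext(options: nil)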

I want to release the CVPixelBufferRef in swift

I want to create a video from images.
I used the link below as a reference.
Link: CVPixelBufferPool Error (kCVReturnInvalidArgument/-6661)
func writeAnimationToMovie(path: String, size: CGSize, animation: Animation) -> Bool {
    var error: NSError?
    let writer = AVAssetWriter(URL: NSURL(fileURLWithPath: path), fileType: AVFileTypeQuickTimeMovie, error: &error)
    let videoSettings = [AVVideoCodecKey: AVVideoCodecH264, AVVideoWidthKey: size.width, AVVideoHeightKey: size.height]
    let input = AVAssetWriterInput(mediaType: AVMediaTypeVideo, outputSettings: videoSettings)
    let pixelBufferAdaptor = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: input, sourcePixelBufferAttributes: nil)
    input.expectsMediaDataInRealTime = true
    writer.addInput(input)
    writer.startWriting()
    writer.startSessionAtSourceTime(kCMTimeZero)

    var buffer: CVPixelBufferRef
    var frameCount = 0
    for frame in animation.frames {
        let rect = CGRectMake(0, 0, size.width, size.height)
        let rectPtr = UnsafeMutablePointer<CGRect>.alloc(1)
        rectPtr.memory = rect
        buffer = pixelBufferFromCGImage(frame.image.CGImageForProposedRect(rectPtr, context: nil, hints: nil).takeUnretainedValue(), size)
        var appendOk = false
        var j = 0
        while (!appendOk && j < 30) {
            if pixelBufferAdaptor.assetWriterInput.readyForMoreMediaData {
                let frameTime = CMTimeMake(Int64(frameCount), 10)
                appendOk = pixelBufferAdaptor.appendPixelBuffer(buffer, withPresentationTime: frameTime)
                // appendOk will always be false
                NSThread.sleepForTimeInterval(0.05)
            } else {
                NSThread.sleepForTimeInterval(0.1)
            }
            j++
        }
        if (!appendOk) {
            println("Doh, frame \(frame) at offset \(frameCount) failed to append")
        }
    }
    input.markAsFinished()
    writer.finishWritingWithCompletionHandler({
        if writer.status == AVAssetWriterStatus.Failed {
            println("oh noes, an error: \(writer.error.description)")
        } else {
            println("hrmmm, there should be a movie?")
        }
    })
    return true
}

func pixelBufferFromCGImage(image: CGImageRef, size: CGSize) -> CVPixelBufferRef {
    let options = [
        kCVPixelBufferCGImageCompatibilityKey: true,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true]
    var pixBufferPointer = UnsafeMutablePointer<Unmanaged<CVPixelBuffer>?>.alloc(1)
    let status = CVPixelBufferCreate(
        nil,
        UInt(size.width), UInt(size.height),
        OSType(kCVPixelFormatType_32ARGB),
        options,
        pixBufferPointer)
    CVPixelBufferLockBaseAddress(pixBufferPointer.memory?.takeUnretainedValue(), 0)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapinfo = CGBitmapInfo.fromRaw(CGImageAlphaInfo.NoneSkipFirst.toRaw())
    var pixBufferData: UnsafeMutablePointer<(Void)> = CVPixelBufferGetBaseAddress(pixBufferPointer.memory?.takeUnretainedValue())
    let context = CGBitmapContextCreate(
        pixBufferData,
        UInt(size.width), UInt(size.height),
        8, UInt(4 * size.width),
        rgbColorSpace, bitmapinfo!)
    CGContextConcatCTM(context, CGAffineTransformMakeRotation(0))
    CGContextDrawImage(
        context,
        CGRectMake(0, 0, CGFloat(CGImageGetWidth(image)), CGFloat(CGImageGetHeight(image))),
        image)
    CVPixelBufferUnlockBaseAddress(pixBufferPointer.memory?.takeUnretainedValue(), 0)
    return pixBufferPointer.memory!.takeUnretainedValue()
}
Even after the movie is finished, memory remains in use, as shown in the image above.
I believe it is the pixel buffers that are not being released.
In Objective-C I had CVPixelBufferRelease(buffer) to release the pixel buffer, but I can no longer use this in Swift. How do I release the pixel buffer?
If anyone could help, I'd really appreciate it.
When using CVPixelBufferCreate, the UnsafeMutablePointer has to be destroyed after retrieving its memory.
When I create a CVPixelBuffer, I do it like this:
func allocPixelBuffer() -> CVPixelBuffer {
    let pixelBufferAttributes: CFDictionary = [...]
    let pixelBufferOut = UnsafeMutablePointer<CVPixelBuffer?>.alloc(1)
    _ = CVPixelBufferCreate(kCFAllocatorDefault,
                            Int(Width),
                            Int(Height),
                            OSType(kCVPixelFormatType_32ARGB),
                            pixelBufferAttributes,
                            pixelBufferOut)
    let pixelBuffer = pixelBufferOut.memory!
    pixelBufferOut.destroy()
    return pixelBuffer
}
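As a side note (not part of the original answer): in current Swift you don't need the manual pointer at all, because CVPixelBuffer is memory-managed like any other object and is released when the last strong reference goes away. A sketch:

// Hedged sketch: in modern Swift, pass an inout optional instead of an
// UnsafeMutablePointer; ARC releases the buffer when it goes out of scope.
func allocPixelBuffer(width: Int, height: Int) -> CVPixelBuffer? {
    var pixelBuffer: CVPixelBuffer?
    CVPixelBufferCreate(kCFAllocatorDefault,
                        width,
                        height,
                        kCVPixelFormatType_32ARGB,
                        nil,
                        &pixelBuffer)
    return pixelBuffer
}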
I had the same problem, but I solved it.
Use autoreleasepool:
var boolWhile = true
while boolWhile {
    autoreleasepool { () -> () in
        if input.readyForMoreMediaData {
            presentTime = CMTimeMake(Int64(ii), fps)
            if ii >= arrayImages.count {
                ...
Try changing
return pixBufferPointer.memory!.takeUnretainedValue()
to
return pixBufferPointer.memory!.takeRetainedValue()
to avoid leaking CVPixelBuffers
