How to convert a UIImage to a 32BGRA CVPixelBuffer for MediaPipe? - iOS

I am using MediaPipe to develop an iOS application. I now need to feed image data into MediaPipe, but MediaPipe only accepts a 32BGRA CVPixelBuffer.
How can I convert a UIImage to a 32BGRA CVPixelBuffer?
I am using this code:
let frameSize = CGSize(width: self.cgImage!.width, height: self.cgImage!.height)
var pixelBuffer: CVPixelBuffer? = nil
let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(frameSize.width), Int(frameSize.height),
                                 kCVPixelFormatType_32BGRA, nil, &pixelBuffer)
if status != kCVReturnSuccess {
    return nil
}
CVPixelBufferLockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
let data = CVPixelBufferGetBaseAddress(pixelBuffer!)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
let context = CGContext(data: data, width: Int(frameSize.width), height: Int(frameSize.height),
                        bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pixelBuffer!),
                        space: rgbColorSpace, bitmapInfo: bitmapInfo.rawValue)
context?.draw(self.cgImage!, in: CGRect(x: 0, y: 0, width: self.cgImage!.width, height: self.cgImage!.height))
CVPixelBufferUnlockBaseAddress(pixelBuffer!, CVPixelBufferLockFlags(rawValue: 0))
return pixelBuffer
but it throws an error in MediaPipe: mediapipe/0 (11): signal SIGABRT.
If I use AVCaptureVideoDataOutput, everything works fine.
BTW: I am using Swift.

Maybe you can try this. Also, I have a question for you: do you know how to use a static image for face recognition in MediaPipe? If you know, please tell me. Thank you.
func pixelBufferFromCGImage(image: CGImage) -> CVPixelBuffer? {
    let options = [
        kCVPixelBufferCGImageCompatibilityKey as String: NSNumber(value: true),
        kCVPixelBufferCGBitmapContextCompatibilityKey as String: NSNumber(value: true),
        kCVPixelBufferIOSurfacePropertiesKey as String: [:]
    ] as CFDictionary
    let size = CGSize(width: image.width, height: image.height)
    var pxbuffer: CVPixelBuffer? = nil
    let status = CVPixelBufferCreate(
        kCFAllocatorDefault,
        Int(size.width),
        Int(size.height),
        kCVPixelFormatType_32BGRA,
        options,
        &pxbuffer)
    guard status == kCVReturnSuccess, let pxbuffer = pxbuffer else { return nil }
    CVPixelBufferLockBaseAddress(pxbuffer, [])
    // Make sure the buffer is unlocked on every exit path.
    defer { CVPixelBufferUnlockBaseAddress(pxbuffer, []) }
    guard let pxdata = CVPixelBufferGetBaseAddress(pxbuffer) else { return nil }
    let bitmapInfo = CGBitmapInfo(rawValue: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
    guard let context = CGContext(data: pxdata,
                                  width: Int(size.width),
                                  height: Int(size.height),
                                  bitsPerComponent: 8,
                                  bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer),
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: bitmapInfo.rawValue) else {
        return nil
    }
    context.draw(image, in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
    // Note: CGContextRelease is unavailable in Swift; Core Foundation objects
    // are automatically memory managed, so no explicit release is needed.
    return pxbuffer
}
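For completeness, a minimal usage sketch (here image stands in for whatever UIImage you already have; the hand-off to MediaPipe is only a placeholder comment):
// Assuming `image` is a UIImage obtained elsewhere:
guard let cgImage = image.cgImage,
      let pixelBuffer = pixelBufferFromCGImage(image: cgImage) else { return }
// Pass `pixelBuffer` (32BGRA) to your MediaPipe graph here.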

Related

iOS app crashes when frequently drawing a CGImage created from [UInt8] data into a CGContext

Right now I am developing a module in which I need to create a video from an array of CGImage, and while doing that processing my application crashes at some point; I am not able to figure out the exact reason behind the crash.
Can anyone please suggest whether I am going in the right direction? Should I convert [CGImage] to video, or do I need to choose another approach?
I also tried converting the CGImage to UIImage and creating the video that way, but I am still facing the same issue.
I am receiving the image data as [UInt8], so what would be the correct approach to convert the image format and create the video?
In order to create a video from [CGImage], I am following the approach below.
I convert the [UInt8] data to a CGImage using CGDataProvider and convert the CGImage to a UIImage. I have an array of images, collect the UIImages, then merge the images and create the video.
Here is my code to create a CGImage from the data:
private(set) var data: [UInt8]
var cgImage: CGImage? {
    let colorSpaceRef = CGColorSpaceCreateDeviceRGB()
    let bitsPerComponent = 8
    let bitsPerPixel = channels * bitsPerComponent
    let bytesPerRow = channels * width
    let totalBytes = height * bytesPerRow
    let bitmapInfo = CGBitmapInfo(rawValue: channels == 3 ? CGImageAlphaInfo.none.rawValue : CGImageAlphaInfo.last.rawValue)
    let provider = CGDataProvider(dataInfo: nil,
                                  data: data,
                                  size: totalBytes,
                                  releaseData: { _, _, _ in })!
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: bitsPerComponent,
                   bitsPerPixel: bitsPerPixel,
                   bytesPerRow: bytesPerRow,
                   space: colorSpaceRef,
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: CGColorRenderingIntent.perceptual)
}
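One thing worth checking in the snippet above (an editorial observation, not part of the original post): passing a Swift [UInt8] array directly to CGDataProvider(dataInfo:data:size:releaseData:) hands Core Graphics a raw pointer that is only guaranteed valid for the duration of that call, while the provider keeps reading from it for the whole lifetime of the CGImage. A dangling pointer like that would explain a crash that only appears after drawing a few frames. A minimal sketch of a safer variant, copying the bytes so the provider owns them (names mirror the property above):
var cgImageSafe: CGImage? {
    let bitsPerComponent = 8
    let bitsPerPixel = channels * bitsPerComponent
    let bytesPerRow = channels * width
    let bitmapInfo = CGBitmapInfo(rawValue: channels == 3 ? CGImageAlphaInfo.none.rawValue : CGImageAlphaInfo.last.rawValue)
    // Copy the bytes into a Data object; CGDataProvider(data:) retains it,
    // so the pixel memory stays valid as long as the CGImage does.
    guard let provider = CGDataProvider(data: Data(data) as CFData) else { return nil }
    return CGImage(width: width,
                   height: height,
                   bitsPerComponent: bitsPerComponent,
                   bitsPerPixel: bitsPerPixel,
                   bytesPerRow: bytesPerRow,
                   space: CGColorSpaceCreateDeviceRGB(),
                   bitmapInfo: bitmapInfo,
                   provider: provider,
                   decode: nil,
                   shouldInterpolate: false,
                   intent: .perceptual)
}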
My app crashes in this function when I start frequently drawing images into the context:
(context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight)))
If I use images from the bundle and create a video with this code, it works fine. When I use a CGImage created from [UInt8] data, it starts crashing after writing 3-4 images.
func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
    autoreleasepool {
        let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                      kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
        var pxbuffer: CVPixelBuffer?
        let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
        let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
        let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
        assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
        CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        assert(context != nil, "context is nil")
        context!.concatenate(CGAffineTransform.identity)
        context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
        CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
        return pxbuffer
    }
}
Here is the code I am using to create a video from the array of images:
typealias CXEMovieMakerCompletion = (URL) -> Void
typealias CXEMovieMakerUIImageExtractor = (AnyObject) -> UIImage?

public class CXEImagesToVideo: NSObject {
    var assetWriter: AVAssetWriter!
    var writeInput: AVAssetWriterInput!
    var bufferAdapter: AVAssetWriterInputPixelBufferAdaptor!
    var videoSettings: [String: Any]!
    var frameTime: CMTime!
    var fileURL: URL!
    var completionBlock: CXEMovieMakerCompletion?
    var movieMakerUIImageExtractor: CXEMovieMakerUIImageExtractor?

    public class func videoSettings(codec: String, width: Int, height: Int) -> [String: Any] {
        if width % 16 != 0 {
            print("warning: video settings width must be divisible by 16")
        }
        let videoSettings: [String: Any] = [AVVideoCodecKey: AVVideoCodecType.h264,
                                            AVVideoWidthKey: width,
                                            AVVideoHeightKey: height]
        return videoSettings
    }
    public init(videoSettings: [String: Any], frameTime: CMTime) {
        super.init()
        self.frameTime = frameTime
        let paths = NSSearchPathForDirectoriesInDomains(.documentDirectory, .userDomainMask, true)
        let tempPath = paths[0] + "/exprotvideo1.mp4"
        if FileManager.default.fileExists(atPath: tempPath) {
            guard (try? FileManager.default.removeItem(atPath: tempPath)) != nil else {
                print("remove path failed")
                return
            }
        }
        self.fileURL = URL(fileURLWithPath: tempPath)
        self.assetWriter = try! AVAssetWriter(url: self.fileURL, fileType: AVFileType.mp4)
        self.videoSettings = videoSettings
        self.writeInput = AVAssetWriterInput(mediaType: AVMediaType.video, outputSettings: videoSettings)
        assert(self.assetWriter.canAdd(self.writeInput), "add failed")
        self.assetWriter.add(self.writeInput)
        let bufferAttributes: [String: Any] = [kCVPixelBufferPixelFormatTypeKey as String: Int(kCVPixelFormatType_32ARGB)]
        self.bufferAdapter = AVAssetWriterInputPixelBufferAdaptor(assetWriterInput: self.writeInput, sourcePixelBufferAttributes: bufferAttributes)
        self.frameTime = CMTimeMake(value: 1, timescale: 10)
    }
    func createMovieFrom(urls: [URL], withCompletion: @escaping CXEMovieMakerCompletion) {
        self.createMovieFromSource(images: urls as [AnyObject], extractor: { (inputObject: AnyObject) -> UIImage? in
            return UIImage(data: try! Data(contentsOf: inputObject as! URL))
        }, withCompletion: withCompletion)
    }

    func createMovieFrom(images: [UIImage], withCompletion: @escaping CXEMovieMakerCompletion) {
        DispatchQueue.main.async {
            self.createMovieFromSource(images: images, extractor: { (inputObject: AnyObject) -> UIImage? in
                return inputObject as? UIImage
            }, withCompletion: withCompletion)
        }
    }

    func imageFromLayer(layer: CALayer) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(layer.frame.size, layer.isOpaque, 0)
        layer.render(in: UIGraphicsGetCurrentContext()!)
        let outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return outputImage!
    }
    func createMovieFromSource(images: [AnyObject], extractor: @escaping CXEMovieMakerUIImageExtractor, withCompletion: @escaping CXEMovieMakerCompletion) {
        self.completionBlock = withCompletion
        self.assetWriter.startWriting()
        self.assetWriter.startSession(atSourceTime: CMTime.zero)
        let mediaInputQueue = DispatchQueue.init(label: "Main") // DispatchQueue(label: "mediaInputQueue")
        var i = 0
        let frameNumber = images.count
        self.writeInput.requestMediaDataWhenReady(on: mediaInputQueue) {
            while true {
                if i >= frameNumber {
                    break
                }
                if self.writeInput.isReadyForMoreMediaData {
                    var sampleBuffer: CVPixelBuffer?
                    autoreleasepool {
                        let temp = images[i]
                        let img = extractor(temp)
                        if img == nil {
                            i += 1
                            print("Warning: could not extract one of the frames")
                            //continue
                        }
                        sampleBuffer = self.newPixelBufferFrom(cgImage: img!.cgImage!)
                    }
                    if sampleBuffer != nil {
                        if i == 0 {
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: CMTime.zero)
                        } else {
                            let value = i - 1
                            let lastTime = CMTimeMake(value: Int64(value), timescale: self.frameTime.timescale)
                            let presentTime = CMTimeAdd(lastTime, self.frameTime)
                            self.bufferAdapter.append(sampleBuffer!, withPresentationTime: presentTime)
                        }
                        i = i + 1
                    }
                }
            }
            self.writeInput.markAsFinished()
            self.assetWriter.finishWriting {
                DispatchQueue.main.sync {
                    self.completionBlock!(self.fileURL)
                }
            }
        }
    }
    func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
        autoreleasepool {
            let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                          kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
            var pxbuffer: CVPixelBuffer?
            let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
            let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
            let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
            assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
            CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
            let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
            let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
            assert(context != nil, "context is nil")
            // context?.clear(CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
            context!.concatenate(CGAffineTransform.identity)
            context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
            CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
            return pxbuffer
        }
    }
}
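For reference, a minimal usage sketch of the class above (an editorial addition; the image array, dimensions, and frame time are illustrative, and the width is kept divisible by 16 as the warning requires):
// `images` is an array of UIImage prepared elsewhere.
let settings = CXEImagesToVideo.videoSettings(codec: AVVideoCodecType.h264.rawValue, width: 640, height: 480)
let movieMaker = CXEImagesToVideo(videoSettings: settings, frameTime: CMTimeMake(value: 1, timescale: 10))
movieMaker.createMovieFrom(images: images) { url in
    print("video written to \(url)")
}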

iOS: convert UIImage to CMSampleBuffer

Several questions address how to convert a CMSampleBuffer to a UIImage, but there are no answers on how to do the reverse, i.e., convert a UIImage to a CMSampleBuffer.
This question is different from similar ones because the code below provides a starting point for converting a UIImage to a CVPixelBuffer, which hopefully someone with more AVFoundation expertise can help turn into a CMSampleBuffer.
func convertImageToBuffer(from image: UIImage) -> CVPixelBuffer? {
    let attrs = [
        String(kCVPixelBufferCGImageCompatibilityKey): kCFBooleanTrue,
        String(kCVPixelBufferCGBitmapContextCompatibilityKey): kCFBooleanTrue
    ] as [String: Any]
    var buffer: CVPixelBuffer?
    let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(image.size.width), Int(image.size.height), kCVPixelFormatType_32ARGB, attrs as CFDictionary, &buffer)
    guard status == kCVReturnSuccess else {
        return nil
    }
    CVPixelBufferLockBaseAddress(buffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pixelData = CVPixelBufferGetBaseAddress(buffer!)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: pixelData, width: Int(image.size.width), height: Int(image.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(buffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    context?.translateBy(x: 0, y: image.size.height)
    context?.scaleBy(x: 1.0, y: -1.0)
    UIGraphicsPushContext(context!)
    image.draw(in: CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height))
    UIGraphicsPopContext()
    CVPixelBufferUnlockBaseAddress(buffer!, CVPixelBufferLockFlags(rawValue: 0))
    return buffer
}
You are halfway done. Just convert CVPixelBuffer to CMSampleBuffer:
extension UIImage {
    var cvPixelBuffer: CVPixelBuffer? {
        let attrs = [
            String(kCVPixelBufferCGImageCompatibilityKey): kCFBooleanTrue,
            String(kCVPixelBufferCGBitmapContextCompatibilityKey): kCFBooleanTrue
        ] as [String: Any]
        var buffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault, Int(self.size.width), Int(self.size.height), kCVPixelFormatType_32ARGB, attrs as CFDictionary, &buffer)
        guard status == kCVReturnSuccess else {
            return nil
        }
        CVPixelBufferLockBaseAddress(buffer!, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(buffer!)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pixelData, width: Int(self.size.width), height: Int(self.size.height), bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(buffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
        context?.translateBy(x: 0, y: self.size.height)
        context?.scaleBy(x: 1.0, y: -1.0)
        UIGraphicsPushContext(context!)
        self.draw(in: CGRect(x: 0, y: 0, width: self.size.width, height: self.size.height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(buffer!, CVPixelBufferLockFlags(rawValue: 0))
        return buffer
    }

    func createCMSampleBuffer() -> CMSampleBuffer? {
        guard let pixelBuffer = cvPixelBuffer else { return nil }
        var newSampleBuffer: CMSampleBuffer?
        // CMSampleBufferCreateForImageBuffer expects a non-optional CMSampleTimingInfo.
        var timingInfo = CMSampleTimingInfo()
        var videoInfo: CMVideoFormatDescription?
        CMVideoFormatDescriptionCreateForImageBuffer(allocator: nil, imageBuffer: pixelBuffer, formatDescriptionOut: &videoInfo)
        CMSampleBufferCreateForImageBuffer(allocator: kCFAllocatorDefault,
                                           imageBuffer: pixelBuffer,
                                           dataReady: true,
                                           makeDataReadyCallback: nil,
                                           refcon: nil,
                                           formatDescription: videoInfo!,
                                           sampleTiming: &timingInfo,
                                           sampleBufferOut: &newSampleBuffer)
        return newSampleBuffer
    }
}
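A minimal usage sketch (here image stands in for a UIImage you already have; error handling is omitted):
// Convert a UIImage straight to a CMSampleBuffer:
if let sampleBuffer = image.createCMSampleBuffer() {
    // Hand the sample buffer to AVFoundation, Vision, etc.
}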

Convert Image to CVPixelBuffer for Machine Learning Swift

I am trying to get Apple's sample Core ML Models that were demoed at the 2017 WWDC to function correctly. I am using the GoogLeNet to try and classify images (see the Apple Machine Learning Page). The model takes a CVPixelBuffer as an input. I have an image called imageSample.jpg that I'm using for this demo. My code is below:
var sample = UIImage(named: "imageSample")?.cgImage
let bufferThree = getCVPixelBuffer(sample!)

let model = GoogLeNetPlaces()
guard let output = try? model.prediction(input: GoogLeNetPlacesInput.init(sceneImage: bufferThree!)) else {
    fatalError("Unexpected runtime error.")
}

print(output.sceneLabel)
I am always getting the unexpected runtime error in the output rather than an image classification. My code to convert the image is below:
func getCVPixelBuffer(_ image: CGImage) -> CVPixelBuffer? {
    let imageWidth = Int(image.width)
    let imageHeight = Int(image.height)
    let attributes: [NSObject: AnyObject] = [
        kCVPixelBufferCGImageCompatibilityKey: true as AnyObject,
        kCVPixelBufferCGBitmapContextCompatibilityKey: true as AnyObject
    ]
    var pxbuffer: CVPixelBuffer? = nil
    CVPixelBufferCreate(kCFAllocatorDefault,
                        imageWidth,
                        imageHeight,
                        kCVPixelFormatType_32ARGB,
                        attributes as CFDictionary?,
                        &pxbuffer)
    if let _pxbuffer = pxbuffer {
        let flags = CVPixelBufferLockFlags(rawValue: 0)
        CVPixelBufferLockBaseAddress(_pxbuffer, flags)
        let pxdata = CVPixelBufferGetBaseAddress(_pxbuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        let context = CGContext(data: pxdata,
                                width: imageWidth,
                                height: imageHeight,
                                bitsPerComponent: 8,
                                bytesPerRow: CVPixelBufferGetBytesPerRow(_pxbuffer),
                                space: rgbColorSpace,
                                bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)
        if let _context = context {
            _context.draw(image, in: CGRect(x: 0, y: 0, width: imageWidth, height: imageHeight))
        } else {
            CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
            return nil
        }
        CVPixelBufferUnlockBaseAddress(_pxbuffer, flags)
        return _pxbuffer
    }
    return nil
}
I got this code from a previous Stack Overflow post (last answer here). I recognize that the code may not be correct, but I have no idea how to do this myself. I believe this is the section that contains the error. The model calls for the following type of input: Image<RGB, 224, 224>
You don't need to do a bunch of image mangling yourself to use a Core ML model with an image — the new Vision framework can do that for you.
import Vision
import CoreML

let model = try VNCoreMLModel(for: MyCoreMLGeneratedModelClass().model)
let request = VNCoreMLRequest(model: model, completionHandler: myResultsMethod)
let handler = VNImageRequestHandler(url: myImageURL)
// perform(_:) throws, so it needs `try` (or try?/do-catch).
try handler.perform([request])

func myResultsMethod(request: VNRequest, error: Error?) {
    guard let results = request.results as? [VNClassificationObservation]
        else { fatalError("huh") }
    for classification in results {
        print(classification.identifier, // the scene label
              classification.confidence)
    }
}
The WWDC17 session on Vision should have a bit more info — it's tomorrow afternoon.
You can use pure Core ML, but you should resize the image to (224, 224) first:
DispatchQueue.global(qos: .userInitiated).async {
    // Resnet50 expects an image 224 x 224, so we should resize and crop the source image
    let inputImageSize: CGFloat = 224.0
    let minLen = min(image.size.width, image.size.height)
    let resizedImage = image.resize(to: CGSize(width: inputImageSize * image.size.width / minLen, height: inputImageSize * image.size.height / minLen))
    let cropedToSquareImage = resizedImage.cropToSquare()
    guard let pixelBuffer = cropedToSquareImage?.pixelBuffer() else {
        fatalError()
    }
    guard let classifierOutput = try? self.classifier.prediction(image: pixelBuffer) else {
        fatalError()
    }
    DispatchQueue.main.async {
        self.title = classifierOutput.classLabel
    }
}

// ...

extension UIImage {
    func resize(to newSize: CGSize) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(CGSize(width: newSize.width, height: newSize.height), true, 1.0)
        self.draw(in: CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height))
        let resizedImage = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return resizedImage
    }

    func cropToSquare() -> UIImage? {
        guard let cgImage = self.cgImage else {
            return nil
        }
        var imageHeight = self.size.height
        var imageWidth = self.size.width
        if imageHeight > imageWidth {
            imageHeight = imageWidth
        } else {
            imageWidth = imageHeight
        }
        let size = CGSize(width: imageWidth, height: imageHeight)
        let x = ((CGFloat(cgImage.width) - size.width) / 2).rounded()
        let y = ((CGFloat(cgImage.height) - size.height) / 2).rounded()
        let cropRect = CGRect(x: x, y: y, width: size.width, height: size.height)
        if let croppedCgImage = cgImage.cropping(to: cropRect) {
            return UIImage(cgImage: croppedCgImage, scale: 0, orientation: self.imageOrientation)
        }
        return nil
    }

    func pixelBuffer() -> CVPixelBuffer? {
        let width = self.size.width
        let height = self.size.height
        let attrs = [kCVPixelBufferCGImageCompatibilityKey: kCFBooleanTrue,
                     kCVPixelBufferCGBitmapContextCompatibilityKey: kCFBooleanTrue] as CFDictionary
        var pixelBuffer: CVPixelBuffer?
        let status = CVPixelBufferCreate(kCFAllocatorDefault,
                                         Int(width),
                                         Int(height),
                                         kCVPixelFormatType_32ARGB,
                                         attrs,
                                         &pixelBuffer)
        guard let resultPixelBuffer = pixelBuffer, status == kCVReturnSuccess else {
            return nil
        }
        CVPixelBufferLockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        let pixelData = CVPixelBufferGetBaseAddress(resultPixelBuffer)
        let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
        guard let context = CGContext(data: pixelData,
                                      width: Int(width),
                                      height: Int(height),
                                      bitsPerComponent: 8,
                                      bytesPerRow: CVPixelBufferGetBytesPerRow(resultPixelBuffer),
                                      space: rgbColorSpace,
                                      bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue) else {
            return nil
        }
        context.translateBy(x: 0, y: height)
        context.scaleBy(x: 1.0, y: -1.0)
        UIGraphicsPushContext(context)
        self.draw(in: CGRect(x: 0, y: 0, width: width, height: height))
        UIGraphicsPopContext()
        CVPixelBufferUnlockBaseAddress(resultPixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
        return resultPixelBuffer
    }
}
You can find the expected input image size in the mlmodel file.
A demo project that uses both pure CoreML and Vision variants you can find here: https://github.com/handsomecode/iOS11-Demos/tree/coreml_vision/CoreML/CoreMLDemo
If the input is a UIImage rather than a URL, and you want to use VNImageRequestHandler, you can use CIImage:
func updateClassifications(for image: UIImage) {
    let orientation = CGImagePropertyOrientation(image.imageOrientation)
    guard let ciImage = CIImage(image: image) else { return }
    let handler = VNImageRequestHandler(ciImage: ciImage, orientation: orientation)
}
From Classifying Images with Vision and Core ML
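To actually run the request you would then call perform on the handler, roughly as in Apple's sample (classificationRequest here is an assumed VNCoreMLRequest defined elsewhere, as in that sample):
do {
    // Run the Vision request against the CIImage.
    try handler.perform([classificationRequest])
} catch {
    print("Failed to perform classification: \(error)")
}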

Saving openGL ES drawing to a PNG image file in iOS Swift 3

I am trying to save a drawing generated on iOS using OpenGL ES to a PNG file using the CGDataProvider and CGImage functions. I found some code on the web in Objective-C and converted it to Swift 3. The code compiles but fails at runtime, throwing
fatal error: unexpectedly found nil while unwrapping an Optional value at this line:
let iref: CGImage = CGImage(pngDataProviderSource: provider!, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!
Code:
let x: Int = 0
let y: Int = 0
let dataLength: Int = Int(width) * Int(height) * 4
let pixels: UnsafeMutableRawPointer? = malloc(dataLength * MemoryLayout<GLubyte>.size)
glPixelStorei(GL_PACK_ALIGNMENT.ui, 4)
glReadPixels(GLint(x), GLint(y), GLsizei(width), GLsizei(height), GLenum(GL_RGBA), GL_UNSIGNED_BYTE.ui, pixels)
let pixelData: UnsafePointer = (UnsafeRawPointer(pixels)?.assumingMemoryBound(to: UInt8.self))!
let cfdata: CFData = CFDataCreate(kCFAllocatorDefault, pixelData, dataLength * MemoryLayout<GLubyte>.size)
let provider: CGDataProvider! = CGDataProvider(data: cfdata)
let iref: CGImage = CGImage(pngDataProviderSource: provider!, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!
UIGraphicsBeginImageContext(CGSize(width: CGFloat(width), height: CGFloat(height)))
let cgcontext: CGContext? = UIGraphicsGetCurrentContext()
cgcontext!.setBlendMode(CGBlendMode.copy)
cgcontext!.draw(iref, in: CGRect(x: CGFloat(0.0), y: CGFloat(0.0), width: CGFloat(width), height: CGFloat(height)))
let image: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
I can't find any help on the web for Swift 3. Could you please help fix this error?
As suggested by Matic Oblak, the error went away and the code worked properly when I replaced this line in the above code
let iref: CGImage = CGImage(pngDataProviderSource: provider!, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)!
with this
let iref: CGImage? = CGImage(width: Int(width), height: Int(height), bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: Int(width)*4, space: colorspace!, bitmapInfo: CGBitmapInfo.byteOrder32Big, provider: provider, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)
New working code:
let x: Int = 0
let y: Int = 0
let dataLength: Int = Int(width) * Int(height) * 4
let pixels: UnsafeMutableRawPointer? = malloc(dataLength * MemoryLayout<GLubyte>.size)
glPixelStorei(GL_PACK_ALIGNMENT.ui, 4)
glReadPixels(GLint(x), GLint(y), GLsizei(width), GLsizei(height), GLenum(GL_RGBA), GL_UNSIGNED_BYTE.ui, pixels)
let pixelData: UnsafePointer = (UnsafeRawPointer(pixels)?.assumingMemoryBound(to: UInt8.self))!
let cfdata: CFData = CFDataCreate(kCFAllocatorDefault, pixelData, dataLength * MemoryLayout<GLubyte>.size)
let provider: CGDataProvider! = CGDataProvider(data: cfdata)
let iref: CGImage? = CGImage(width: Int(width), height: Int(height), bitsPerComponent: 8, bitsPerPixel: 32, bytesPerRow: Int(width)*4, space: colorspace!, bitmapInfo: CGBitmapInfo.byteOrder32Big, provider: provider, decode: nil, shouldInterpolate: true, intent: CGColorRenderingIntent.defaultIntent)
UIGraphicsBeginImageContext(CGSize(width: CGFloat(width), height: CGFloat(height)))
let cgcontext: CGContext? = UIGraphicsGetCurrentContext()
cgcontext!.setBlendMode(CGBlendMode.copy)
cgcontext!.draw(iref, in: CGRect(x: CGFloat(0.0), y: CGFloat(0.0), width: CGFloat(width), height: CGFloat(height)))
let image: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
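The question's title asks about saving to a PNG file, and the snippet above stops at a UIImage, so here is a minimal sketch of the final write-to-disk step (an editorial addition; the file name is illustrative, and on the Swift 3 SDKs of the question the encoding call was UIImagePNGRepresentation(_:), while modern SDKs use UIImage.pngData()):
// Encode the UIImage as PNG and write it to a temporary file.
if let image = image, let pngData = image.pngData() {
    let url = FileManager.default.temporaryDirectory.appendingPathComponent("drawing.png")
    do {
        try pngData.write(to: url)
        print("Saved PNG to \(url.path)")
    } catch {
        print("Failed to save PNG: \(error)")
    }
}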

How do I improve the performance of converting UIImage to CVPixelBuffer?

I have a UIImage array with a lot of UIImage objects and use the methods mentioned in the link to export the image array to a video. Everything works, but the performance of converting the UIImage array to CVPixelBuffers is terrible:
private func newPixelBufferFrom(cgImage: CGImage) -> CVPixelBuffer? {
    let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                                  kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]
    var pxbuffer: CVPixelBuffer?
    let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
    let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
    let status = CVPixelBufferCreate(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, options as CFDictionary?, &pxbuffer)
    assert(status == kCVReturnSuccess && pxbuffer != nil, "newPixelBuffer failed")
    CVPixelBufferLockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pxdata = CVPixelBufferGetBaseAddress(pxbuffer!)
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: pxdata, width: frameWidth, height: frameHeight, bitsPerComponent: 8, bytesPerRow: CVPixelBufferGetBytesPerRow(pxbuffer!), space: rgbColorSpace, bitmapInfo: CGImageAlphaInfo.noneSkipFirst.rawValue)
    assert(context != nil, "context is nil")
    context!.concatenate(CGAffineTransform.identity)
    context!.draw(cgImage, in: CGRect(x: 0, y: 0, width: frameWidth, height: frameHeight))
    CVPixelBufferUnlockBaseAddress(pxbuffer!, CVPixelBufferLockFlags(rawValue: 0))
    return pxbuffer
}
Could you give me some ideas?
Thanks!
I solved my problem.
In my video-editing application (similar to iMovie), I need to convert one image into a video and make the image (now a video) movable.
The UIImage array is, in essence, derived from a single UIImage. Therefore, I avoid calling newPixelBufferFrom repeatedly and call it only once; the following code is faster:
var sampleBuffer: CVPixelBuffer?
var pxDataBuffer: CVPixelBuffer?

let options: [String: Any] = [kCVPixelBufferCGImageCompatibilityKey as String: true,
                              kCVPixelBufferCGBitmapContextCompatibilityKey as String: true]

let frameHeight = self.videoSettings[AVVideoHeightKey] as! Int
let frameWidth = self.videoSettings[AVVideoWidthKey] as! Int
let originHeight = frameWidth * img!.cgImage!.height / img!.cgImage!.width
let heightDifference = originHeight - frameHeight

let frameCounts = self.duration * Int(self.frameTime.timescale)
let spacingOfHeight = heightDifference / frameCounts

sampleBuffer = self.newPixelBufferFrom(cgImage: img!.cgImage!)
assert(sampleBuffer != nil)

var presentTime = CMTimeMake(value: 1, timescale: self.frameTime.timescale)
var stepRows = 0

for i in 0..<frameCounts {
    CVPixelBufferLockBaseAddress(sampleBuffer!, CVPixelBufferLockFlags(rawValue: 0))
    let pointer = CVPixelBufferGetBaseAddress(sampleBuffer!)
    var pxData = pointer?.assumingMemoryBound(to: UInt8.self)
    let bytes = CVPixelBufferGetBytesPerRow(sampleBuffer!) * stepRows
    pxData = pxData?.advanced(by: bytes)

    let status = CVPixelBufferCreateWithBytes(kCFAllocatorDefault, frameWidth, frameHeight, kCVPixelFormatType_32ARGB, pxData!, CVPixelBufferGetBytesPerRow(sampleBuffer!), nil, nil, options as CFDictionary?, &pxDataBuffer)
    assert(status == kCVReturnSuccess && pxDataBuffer != nil, "newPixelBuffer failed")

    CVPixelBufferUnlockBaseAddress(sampleBuffer!, CVPixelBufferLockFlags(rawValue: 0))

    while !self.writeInput.isReadyForMoreMediaData {
        usleep(100)
    }
    if self.writeInput.isReadyForMoreMediaData {
        if i == 0 {
            self.bufferAdapter.append(pxDataBuffer!, withPresentationTime: zeroTime)
        } else {
            self.bufferAdapter.append(pxDataBuffer!, withPresentationTime: presentTime)
        }
        presentTime = CMTimeAdd(presentTime, self.frameTime)
    }
    stepRows += spacingOfHeight
}
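As a side note (not from the original answer): another common way to cut per-frame allocation cost is to reuse buffers from the adaptor's own pool instead of calling CVPixelBufferCreate for every frame. A minimal sketch, assuming bufferAdapter is the AVAssetWriterInputPixelBufferAdaptor from the code above:
// Grab a recycled buffer from the adaptor's pool (available after startWriting()).
var pooledBuffer: CVPixelBuffer?
if let pool = self.bufferAdapter.pixelBufferPool {
    let status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, pool, &pooledBuffer)
    assert(status == kCVReturnSuccess, "could not create a pooled pixel buffer")
}
// Draw into `pooledBuffer` as in newPixelBufferFrom, then append it.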