vImageBuffer_InitWithCGImage Memory Leak in Swift 3

I am trying to calculate a histogram. Everything works fine, except the following method shows an immense memory leak when profiled in Instruments.
Every time the method is called, it uses 200-300 MB of memory and never releases it:
func histogramCalculation(_ imageRef: CGImage) -> (red: [UInt], green: [UInt], blue: [UInt]) {
    var inBuffer = vImage_Buffer()
    vImageBuffer_InitWithCGImage(
        &inBuffer,
        &format,
        nil,
        imageRef,
        UInt32(kvImageNoFlags))

    let alpha = [UInt](repeating: 0, count: 256)
    let red = [UInt](repeating: 0, count: 256)
    let green = [UInt](repeating: 0, count: 256)
    let blue = [UInt](repeating: 0, count: 256)

    let alphaPtr = UnsafeMutablePointer<vImagePixelCount>(mutating: alpha) as UnsafeMutablePointer<vImagePixelCount>?
    let redPtr = UnsafeMutablePointer<vImagePixelCount>(mutating: red) as UnsafeMutablePointer<vImagePixelCount>?
    let greenPtr = UnsafeMutablePointer<vImagePixelCount>(mutating: green) as UnsafeMutablePointer<vImagePixelCount>?
    let bluePtr = UnsafeMutablePointer<vImagePixelCount>(mutating: blue) as UnsafeMutablePointer<vImagePixelCount>?

    let rgba = [redPtr, greenPtr, bluePtr, alphaPtr]
    let histogram = UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>?>(mutating: rgba)
    let error: vImage_Error = vImageHistogramCalculation_ARGB8888(&inBuffer, histogram, UInt32(kvImageNoFlags))
    if (error == kvImageNoError) {
        return (red, green, blue)
    }
    return (red, green, blue)
}
What could be wrong here?

The docs for vImageBuffer_InitWithCGImage explain:
You are responsible for returning the memory referenced by buf->data to the system using free() when you are done with it.
So I would expect something along these lines to clean up the memory:
inBuffer.data.deallocate(bytes: inBuffer.rowBytes * Int(inBuffer.height),
                         alignedTo: MemoryLayout<vImage_Buffer>.alignment)
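Since the documentation explicitly mentions free(), a plain free(inBuffer.data) when you are done also works; a minimal sketch pairing the init with a defer:

    vImageBuffer_InitWithCGImage(&inBuffer, &format, nil, imageRef, UInt32(kvImageNoFlags))
    // Release the buffer memory that vImageBuffer_InitWithCGImage allocated
    defer { free(inBuffer.data) }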
As a side note, your use of UnsafeMutablePointer here is not safe. There's no promise, for instance, that alpha will still exist by the time you reference it. Swift is allowed to destroy alpha immediately after you create alphaPtr (because it's never referenced again). It is rare that you want to use UnsafeMutablePointer.init. Instead, you want to use the withUnsafe... methods to establish guaranteed lifetimes. For example (untested, but compiles):
var alpha = [vImagePixelCount](repeating: 0, count: 256)
var red = [vImagePixelCount](repeating: 0, count: 256)
var green = [vImagePixelCount](repeating: 0, count: 256)
var blue = [vImagePixelCount](repeating: 0, count: 256)

let error = alpha.withUnsafeMutableBufferPointer { alphaPtr -> vImage_Error in
    return red.withUnsafeMutableBufferPointer { redPtr in
        return green.withUnsafeMutableBufferPointer { greenPtr in
            return blue.withUnsafeMutableBufferPointer { bluePtr in
                var rgba = [redPtr.baseAddress, greenPtr.baseAddress, bluePtr.baseAddress, alphaPtr.baseAddress]
                return rgba.withUnsafeMutableBufferPointer { buffer in
                    return vImageHistogramCalculation_ARGB8888(&inBuffer, buffer.baseAddress!, UInt32(kvImageNoFlags))
                }
            }
        }
    }
}

Related

Rendering to CVPixelBuffer on iOS

I have a Flutter plugin where I need to do some basic 3D rendering on iOS.
I decided to go with the Metal API because OpenGL ES is deprecated on the platform.
Before implementing the plugin I implemented the rendering in a plain iOS application, where it works without problems.
But while rendering to a texture for the plugin, I get the whole area filled with black.
// preparation
Vertices = [Vertex(x:  1, y: -1, tx: 1, ty: 1),
            Vertex(x:  1, y:  1, tx: 1, ty: 0),
            Vertex(x: -1, y:  1, tx: 0, ty: 0),
            Vertex(x: -1, y: -1, tx: 0, ty: 1)]
Indices = [0, 1, 2, 2, 3, 0]

let d = [
    kCVPixelBufferOpenGLCompatibilityKey: true,
    kCVPixelBufferMetalCompatibilityKey: true
]
let cvret = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, d as CFDictionary, &pixelBuffer) // FIXME: which pixel format?
if cvret != kCVReturnSuccess {
    print("failed to create pixel buffer")
}

metalDevice = MTLCreateSystemDefaultDevice()!
let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: MTLPixelFormat.rgba8Unorm, width: width, height: height, mipmapped: false)
desc.usage = MTLTextureUsage.renderTarget.union(MTLTextureUsage.shaderRead)
targetTexture = metalDevice.makeTexture(descriptor: desc)
metalCommandQueue = metalDevice.makeCommandQueue()!
ciCtx = CIContext.init(mtlDevice: metalDevice)

let vertexBufferSize = Vertices.size()
vertexBuffer = metalDevice.makeBuffer(bytes: &Vertices, length: vertexBufferSize, options: .storageModeShared)
let indicesBufferSize = Indices.size()
indicesBuffer = metalDevice.makeBuffer(bytes: &Indices, length: indicesBufferSize, options: .storageModeShared)

let defaultLibrary = metalDevice.makeDefaultLibrary()!
let txProgram = defaultLibrary.makeFunction(name: "basic_fragment")
let vertexProgram = defaultLibrary.makeFunction(name: "basic_vertex")
let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.sampleCount = 1
pipelineStateDescriptor.vertexFunction = vertexProgram
pipelineStateDescriptor.fragmentFunction = txProgram
pipelineStateDescriptor.colorAttachments[0].pixelFormat = .rgba8Unorm
pipelineState = try! metalDevice.makeRenderPipelineState(descriptor: pipelineStateDescriptor)

// drawing
let renderPassDescriptor = MTLRenderPassDescriptor()
renderPassDescriptor.colorAttachments[0].texture = targetTexture
renderPassDescriptor.colorAttachments[0].loadAction = .clear
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.85, green: 0.85, blue: 0.85, alpha: 0.5)
renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreAction.store
renderPassDescriptor.renderTargetWidth = width
renderPassDescriptor.renderTargetHeight = height

guard let commandBuffer = metalCommandQueue.makeCommandBuffer() else { return }
guard let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else { return }
renderEncoder.label = "Offscreen render pass"
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0)
renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.drawIndexedPrimitives(type: .triangle, indexCount: Indices.count, indexType: .uint32, indexBuffer: indicesBuffer, indexBufferOffset: 0)
renderEncoder.endEncoding()
commandBuffer.commit()

// copy to pixel buffer
guard let img = CIImage(mtlTexture: targetTexture) else { return }
ciCtx.render(img, to: pixelBuffer!)
I'm pretty sure that creating a separate MTLTexture and then blitting it into a CVPixelBuffer is not the way to go. You are basically rendering into an MTLTexture and then using that result only to write it out via a CIImage.
Instead, you can make them share an IOSurface underneath by creating a CVPixelBuffer with CVPixelBufferCreateWithIOSurface and a corresponding MTLTexture with makeTexture(descriptor:iosurface:plane:).
Or you can create an MTLBuffer that aliases the same memory as the CVPixelBuffer, then create an MTLTexture from that MTLBuffer. If you go with this approach, I would suggest also using MTLBlitCommandEncoder's optimizeContentsForGPUAccess and optimizeContentsForCPUAccess methods: call optimizeContentsForGPUAccess first, use the texture on the GPU, then twiddle the pixels back into a CPU-readable format with optimizeContentsForCPUAccess. That way you don't lose performance when rendering to the texture.
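For the first (IOSurface-sharing) suggestion, here is a minimal sketch, untested: it assumes an IOSurface-backed pixel buffer created via CVPixelBufferCreate with the IOSurface properties key (a close relative of the CVPixelBufferCreateWithIOSurface call mentioned above), and .bgra8Unorm to match kCVPixelFormatType_32BGRA; the function name makeSharedTexture is hypothetical:

    import CoreVideo
    import Metal

    func makeSharedTexture(device: MTLDevice, width: Int, height: Int) -> (CVPixelBuffer, MTLTexture)? {
        // Requesting IOSurface backing lets Metal and CoreVideo share the same memory.
        let attrs: [CFString: Any] = [
            kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary,
            kCVPixelBufferMetalCompatibilityKey: true
        ]

        var pixelBuffer: CVPixelBuffer?
        guard CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                                  kCVPixelFormatType_32BGRA, attrs as CFDictionary,
                                  &pixelBuffer) == kCVReturnSuccess,
              let buffer = pixelBuffer,
              let surface = CVPixelBufferGetIOSurface(buffer)?.takeUnretainedValue()
        else { return nil }

        let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm, // matches 32BGRA
                                                            width: width, height: height,
                                                            mipmapped: false)
        desc.usage = [.renderTarget, .shaderRead]

        // The texture aliases the pixel buffer's storage: rendering into it writes
        // directly into the CVPixelBuffer, so the final CIContext copy step disappears.
        guard let texture = device.makeTexture(descriptor: desc, iosurface: surface, plane: 0)
        else { return nil }

        return (buffer, texture)
    }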

Convert colours of every pixel in video preview - Swift

I have the following code which displays a camera preview, retrieves a single pixel's colour from the UIImage and converts this value to a 'filtered' colour.
func captureOutput(_ output: AVCaptureOutput, didOutput sampleBuffer: CMSampleBuffer, from connection: AVCaptureConnection) {
    connection.videoOrientation = orientation
    let videoOutput = AVCaptureVideoDataOutput()
    videoOutput.setSampleBufferDelegate(self, queue: DispatchQueue.main)

    let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)
    let cameraImage = CIImage(cvImageBuffer: pixelBuffer!)
    let typeOfColourBlindness = ColourBlindType(rawValue: "deuteranomaly")

    /* Gets colour from a single pixel - currently 0,0 - and converts it into the 'colour blind' version */
    let captureImage = convert(cmage: cameraImage)
    let colour = captureImage.getPixelColour(pos: CGPoint(x: 0, y: 0))

    var redval: CGFloat = 0
    var greenval: CGFloat = 0
    var blueval: CGFloat = 0
    var alphaval: CGFloat = 0
    _ = colour.getRed(&redval, green: &greenval, blue: &blueval, alpha: &alphaval)
    print("Colours are r: \(redval) g: \(greenval) b: \(blueval) a: \(alphaval)")

    let filteredColour = CBColourBlindTypes.getModifiedColour(.deuteranomaly, red: Float(redval), green: Float(greenval), blue: Float(blueval))
    print(filteredColour)
    /* #################################################################################### */

    DispatchQueue.main.async {
        // placeholder for now
        self.filteredImage.image = self.applyFilter(cameraImage: cameraImage, colourBlindness: typeOfColourBlindness!)
    }
}
Here is where the x: 0, y: 0 pixel value is converted:
import Foundation

enum ColourBlindType: String {
    case deuteranomaly = "deuteranomaly"
    case protanopia = "protanopia"
    case deuteranopia = "deuteranopia"
    case protanomaly = "protanomaly"
}

class CBColourBlindTypes: NSObject {
    class func getModifiedColour(_ type: ColourBlindType, red: Float, green: Float, blue: Float) -> Array<Float> {
        switch type {
        case .deuteranomaly:
            return [(red * 0.80) + (green * 0.20) + (blue * 0),
                    (red * 0.25833) + (green * 0.74167) + (blue * 0),
                    (red * 0) + (green * 0.14167) + (blue * 0.85833)]
        case .protanopia:
            return [(red * 0.56667) + (green * 0.43333) + (blue * 0),
                    (red * 0.55833) + (green * 0.44167) + (blue * 0),
                    (red * 0) + (green * 0.24167) + (blue * 0.75833)]
        case .deuteranopia:
            return [(red * 0.625) + (green * 0.375) + (blue * 0),
                    (red * 0.7) + (green * 0.3) + (blue * 0),
                    (red * 0) + (green * 0.3) + (blue * 0.7)]
        case .protanomaly:
            return [(red * 0.81667) + (green * 0.18333) + (blue * 0.0),
                    (red * 0.33333) + (green * 0.66667) + (blue * 0.0),
                    (red * 0.0) + (green * 0.125) + (blue * 0.875)]
        }
    }
}
The placeholder for now comment refers to the following function:
func applyFilter(cameraImage: CIImage, colourBlindness: ColourBlindType) -> UIImage {
    // do stuff with pixels to render new image
    /* Placeholder code for shifting the hue */
    // Create a place to render the filtered image
    let context = CIContext(options: nil)
    // Create filter angle
    let filterAngle = 207 * Double.pi / 180
    // Create a random color to pass to a filter
    let randomColor = [kCIInputAngleKey: filterAngle]
    // Apply a filter to the image
    let filteredImage = cameraImage.applyingFilter("CIHueAdjust", parameters: randomColor)
    // Render the filtered image
    let renderedImage = context.createCGImage(filteredImage, from: filteredImage.extent)
    // Return a UIImage
    return UIImage(cgImage: renderedImage!)
}
And here is my extension for retrieving a pixel colour:
extension UIImage {
    func getPixelColour(pos: CGPoint) -> UIColor {
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
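One caveat about this extension (my observation, not from the original post): the index math assumes every row is exactly width * 4 bytes, but CGImage rows can be padded. A safer index, as a sketch:

    let bytesPerRow = self.cgImage!.bytesPerRow
    let pixelInfo = bytesPerRow * Int(pos.y) + Int(pos.x) * 4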
How can I create a filter for the following colour range, for example?
I want to take in the camera input, replace the colours so they fall within the Deuteranopia range, and display the result on the screen, in real time, using Swift.
I am using a UIImageView for the image display.
To learn how to perform filtering of video capture and real-time display of the filtered image, you may want to study the AVCamPhotoFilter sample code from Apple, and other sources such as this objc.io tutorial.
In short, using a UIImage for real-time rendering is not a good idea; it's too slow. Use OpenGL (e.g. GLKView) or Metal (e.g. MTKView). The AVCamPhotoFilter code uses MTKView and renders to intermediate buffers, but you can also render a CIImage directly using the appropriate CIContext methods, e.g. for Metal: https://developer.apple.com/documentation/coreimage/cicontext/1437835-render
In addition, regarding your color filter, you may want to look at the CIColorCube Core Image filter, as shown here.
let filterName = "CIColorCrossPolynomial"
// deuteranomaly
let param = ["inputRedCoefficients": CIVector(values: [0.8, 0.2, 0, 0, 0, 0, 0, 0, 0, 0], count: 10),
             "inputGreenCoefficients": CIVector(values: [0.25833, 0.74167, 0, 0, 0, 0, 0, 0, 0, 0], count: 10),
             "inputBlueCoefficients": CIVector(values: [0, 0.14167, 0.85833, 0, 0, 0, 0, 0, 0, 0], count: 10)]
let filter = CIFilter(name: filterName, parameters: param)
let startImage = CIImage(image: image!)
filter?.setValue(startImage, forKey: kCIInputImageKey)
let newImage = UIImage(ciImage: ((filter?.outputImage)!))
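To tie this back to the capture callback, here is a hedged sketch of applying the same cross-polynomial filter to a whole camera frame; wiring it up per frame this way is an assumption on my part, not code from the answer:

    func filteredFrame(from cameraImage: CIImage) -> CIImage {
        // Same deuteranomaly coefficients as above, applied to the full frame
        return cameraImage.applyingFilter("CIColorCrossPolynomial", parameters: [
            "inputRedCoefficients": CIVector(values: [0.8, 0.2, 0, 0, 0, 0, 0, 0, 0, 0], count: 10),
            "inputGreenCoefficients": CIVector(values: [0.25833, 0.74167, 0, 0, 0, 0, 0, 0, 0, 0], count: 10),
            "inputBlueCoefficients": CIVector(values: [0, 0.14167, 0.85833, 0, 0, 0, 0, 0, 0, 0], count: 10)
        ])
    }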

Thread 1: EXC_BAD_ACCESS (code=EXC_I386_GPFLT)

In my Swift project, I have two classes that work together to hold the pixel values of an image so that the red, green, blue and alpha values can be modified. An UnsafeMutableBufferPointer holds lots of bytes that make up the Pixel objects.
I can interact with the class that holds the UnsafeMutableBufferPointer<Pixel> property; I can access all of the properties on that object and that all works fine. The only problem I'm having with the UnsafeMutableBufferPointer<Pixel> is looping through it with my Pixel object: it keeps crashing with a Thread 1: EXC_BAD_ACCESS (code=EXC_I386_GPFLT) exception.
init!(image: UIImage)
{
    _width = Int(image.size.width)
    _height = Int(image.size.height)
    guard let cgImage = image.cgImage else { return nil }
    _width = Int(image.size.width)
    _height = Int(image.size.height)
    let bitsPerComponent = 8
    let bytesPerPixel = 4
    let bytesPerRow = _width * bytesPerPixel
    let imageData = UnsafeMutablePointer<Pixel>.allocate(capacity: _width * _height)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue
    bitmapInfo |= CGImageAlphaInfo.premultipliedLast.rawValue & CGBitmapInfo.alphaInfoMask.rawValue
    guard let imageContext = CGContext(data: imageData, width: _width, height: _height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else { return nil }
    imageContext.draw(cgImage, in: CGRect(origin: CGPoint.zero, size: image.size))
    _pixels = UnsafeMutableBufferPointer<Pixel>(start: imageData, count: _width * _height)
}
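A side observation not raised in the original post: the buffer allocated with allocate(capacity:) above is never freed. Assuming RGBA is a class, a deinit along these lines would plug that leak:

    deinit {
        _pixels.baseAddress?.deallocate()   // frees the imageData allocation
    }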
This function is the part that is crashing the program. The exact part that is crashing is the for loop that loops through rgba.pixels; rgba.pixels is the UnsafeMutableBufferPointer.
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any])
{
    let image: UIImage = info[UIImagePickerControllerEditedImage] as! UIImage
    let rgba = RGBA(image: image)!
    for pixel in rgba.pixels
    {
        print(pixel.red)
    }
    self.dismiss(animated: true, completion: nil)
}
This is the constructor where I create the UnsafeMutableBufferPointer<Pixel>. Is there an easier way to do this while still being able to get the RGBA values and change them easily?
The Pixel class is a UInt32 value that is split into four UInt8 values.
Am I using the wrong construct to hold those values, and if so, is there a safer or easier construct to use? Or am I doing something wrong when accessing the Pixel values?
This is how I got the pixels of an image -
// Grab and set up variables for the original image
let inputCGImage = inputImage.CGImage
let inputWidth: Int = CGImageGetWidth(inputCGImage)
let inputHeight: Int = CGImageGetHeight(inputCGImage)

// Get the colorspace that will be used for image processing (RGB/HSV)
let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()!

// Hardcode memory variables
let bytesPerPixel = 4    // 32 bits = 4 bytes
let bitsPerComponent = 8 // 32 bits div. by 4 components (RGBA) = 8 bits per component
let inputBytesPerRow = bytesPerPixel * inputWidth

// Get a pointer pointing to an allocated array to hold all the pixel data of the image
let inputPixels = UnsafeMutablePointer<UInt32>(calloc(inputHeight * inputWidth, sizeof(UInt32)))

// Create a context to draw the original image in (aka put the pixel data into the above array)
let context: CGContextRef = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight, bitsPerComponent, inputBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedLast.rawValue | CGBitmapInfo.ByteOrder32Big.rawValue)!
CGContextDrawImage(context, CGRect(x: 0, y: 0, width: inputWidth, height: inputHeight), inputCGImage)
Keep in mind this is not Swift 3 syntax, in case that's what you're using, but that's the basic algorithm. Now, to grab the individual color values of each pixel, you will have to implement these functions -
func Mask8(x: UInt32) -> UInt32
{
    return x & 0xFF
}

func R(x: UInt32) -> UInt32
{
    return Mask8(x)
}

func G(x: UInt32) -> UInt32
{
    return Mask8(x >> 8)
}

func B(x: UInt32) -> UInt32
{
    return Mask8(x >> 16)
}

func A(x: UInt32) -> UInt32
{
    return Mask8(x >> 24)
}
To create a completely new color after processing the RGBA values, you use this function -
func RGBAMake(r: UInt32, g: UInt32, b: UInt32, a: UInt32) -> UInt32
{
    return (Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24)
}
To iterate through the pixels array, you do it like so -
var currentPixel = inputPixels
for _ in 0..<height
{
    for i in 0..<width
    {
        let color: UInt32 = currentPixel.memory
        if i < width - 1
        {
            print(String(format: "%3d", R(color)), terminator: " ")
        }
        else
        {
            print(String(format: "%3d", R(color)))
        }
        currentPixel += 1
    }
}
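Since the answer flags the code above as pre-Swift 3, here is a hedged Swift 3+ translation of the same loop (untested; .memory became .pointee, and the masking is inlined instead of using the helper functions):

    var currentPixel = inputPixels   // UnsafeMutablePointer<UInt32> from the bitmap context
    for _ in 0..<inputHeight {
        for i in 0..<inputWidth {
            let color: UInt32 = currentPixel.pointee
            let red = color & 0xFF   // same as Mask8/R above
            print(String(format: "%3d", red), terminator: i < inputWidth - 1 ? " " : "\n")
            currentPixel += 1
        }
    }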

Swift - Compare colors at CGPoint

I have 2 pictures which I want to compare; if a pixel's color is the same in both, I want to save it.
I detect the color of a pixel with this UIImage extension function:
func getPixelColor(pos: CGPoint) -> ??? {
    let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
    let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
    let b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
    let a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
    return ???
}
For example, I run the scanner on picture 1 and save the results in an array or dictionary; after that I run the scanner on picture 2, and once I have the information from both pictures, what function do I compare it with?
I want to see at which CGPoints the pixel colors of the 2 images are identical.
UPDATE:
I updated getPixelColor to return "\(pos)\(r)\(g)\(b)\(a)", and after that I created this function which leaves only the duplicates (BEFORE USING THIS FUNCTION YOU HAVE TO .sort() THE ARRAY!):
extension Array where Element : Equatable {
    var duplicates: [Element] {
        var arr: [Element] = []
        var start = 0
        var start2 = 1
        for _ in 0...self.count {
            if (start2 < self.count) {
                if (self[start] == self[start2]) {
                    if (arr.contains(self[start]) == false) {
                        arr.append(self[start])
                    }
                }
                start += 1
                start2 += 1
            }
        }
        return arr
    }
}
This returns me something like "(609.0, 47.0)1.01.01.01.0", so I know the color at this point. I do x-536 to fit the iPhone 5 screen, but when I make an attempt to draw it again it draws something wrong... maybe I can't do it properly. Help?
Have the UIImage extension return a UIColor. Use this method to compare each pixel of the two images. If both pixels match, add the color to an array of arrays.
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
func findMatchingPixels(aImage: UIImage, _ bImage: UIImage) -> [[UIColor?]] {
    guard aImage.size == bImage.size else { fatalError("images must be the same size") }
    var matchingColors: [[UIColor?]] = []
    for y in 0..<Int(aImage.size.height) {
        var currentRow = [UIColor?]()
        for x in 0..<Int(aImage.size.width) {
            let aColor = aImage.getPixelColor(CGPoint(x: x, y: y))
            let colorsMatch = bImage.getPixelColor(CGPoint(x: x, y: y)) == aColor
            currentRow.append(colorsMatch ? aColor : nil)
        }
        matchingColors.append(currentRow)
    }
    return matchingColors
}
Used like this:
let matchingPixels = findMatchingPixels(UIImage(named: "imageA.png")!, UIImage(named: "imageB.png")!)
if let colorForOrigin = matchingPixels[0][0] {
    print("the images have the same color, it is: \(colorForOrigin)")
} else {
    print("the images do not have the same color at (0,0)")
}
For simplicity I made findMatchingPixels() require the images to be the same size, but it wouldn't take much to allow different-sized images.
UPDATE
If you want ONLY the pixels that match, I'd return a tuple like this:
func findMatchingPixels(aImage: UIImage, _ bImage: UIImage) -> [(CGPoint, UIColor)] {
    guard aImage.size == bImage.size else { fatalError("images must be the same size") }
    var matchingColors = [(CGPoint, UIColor)]()
    for y in 0..<Int(aImage.size.height) {
        for x in 0..<Int(aImage.size.width) {
            let aColor = aImage.getPixelColor(CGPoint(x: x, y: y))
            guard bImage.getPixelColor(CGPoint(x: x, y: y)) == aColor else { continue }
            matchingColors.append((CGPoint(x: x, y: y), aColor))
        }
    }
    return matchingColors
}
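A quick usage sketch for the tuple variant (the image names are hypothetical):

    let matches = findMatchingPixels(UIImage(named: "imageA.png")!, UIImage(named: "imageB.png")!)
    for (point, color) in matches.prefix(10) {
        print("identical pixel at \(point): \(color)")
    }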
Why not try a different approach?
The Core Image filter CIDifferenceBlendMode will return an all-black image if passed two identical images, and an image with areas of non-black where the two images differ. Pass that into a CIAreaMaximum, which will return a 1x1 image containing the maximum pixel value: if the maximum is 0, you know you have two identical images; if the maximum is greater than zero, the two images are different.
Given two CIImage instances, imageA and imageB, here's the code:
let ciContext = CIContext()

let difference = imageA
    .imageByApplyingFilter("CIDifferenceBlendMode",
                           withInputParameters: [kCIInputBackgroundImageKey: imageB])
    .imageByApplyingFilter("CIAreaMaximum",
                           withInputParameters: [kCIInputExtentKey: CIVector(CGRect: imageA.extent)])

let totalBytes = 4
let bitmap = calloc(totalBytes, sizeof(UInt8))
ciContext.render(difference,
                 toBitmap: bitmap,
                 rowBytes: totalBytes,
                 bounds: difference.extent,
                 format: kCIFormatRGBA8,
                 colorSpace: nil)

let rgba = UnsafeBufferPointer<UInt8>(
    start: UnsafePointer<UInt8>(bitmap),
    count: totalBytes)
let red = rgba[0]
let green = rgba[1]
let blue = rgba[2]
If red, green or blue are not zero, you know the images are different!
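That answer is written in Swift 2 syntax; a hedged Swift 4+ translation of the same check (untested) might look like:

    let ciContext = CIContext()
    let difference = imageA
        .applyingFilter("CIDifferenceBlendMode",
                        parameters: [kCIInputBackgroundImageKey: imageB])
        .applyingFilter("CIAreaMaximum",
                        parameters: [kCIInputExtentKey: CIVector(cgRect: imageA.extent)])

    var maxPixel = [UInt8](repeating: 0, count: 4)
    ciContext.render(difference,
                     toBitmap: &maxPixel,
                     rowBytes: 4,
                     bounds: difference.extent,
                     format: .RGBA8,
                     colorSpace: nil)

    // Any non-zero channel means the images differ somewhere.
    let imagesAreIdentical = maxPixel[0] == 0 && maxPixel[1] == 0 && maxPixel[2] == 0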

Compute the histogram of an image using vImageHistogramCalculation in Swift

I'm trying to compute the histogram of an image using Accelerate vImageHistogramCalculation_ARGBFFFF function, but I'm getting a vImage_Error of type kvImageNullPointerArgument (error code is -21772).
This is the exact same question, but I'm working in Swift: Compute the histogram of an image using vImageHistogramCalculation
// Get CGImage from UIImage
var image: UIImage = UIImage(named: "happiness1")!
var img: CGImageRef = image.CGImage

// Create vImage_Buffer with data from CGImageRef
var inProvider: CGDataProviderRef = CGImageGetDataProvider(img)
var inBitmapData: CFDataRef = CGDataProviderCopyData(inProvider)

// The next three lines set up the inBuffer object
var height: vImagePixelCount = CGImageGetHeight(img)
var width: vImagePixelCount = CGImageGetWidth(img)
var rowBytes: UInt = CGImageGetBytesPerRow(img)
var data: UnsafePointer<Void> = UnsafePointer<Void>(CFDataGetBytePtr(inBitmapData))

// Setup inBuffer
var inBuffer = vImage_Buffer(data: &data, height: height, width: width, rowBytes: rowBytes)

var histogram_entries: UInt32 = 4
var minVal: Pixel_F = 0
var maxVal: Pixel_F = 255
//let flags: vImage_Flags = kvImageNoFlags = 0
var histogram = UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>>() // a null pointer, hence kvImageNullPointerArgument
var error: vImage_Error = vImageHistogramCalculation_ARGBFFFF(&inBuffer, histogram, histogram_entries, minVal, maxVal, 0)
println(error)
The problem is in the histogram variable; I need to recreate something like this:
// create an array of four histograms with eight entries each.
vImagePixelCount histogram[4][8] = {{0}};
// vImageHistogramCalculation requires an array of pointers to the histograms.
vImagePixelCount *histogramPointers[4] = { &histogram[0][0], &histogram[1][0], &histogram[2][0], &histogram[3][0] };
vImage_Error error = vImageHistogramCalculation_ARGBFFFF(&inBuffer, histogramPointers, 8, 0, 255, kvImageNoFlags);
// You can now access bin j of the histogram for channel i as histogram[i][j].
// The storage for the histogram will be cleaned up when execution leaves the
// current lexical block.
Suggestion?
I've implemented vImageHistogramCalculation_ARGB8888 as an extension to UIImage in Swift with the following:
func SIHistogramCalculation() -> (alpha: [UInt], red: [UInt], green: [UInt], blue: [UInt]) {
    let imageRef = self.CGImage
    let inProvider = CGImageGetDataProvider(imageRef)
    let inBitmapData = CGDataProviderCopyData(inProvider)
    var inBuffer = vImage_Buffer(data: UnsafeMutablePointer(CFDataGetBytePtr(inBitmapData)), height: UInt(CGImageGetHeight(imageRef)), width: UInt(CGImageGetWidth(imageRef)), rowBytes: CGImageGetBytesPerRow(imageRef))

    var alpha = [UInt](count: 256, repeatedValue: 0)
    var red = [UInt](count: 256, repeatedValue: 0)
    var green = [UInt](count: 256, repeatedValue: 0)
    var blue = [UInt](count: 256, repeatedValue: 0)

    var alphaPtr = UnsafeMutablePointer<vImagePixelCount>(alpha)
    var redPtr = UnsafeMutablePointer<vImagePixelCount>(red)
    var greenPtr = UnsafeMutablePointer<vImagePixelCount>(green)
    var bluePtr = UnsafeMutablePointer<vImagePixelCount>(blue)
    var rgba = [redPtr, greenPtr, bluePtr, alphaPtr]

    var histogram = UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>>(rgba)
    var error = vImageHistogramCalculation_ARGB8888(&inBuffer, histogram, UInt32(kvImageNoFlags))
    return (alpha, red, green, blue)
}
(Taken from https://github.com/FlexMonkey/ShinpuruImage)
For Swift 5, you need to explicitly let the compiler know that your pointers are optional. Change your UnsafeMutablePointer declarations to the following:
Swift 5 version:
let redPtr = red.withUnsafeMutableBufferPointer { $0.baseAddress }
let greenPtr = green.withUnsafeMutableBufferPointer { $0.baseAddress }
let bluePtr = blue.withUnsafeMutableBufferPointer { $0.baseAddress }
let alphaPtr = alphaChannel.withUnsafeMutableBufferPointer { $0.baseAddress }
let histogram = UnsafeMutablePointer<UnsafeMutablePointer<vImagePixelCount>?>.allocate(capacity: 4)
histogram[0] = redPtr
histogram[1] = greenPtr
histogram[2] = bluePtr
histogram[3] = alphaPtr
let error:vImage_Error = vImageHistogramCalculation_ARGB8888(&inBuffer, histogram, UInt32(kvImageNoFlags))
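Two hedged caveats about the snippet above (my observations, not part of the answer): the pointers returned from withUnsafeMutableBufferPointer escape their closures, which has exactly the lifetime problem described in the first answer on this page, and the histogram pointer allocated with allocate(capacity: 4) is never freed. At minimum, pair the allocation with a deallocation:

    // After vImageHistogramCalculation_ARGB8888 returns and the results are copied out:
    histogram.deallocate()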
Swift 4 version:
let redPtr: UnsafeMutablePointer<vImagePixelCount>? = UnsafeMutablePointer(mutating: red)
let greenPtr: UnsafeMutablePointer<vImagePixelCount>? = UnsafeMutablePointer(mutating: green)
let bluePtr: UnsafeMutablePointer<vImagePixelCount>? = UnsafeMutablePointer(mutating: blue)
let alphaPtr: UnsafeMutablePointer<vImagePixelCount>? = UnsafeMutablePointer(mutating: alpha)
Today I wrote code to analyze a photo's RGB histogram. It's working now.
func getHistogram(_ image: UIImage) -> (alpha: [UInt], red: [UInt], green: [UInt], blue: [UInt])? {
    guard
        let cgImage = image.cgImage,
        var imageBuffer = try? vImage_Buffer(cgImage: cgImage)
    else {
        return nil
    }
    defer {
        imageBuffer.free()
    }

    var redArray: [vImagePixelCount] = Array(repeating: 0, count: 256)
    var greenArray: [vImagePixelCount] = Array(repeating: 0, count: 256)
    var blueArray: [vImagePixelCount] = Array(repeating: 0, count: 256)
    var alphaArray: [vImagePixelCount] = Array(repeating: 0, count: 256)

    var error: vImage_Error = kvImageNoError
    redArray.withUnsafeMutableBufferPointer { rPointer in
        greenArray.withUnsafeMutableBufferPointer { gPointer in
            blueArray.withUnsafeMutableBufferPointer { bPointer in
                alphaArray.withUnsafeMutableBufferPointer { aPointer in
                    var histogram = [rPointer.baseAddress, gPointer.baseAddress, bPointer.baseAddress, aPointer.baseAddress]
                    histogram.withUnsafeMutableBufferPointer { hPointer in
                        if let hBaseAddress = hPointer.baseAddress {
                            error = vImageHistogramCalculation_ARGB8888(&imageBuffer, hBaseAddress, vImage_Flags(kvImageNoFlags))
                        }
                    }
                }
            }
        }
    }

    guard error == kvImageNoError else {
        printVImageError(error: error) // custom logging helper
        return nil
    }
    return (alphaArray, redArray, greenArray, blueArray)
}
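A usage sketch (photo is a hypothetical non-nil UIImage):

    if let histogram = getHistogram(photo) {
        print("red bins:", histogram.red)
        print("green bins:", histogram.green)
        print("blue bins:", histogram.blue)
    }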
