Convert UIImage to UInt8 Array in Swift

Hi all!
I've been doing a lot of research into this and I've integrated several different solutions into my project, but none of them seem to work. My current solution has been borrowed from this thread.
When I run my code, however, two things happen:
The pixel array remains initialized but unpopulated (full of 0s)
I get two errors:
CGBitmapContextCreate: unsupported parameter combination: set CGBITMAP_CONTEXT_LOG_ERRORS environmental variable to see the details
and
CGContextDrawImage: invalid context 0x0. If you want to see the backtrace, please set
Any ideas? Here is the initializer I'm currently calling in my Image class:
init?(fromImage image: UIImage!) {
    let imageRef = image!.CGImage
    self.width = CGImageGetWidth(imageRef)
    self.height = CGImageGetHeight(imageRef)
    let colorspace = CGColorSpaceCreateDeviceRGB()
    let bytesPerRow = 4 * width
    let bitsPerComponent: UInt = 8
    let pixels = UnsafeMutablePointer<UInt8>(malloc(width * height * 4))
    var context = CGBitmapContextCreate(pixels, width, height, Int(bitsPerComponent), bytesPerRow, colorspace, 0)
    CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(width), CGFloat(height)), imageRef)
}
Any pointers would help a lot, as I'm new to understanding how all of this CGBitmap stuff works.
Thanks a ton!

You should not pass 0 as the bitmapInfo parameter to CGBitmapContextCreate. For RGBA you should pass CGImageAlphaInfo.PremultipliedLast.rawValue.
Supported combinations of bitsPerComponent, bytesPerRow, colorspace and bitmapInfo can be found here:
https://developer.apple.com/library/mac/documentation/GraphicsImaging/Conceptual/drawingwithquartz2d/dq_context/dq_context.html#//apple_ref/doc/uid/TP30001066-CH203-BCIBHHBB
Note that 32 bits per pixel (bpp) is 4 bytes per pixel; use that to calculate bytesPerRow.
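Applied to the code in the question, a minimal corrected sketch (same Swift 2-era API; the only substantive change is passing a real alpha info instead of 0):

init?(fromImage image: UIImage!) {
    guard let imageRef = image?.CGImage else { return nil }
    self.width = CGImageGetWidth(imageRef)
    self.height = CGImageGetHeight(imageRef)
    let colorspace = CGColorSpaceCreateDeviceRGB()
    let bytesPerRow = 4 * width
    let pixels = UnsafeMutablePointer<UInt8>(malloc(width * height * 4))
    // 0 is not a supported bitmapInfo; RGBA here wants premultiplied-last alpha
    let bitmapInfo = CGImageAlphaInfo.PremultipliedLast.rawValue
    let context = CGBitmapContextCreate(pixels, width, height, 8, bytesPerRow, colorspace, bitmapInfo)
    CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(width), CGFloat(height)), imageRef)
}

With that change the context should no longer be nil, and the pixels buffer gets populated by the draw call.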

You need to convert the image to NSData and then convert the NSData to a UInt8 array.
let data: NSData = UIImagePNGRepresentation(image)!
// or: let data: NSData = UIImageJPEGRepresentation(image, 1.0)!
let count = data.length / sizeof(UInt8)
// create an array of UInt8
var array = [UInt8](count: count, repeatedValue: 0)
// copy bytes into array
data.getBytes(&array, length: count * sizeof(UInt8))
Note that this gives you the PNG- or JPEG-encoded file bytes, not raw decoded pixel values; for per-pixel data you still need the bitmap-context approach above.

Related

How to interpret the pixel array derived from CMSampleBuffer in Swift

Maybe this is a very stupid question.
I am using AVFoundation in my app and I am able to get the frames (32BGRA format).
The width of the frame is 1504, the height is 1128, and the bytes-per-row value is 6016.
When I create a UInt8 pixel array from this sample buffer, the length (array.count) of the array is 1696512, which happens to be equal to width * height.
What I am not getting is why the array length is width * height. Should it not be width * height * 4?
What am I missing here?
Edit - 1: Code
func BufferToArray(sampleBuffer: CMSampleBuffer) -> ([UInt8], Int, Int, Int) {
    var rgbBufferArray = [UInt8]()
    // Get the pixel buffer from the CMSampleBuffer
    let pixelBuffer: CVPixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer)!
    // Lock the base address
    CVPixelBufferLockBaseAddress(pixelBuffer, CVPixelBufferLockFlags.readOnly)
    let width = CVPixelBufferGetWidth(pixelBuffer)
    let height = CVPixelBufferGetHeight(pixelBuffer)
    // Get the pixel count
    let pixelCount = width * height
    // Get the base address
    let baseAddress = CVPixelBufferGetBaseAddress(pixelBuffer)
    // Get bytes per row of the image
    let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer)
    // Cast the base address to UInt8. This is like an array now
    let frameBuffer = baseAddress?.assumingMemoryBound(to: UInt8.self)
    rgbBufferArray = Array(UnsafeMutableBufferPointer(start: frameBuffer, count: pixelCount))
    // Unlock and release memory
    CVPixelBufferUnlockBaseAddress(pixelBuffer, CVPixelBufferLockFlags(rawValue: 0))
    return (rgbBufferArray, bytesPerRow, width, height)
}
The culprit is the data type (UInt8) in combination with the count:
You are assuming the memory contains pixelCount values of type UInt8 (assumingMemoryBound(to: UInt8.self)). But as you correctly concluded, it should be four times that number.
I'd recommend you import simd and use simd_uchar4 as the data type. That's a struct type containing four UInt8 values. Your array will then contain pixelCount elements of 4-tuple pixel values, and you can access the channels with array[index].x, .y, .z, and .w respectively.
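A minimal sketch of that suggestion, reusing baseAddress and pixelCount from the question's function (a flat copy is safe here because there is no row padding: 6016 = 1504 * 4):

import simd

// bind to simd_uchar4 (four UInt8 channels per element) instead of UInt8,
// so pixelCount elements cover all width * height pixels
let frameBuffer = baseAddress?.assumingMemoryBound(to: simd_uchar4.self)
let bgraBufferArray = Array(UnsafeBufferPointer(start: frameBuffer, count: pixelCount))
// in 32BGRA, bgraBufferArray[i].x is blue, .y green, .z red, .w alpha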

CGContext.init for ARGB image returns nil

I'm trying to create a CGContext and fill it with an array of pixels with an ARGB format. I've successfully created the pixel array, but when I try to create the CGContext with CGColorSpaceCreateDeviceRGB and CGImageAlphaInfo.first, it returns nil.
func generateBitmapImage8bit() -> CGImage {
    let width = params[0]
    let height = params[1]
    let bitmapBytesPerRow = width * 4
    let context = CGContext(data: nil,
                            width: width,
                            height: height,
                            bitsPerComponent: 8,
                            bytesPerRow: bitmapBytesPerRow,
                            space: CGColorSpaceCreateDeviceRGB(), //<-
                            bitmapInfo: CGImageAlphaInfo.first.rawValue)
    context!.data!.storeBytes(of: rasterArray, as: [Int].self)
    let image = context!.makeImage()
    return image!
}
Please refer to the Supported Pixel Formats.
It seems you are using an incorrect configuration (CGImageAlphaInfo.first). For 8 bitsPerComponent and an RGB color space, only the following alpha options are valid:
kCGImageAlphaNoneSkipFirst
kCGImageAlphaNoneSkipLast
kCGImageAlphaPremultipliedFirst
kCGImageAlphaPremultipliedLast
You can also try setting CGBITMAP_CONTEXT_LOG_ERRORS environment variable in your scheme to get more information in runtime.
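For example, swapping in premultipliedFirst keeps the alpha-first (ARGB) layout while matching the supported-format table; a sketch of the changed call only:

let context = CGContext(data: nil,
                        width: width,
                        height: height,
                        bitsPerComponent: 8,
                        bytesPerRow: bitmapBytesPerRow,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue)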

CGBitmapContextCreate: invalid data bytes/row; CGContextDrawImage: invalid context 0x0

I'm trying to convert an array of images into a video file. In the process I have to fill a pixel buffer from the selected images. Here is the code snippet:
CVPixelBufferLockBaseAddress(pixelBuffer, 0)
let pixelData = CVPixelBufferGetBaseAddress(pixelBuffer)
let bitmapInfo:CGBitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedFirst.rawValue)
let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
print("\npixel buffer width: \(CVPixelBufferGetWidth(pixelBuffer))\n")
print("\nbytes per row: \(CVPixelBufferGetBytesPerRow(pixelBuffer))\n")
let context = CGBitmapContextCreate(
    pixelData,
    Int(image.size.width),
    Int(image.size.height),
    CGImageGetBitsPerComponent(image.CGImage),
    CVPixelBufferGetBytesPerRow(pixelBuffer),
    rgbColorSpace,
    bitmapInfo.rawValue
)
CGContextDrawImage(context, CGRectMake(0, 0, image.size.width, image.size.height), image.CGImage)
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0)
After executing these lines I get the following messages in Xcode:
CGBitmapContextCreate: invalid data bytes/row: should be at least 13056 for 8 integer bits/component, 3 components, kCGImageAlphaPremultipliedFirst.
CGContextDrawImage: invalid context 0x0.
After debugging I get the following value:
CVPixelBufferGetWidth(pixelBuffer) // value 480
CVPixelBufferGetBytesPerRow(pixelBuffer) // value 1920
What should I do to get valid data bytes/row? What are the 3 components mentioned in the console log? I saw similar questions on Stack Overflow but nothing helped in my case.
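One reading of the log: the required 13056 bytes/row is the image's width times 4 bytes per pixel (13056 / 4 = 3264 pixels), while the pixel buffer is only 480 pixels wide (1920 bytes/row), so the dimensions passed to CGBitmapContextCreate describe the image rather than the buffer. A hedged sketch of that diagnosis, using the names from the snippet above — create the context with the buffer's own dimensions and let the draw call scale the image:

let bufferWidth = CVPixelBufferGetWidth(pixelBuffer)   // 480 here
let bufferHeight = CVPixelBufferGetHeight(pixelBuffer)
let context = CGBitmapContextCreate(
    pixelData,
    bufferWidth,                                       // not image.size.width
    bufferHeight,                                      // not image.size.height
    8,
    CVPixelBufferGetBytesPerRow(pixelBuffer),          // 1920, consistent with bufferWidth * 4
    rgbColorSpace,
    bitmapInfo.rawValue
)
// drawing into the buffer-sized rect scales the image down to fit
CGContextDrawImage(context, CGRectMake(0, 0, CGFloat(bufferWidth), CGFloat(bufferHeight)), image.CGImage)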

Find right CGBitmapInfo for a picture

I'm trying to build a steganography app. So far I've managed to get the pixel bytes from the actual image and modify them, but I haven't managed to rebuild the image properly (I get bytes from the image, rebuild the image, get the bytes again, and they are not the same). After some debugging I've come to a conclusion: I'm not building the image the right way. Since all the other vars are pretty standard (see the code below), the only one I'm not sure about, and that I think I'm using wrong, is bitmapInfo. Can someone help me figure out what I need?
let bitsPerComponent = 8
let bytesPerPixel = 4
let bytesPerRow = Int(image.size.width) * bytesPerPixel
let colorSpace = CGColorSpaceCreateDeviceRGB()
var bitmapInfo: UInt32 = CGBitmapInfo.ByteOrder32Big.rawValue
bitmapInfo |= CGImageAlphaInfo.PremultipliedLast.rawValue & CGBitmapInfo.AlphaInfoMask.rawValue
let imageContext = CGBitmapContextCreateWithData(imageData,
                                                 Int(image.size.width),
                                                 Int(image.size.height),
                                                 bitsPerComponent,
                                                 bytesPerRow,
                                                 colorSpace,
                                                 bitmapInfo,
                                                 nil,
                                                 nil)
let cgImage = CGBitmapContextCreateImage(imageContext)
let newImage = UIImage(CGImage: cgImage!)

How to make use of kCIFormatRGBAh to get half floats on iOS with Core Image?

I'm trying to get the per-pixel RGBA values for a CIImage in floating point.
I expect the following to work, using CIContext and rendering as kCIFormatRGBAh, but the output is all zeroes. Otherwise my next step would be converting from half floats to full.
What am I doing wrong? I've also tried this in Objective-C and get the same result.
let image = UIImage(named: "test")!
let sourceImage = CIImage(CGImage: image.CGImage)
let context = CIContext(options: [kCIContextWorkingColorSpace: NSNull()])
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bounds = sourceImage.extent()
let bytesPerPixel: UInt = 8
let format = kCIFormatRGBAh
let rowBytes = Int(bytesPerPixel * UInt(bounds.size.width))
let totalBytes = UInt(rowBytes * Int(bounds.size.height))
var bitmap = calloc(totalBytes, UInt(sizeof(UInt8)))
context.render(sourceImage, toBitmap: bitmap, rowBytes: rowBytes, bounds: bounds, format: format, colorSpace: colorSpace)
let bytes = UnsafeBufferPointer<UInt8>(start: UnsafePointer<UInt8>(bitmap), count: Int(totalBytes))
for (var i = 0; i < Int(totalBytes); i += 2) {
    println("half float :: left: \(bytes[i]) / right: \(bytes[i + 1])")
    // prints all zeroes!
}
free(bitmap)
Here's a related question about getting the output of CIAreaHistogram, which is why I want floating point values rather than integer, but I can't seem to make kCIFormatRGBAh work on any CIImage regardless of its origin, filter output or otherwise.
There are two constraints on using RGBAh with [CIContext render:toBitmap:rowBytes:bounds:format:colorSpace:] on iOS:
the rowBytes must be a multiple of 8 bytes
calling it under the simulator is not supported
These constraints come from the behavior of OpenGLES with RGBAh on iOS.
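With RGBAh at 8 bytes per pixel, width * 8 is already a multiple of 8, so for the code above the simulator is the more likely culprit; still, a defensive sketch of the alignment rule, reusing the question's names:

// round rowBytes up to the next multiple of 8, per the constraint above
let unaligned = Int(bytesPerPixel) * Int(bounds.size.width)
let rowBytes = (unaligned + 7) / 8 * 8
// and run on a physical device, since RGBAh rendering is unsupported in the simulator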
