I am new to iOS and CoreML. I have a very simple UI with two UIImageViews (one is the input, and the second should show the output). When the first one is tapped, its image should be processed by a neural network and the output displayed in the second one.
However, when I try to read the data from the MLMultiArray output object and create a UIImage from it, which I can then load into the second UIImageView, I get an EXC_BAD_ACCESS (code=1).
I have reduced the problem to not calling the neural network at all and just trying to create a new image from an MLMultiArray. The outcome was the same.
After that I tried generating a UIImage from an empty buffer. The image is created correctly, but if I attempt to update the UIImageView with it, I get the same error.
If I update the second UIImageView with a different image (e.g. the input image), everything works fine.
I assume this is a memory-management issue with the UIImage object I am creating, but I cannot figure out what I am doing wrong.
class ViewController: UIViewController {

    @IBOutlet weak var out: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
    }

    @IBAction func imageTapped(_ sender: UITapGestureRecognizer) {
        let imageView = sender.view as? UIImageView
        if let imageToAnalyse = imageView?.image {
            if let outImg = process(forImage: imageToAnalyse) {
                out.image = outImg
            }
        }
    }

    func process(forImage inImage: UIImage) -> UIImage? {
        let size = CGSize(width: 512, height: 512)
        let mlOut = try? MLMultiArray(shape: [1, size.height, size.width] as [NSNumber], dataType: .float32)
        let newImage = getSinglePlaneImage(inBuffer: mlOut!, width: Int(size.width), height: Int(size.height))
        return newImage
    }

    func getSinglePlaneImage(inBuffer: MLMultiArray, width: Int, height: Int) -> UIImage {
        var newImage: UIImage

        // let floatPtr = inBuffer.dataPointer.bindMemory(to: Float32.self, capacity: inBuffer.count)
        // let floatBuffer = UnsafeBufferPointer(start: floatPtr, count: inBuffer.count)
        // let pixelValues: [UInt8]? = Array(floatBuffer).map({ UInt8(ImageProcessor.clamp((($0) + 1.0) * 128.0, minValue: 0.0, maxValue: 255.0)) })

        // simulating pixels from MLMultiArray
        let pixels: [UInt8]? = Array(repeating: 0, count: 512 * 512)
        var imageRef: CGImage?

        if var pixelValues = pixels {
            let bitsPerComponent = 8
            let bytesPerPixel = 1
            let bitsPerPixel = bytesPerPixel * bitsPerComponent
            let bytesPerRow = bytesPerPixel * width
            let totalBytes = height * bytesPerRow

            imageRef = withUnsafePointer(to: &pixelValues, { ptr -> CGImage? in
                var imageRef: CGImage?
                let colorSpaceRef = CGColorSpaceCreateDeviceGray()
                let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue).union(CGBitmapInfo())
                let data = UnsafeRawPointer(ptr.pointee).assumingMemoryBound(to: UInt8.self)
                let releaseData: CGDataProviderReleaseDataCallback = { (info: UnsafeMutableRawPointer?, data: UnsafeRawPointer, size: Int) -> () in
                }
                if let providerRef = CGDataProvider(dataInfo: nil, data: data, size: totalBytes, releaseData: releaseData) {
                    imageRef = CGImage(width: width,
                                       height: height,
                                       bitsPerComponent: bitsPerComponent,
                                       bitsPerPixel: bitsPerPixel,
                                       bytesPerRow: bytesPerRow,
                                       space: colorSpaceRef,
                                       bitmapInfo: bitmapInfo,
                                       provider: providerRef,
                                       decode: nil,
                                       shouldInterpolate: false,
                                       intent: CGColorRenderingIntent.defaultIntent)
                }
                return imageRef
            })
        }
        newImage = UIImage(cgImage: imageRef!)
        return newImage
    }
}
It seems your code would convert the 512x512 Float32 array into a 512x512 UInt8 array successfully, so I write this answer based on the uncommented version of your code. (Though the conversion is not efficient and leaves some room for improvement.)
UPDATE
The following description is not the right solution for the OP's issue; it is kept only for the record. Please skip to UPDATED CODE at the bottom of this answer.
OLD CODE (NOT the right solution)
First of all, the worst flaw in the code is these two lines:
imageRef = withUnsafePointer(to: &pixelValues, {
let data = UnsafeRawPointer(ptr.pointee).assumingMemoryBound(to: UInt8.self)
The first line passes a pointer to [UInt8]?. In Swift, [UInt8]? (aka Optional<Array<UInt8>>) is an 8-byte struct, not a contiguous region like a C array.
The second is more dangerous. ptr.pointee is [UInt8]?, and accessing a Swift Array's elements through a raw pointer like this is not guaranteed to work. Moreover, passing an Array to UnsafeRawPointer.init(_:) may create a temporary region that is deallocated as soon as the initializer returns.
As you know, accessing a dangling pointer may happen to do no harm under certain limited conditions, but it can produce unexpected results at any time.
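The 8-byte size claim can be checked directly; on a 64-bit platform the optional array is a single word wrapping a reference to the element storage:

```swift
// [UInt8]? is a small fixed-size struct (a reference to the element
// storage, with nil folded into a spare bit pattern), not the elements.
let optionalArraySize = MemoryLayout<[UInt8]?>.size
print(optionalArraySize) // 8 on 64-bit platforms
```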
I would write something like this:
func getSinglePlaneImage(inBuffer: MLMultiArray, width: Int, height: Int) -> UIImage {
    // simulating pixels from MLMultiArray
    // ...
    let pixelValues: [UInt8] = Array(repeating: 0, count: 1 * 512 * 512)

    let bitsPerComponent = 8
    let bytesPerPixel = 1
    let bitsPerPixel = bytesPerPixel * 8
    let bytesPerRow = bytesPerPixel * width
    let totalBytes = height * bytesPerRow

    let imageRef = pixelValues.withUnsafeBytes({ bytes -> CGImage? in
        var imageRef: CGImage?
        let colorSpaceRef = CGColorSpaceCreateDeviceGray()
        let bitmapInfo: CGBitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)
        let data = bytes.baseAddress!.assumingMemoryBound(to: UInt8.self)
        let releaseData: CGDataProviderReleaseDataCallback = { _, _, _ in }
        if let providerRef = CGDataProvider(dataInfo: nil, data: data, size: totalBytes, releaseData: releaseData) {
            imageRef = CGImage(width: width,
                               height: height,
                               bitsPerComponent: bitsPerComponent,
                               bitsPerPixel: bitsPerPixel,
                               bytesPerRow: bytesPerRow,
                               space: colorSpaceRef,
                               bitmapInfo: bitmapInfo,
                               provider: providerRef,
                               decode: nil,
                               shouldInterpolate: false,
                               intent: .defaultIntent)
        }
        return imageRef
    })
    let newImage = UIImage(cgImage: imageRef!)
    return newImage
}
When you want a pointer to the first element of an Array, use withUnsafeBytes and use the pointer (in fact, an UnsafeRawBufferPointer) only inside the closure argument.
One more thing: your pixels / pixelValues has no need to be an Optional.
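A minimal sketch of that closure-scoped access pattern (the buffer pointer must not escape the closure):

```swift
let values: [UInt8] = [10, 20, 30]

// The UnsafeRawBufferPointer passed to the closure is only valid
// until the closure returns; use it there and return plain values.
let first: UInt8 = values.withUnsafeBytes { bytes in
    bytes[0]
}
print(first) // 10
```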
Or else, you can create a grayscale image with a Float32 for each pixel:
func getSinglePlaneImage(inBuffer: MLMultiArray, width: Int, height: Int) -> UIImage {
    // simulating pixels from MLMultiArray
    // ...
    let pixelValues: [Float32] = Array(repeating: 0, count: 1 * 512 * 512)

    let bitsPerComponent = 32 //<-
    let bytesPerPixel = 4 //<-
    let bitsPerPixel = bytesPerPixel * 8
    let bytesPerRow = bytesPerPixel * width
    let totalBytes = height * bytesPerRow

    let imageRef = pixelValues.withUnsafeBytes({ bytes -> CGImage? in
        var imageRef: CGImage?
        let colorSpaceRef = CGColorSpaceCreateDeviceGray()
        let bitmapInfo: CGBitmapInfo = [CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue),
                                        .byteOrder32Little, .floatComponents] //<-
        let data = bytes.baseAddress!.assumingMemoryBound(to: Float32.self)
        let releaseData: CGDataProviderReleaseDataCallback = { _, _, _ in }
        if let providerRef = CGDataProvider(dataInfo: nil, data: data, size: totalBytes, releaseData: releaseData) {
            imageRef = CGImage(width: width,
                               height: height,
                               bitsPerComponent: bitsPerComponent,
                               bitsPerPixel: bitsPerPixel,
                               bytesPerRow: bytesPerRow,
                               space: colorSpaceRef,
                               bitmapInfo: bitmapInfo,
                               provider: providerRef,
                               decode: nil,
                               shouldInterpolate: false,
                               intent: CGColorRenderingIntent.defaultIntent)
        }
        return imageRef
    })
    let newImage = UIImage(cgImage: imageRef!)
    return newImage
}
Both work as expected in my testing project, but if you find something wrong, please let me know.
UPDATED CODE (Hope this is the right solution)
I was missing the fact that CGDataProvider keeps the pointer passed to init(dataInfo:data:size:releaseData:) even after the CGImage is created, so it may be dereferenced after the closure passed to withUnsafeBytes has finished, when the pointer is no longer valid.
You should use CGDataProvider.init(data:) instead in such cases.
func getSinglePlaneImage(inBuffer: MLMultiArray, width: Int, height: Int) -> UIImage {
    var newImage: UIImage

    // let floatPtr = inBuffer.dataPointer.bindMemory(to: Float32.self, capacity: inBuffer.count)
    // let floatBuffer = UnsafeBufferPointer(start: floatPtr, count: inBuffer.count)
    // let pixelValues: Data = Data(floatBuffer.lazy.map {
    //     UInt8(ImageProcessor.clamp((($0) + 1.0) * 128.0, minValue: 0.0, maxValue: 255.0))
    // })

    // simulating pixels from MLMultiArray
    // ...
    let pixelValues = Data(count: 1 * 512 * 512) // <- ###

    var imageRef: CGImage?
    let bitsPerComponent = 8
    let bytesPerPixel = 1
    let bitsPerPixel = bytesPerPixel * bitsPerComponent
    let bytesPerRow = bytesPerPixel * width
    let colorSpaceRef = CGColorSpaceCreateDeviceGray()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.none.rawValue)

    if let providerRef = CGDataProvider(data: pixelValues as CFData) { // <- ###
        imageRef = CGImage(width: width,
                           height: height,
                           bitsPerComponent: bitsPerComponent,
                           bitsPerPixel: bitsPerPixel,
                           bytesPerRow: bytesPerRow,
                           space: colorSpaceRef,
                           bitmapInfo: bitmapInfo,
                           provider: providerRef,
                           decode: nil,
                           shouldInterpolate: false,
                           intent: CGColorRenderingIntent.defaultIntent)
    }
    newImage = UIImage(cgImage: imageRef!)
    return newImage
}
As far as I tried, this does not crash even on an actual device with many repeated touches. Please try it. Thanks for your patience.
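For completeness, the commented-out Float32-to-UInt8 conversion at the top of the function could be written like this (a sketch only; `clamp` here is a stand-in for the OP's `ImageProcessor.clamp`, and the network output is assumed to lie in [-1, 1]):

```swift
import Foundation

// Stand-in for ImageProcessor.clamp from the question (hypothetical helper).
func clamp(_ value: Float, minValue: Float, maxValue: Float) -> Float {
    return min(max(value, minValue), maxValue)
}

// Map floats in [-1, 1] to grayscale bytes in [0, 255].
func pixelData(fromFloats floats: [Float]) -> Data {
    return Data(floats.map { UInt8(clamp(($0 + 1.0) * 128.0, minValue: 0.0, maxValue: 255.0)) })
}
```

A Data built this way can then be handed straight to CGDataProvider(data:) as in the updated code.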
Related
I am new to Swift, TFLite and iOS. I succeeded in converting and running my model. However, at the end I need to reconstruct an image. My TFLite model returns a TFLite Tensor of Float32 values, 4-D, with shape (1, height, width, 3).
let outputTensor: Tensor
outputTensor = try myInterpreter.output(at: 0)
I am looking to make an RGB picture without alpha. In Python it would look like this:
Image.fromarray((np.array(outputTensor.data) * 255).astype(np.uint8))
From my understanding the best way would be to make a CVPixelBuffer, apply a Core Image transformation (for the x255), and finally make the UIImage. I am deeply lost in the iOS docs; many possibilities exist. Does the community have a suggestion?
From a Google example, an extension of UIImage can be coded:
extension UIImage {
    convenience init?(data: Data, size: CGSize) {
        let width = Int(size.width)
        let height = Int(size.height)
        let floats = data.toArray(type: Float32.self)

        let bufferCapacity = width * height * 4
        let unsafePointer = UnsafeMutablePointer<UInt8>.allocate(capacity: bufferCapacity)
        let unsafeBuffer = UnsafeMutableBufferPointer<UInt8>(start: unsafePointer,
                                                             count: bufferCapacity)
        defer {
            unsafePointer.deallocate()
        }

        for x in 0..<width {
            for y in 0..<height {
                let floatIndex = (y * width + x) * 3
                let index = (y * width + x) * 4
                let red = UInt8(floats[floatIndex] * 255)
                let green = UInt8(floats[floatIndex + 1] * 255)
                let blue = UInt8(floats[floatIndex + 2] * 255)

                unsafeBuffer[index] = red
                unsafeBuffer[index + 1] = green
                unsafeBuffer[index + 2] = blue
                unsafeBuffer[index + 3] = 0
            }
        }

        let outData = Data(buffer: unsafeBuffer)

        // Construct image from output tensor data
        let alphaInfo = CGImageAlphaInfo.noneSkipLast
        let bitmapInfo = CGBitmapInfo(rawValue: alphaInfo.rawValue)
            .union(.byteOrder32Big)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        guard
            let imageDataProvider = CGDataProvider(data: outData as CFData),
            let cgImage = CGImage(
                width: width,
                height: height,
                bitsPerComponent: 8,
                bitsPerPixel: 32,
                bytesPerRow: MemoryLayout<UInt8>.size * 4 * Int(size.width),
                space: colorSpace,
                bitmapInfo: bitmapInfo,
                provider: imageDataProvider,
                decode: nil,
                shouldInterpolate: false,
                intent: .defaultIntent
            )
        else {
            return nil
        }
        self.init(cgImage: cgImage)
    }
}
Then the image can easily be constructed from the TFLite inference:
let outputTensor: Tensor
outputTensor = try decoder.output(at: 0)
image = UIImage(data: outputTensor.data, size: size) ?? UIImage()
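The index arithmetic in the extension's nested loop (3 floats in, 4 bytes out per pixel) can be sketched standalone, without the UIKit parts (the function name here is mine, not from the Google example):

```swift
// Convert interleaved RGB floats in [0, 1] to RGBX bytes (4th byte unused),
// mirroring the floatIndex/index computation in the extension.
func rgbxBytes(fromFloats floats: [Float], width: Int, height: Int) -> [UInt8] {
    var bytes = [UInt8](repeating: 0, count: width * height * 4)
    for y in 0..<height {
        for x in 0..<width {
            let floatIndex = (y * width + x) * 3
            let byteIndex = (y * width + x) * 4
            bytes[byteIndex]     = UInt8(floats[floatIndex] * 255)
            bytes[byteIndex + 1] = UInt8(floats[floatIndex + 1] * 255)
            bytes[byteIndex + 2] = UInt8(floats[floatIndex + 2] * 255)
        }
    }
    return bytes
}
```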
I would like to export a 16-bit image and I have the following code for it:
let bv = malloc(width * height * 4)!
var db = vImage_Buffer(data: bv,
                       height: vImagePixelCount(height),
                       width: vImagePixelCount(width),
                       rowBytes: width * 2)
// vImageConvert_PlanarFtoPlanar8(&sourceBuffer, &destBuffer, 1.0, 0.0, vImage_Flags(kvImageNoFlags))
vImageConvert_PlanarFtoPlanar16F(&sourceBuffer, &db, vImage_Flags(kvImageNoFlags))

let bp = bv.assumingMemoryBound(to: UInt8.self)
let p = CGDataProvider(data: CFDataCreateWithBytesNoCopy(kCFAllocatorDefault,
                                                         bp,
                                                         width * height,
                                                         kCFAllocatorDefault))!
let cgImage = CGImage(width: width,
                      height: height,
                      bitsPerComponent: 5,
                      bitsPerPixel: 16,
                      bytesPerRow: width,
                      space: CGColorSpace(name: CGColorSpace.linearSRGB)!,
                      bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.noneSkipFirst.rawValue),
                      provider: p,
                      decode: nil,
                      shouldInterpolate: false,
                      intent: .defaultIntent)!

let savePath = self.documentsPath.appendingPathComponent("camera")
let sURL = savePath.appendingPathComponent(String(format: "image.png"))
if let imageDestination = CGImageDestinationCreateWithURL(sURL as CFURL, kUTTypePNG, 1, nil) {
    CGImageDestinationAddImage(imageDestination, cgImage, nil)
    CGImageDestinationFinalize(imageDestination)
}
Apple's documentation here says those values for bitsPerComponent and bitsPerPixel are valid for iOS.
But I get the following error:
[Unknown process name] CGImageCreate: invalid image bits/pixel or bytes/row.
I am able to export an 8-bit image in grayscale, and can post the params if required, btw.
If there are 16 bits per pixel, then there will be 2*width bytes per row:
bytesPerRow: 2 * width,
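The arithmetic behind that: bytesPerRow must be at least width * bitsPerPixel / 8, so 16 bits per pixel means 2 bytes per pixel. A quick sanity check:

```swift
let width = 640
let bitsPerPixel = 16

// 16 bits = 2 bytes per pixel, so each row occupies 2 * width bytes.
let bytesPerRow = width * bitsPerPixel / 8
print(bytesPerRow) // 1280
```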
I am trying to take a screenshot of the Vuforia camera view, using the code below. It works perfectly on an iPhone 7 (iOS 11.3) and an iPad Pro (iOS 11.2), but not on an iPad 2 (iOS 9.3.5): the function returns a valid UIImage, but it is black.
static public func takeScreenshot() -> UIImage? {
    let xCoord: Int = 0
    let yCoord: Int = 0
    let screen = UIScreen.main.bounds
    let scale = UIScreen.main.scale
    let width = screen.width * scale
    let height = screen.height * scale
    let dataLength: Int = Int(width) * Int(height) * 4

    let pixels: UnsafeMutableRawPointer? = malloc(dataLength * MemoryLayout<GLubyte>.size)
    glPixelStorei(GLenum(GL_PACK_ALIGNMENT), 4)
    glReadPixels(GLint(xCoord), GLint(yCoord), GLsizei(width), GLsizei(height), GLenum(GL_RGBA), GLenum(GL_UNSIGNED_BYTE), pixels)

    guard let pixelData: UnsafePointer = (UnsafeRawPointer(pixels)?.assumingMemoryBound(to: UInt8.self)) else { return nil }
    let cfdata: CFData = CFDataCreate(kCFAllocatorDefault, pixelData, dataLength * MemoryLayout<GLubyte>.size)
    let provider: CGDataProvider! = CGDataProvider(data: cfdata)
    let colorspace = CGColorSpaceCreateDeviceRGB()

    guard let iref = CGImage(width: Int(width),
                             height: Int(height),
                             bitsPerComponent: 8,
                             bitsPerPixel: 32,
                             bytesPerRow: Int(width) * 4,
                             space: colorspace,
                             bitmapInfo: CGBitmapInfo.byteOrder32Big,
                             provider: provider,
                             decode: nil,
                             shouldInterpolate: false,
                             intent: CGColorRenderingIntent.defaultIntent) else { return nil }

    UIGraphicsBeginImageContext(CGSize(width: CGFloat(width), height: CGFloat(height)))
    if let cgcontext = UIGraphicsGetCurrentContext() {
        cgcontext.setBlendMode(CGBlendMode.copy)
        cgcontext.draw(iref, in: CGRect(x: CGFloat(0.0), y: CGFloat(0.0), width: CGFloat(width), height: CGFloat(height)))
        let image: UIImage? = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image
    }
    return nil
}
UPDATE: I resolved this problem; the function needs to run on the OpenGL thread.
OpenGL ES accepts only a very limited set of formats. There is an excellent website with OpenGL documentation: http://docs.gl
You are interested in http://docs.gl/es2/glReadPixels or http://docs.gl/es3/glReadPixels. The buffer format should be GL_RGBA or GL_BGRA.
Maybe a better approach would be https://stackoverflow.com/a/9704392/1351828.
In my Swift project, I have two classes that work together to hold the pixel values of an image so that the red, green, blue and alpha values can be modified. An UnsafeMutableBufferPointer holds lots of bytes that are comprised of Pixel class objects.
I can interact with the class that holds the UnsafeMutableBufferPointer<Pixel> property; I can access all of the properties on that object and that all works fine. The only problem I'm having with the UnsafeMutableBufferPointer<Pixel> is that looping through it with my Pixel object keeps crashing with a Thread 1: EXC_BAD_ACCESS (code=EXC_I386_GPFLT) exception.
init!(image: UIImage) {
    _width = Int(image.size.width)
    _height = Int(image.size.height)

    guard let cgImage = image.cgImage else { return nil }

    _width = Int(image.size.width)
    _height = Int(image.size.height)

    let bitsPerComponent = 8
    let bytesPerPixel = 4
    let bytesPerRow = _width * bytesPerPixel
    let imageData = UnsafeMutablePointer<Pixel>.allocate(capacity: _width * _height)

    let colorSpace = CGColorSpaceCreateDeviceRGB()
    var bitmapInfo: UInt32 = CGBitmapInfo.byteOrder32Big.rawValue
    bitmapInfo |= CGImageAlphaInfo.premultipliedLast.rawValue & CGBitmapInfo.alphaInfoMask.rawValue

    guard let imageContext = CGContext(data: imageData, width: _width, height: _height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else { return nil }
    imageContext.draw(cgImage, in: CGRect(origin: CGPoint.zero, size: image.size))

    _pixels = UnsafeMutableBufferPointer<Pixel>(start: imageData, count: _width * _height)
}
This function is the part that crashes the program; the exact part is the for loop over rgba.pixels, which is the UnsafeMutableBufferPointer<Pixel>.
func imagePickerController(_ picker: UIImagePickerController, didFinishPickingMediaWithInfo info: [String : Any]) {
    let image: UIImage = info[UIImagePickerControllerEditedImage] as! UIImage
    let rgba = RGBA(image: image)!
    for pixel in rgba.pixels {
        print(pixel.red)
    }
    self.dismiss(animated: true, completion: nil)
}
Above is the constructor where I create the UnsafeMutableBufferPointer<Pixel>. Is there an easier way to do this while still being able to get the RGBA values and change them easily?
The Pixel class is a UInt32 value that is split into four UInt8 values.
Am I using the wrong construct to hold those values? If so, is there a safer or easier construct to use? Or am I doing something wrong when accessing the Pixel values?
This is how I got the pixels of an image -
// Grab and set up variables for the original image
let inputCGImage = inputImage.CGImage
let inputWidth: Int = CGImageGetWidth(inputCGImage)
let inputHeight: Int = CGImageGetHeight(inputCGImage)

// Get the colorspace that will be used for image processing (RGB/HSV)
let colorSpace: CGColorSpaceRef = CGColorSpaceCreateDeviceRGB()!

// Hardcode memory variables
let bytesPerPixel = 4    // 32 bits = 4 bytes
let bitsPerComponent = 8 // 32 bits div. by 4 components (RGBA) = 8 bits per component
let inputBytesPerRow = bytesPerPixel * inputWidth

// Get a pointer pointing to an allocated array to hold all the pixel data of the image
let inputPixels = UnsafeMutablePointer<UInt32>(calloc(inputHeight * inputWidth, sizeof(UInt32)))

// Create a context to draw the original image in (aka put the pixel data into the above array)
let context: CGContextRef = CGBitmapContextCreate(inputPixels, inputWidth, inputHeight, bitsPerComponent, inputBytesPerRow, colorSpace, CGImageAlphaInfo.PremultipliedLast.rawValue | CGBitmapInfo.ByteOrder32Big.rawValue)!
CGContextDrawImage(context, CGRect(x: 0, y: 0, width: inputWidth, height: inputHeight), inputCGImage)
Keep in mind this is not Swift 3 syntax, in case that's what you're using, but that's the basic algorithm. Now, to grab the individual color values of each pixel, you will have to implement these functions:
func Mask8(x: UInt32) -> UInt32 {
    return x & 0xFF
}

func R(x: UInt32) -> UInt32 {
    return Mask8(x)
}

func G(x: UInt32) -> UInt32 {
    return Mask8(x >> 8)
}

func B(x: UInt32) -> UInt32 {
    return Mask8(x >> 16)
}

func A(x: UInt32) -> UInt32 {
    return Mask8(x >> 24)
}

To create a completely new color after processing the RGBA values, use this function:

func RGBAMake(r: UInt32, g: UInt32, b: UInt32, a: UInt32) -> UInt32 {
    return (Mask8(r) | Mask8(g) << 8 | Mask8(b) << 16 | Mask8(a) << 24)
}
To iterate through the pixels array, you do it like so:

var currentPixel = inputPixels
for _ in 0..<height {
    for i in 0..<width {
        let color: UInt32 = currentPixel.memory
        if i < width - 1 {
            print(NSString(format: "%3u", R(x: color)), terminator: " ")
        } else {
            print(NSString(format: "%3u", R(x: color)))
        }
        currentPixel += 1
    }
}
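In current Swift syntax, the packing scheme above round-trips like this (a sketch with lowercased names, not the answer's exact code):

```swift
// Pack four 8-bit channels into one RGBA UInt32 and extract them back.
func mask8(_ x: UInt32) -> UInt32 { return x & 0xFF }

func rgbaMake(r: UInt32, g: UInt32, b: UInt32, a: UInt32) -> UInt32 {
    return mask8(r) | mask8(g) << 8 | mask8(b) << 16 | mask8(a) << 24
}

let packed = rgbaMake(r: 10, g: 20, b: 30, a: 255)
let r = mask8(packed)        // 10
let g = mask8(packed >> 8)   // 20
let b = mask8(packed >> 16)  // 30
let a = mask8(packed >> 24)  // 255
```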
I've been trying to figure out how to convert an array of RGB pixel data to a UIImage in Swift.
I'm keeping the RGB data per pixel in a simple struct:
public struct PixelData {
    var a: Int
    var r: Int
    var g: Int
    var b: Int
}
I've made my way to the following function, but the resulting image is incorrect:
func imageFromARGB32Bitmap(pixels: [PixelData], width: Int, height: Int) -> UIImage {
    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo: CGBitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedFirst.rawValue)
    let bitsPerComponent: Int = 8
    let bitsPerPixel: Int = 32

    assert(pixels.count == Int(width * height))

    var data = pixels // Copy to mutable []
    let providerRef = CGDataProviderCreateWithCFData(
        NSData(bytes: &data, length: data.count * sizeof(PixelData))
    )

    let cgim = CGImageCreate(
        width,
        height,
        bitsPerComponent,
        bitsPerPixel,
        width * Int(sizeof(PixelData)),
        rgbColorSpace,
        bitmapInfo,
        providerRef,
        nil,
        true,
        kCGRenderingIntentDefault
    )
    return UIImage(CGImage: cgim)!
}
Any tips or pointers on how to properly convert an RGB array to a UIImage?
Note: This is a solution for iOS creating a UIImage. For a solution for macOS and NSImage, see this answer.
Your only problem is that the data types in your PixelData structure need to be UInt8. I created a test image in a Playground with the following:
public struct PixelData {
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

var pixels = [PixelData]()

let red = PixelData(a: 255, r: 255, g: 0, b: 0)
let green = PixelData(a: 255, r: 0, g: 255, b: 0)
let blue = PixelData(a: 255, r: 0, g: 0, b: 255)

for _ in 1...300 {
    pixels.append(red)
}
for _ in 1...300 {
    pixels.append(green)
}
for _ in 1...300 {
    pixels.append(blue)
}

let image = imageFromARGB32Bitmap(pixels: pixels, width: 30, height: 30)
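The fix works because four UInt8 fields pack into exactly the 4 bytes per pixel that bitsPerPixel: 32 promises, which can be verified directly:

```swift
public struct PixelData {
    var a: UInt8
    var r: UInt8
    var g: UInt8
    var b: UInt8
}

// Four UInt8 fields: 4 bytes, no padding, matching 32 bits per pixel.
print(MemoryLayout<PixelData>.size)   // 4
print(MemoryLayout<PixelData>.stride) // 4
```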
Update for Swift 4:
I updated imageFromARGB32Bitmap to work with Swift 4. The function now returns a UIImage? and guard is used to return nil if anything goes wrong.
func imageFromARGB32Bitmap(pixels: [PixelData], width: Int, height: Int) -> UIImage? {
    guard width > 0 && height > 0 else { return nil }
    guard pixels.count == width * height else { return nil }

    let rgbColorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue)
    let bitsPerComponent = 8
    let bitsPerPixel = 32

    var data = pixels // Copy to mutable []
    guard let providerRef = CGDataProvider(data: NSData(bytes: &data,
                                                        length: data.count * MemoryLayout<PixelData>.size))
    else { return nil }

    guard let cgim = CGImage(
        width: width,
        height: height,
        bitsPerComponent: bitsPerComponent,
        bitsPerPixel: bitsPerPixel,
        bytesPerRow: width * MemoryLayout<PixelData>.size,
        space: rgbColorSpace,
        bitmapInfo: bitmapInfo,
        provider: providerRef,
        decode: nil,
        shouldInterpolate: true,
        intent: .defaultIntent
    )
    else { return nil }

    return UIImage(cgImage: cgim)
}
Making it a convenience initializer for UIImage:
This function works well as a convenience initializer for UIImage. Here is the implementation:
extension UIImage {
    convenience init?(pixels: [PixelData], width: Int, height: Int) {
        guard width > 0 && height > 0, pixels.count == width * height else { return nil }
        var data = pixels
        guard let providerRef = CGDataProvider(data: Data(bytes: &data, count: data.count * MemoryLayout<PixelData>.size) as CFData)
        else { return nil }
        guard let cgim = CGImage(
            width: width,
            height: height,
            bitsPerComponent: 8,
            bitsPerPixel: 32,
            bytesPerRow: width * MemoryLayout<PixelData>.size,
            space: CGColorSpaceCreateDeviceRGB(),
            bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
            provider: providerRef,
            decode: nil,
            shouldInterpolate: true,
            intent: .defaultIntent)
        else { return nil }
        self.init(cgImage: cgim)
    }
}
Here is an example of its usage:
// Generate a 500x500 image of randomly colored pixels
let height = 500
let width = 500
var pixels: [PixelData] = .init(repeating: .init(a: 0, r: 0, g: 0, b: 0), count: width * height)
for index in pixels.indices {
    pixels[index].a = 255
    pixels[index].r = .random(in: 0...255)
    pixels[index].g = .random(in: 0...255)
    pixels[index].b = .random(in: 0...255)
}
let image = UIImage(pixels: pixels, width: width, height: height)
Update for Swift 3
struct PixelData {
    var a: UInt8 = 0
    var r: UInt8 = 0
    var g: UInt8 = 0
    var b: UInt8 = 0
}

func imageFromBitmap(pixels: [PixelData], width: Int, height: Int) -> UIImage? {
    assert(width > 0)
    assert(height > 0)

    let pixelDataSize = MemoryLayout<PixelData>.size
    assert(pixelDataSize == 4)
    assert(pixels.count == Int(width * height))

    let data: Data = pixels.withUnsafeBufferPointer {
        return Data(buffer: $0)
    }
    let cfdata = NSData(data: data) as CFData
    let provider: CGDataProvider! = CGDataProvider(data: cfdata)
    if provider == nil {
        print("CGDataProvider is not supposed to be nil")
        return nil
    }

    let cgimage: CGImage! = CGImage(
        width: width,
        height: height,
        bitsPerComponent: 8,
        bitsPerPixel: 32,
        bytesPerRow: width * pixelDataSize,
        space: CGColorSpaceCreateDeviceRGB(),
        bitmapInfo: CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedFirst.rawValue),
        provider: provider,
        decode: nil,
        shouldInterpolate: true,
        intent: .defaultIntent
    )
    if cgimage == nil {
        print("CGImage is not supposed to be nil")
        return nil
    }
    return UIImage(cgImage: cgimage)
}