Create pixel buffer in LAB colour space - iOS

I'm implementing a colour selection tool similar to Photoshop's magic wand tool in iOS.
I already have it working in RGB, but to make it more accurate I want to make it work in the LAB colour space.
The way it currently works is that it takes a UIImage and creates a CGImage version of that image. It then creates a CGContext in an RGB colour space, draws the CGImage into that context, takes the context data and binds it to a pixel buffer which uses a struct RGBA32.
let colorSpace = CGColorSpaceCreateDeviceRGB()
let width = inputCGImage.width
let height = inputCGImage.height
let bytesPerPixel = 4
let bitsPerComponent = 8
let bytesPerRow = bytesPerPixel * width
let bitmapInfo = RGBA32.bitmapInfo

guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
    print("unable to create context")
    return nil
}

context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))

guard let buffer = context.data else {
    print("unable to get context data")
    return nil
}

let pixelBuffer = buffer.bindMemory(to: RGBA32.self, capacity: width * height)
struct RGBA32: Equatable {
    var color: UInt32

    var redComponent: UInt8 {
        return UInt8((color >> 24) & 255)
    }
    var greenComponent: UInt8 {
        return UInt8((color >> 16) & 255)
    }
    var blueComponent: UInt8 {
        return UInt8((color >> 8) & 255)
    }
    var alphaComponent: UInt8 {
        return UInt8((color >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        color = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let clear = RGBA32(red: 0, green: 0, blue: 0, alpha: 0)
    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
        return lhs.color == rhs.color
    }
}
It then uses that pixel buffer to quickly compare each pixel's colour values to the selected pixel, using a very simple Euclidean distance as detailed here:
https://en.wikipedia.org/wiki/Color_difference
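For reference, that RGB comparison is roughly the following (a minimal sketch against the RGBA32 struct above; x, y, selectedX, selectedY and tolerance are hypothetical values, not part of the original code):
func rgbDistanceSquared(_ p1: RGBA32, _ p2: RGBA32) -> Int {
    // squared Euclidean distance over R, G and B (alpha ignored)
    let dr = Int(p1.redComponent) - Int(p2.redComponent)
    let dg = Int(p1.greenComponent) - Int(p2.greenComponent)
    let db = Int(p1.blueComponent) - Int(p2.blueComponent)
    return dr * dr + dg * dg + db * db
}

// pixels are laid out row by row, so (x, y) maps to index y * width + x
let selected = pixelBuffer[selectedY * width + selectedX]
let isMatch = rgbDistanceSquared(pixelBuffer[y * width + x], selected) < tolerance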
As I said, it works, but for more accurate results I want it to work in CIE LAB colour space.
Initially I tried converting each pixel to LAB colour as it was checked, then used the CIE94 comparison detailed in the colour difference link above. It worked but was very slow, I guess because it had to convert a million pixels (or so) to LAB colour before checking them.
It then struck me that to make it work quickly it would be better to store the pixel buffer in LAB colour space in the first place (it's not used for anything else).
So I created a similar struct, LABA32:
struct LABA32: Equatable {
    var colour: UInt32

    var lComponent: UInt8 {
        return UInt8((colour >> 24) & 255)
    }
    var aComponent: UInt8 {
        return UInt8((colour >> 16) & 255)
    }
    var bComponent: UInt8 {
        return UInt8((colour >> 8) & 255)
    }
    var alphaComponent: UInt8 {
        return UInt8((colour >> 0) & 255)
    }

    init(lComponent: UInt8, aComponent: UInt8, bComponent: UInt8, alphaComponent: UInt8) {
        colour = (UInt32(lComponent) << 24) | (UInt32(aComponent) << 16) | (UInt32(bComponent) << 8) | (UInt32(alphaComponent) << 0)
    }

    static let clear = LABA32(lComponent: 0, aComponent: 0, bComponent: 0, alphaComponent: 0)
    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: LABA32, rhs: LABA32) -> Bool {
        return lhs.colour == rhs.colour
    }
}
I might be wrong, but in theory, if I draw the CGImage into a context with a LAB colour space instead of device RGB, it should map the data to this new struct.
The problem I'm having is actually creating the colour space (let alone testing whether this theory will actually work).
To create a LAB colour space I am trying to use this initializer:
CGColorSpace(labWhitePoint: <UnsafePointer<CGFloat>!>, blackPoint: <UnsafePointer<CGFloat>!>, range: <UnsafePointer<CGFloat>!>)
According to Apple's documentation:
whitePoint: An array of 3 numbers that specify the tristimulus value, in the CIE 1931 XYZ-space, of the diffuse white point.
blackPoint: An array of 3 numbers that specify the tristimulus value, in CIE 1931 XYZ-space, of the diffuse black point.
range: An array of 4 numbers that specify the range of valid values for the a* and b* components of the color space. The a* component represents values running from green to red, and the b* component represents values running from blue to yellow.
So I've created three arrays of CGFloat values:
var whitePoint:[CGFloat] = [0.95947,1,1.08883]
var blackPoint:[CGFloat] = [0,0,0]
var range:[CGFloat] = [-127,127,-127,127]
I then try to construct the colour space
let colorSpace = CGColorSpace(labWhitePoint: &whitePoint, blackPoint: &blackPoint, range: &range)
The problem is that I keep getting the error "unsupported color space", so I must be doing something completely wrong. I've spent a lot of time looking for others trying to construct a LAB colour space, but there doesn't seem to be anything relevant, even in Objective-C.
So how do I actually create a LAB colour space correctly?
Thanks.

The documentation also says:
Important: iOS does not support device-independent or generic color spaces. iOS applications must use device color spaces instead.
So if you want to work in LAB I guess you have to do the transformation manually.
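For what it's worth, a rough sketch of that manual conversion (sRGB → XYZ → LAB with a D65 white point, working in Double; this is an illustrative sketch of the standard formulas, not tested production code):
import Foundation   // for pow

func srgbToLab(r8: UInt8, g8: UInt8, b8: UInt8) -> (l: Double, a: Double, b: Double) {
    // 1. 8-bit sRGB -> linear RGB
    func linearize(_ c: Double) -> Double {
        return c <= 0.04045 ? c / 12.92 : pow((c + 0.055) / 1.055, 2.4)
    }
    let r = linearize(Double(r8) / 255)
    let g = linearize(Double(g8) / 255)
    let b = linearize(Double(b8) / 255)

    // 2. linear RGB -> CIE 1931 XYZ (sRGB matrix, D65)
    let x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    let y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    let z = 0.0193 * r + 0.1192 * g + 0.9505 * b

    // 3. XYZ -> LAB, relative to the D65 white point
    func f(_ t: Double) -> Double {
        return t > 0.008856 ? pow(t, 1.0 / 3.0) : 7.787 * t + 16.0 / 116.0
    }
    let fx = f(x / 0.95047)
    let fy = f(y / 1.0)
    let fz = f(z / 1.08883)
    return (l: 116 * fy - 16, a: 500 * (fx - fy), b: 200 * (fy - fz))
}
The L, a and b results could then be quantised into the LABA32 layout above, if that's how you want to store them.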


How do I convert a UIColor to a hexadecimal string of 3/4/6/8 digits in Swift?
How do I get a specific one? For example, get "#0000FFFF" by calling UIColor.blue.eightDigitsString.
Please see this:
5.2. The RGB hexadecimal notations: #RRGGBB
The CSS hex color notation allows a color to be specified by giving the channels as hexadecimal numbers, which is similar to how colors are often written directly in computer code. It’s also shorter than writing the same color out in rgb() notation.
The syntax of a hex color is a <hash-token> token whose value consists of 3, 4, 6, or 8 hexadecimal digits. In other words, a hex color is written as a hash character, "#", followed by some number of digits 0-9 or letters a-f (the case of the letters doesn't matter - #00ff00 is identical to #00FF00).
The number of hex digits given determines how to decode the hex notation into an RGB color:
6 digits
The first pair of digits, interpreted as a hexadecimal number, specifies the red channel of the color, where 00 represents the minimum value and ff (255 in decimal) represents the maximum. The next pair of digits, interpreted in the same way, specifies the green channel, and the last pair specifies the blue. The alpha channel of the color is fully opaque.
In other words, #00ff00 represents the same color as rgb(0 255 0) (a lime green).
8 digits
The first 6 digits are interpreted identically to the 6-digit notation. The last pair of digits, interpreted as a hexadecimal number, specifies the alpha channel of the color, where 00 represents a fully transparent color and ff represents a fully opaque color.
In other words, #0000ffcc represents the same color as rgb(0 0 100% / 80%) (a slightly-transparent blue).
3 digits
This is a shorter variant of the 6-digit notation. The first digit, interpreted as a hexadecimal number, specifies the red channel of the color, where 0 represents the minimum value and f represents the maximum. The next two digits represent the green and blue channels, respectively, in the same way. The alpha channel of the color is fully opaque.
This syntax is often explained by saying that it’s identical to a 6-digit notation obtained by "duplicating" all of the digits. For example, the notation #123 specifies the same color as the notation #112233. This method of specifying a color has lower "resolution" than the 6-digit notation; there are only 4096 possible colors expressible in the 3-digit hex syntax, as opposed to approximately 17 million in 6-digit hex syntax.
4 digits
This is a shorter variant of the 8-digit notation, "expanded" in the same way as the 3-digit notation is. The first digit, interpreted as a hexadecimal number, specifies the red channel of the color, where 0 represents the minimum value and f represents the maximum. The next three digits represent the green, blue, and alpha channels, respectively.
Now I already know how to convert a UIColor object to a 6-digits hex string. But I'm not sure how to convert it to a 3-digits/4-digits/8-digits hex string and what should be noticed.
guard let components = cgColor.components, components.count >= 3 else {
    return nil
}

let r = Float(components[0])
let g = Float(components[1])
let b = Float(components[2])
var a = Float(1.0)

if components.count >= 4 {
    a = Float(components[3])
}

if alpha {
    // rrggbbaa mode
    // is there any difference between rrggbbaa and aarrggbb?
    return String(format: "%02lX%02lX%02lX%02lX", lroundf(r * 255), lroundf(g * 255), lroundf(b * 255), lroundf(a * 255))
} else {
    // rrggbb mode
    return String(format: "%02lX%02lX%02lX", lroundf(r * 255), lroundf(g * 255), lroundf(b * 255))
}
NOTE: it's UIColor to string, not string to UIColor
Here's an extension for UIColor that can provide hex strings in many formats, including the 3, 4, 6, and 8 digit forms:
extension UIColor {
    enum HexFormat {
        case RGB
        case ARGB
        case RGBA
        case RRGGBB
        case AARRGGBB
        case RRGGBBAA
    }

    enum HexDigits {
        case d3, d4, d6, d8
    }

    func hexString(_ format: HexFormat = .RRGGBBAA) -> String {
        let maxi = [.RGB, .ARGB, .RGBA].contains(format) ? 16 : 256

        func toI(_ f: CGFloat) -> Int {
            return min(maxi - 1, Int(CGFloat(maxi) * f))
        }

        var r: CGFloat = 0
        var g: CGFloat = 0
        var b: CGFloat = 0
        var a: CGFloat = 0
        self.getRed(&r, green: &g, blue: &b, alpha: &a)

        let ri = toI(r)
        let gi = toI(g)
        let bi = toI(b)
        let ai = toI(a)

        switch format {
        case .RGB:      return String(format: "#%X%X%X", ri, gi, bi)
        case .ARGB:     return String(format: "#%X%X%X%X", ai, ri, gi, bi)
        case .RGBA:     return String(format: "#%X%X%X%X", ri, gi, bi, ai)
        case .RRGGBB:   return String(format: "#%02X%02X%02X", ri, gi, bi)
        case .AARRGGBB: return String(format: "#%02X%02X%02X%02X", ai, ri, gi, bi)
        case .RRGGBBAA: return String(format: "#%02X%02X%02X%02X", ri, gi, bi, ai)
        }
    }

    func hexString(_ digits: HexDigits) -> String {
        switch digits {
        case .d3: return hexString(.RGB)
        case .d4: return hexString(.RGBA)
        case .d6: return hexString(.RRGGBB)
        case .d8: return hexString(.RRGGBBAA)
        }
    }
}
Examples
print(UIColor.red.hexString(.d3)) // #F00
print(UIColor.red.hexString(.d4)) // #F00F
print(UIColor.red.hexString(.d6)) // #FF0000
print(UIColor.red.hexString(.d8)) // #FF0000FF
print(UIColor.green.hexString(.RGB)) // #0F0
print(UIColor.green.hexString(.ARGB)) // #F0F0
print(UIColor.green.hexString(.RGBA)) // #0F0F
print(UIColor.green.hexString(.RRGGBB)) // #00FF00
print(UIColor.green.hexString(.AARRGGBB)) // #FF00FF00
print(UIColor.green.hexString(.RRGGBBAA)) // #00FF00FF
print(UIColor(red: 0.25, green: 0.5, blue: 0.75, alpha: 0.3333).hexString()) // #4080C055
Any UIColor instance can be represented by 8 hexadecimal digits: for example #336699CC. For some colours, a shorter representation can be used:
for opaque colours (alpha unspecified or 1.0), the alpha component can be left off: #336699FF becomes #336699
if all colour components consist of pairs of the same digit, the digit only needs to be specified once: #336699CC becomes #369C, but #335799CC cannot be shortened
the two rules above can be combined: #336699FF becomes #369
The following function will return the shortest valid representation allowed for a given UIColor.
struct HexRepresentationOptions: OptionSet {
    let rawValue: UInt

    static let allowImplicitAlpha = HexRepresentationOptions(rawValue: 1 << 0)
    static let allowShortForm = HexRepresentationOptions(rawValue: 1 << 1)
    static let allowAll: HexRepresentationOptions = [
        .allowImplicitAlpha,
        .allowShortForm
    ]
}

func hexRepresentation(forColor color: UIColor,
                       options: HexRepresentationOptions = .allowAll) -> String? {
    var red: CGFloat = 0.0
    var green: CGFloat = 0.0
    var blue: CGFloat = 0.0
    var alpha: CGFloat = 0.0
    guard color.getRed(&red, green: &green, blue: &blue, alpha: &alpha) else {
        return nil
    }

    let colorComponents: [CGFloat]
    if options.contains(.allowImplicitAlpha) && alpha == 1.0 {
        colorComponents = [red, green, blue]
    } else {
        colorComponents = [red, green, blue, alpha]
    }

    let hexComponents = colorComponents.map { component -> (UInt8, UInt8, UInt8) in
        let hex = UInt8(component * 0xFF)
        return (hex, hex & 0x0F, hex >> 4)
    }

    let hasAlpha = colorComponents.count == 4
    let useShortForm = options.contains(.allowShortForm) &&
        !hexComponents.contains(where: { c in c.1 != c.2 })

    let hexColor: UInt64 = hexComponents.reduce(UInt64(0)) { result, component in
        if useShortForm {
            return (result << 4) | UInt64(component.1)
        } else {
            return (result << 8) | UInt64(component.0)
        }
    }

    switch (useShortForm, hasAlpha) {
    case (true, false):
        return String(format: "#%03X", hexColor)
    case (true, true):
        return String(format: "#%04X", hexColor)
    case (false, false):
        return String(format: "#%06X", hexColor)
    case (false, true):
        return String(format: "#%08X", hexColor)
    }
}
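A quick usage check of what those rules should produce (my own examples, assuming standard sRGB UIColor values):
hexRepresentation(forColor: .red)                             // "#F00"  (short form, implicit alpha)
hexRepresentation(forColor: .red, options: [])                // "#FF0000FF"  (no shortening allowed)
hexRepresentation(forColor: UIColor(white: 0.5, alpha: 1.0))  // "#7F7F7F"  (digit pairs differ, so no short form)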

In an MTKView, how to get pixel information (RGB and alpha values) of a given point in 3D (x, y, z)?

OK, we have drawn several things in an MTKView. We can move and turn around them using Metal and MetalKit functions, but we are unable to get the pixel information of a given point in 3D (x, y, z). We searched for several hours and could not find any solution for that.
It is impossible to get a color from 3D model space, but it is possible to get a color from 2D view space.
func getColor(x: CGFloat, y: CGFloat) -> UIColor? {
    if let curDrawable = self.currentDrawable {
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]   // bgra
        let textureScale = CGFloat(curDrawable.texture.width) / self.bounds.width
        let bytesPerRow = curDrawable.texture.width * 4
        let y = self.bounds.height - y
        curDrawable.texture.getBytes(&pixel, bytesPerRow: bytesPerRow, from: MTLRegionMake2D(Int(x * textureScale), Int(y * textureScale), 1, 1), mipmapLevel: 0)
        let red: CGFloat = CGFloat(pixel[2]) / 255.0
        let green: CGFloat = CGFloat(pixel[1]) / 255.0
        let blue: CGFloat = CGFloat(pixel[0]) / 255.0
        let alpha: CGFloat = CGFloat(pixel[3]) / 255.0
        let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)
        return color
    }
    return nil
}
This seems like a classic case of trying to consult your view (in the Model-View-Controller paradigm sense of the term) for information held by the model. Consult your model directly.
A 3D rendering technology like Metal is all about flattening 3D information to 2D and efficiently throwing away information not relevant to that 2D representation. Neither Metal nor the MTKView has any information about points in 3D by the end of the render.
Also note, your model probably doesn't have information for arbitrary 3D points. Most likely all of your 3D models (in the other sense of the term) are shells. They have no interiors, just surfaces. So, unless a 3D point falls exactly on the surface and not at all inside nor outside of the model, it has no color.

Optimize retrieval of all rendered pixels' data in a UIView

I need to perform some statistics and pixel-by-pixel analysis of a UIView containing subviews, sublayers and a mask, in a small iOS Swift 3 project.
For the moment I came up with the following:
private func computeStatistics() {
    // constants
    let width: Int = Int(self.bounds.size.width)
    let height: Int = Int(self.bounds.size.height)

    // color extractor
    let pixel = UnsafeMutablePointer<CUnsignedChar>.allocate(capacity: 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)

    for x in 0..<width {
        for y in 0..<height {
            let context = CGContext(data: pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
            context!.translateBy(x: -CGFloat(x), y: -CGFloat(y))
            layer.render(in: context!)

            // analyse the pixel here
            // e.g. totalRed += pixel[0]
        }
    }

    pixel.deallocate(capacity: 4)
}
It's working; the problem is that on a fullscreen view, even on an iPhone 4, this would mean 150,000 instantiations of the context and as many expensive renders, which besides being very slow must also have an issue with deallocation, because it saturates my memory (even in the simulator).
I tried analysing only a fraction of the pixels:
let definition: Int = width / 10
for x in 0..<width where x%definition == 0 {
...
}
But besides still taking up to 10 seconds, even on a simulated iPhone 7, it is a very poor solution.
Is it possible to avoid re-rendering and translating the context every time?
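One way to avoid that (a sketch only, assuming an 8-bit RGBA layout; totalRed is just a placeholder statistic) is to render the layer once into a single width × height bitmap context and then walk the resulting buffer, instead of creating a 1×1 context per pixel:
private func computeStatisticsFast() {
    let width = Int(bounds.size.width)
    let height = Int(bounds.size.height)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue

    guard let context = CGContext(data: nil, width: width, height: height,
                                  bitsPerComponent: 8, bytesPerRow: width * 4,
                                  space: colorSpace, bitmapInfo: bitmapInfo) else { return }

    // render the whole layer once instead of once per pixel
    layer.render(in: context)

    let bytesPerRow = context.bytesPerRow
    guard let data = context.data?.bindMemory(to: UInt8.self, capacity: bytesPerRow * height) else { return }

    // analyse the pixels straight out of the buffer
    var totalRed = 0
    for y in 0..<height {
        for x in 0..<width {
            let offset = y * bytesPerRow + x * 4
            totalRed += Int(data[offset])   // red channel in the RGBA layout
        }
    }
    print(totalRed)
}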

How to extract an array of binary values from a CGImage in Swift

I am using a getPixelColor function (printed below) to extract all the pixel data from a CGImage:
extension CGImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = self.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
I am then averaging the R,G,B pixel values to calculate the intensity.
I am placing each intensity value into a 2-D array with dimensions imageWidth x imageHeight. These intensity values are then checked against a certain threshold, and accordingly assigned either a zero or one.
extension UIColor {
    var coreImageColor: CIColor {
        return CIColor(color: self)
    }
    var components: (red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
        let color = coreImageColor
        return (color.red, color.green, color.blue, color.alpha)
    }
}

var intensityArray: [[Float]] = Array(repeating: Array(repeating: 0, count: width), count: height)

// fill the intensity array with a nested for loop
for index1 in 0...height-1 {      // make sure height-1 is always used here
    for index2 in 0...width-1 {   // (width-1) has to go here because the last index in the array is the size - 1
        let cgPoint = CGPoint(x: CGFloat(index2), y: CGFloat(index1))   // sometimes get an "index out of range" error here
        let color = cgImage?.getPixelColor(pos: cgPoint)
        let CIcolor = color?.coreImageColor
        let greencomponent = Float((color?.components.green)!)
        let redcomponent = Float((color?.components.red)!)
        let bluecomponent = Float((color?.components.blue)!)
        let alphacomponent = Float((color?.components.alpha)!)
        var intensity = (greencomponent + redcomponent + bluecomponent) / 3
        if intensity > 0.9 {
            intensity = 1
        } else {
            intensity = 0
        }
        intensityArray[index1][index2] = intensity
    }
}
With images larger than 100x100, this process takes a very long time, so I was wondering if there was a simpler way to get the image pixel data in binary form.
Looked a little deeper, and my guess wasn't the major problem. The major problem was all the extra objects you're creating (you're doing way too much work here). You don't need to make a UIColor and a CIColor for every pixel just to extract back the information you already had.
Here's a sketch, lightly tested (might be a bug in the computation; you'll have to unit test it)
var intensityArray: [[Double]] = Array(repeating: Array(repeating: 0, count: width), count: height)

// fill the intensity array with a nested for loop
for index1 in 0...height-1 {      // make sure height-1 is always used here
    for index2 in 0...width-1 {   // (width-1) has to go here because the last index in the array is the size - 1
        let colorIndex = (index1 * width + index2) * 4
        let red = Double(data[colorIndex]) / 255
        let green = Double(data[colorIndex + 1]) / 255
        let blue = Double(data[colorIndex + 2]) / 255
        var intensity = (red + green + blue) / 3
        if intensity > 0.9 {
            intensity = 1
        } else {
            intensity = 0
        }
        intensityArray[index1][index2] = intensity
    }
}
This is still a long way from the fastest approach, but it's much faster than what you're doing now. If you need this to be really fast, you'll want to look at CIImage and CIFilter, which have these kinds of tools built in (though I haven't looked up how to do this specific operation; it depends on what you're planning on doing with the intensities).
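For instance, if a per-channel binarization is acceptable, something along these lines might replace the loop entirely (a hypothetical sketch: the CIColorThreshold filter and its inputThreshold key are an assumption to verify against the Core Image filter reference, and it thresholds each channel independently rather than the averaged intensity):
import CoreImage

let ciImage = CIImage(cgImage: cgImage)   // assumes a non-optional CGImage
let filter = CIFilter(name: "CIColorThreshold")
filter?.setValue(ciImage, forKey: kCIInputImageKey)
filter?.setValue(0.9, forKey: "inputThreshold")
let binarized = filter?.outputImage
// render `binarized` with a CIContext and read the bytes back if you need an array of 0/1 values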
OLD ANSWER; useful, but may or may not be the major cause on this one.
I'm going to go out on a limb here (too lazy to actually throw it in the profiler) and blind-guess that it's this line that takes almost all your time:
intensityArray[index1][index2] = intensity
This is not "a 2-D array." There is no such thing in Swift. This is an Array of Arrays, which is a different thing (each row could be a different length). That is very likely causing tons of copy-on-writes that duplicate the entire array.
I'd simplify it to [Float] and just write to intensityArray[index1 * width + index2]. That should be very fast (in fact, you could just use .append rather than subscripting at all). When you're done, you can pack it some other way if you like.
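A rough sketch of that flat-array variant (same loop as the sketch above, just one flat buffer indexed by row * width + column; untested):
var flatIntensity = [Double](repeating: 0, count: width * height)

for index1 in 0..<height {
    for index2 in 0..<width {
        let colorIndex = (index1 * width + index2) * 4
        let intensity = (Double(data[colorIndex]) + Double(data[colorIndex + 1]) + Double(data[colorIndex + 2])) / (3 * 255.0)
        // one flat buffer, so no arrays-of-arrays and no copy-on-write churn
        flatIntensity[index1 * width + index2] = intensity > 0.9 ? 1 : 0
    }
}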
But the lesson here is that you should always start by profiling with Instruments. Don't guess where the problem is (yeah, I guessed, but I've seen this problem many times), test.
Also, double check whether this runs dramatically faster in Release mode. Sometimes these kinds of things are really slow in Debug, but fine once you optimize.

Resizing (downscaling) an NSImage slightly changes RGB values. How do I preserve the original RGB values?

I have a project I'm working on that requires the original RGB values from an NSImage. When downscaling a completely red PNG image (RGB: 255, 0, 0) to a size of 200x200, I get slightly different RGB values (RGB: 251, 0, 7). The resizing code and pixel extraction code are down below. I have two questions. Is this expected behavior when resizing an NSImage using the code below? Is it possible to retain the original RGB values of the image (the RGB values that existed before the downscaling)?
Resizing Code (credit):
open func resizeImage(image: NSImage, newSize: NSSize) -> NSImage {
    let rep = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(newSize.width), pixelsHigh: Int(newSize.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSCalibratedRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)
    rep?.size = newSize

    NSGraphicsContext.saveGraphicsState()
    let bitmap = NSGraphicsContext.init(bitmapImageRep: rep!)
    NSGraphicsContext.setCurrent(bitmap)
    image.draw(in: NSMakeRect(0, 0, newSize.width, newSize.height), from: NSMakeRect(0, 0, image.size.width, image.size.height), operation: .sourceOver, fraction: CGFloat(1))

    let newImage = NSImage(size: newSize)
    newImage.addRepresentation(rep!)
    return newImage
}
The code I used to extract the RGB values from an NSImage is down below:
RGB Extraction Code (credit):
extension NSImage {
    func pixelData() -> [Pixel] {
        var bmp = self.representations[0] as! NSBitmapImageRep
        var data: UnsafeMutablePointer<UInt8> = bmp.bitmapData!
        var r, g, b, a: UInt8
        var pixels: [Pixel] = []

        NSLog("%d", bmp.pixelsHigh)
        NSLog("%d", bmp.pixelsWide)

        for row in 0..<bmp.pixelsHigh {
            for col in 0..<bmp.pixelsWide {
                r = data.pointee
                data = data.advanced(by: 1)
                g = data.pointee
                data = data.advanced(by: 1)
                b = data.pointee
                data = data.advanced(by: 1)
                a = data.pointee
                data = data.advanced(by: 1)
                pixels.append(Pixel(r: r, g: g, b: b, a: a))
            }
        }
        return pixels
    }
}

class Pixel {
    var r: Float!
    var g: Float!
    var b: Float!

    init(r: UInt8, g: UInt8, b: UInt8, a: UInt8) {
        self.r = Float(r)
        self.g = Float(g)
        self.b = Float(b)
    }
}
The issue has been resolved thanks to @KenThomases. When resizing the NSImage I was setting the color space for the NSBitmapImageRep object to NSCalibratedRGBColorSpace. The original NSImage before the downscaling had a different color space name than the downscaled image. A simple change of color spaces produced the correct results: NSCalibratedRGBColorSpace was changed to NSDeviceRGBColorSpace.
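In other words, the only change needed in the resizing code above is the colorSpaceName argument passed to NSBitmapImageRep (just the affected call shown):
// was: colorSpaceName: NSCalibratedRGBColorSpace
let rep = NSBitmapImageRep(bitmapDataPlanes: nil, pixelsWide: Int(newSize.width), pixelsHigh: Int(newSize.height), bitsPerSample: 8, samplesPerPixel: 4, hasAlpha: true, isPlanar: false, colorSpaceName: NSDeviceRGBColorSpace, bytesPerRow: 0, bitsPerPixel: 0)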
