I'm trying to get the CGColor of a specific point in my NSImage (a PNG file). This function is called on NSViews that can be dragged over my NSImageView. The function should set a variable (defaultColor) to the color that sits exactly under the NSView's position on the NSImageView. For testing, I then fill each NSView with the color stored in that variable (i.e. the color where the NSView is positioned on the NSImageView).
As you can see in the screenshots, I displayed a 300x300 image containing four different colors in the NSImageView. The colors are detected, but they appear to be mirrored vertically: the colors on the top are measured when the NSViews are on the bottom, and the colors on the bottom are measured when the NSViews are on the top.
Is the byte order wrong? How can I swap it? I have already read How do I get the color of a pixel in a UIImage with Swift? and Why do I get the wrong color of a pixel with following code?. That is where the code I use comes from; I changed it a little:
func setDefaulColor(image: NSImage) {
    let posX = self.frame.origin.x + (self.frame.width / 2)
    let posY = self.frame.origin.y + (self.frame.height / 2)

    var r: CGFloat = 0
    var g: CGFloat = 0
    var b: CGFloat = 0
    var a: CGFloat = 1

    if posX <= image.size.width && posY <= image.size.height {
        var imageRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
        let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)

        let pixelData = imageRef!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

        let pixelInfo: Int = Int(posY) * imageRef!.bytesPerRow + Int(posX) * 4

        r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
    }

    self.defaultColor = CGColor(red: r, green: g, blue: b, alpha: a)
    setNeedsDisplay(NSRect(x: 0, y: 0, width: self.frame.width, height: self.frame.height))
}
Here are some screenshots:
NSViews on the top
NSViews on the bottom
The PNG file displayed by the NSImageView should be in RGBA format. As you can see, I think the colors are extracted correctly from the pixel data:
r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
but it seems the pixel data is loaded in the wrong order?
Do you know how to change the data order, or why this is happening?
The coordinate system of an image and the coordinate system of your view are not the same, so a conversion is needed between them.
It is hard to say how
let posX = self.frame.origin.x + (self.frame.width / 2)
let posY = self.frame.origin.y + (self.frame.height / 2)
relate to your image, since you did not provide any additional information.
If you have an image view and you would like to extract a pixel at a certain position (x, y), then you need to take scaling and content mode into consideration.
The image itself is usually laid out in the byte buffer so that the top-left pixel comes first, followed by the pixel to its right. The coordinate system of NSView is not like that, though: it starts at the bottom left.
To begin with, it makes most sense to get a relative position, i.e. a point with coordinates within [0, 1]. For your view it should be:
func getRelativePositionInView(_ view: NSView, absolutePosition: (x: CGFloat, y: CGFloat)) -> (x: CGFloat, y: CGFloat) {
    return ((absolutePosition.x - view.frame.origin.x) / view.frame.width,
            (absolutePosition.y - view.frame.origin.y) / view.frame.height)
}
Now this point needs to be converted to the image coordinate system, where a vertical flip is done and scaling is applied.
If the content mode is simply "scale" (the whole image is shown), the solution is simple:
func pointOnImage(_ image: NSImage, relativePositionInView: (x: CGFloat, y: CGFloat)) -> (x: CGFloat, y: CGFloat)? {
    let convertedCoordinates: (x: CGFloat, y: CGFloat) = (
        relativePositionInView.x * image.size.width,
        (1.0 - relativePositionInView.y) * image.size.height
    )
    guard convertedCoordinates.x >= 0.0 else { return nil }
    guard convertedCoordinates.y >= 0.0 else { return nil }
    guard convertedCoordinates.x < image.size.width else { return nil }
    guard convertedCoordinates.y < image.size.height else { return nil }
    return convertedCoordinates
}
Other common modes are scale-aspect-fill and scale-aspect-fit. Those need extra computation when converting points, but that does not seem to be part of your issue (for now).
So these two methods will most likely fix your issue. But you can also just apply a very short fix:
let posY = whateverViewTheImageIsOn.frame.height - (self.frame.origin.y + (self.frame.height / 2))
Personally I find this messy, but you be the judge of that.
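For completeness, here is a sketch of how the two helpers could plug into the asker's method. This assumes the NSViews and the NSImageView share a superview, the image view shows the whole image, and the bitmap is 8-bit RGBA; the imageView parameter is illustrative:
func setDefaultColor(image: NSImage, imageView: NSImageView) {
    // Center of this view in the shared superview's coordinate system.
    let center = (x: self.frame.midX, y: self.frame.midY)
    let relative = getRelativePositionInView(imageView, absolutePosition: center)
    guard let point = pointOnImage(image, relativePositionInView: relative) else { return }

    var imageRect = CGRect(origin: .zero, size: image.size)
    guard let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil),
          let pixelData = imageRef.dataProvider?.data,
          let data = CFDataGetBytePtr(pixelData) else { return }

    // Assumes an 8-bit RGBA bitmap, as in the original code.
    let pixelInfo = Int(point.y) * imageRef.bytesPerRow + Int(point.x) * 4
    defaultColor = CGColor(red: CGFloat(data[pixelInfo]) / 255.0,
                           green: CGFloat(data[pixelInfo + 1]) / 255.0,
                           blue: CGFloat(data[pixelInfo + 2]) / 255.0,
                           alpha: CGFloat(data[pixelInfo + 3]) / 255.0)
    setNeedsDisplay(bounds)
}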
There are also some other considerations that may or may not apply to your case. When an image is displayed, pixel colors may appear different from what is in your buffer, mostly due to scaling: for instance, a pure black-and-white image may show gray areas on some pixels. If that is something you want reflected when reading a color, it makes more sense to look into creating an image from the NSView itself. That approach could also remove a lot of the coordinate math for you.
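If you go that route on macOS, AppKit can do the sampling for you. A minimal sketch (names are illustrative; note the rep can hold more pixels than the view has points on Retina displays, hence the scaling):
func renderedColor(of view: NSView, atViewPoint point: NSPoint) -> NSColor? {
    guard let rep = view.bitmapImageRepForCachingDisplay(in: view.bounds) else { return nil }
    view.cacheDisplay(in: view.bounds, to: rep)
    // NSBitmapImageRep pixel coordinates start at the top-left, so flip y.
    let scaleX = CGFloat(rep.pixelsWide) / view.bounds.width
    let scaleY = CGFloat(rep.pixelsHigh) / view.bounds.height
    return rep.colorAt(x: Int(point.x * scaleX),
                       y: Int((view.bounds.height - point.y) * scaleY))
}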
I have a UIImage and I want all the CGPoints of a specific colour in this image. For example, I have this image:
Now I want to get the RED-coloured CGPoints from this UIImage. The red colour may be a straight horizontal/vertical line; it may also be a curved/zigzag line.
This is what I have tried, but I am unable to detect the required RED-coloured CGPoints.
Loop through the image size/pixels to detect the colour:
var requiredPointsInImage = [CGPoint]()
let testImage = UIImage.init(named: "imgToTest1")

for heightIteration in 0..<Int(testImage!.size.height) {
    for widthIteration in 0..<Int(testImage!.size.width) {
        let colorOfPoints = testImage!.getPixelColor(pos: CGPoint(x: CGFloat(widthIteration), y: CGFloat(heightIteration)), withFrameSize: testImage!.size)
        if colorOfPoints == MYColor {
            print(colorOfPoints)
            requiredPointsInImage.append(CGPoint(x: CGFloat(widthIteration), y: CGFloat(heightIteration)))
        }
    }
}
let newImage = drawShapesOnImage(image: testImage!, points: requiredPointsInImage)
// Colour detection
extension UIImage {
    func getPixelColor(pos: CGPoint, withFrameSize size: CGSize) -> UIColor {
        let x: CGFloat = (self.size.width) * pos.x / size.width
        let y: CGFloat = (self.size.height) * pos.y / size.height
        let pixelPoint: CGPoint = CGPoint(x: x, y: y)
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 3 //4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        // let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: 1)
    }
}
// Drawing on detected points
func drawShapesOnImage(image: UIImage, points: [CGPoint]) -> UIImage {
    UIGraphicsBeginImageContext(image.size)
    image.draw(at: CGPoint.zero)
    let context = UIGraphicsGetCurrentContext()
    context!.setLineWidth(2.0)
    context!.setStrokeColor(UIColor.green.cgColor)
    let radius: CGFloat = 5.0
    for point in points {
        let center = CGPoint(x: point.x, y: point.y)
        context!.addArc(center: CGPoint(x: center.x, y: center.y), radius: radius, startAngle: 0, endAngle: 360, clockwise: true)
        context!.strokePath()
    }
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resultImage!
}
This is what I have got .....
Note: the background may not always be white, but of course it will not be RED.
Using the same getPixelColor() function I can get the exact colour of any CGPoint from the image below, but only with y constant (say y: 1).
If you think there is any other better approach to detect all these points, please suggest it.
So I have found the solution. The problem was that I was ignoring the alpha channel in the UIImage extension getPixelColor(): the pixel stride must be 4 bytes (RGBA) rather than 3, and the alpha component must be read as well. Using alpha solved my problem.
Here is the updated code:
extension UIImage {
    func getPixelColor(pos: CGPoint, withFrameSize size: CGSize) -> UIColor {
        let x: CGFloat = (self.size.width) * pos.x / size.width
        let y: CGFloat = (self.size.height) * pos.y / size.height
        let pixelPoint: CGPoint = CGPoint(x: x, y: y)
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
And the result is:
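Worth noting when matching colours this way: exact UIColor equality requires every float component to match, which breaks down once the image has been resampled. A small hypothetical helper with a tolerance (the names and threshold are mine, not from the original answer):
func matches(_ a: UIColor, _ b: UIColor, tolerance: CGFloat = 0.05) -> Bool {
    // Compare per-component with a tolerance instead of exact float equality.
    var (r1, g1, b1, a1): (CGFloat, CGFloat, CGFloat, CGFloat) = (0, 0, 0, 0)
    var (r2, g2, b2, a2): (CGFloat, CGFloat, CGFloat, CGFloat) = (0, 0, 0, 0)
    a.getRed(&r1, green: &g1, blue: &b1, alpha: &a1)
    b.getRed(&r2, green: &g2, blue: &b2, alpha: &a2)
    return abs(r1 - r2) <= tolerance && abs(g1 - g2) <= tolerance
        && abs(b1 - b2) <= tolerance && abs(a1 - a2) <= tolerance
}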
Can someone point me in the right direction with color detection at a distance? I have used the code below, and it grabs the RGB values of an image properly if the object or point of interest is less than 10 feet away. When the object is farther away, the code returns the wrong values. I want to take a picture of an object at a distance greater than 10 feet and detect the color in that image.
// On the top of your swift file
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
I am a photographer, and what you are trying to do is very similar to setting a white balance in post-processing, or to using the color picker in PS.
Digital cameras don't have pixels that capture the full spectrum of light at once. They have triplets of pixels for RGB. The captured information is interpolated, and this can give very bad results. Setting the white balance in post on an image taken at night is almost impossible.
Reasons for bad interpolation:
Pixels are bigger than the smallest discernible object in the scene (moiré artifacts).
Low-light situations where digital gain increases color differences (color noise artifacts).
The image was converted to a low-quality JPG but has lots of edges (JPG artifacts). If it is a low-quality JPG, get a better source image.
Fix
All you have to do to get a more accurate reading is blur the image.
The smallest acceptable blur is 3 pixels, because this will undo some of the interpolation. Bigger blurs might be better.
Since blurs are expensive, it is best to crop the image to a multiple of the blur radius. You can't crop to a precise fit, because that would also blur the edges, and beyond the edges the image is black, which would influence your reading.
It might be best if you also enforce an upper limit on the blur radius.
A shortcut to get the center of something with a size:
extension CGSize {
    var center: CGPoint {
        get {
            return CGPoint(x: width / 2, y: height / 2)
        }
    }
}
The UIImage stuff
extension UIImage {
    func blur(radius: CGFloat) -> UIImage? {
        // extensions of UIImage don't know what a CIImage is...
        typealias CIImage = CoreImage.CIImage
        // blur of your choice
        guard let blurFilter = CIFilter(name: "CIBoxBlur") else {
            return nil
        }
        blurFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        blurFilter.setValue(radius, forKey: kCIInputRadiusKey)
        let ciContext = CIContext(options: nil)
        guard let result = blurFilter.valueForKey(kCIOutputImageKey) as? CIImage else {
            return nil
        }
        let blurRect = CGRect(x: -radius, y: -radius, width: self.size.width + (radius * 2), height: self.size.height + (radius * 2))
        let cgImage = ciContext.createCGImage(result, fromRect: blurRect)
        return UIImage(CGImage: cgImage)
    }

    func crop(cropRect: CGRect) -> UIImage? {
        guard let imgRef = CGImageCreateWithImageInRect(self.CGImage, cropRect) else {
            return nil
        }
        return UIImage(CGImage: imgRef)
    }

    func getPixelColor(atPoint point: CGPoint, radius: CGFloat) -> UIColor? {
        var pos = point
        var image = self
        // if the radius is too small -> skip the crop/blur step
        if radius > 1 {
            let cropRect = CGRect(x: point.x - (radius * 4), y: point.y - (radius * 4), width: radius * 8, height: radius * 8)
            guard let cropImg = self.crop(cropRect) else {
                return nil
            }
            guard let blurImg = cropImg.blur(radius) else {
                return nil
            }
            pos = blurImg.size.center
            image = blurImg
        }
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(image.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Side note:
Your problem might not be the color-grabbing function, but how you set the point. If you are doing it by touch and the object is farther away, and thus smaller on screen, you might not set it accurately enough.
read the average color of a UIImage ==> https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage
extension UIImage {
    var averageColor: UIColor? {
        guard let inputImage = CIImage(image: self) else { return nil }
        let extentVector = CIVector(x: inputImage.extent.origin.x, y: inputImage.extent.origin.y, z: inputImage.extent.size.width, w: inputImage.extent.size.height)

        guard let filter = CIFilter(name: "CIAreaAverage", parameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: extentVector]) else { return nil }
        guard let outputImage = filter.outputImage else { return nil }

        var bitmap = [UInt8](repeating: 0, count: 4)
        let context = CIContext(options: [.workingColorSpace: kCFNull!])
        context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: .RGBA8, colorSpace: nil)

        return UIColor(red: CGFloat(bitmap[0]) / 255, green: CGFloat(bitmap[1]) / 255, blue: CGFloat(bitmap[2]) / 255, alpha: CGFloat(bitmap[3]) / 255)
    }
}
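Usage is then a one-liner. Averaging a whole region this way is another route to stable readings when single pixels are noisy ("photo" is a placeholder asset name):
if let average = UIImage(named: "photo")?.averageColor {
    view.backgroundColor = average
}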
I'm trying to get the color of a pixel in a UIImage with Swift, but it seems to always return 0. Here is the code, translated from @Minas' answer on this thread:
func getPixelColor(pos: CGPoint) -> UIColor {
    var pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
    var data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    var pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
    var r = CGFloat(data[pixelInfo])
    var g = CGFloat(data[pixelInfo+1])
    var b = CGFloat(data[pixelInfo+2])
    var a = CGFloat(data[pixelInfo+3])
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
Thanks in advance!
A bit of searching led me here, since I was facing a similar problem.
Your code works fine. The problem might arise from your image.
Code:
// On the top of your swift file
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
What happens is that this method picks the pixel colour from the image's CGImage. So make sure you are picking from the right image. E.g. if your UIImage is 200x200, but the original image file from Images.xcassets (or wherever it came from) is 400x400, and you are picking point (100, 100), you are actually picking a point in the upper-left section of the image, instead of the middle.
Two solutions:
1. Use an image from Images.xcassets, and put only one @1x image in the 1x field. Leave @2x and @3x blank. Make sure you know the image size, and pick a point that is within its range.
//Make sure only 1x image is set
let image : UIImage = UIImage(named:"imageName")
//Make sure point is within the image
let color : UIColor = image.getPixelColor(CGPointMake(xValue, yValue))
2. Scale your CGPoint up/down in proportion to the UIImage. E.g. with let point = CGPoint(x: 100, y: 100) in the example above:
let xCoordinate : Float = Float(point.x) * (400.0/200.0)
let yCoordinate : Float = Float(point.y) * (400.0/200.0)
let newCoordinate : CGPoint = CGPointMake(CGFloat(xCoordinate), CGFloat(yCoordinate))
let image : UIImage = largeImage
let color : UIColor = image.getPixelColor(newCoordinate)
I've only tested the first method, and I am using it to get a colour off a colour palette. Both should work.
Happy coding :)
SWIFT 3, XCODE 8 Tested and working
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
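A quick usage sketch ("palette" is a placeholder asset name). Note that pos is interpreted against the underlying CGImage's pixel grid, so for @2x/@3x assets you should scale your point first, as later answers discuss:
let image = UIImage(named: "palette")! // placeholder asset
let color = image.getPixelColor(pos: CGPoint(x: 10, y: 10))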
If you are calling the answered function more than once, then you should not call it for every pixel, because you would recreate the same data set each time. If you want all of the colors in an image, do something more like this:
func findColors(_ image: UIImage) -> [UIColor] {
    let pixelsWide = Int(image.size.width)
    let pixelsHigh = Int(image.size.height)

    guard let pixelData = image.cgImage?.dataProvider?.data else { return [] }
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)

    var imageColors: [UIColor] = []
    for x in 0..<pixelsWide {
        for y in 0..<pixelsHigh {
            let point = CGPoint(x: x, y: y)
            let pixelInfo: Int = ((pixelsWide * Int(point.y)) + Int(point.x)) * 4
            let color = UIColor(red: CGFloat(data[pixelInfo]) / 255.0,
                                green: CGFloat(data[pixelInfo + 1]) / 255.0,
                                blue: CGFloat(data[pixelInfo + 2]) / 255.0,
                                alpha: CGFloat(data[pixelInfo + 3]) / 255.0)
            imageColors.append(color)
        }
    }
    return imageColors
}
Here is an Example Project
As a side note, this function is significantly faster than the accepted answer, but it gives a less defined result. I just put the UIImageView in the sourceView parameter.
func getPixelColorAtPoint(point: CGPoint, sourceView: UIView) -> UIColor {
    let pixel = UnsafeMutablePointer<CUnsignedChar>.allocate(capacity: 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
    let context = CGContext(data: pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)

    // Render just the one pixel under `point` into a 1x1 context.
    context!.translateBy(x: -point.x, y: -point.y)
    sourceView.layer.render(in: context!)

    let color: UIColor = UIColor(red: CGFloat(pixel[0]) / 255.0,
                                 green: CGFloat(pixel[1]) / 255.0,
                                 blue: CGFloat(pixel[2]) / 255.0,
                                 alpha: CGFloat(pixel[3]) / 255.0)
    pixel.deallocate()
    return color
}
I was getting swapped colors for red and blue.
The original function also did not account for the actual bytes per row and bytes per pixel.
I also avoid unwrapping optionals whenever possible.
Here's an updated function.
import UIKit

extension UIImage {
    /// Get the pixel color at a point in the image
    func pixelColor(atLocation point: CGPoint) -> UIColor? {
        guard let cgImage = cgImage, let pixelData = cgImage.dataProvider?.data else { return nil }

        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let pixelInfo: Int = ((cgImage.bytesPerRow * Int(point.y)) + (Int(point.x) * bytesPerPixel))

        // This image's buffer happens to be BGRA, hence the component order.
        let b = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let r = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Swift 3 (iOS 10.3)
Important: this will only work for @1x images.
Request: if you have a solution for @2x and @3x images, please share it. Thank you :)
extension UIImage {
    func getPixelColor(atLocation location: CGPoint, withFrameSize size: CGSize) -> UIColor {
        let x: CGFloat = (self.size.width) * location.x / size.width
        let y: CGFloat = (self.size.height) * location.y / size.height
        let pixelPoint: CGPoint = CGPoint(x: x, y: y)
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelIndex: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 4
        let r = CGFloat(data[pixelIndex]) / CGFloat(255.0)
        let g = CGFloat(data[pixelIndex+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelIndex+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelIndex+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Usage
print(yourImageView.image!.getPixelColor(atLocation: location, withFrameSize: yourImageView.frame.size))
You can use a tap gesture recognizer to get the location.
Your code works fine for me, as an extension to UIImage. How are you testing your colour? Here's my example:
let green = UIImage(named: "green.png")
let topLeft = CGPoint(x: 0, y: 0)
// Use your extension
let greenColour = green.getPixelColor(topLeft)
// Dump RGBA values
var redval: CGFloat = 0
var greenval: CGFloat = 0
var blueval: CGFloat = 0
var alphaval: CGFloat = 0
greenColour.getRed(&redval, green: &greenval, blue: &blueval, alpha: &alphaval)
println("Green is r: \(redval) g: \(greenval) b: \(blueval) a: \(alphaval)")
This prints:
Green is r: 0.0 g: 1.0 b: 1.0 a: 1.0
...which is correct, given that my image is a solid green square.
(What do you mean by "it always seems to return 0"? You don't happen to be testing on a black pixel, do you?)
I'm getting backwards colours, in terms of R and B being swapped, and I am not sure why; I thought the order was RGBA.
func testGeneratedColorImage() {
    let color = UIColor(red: 0.5, green: 0, blue: 1, alpha: 1)
    let size = CGSize(width: 10, height: 10)
    let image = UIImage.image(fromColor: color, size: size)

    XCTAssert(image.size == size)
    XCTAssertNotNil(image.cgImage)
    XCTAssertNotNil(image.cgImage!.dataProvider)

    let pixelData = image.cgImage!.dataProvider!.data
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let position = CGPoint(x: 1, y: 1)
    let pixelInfo: Int = ((Int(size.width) * Int(position.y)) + Int(position.x)) * 4
    let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
    let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
    let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
    let testColor = UIColor(red: r, green: g, blue: b, alpha: a)

    XCTAssert(testColor == color, "Colour: \(testColor) does not match: \(color)")
}
Where color looks like this:
image looks like this:
and testColor looks like:
(I can understand that the blue value might be off a little bit and be 0.502 with floating point inaccuracy)
With the code switched to:
let b = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let r = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
I get testColor as:
I think you need to divide each component by 255:
var r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
var g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
var b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
var a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
I was trying to find the colors of all four corners of an image and was getting unexpected results, including UIColor.clear.
The issue is that pixel indices start at 0, so requesting a pixel at the full width of the image actually wraps around and gives the first pixel of the next row.
For example, the top-right pixel of a 640 x 480 image is actually at x: 639, y: 0, and the bottom-right pixel is at x: 639, y: 479.
Here's my implementation of the UIImage extension with this adjustment:
func getPixelColor(pos: CGPoint) -> UIColor {
    guard let cgImage = cgImage, let pixelData = cgImage.dataProvider?.data else { return UIColor.clear }

    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let bytesPerPixel = cgImage.bitsPerPixel / 8

    // adjust the coordinates to stay within the width/height of the image
    let y = pos.y > 0 ? pos.y - 1 : 0
    let x = pos.x > 0 ? pos.x - 1 : 0

    let pixelInfo = ((Int(self.size.width) * Int(y)) + Int(x)) * bytesPerPixel
    let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
    let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
    let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
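Given the clamping above, the four corners can then be requested with the image's full width and height, following this author's convention (a sketch; image is any UIImage):
let w = image.size.width
let h = image.size.height
let topLeft = image.getPixelColor(pos: CGPoint(x: 0, y: 0))
let topRight = image.getPixelColor(pos: CGPoint(x: w, y: 0))
let bottomLeft = image.getPixelColor(pos: CGPoint(x: 0, y: h))
let bottomRight = image.getPixelColor(pos: CGPoint(x: w, y: h))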
I found no answer anywhere on the internet that supplied:
Simple code
HDR support
Color profile support for BGR etc.
Scale support for @2x and @3x
So here it is, the (as far as I can tell) definitive solution:
Swift 5
import UIKit

public extension CGBitmapInfo {
    // https://stackoverflow.com/a/60247693/2585092
    enum ComponentLayout {
        case bgra
        case abgr
        case argb
        case rgba
        case bgr
        case rgb

        var count: Int {
            switch self {
            case .bgr, .rgb: return 3
            default: return 4
            }
        }
    }

    var componentLayout: ComponentLayout? {
        guard let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue) else { return nil }
        let isLittleEndian = contains(.byteOrder32Little)
        if alphaInfo == .none {
            return isLittleEndian ? .bgr : .rgb
        }
        let alphaIsFirst = alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst
        if isLittleEndian {
            return alphaIsFirst ? .bgra : .abgr
        } else {
            return alphaIsFirst ? .argb : .rgba
        }
    }

    var chromaIsPremultipliedByAlpha: Bool {
        let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue)
        return alphaInfo == .premultipliedFirst || alphaInfo == .premultipliedLast
    }
}

extension UIImage {
    // https://stackoverflow.com/a/68103748/2585092
    subscript(_ point: CGPoint) -> UIColor? {
        guard
            let cgImage = cgImage,
            let space = cgImage.colorSpace,
            let pixelData = cgImage.dataProvider?.data,
            let layout = cgImage.bitmapInfo.componentLayout
        else {
            return nil
        }
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let comp = CGFloat(layout.count)
        let isHDR = CGColorSpaceUsesITUR_2100TF(space)
        let hdr = CGFloat(isHDR ? 2 : 1)
        let pixelInfo = Int((size.width * point.y * scale + point.x * scale) * comp * hdr)
        let i = Array(0 ... Int(comp - 1)).map {
            CGFloat(data[pixelInfo + $0 * Int(hdr)]) / CGFloat(255)
        }
        switch layout {
        case .bgra:
            return UIColor(red: i[2], green: i[1], blue: i[0], alpha: i[3])
        case .abgr:
            return UIColor(red: i[3], green: i[2], blue: i[1], alpha: i[0])
        case .argb:
            return UIColor(red: i[1], green: i[2], blue: i[3], alpha: i[0])
        case .rgba:
            return UIColor(red: i[0], green: i[1], blue: i[2], alpha: i[3])
        case .bgr:
            return UIColor(red: i[2], green: i[1], blue: i[0], alpha: 1)
        case .rgb:
            return UIColor(red: i[0], green: i[1], blue: i[2], alpha: 1)
        }
    }
}
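Usage is subscript-style, with nil returned when the bitmap layout cannot be determined ("sample" is a placeholder asset name):
let image = UIImage(named: "sample")! // placeholder asset
if let color = image[CGPoint(x: 10, y: 10)] {
    print(color)
}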
Swift 5, including a solution for @2x and @3x images
extension UIImage {
    subscript(_ point: CGPoint) -> UIColor? {
        guard let pixelData = self.cgImage?.dataProvider?.data else { return nil }
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = Int((size.width * point.y + point.x) * 4.0 * scale * scale)
        let i = Array(0 ... 3).map { CGFloat(data[pixelInfo + $0]) / CGFloat(255) }
        return UIColor(red: i[0], green: i[1], blue: i[2], alpha: i[3])
    }
}
I use this extension:
public extension UIImage {

    var pixelWidth: Int {
        return cgImage?.width ?? 0
    }

    var pixelHeight: Int {
        return cgImage?.height ?? 0
    }

    func pixelColor(x: Int, y: Int) -> UIColor {
        if 0..<pixelWidth ~= x && 0..<pixelHeight ~= y {
            log.info("Pixel coordinates are in bounds") // `log` is the author's logging facility
        } else {
            log.info("Pixel coordinates are out of bounds")
            return .black
        }

        guard
            let cgImage = cgImage,
            let data = cgImage.dataProvider?.data,
            let dataPtr = CFDataGetBytePtr(data),
            let colorSpaceModel = cgImage.colorSpace?.model,
            let componentLayout = cgImage.bitmapInfo.componentLayout
        else {
            assertionFailure("Could not get a pixel of an image")
            return .clear
        }

        assert(
            colorSpaceModel == .rgb,
            "The only supported color space model is RGB")
        assert(
            cgImage.bitsPerPixel == 32 || cgImage.bitsPerPixel == 24,
            "A pixel is expected to be either 4 or 3 bytes in size")

        let bytesPerRow = cgImage.bytesPerRow
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let pixelOffset = y * bytesPerRow + x * bytesPerPixel

        if componentLayout.count == 4 {
            let components = (
                dataPtr[pixelOffset + 0],
                dataPtr[pixelOffset + 1],
                dataPtr[pixelOffset + 2],
                dataPtr[pixelOffset + 3]
            )

            var alpha: UInt8 = 0
            var red: UInt8 = 0
            var green: UInt8 = 0
            var blue: UInt8 = 0

            switch componentLayout {
            case .bgra:
                alpha = components.3
                red = components.2
                green = components.1
                blue = components.0
            case .abgr:
                alpha = components.0
                red = components.3
                green = components.2
                blue = components.1
            case .argb:
                alpha = components.0
                red = components.1
                green = components.2
                blue = components.3
            case .rgba:
                alpha = components.3
                red = components.0
                green = components.1
                blue = components.2
            default:
                return .clear
            }

            // If the chroma components are premultiplied by alpha and the alpha
            // is `0`, keep the chroma components at their current values.
            if cgImage.bitmapInfo.chromaIsPremultipliedByAlpha && alpha != 0 {
                let invUnitAlpha = 255 / CGFloat(alpha)
                red = UInt8((CGFloat(red) * invUnitAlpha).rounded())
                green = UInt8((CGFloat(green) * invUnitAlpha).rounded())
                blue = UInt8((CGFloat(blue) * invUnitAlpha).rounded())
            }

            return .init(red: red, green: green, blue: blue, alpha: alpha)
        } else if componentLayout.count == 3 {
            let components = (
                dataPtr[pixelOffset + 0],
                dataPtr[pixelOffset + 1],
                dataPtr[pixelOffset + 2]
            )

            var red: UInt8 = 0
            var green: UInt8 = 0
            var blue: UInt8 = 0

            switch componentLayout {
            case .bgr:
                red = components.2
                green = components.1
                blue = components.0
            case .rgb:
                red = components.0
                green = components.1
                blue = components.2
            default:
                return .clear
            }

            return .init(red: red, green: green, blue: blue, alpha: UInt8(255))
        } else {
            assertionFailure("Unsupported number of pixel components")
            return .clear
        }
    }
}
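Note that this extension calls a UIColor initializer taking UInt8 components, which UIKit does not provide out of the box. A minimal version of the missing helper (my addition, not included in the original answer):
public extension UIColor {
    // Convenience wrapper so the pixel reader above can pass raw byte values.
    convenience init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        self.init(
            red: CGFloat(red) / 255,
            green: CGFloat(green) / 255,
            blue: CGFloat(blue) / 255,
            alpha: CGFloat(alpha) / 255)
    }
}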
But to get the right pixel color you need to use an image from the xcassets catalog at @1x only; otherwise your reference point is wrong, and you need to use let correctedImage = UIImage(data: image.pngData()!) to retrieve the correct origin for your point.
The solution at https://stackoverflow.com/a/40237504/3286489 only works for images in the sRGB colorspace. For a different colorspace (extended sRGB??), it doesn't work.
So to make it work, we need to convert the image to the standard sRGB type first, before getting the color from its cgImage. Note that we add padding to the calculation, because Core Graphics aligns rows, so the effective row width is always a multiple of 8 pixels.
public extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        // convert to standard sRGB image
        guard let cgImage = cgImage,
              let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
              let context = CGContext(data: nil,
                                      width: Int(size.width), height: Int(size.height),
                                      bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return .white }
        context.draw(cgImage, in: CGRect(origin: .zero, size: size))

        // Get the newly converted cgImage
        guard let newCGImage = context.makeImage(),
              let newDataProvider = newCGImage.dataProvider,
              let data = newDataProvider.data
        else { return .white }
        let pixelData: UnsafePointer<UInt8> = CFDataGetBytePtr(data)

        // Calculate the pixel position based on the point given
        let remaining = 8 - ((Int(size.width)) % 8)
        let padding = (remaining < 8) ? remaining : 0
        let pixelInfo: Int = (((Int(size.width) + padding) * Int(pos.y)) + Int(pos.x)) * 4

        let r = CGFloat(pixelData[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(pixelData[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(pixelData[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(pixelData[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Optionally, if one doesn't want to create the converted cgImage, just replace
// Get the newly converted cgImage
guard let newCGImage = context.makeImage(),
      let newDataProvider = newCGImage.dataProvider,
      let newData = newDataProvider.data
else { return .white }
let pixelData: UnsafePointer<UInt8> = CFDataGetBytePtr(newData)
With
// Get the data and bind it from UnsafeMutableRawPointer to UInt8
guard let data = context.data else { return .white }
let pixelData = data.bindMemory(
    to: UInt8.self, capacity: Int(size.width * size.height * 4))
Updated
To get even more concise code, we can do the sRGB conversion using UIGraphicsImageRenderer directly. The calculation changes a bit, because this redraw renders at the screen scale (2x here), so the coordinates and the row width must be doubled.
func getPixelColor(pos: CGPoint) -> UIColor {
    let newImage = UIGraphicsImageRenderer(size: size).image { _ in
        draw(in: CGRect(origin: .zero, size: size))
    }

    guard let cgImage = newImage.cgImage,
          let dataProvider = cgImage.dataProvider,
          let data = dataProvider.data else { return .white }
    let pixelData: UnsafePointer<UInt8> = CFDataGetBytePtr(data)

    let remaining = 8 - ((Int(size.width) * 2) % 8)
    let padding = (remaining < 8) ? remaining : 0
    let pixelInfo: Int = (((Int(size.width * 2) + padding) * Int(pos.y * 2)) + Int(pos.x * 2)) * 4

    let r = CGFloat(pixelData[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(pixelData[pixelInfo+1]) / CGFloat(255.0)
    let b = CGFloat(pixelData[pixelInfo+2]) / CGFloat(255.0)
    let a = CGFloat(pixelData[pixelInfo+3]) / CGFloat(255.0)
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
This follows the convert-to-sRGB approach from https://stackoverflow.com/a/64538344/3286489.
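A side note from me, not part of the original answer: UIGraphicsImageRenderer renders at the screen scale by default, which is what forces the 2x math above. Pinning the renderer format's scale to 1 renders one pixel per point, so the earlier (non-doubled) index arithmetic applies; the first lines of the function would become:
let format = UIGraphicsImageRendererFormat()
format.scale = 1 // one pixel per point, so no 2x coordinate math is needed
let newImage = UIGraphicsImageRenderer(size: size, format: format).image { _ in
    draw(in: CGRect(origin: .zero, size: size))
}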
As usual, late to the party, but I wanted to mention that the methods indicated above don't always work. If the image is not RGBA, they can crash. In my experience, optimized release builds can crash where debug builds work fine.
I tend to use a lot of vector images in my apps, and iOS can sometimes render them in monochrome color spaces. I have experienced a number of crashes with the code given here.
Also, we should use bytesPerRow when stepping vertically: Apple tends to pad bitmap rows, and a simple 4-byte pixel offset may not work.
I draw the image into an offscreen context, then take the sample from there.
Here's what I did. It works, but it is not exactly performant. In my case that's fine, because I only use it once, at startup:
extension UIImage {
    /* ################################################################## */
    /**
     This returns the RGB color (as a UIColor) of the pixel in the image, at the given point. It is restricted to 32-bit (RGBA/8-bit pixel) values.

     This was inspired by several of the answers [in this StackOverflow Question](https://stackoverflow.com/questions/25146557/how-do-i-get-the-color-of-a-pixel-in-a-uiimage-with-swift).

     **NOTE:** This is unlikely to be highly performant!

     - parameter at: The point in the image to sample (NOTE: Must be within image bounds, or nil is returned).

     - returns: A UIColor (or nil).
     */
    func getRGBColorOfThePixel(at inPoint: CGPoint) -> UIColor? {
        guard (0..<size.width).contains(inPoint.x),
              (0..<size.height).contains(inPoint.y)
        else { return nil }

        // We draw the image into a context, in order to be sure that we are accessing image data in our required format (RGBA).
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        draw(at: .zero)
        let imageData = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        guard let cgImage = imageData?.cgImage,
              let pixelData = cgImage.dataProvider?.data
        else { return nil }

        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let bytesPerPixel = (cgImage.bitsPerPixel + 7) / 8
        let pixelByteOffset: Int = (cgImage.bytesPerRow * Int(inPoint.y)) + (Int(inPoint.x) * bytesPerPixel)
        let divisor = CGFloat(255.0)
        let r = CGFloat(data[pixelByteOffset]) / divisor
        let g = CGFloat(data[pixelByteOffset + 1]) / divisor
        let b = CGFloat(data[pixelByteOffset + 2]) / divisor
        let a = CGFloat(data[pixelByteOffset + 3]) / divisor

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
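A usage sketch ("icon" is a placeholder asset name):
let topLeftColor = UIImage(named: "icon")?.getRGBColorOfThePixel(at: CGPoint(x: 0, y: 0))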