CIAreaHistogram does not return expected result - ios

I want to get the CIAreaHistogram of a UIImage.
Below is the code I run. The print() statement currently prints 255 lines of 0.0 0.0 0.0, followed by a final line of 0.0 0.0 0.058823529.
My UIImage input is also displayed on screen, so I'm sure there is an image.
I've also tried this filter in the CIFilter.io app. Strangely enough, given a colourful input image, the output image was a single solid-grey line. Of course I expected a line, but not one that is uniformly grey.
What could I be doing wrong?
let inputImage = CIImage(cgImage: outputImage!.cgImage!)
let histogramFilter = CIFilter(name: "CIAreaHistogram", parameters: [kCIInputImageKey : inputImage,
                                                                     "inputExtent" : inputImage.extent,
                                                                     "inputScale" : NSNumber(value: 1.0),
                                                                     "inputCount" : NSNumber(value: 256)])!

func processHistogram(ciImage: CIImage)
{
    let image = renderImage(ciImage: ciImage)
    for x in 0 ... 255
    {
        getPixelColor(fromCGImage: (image?.cgImage!)!, pos: CGPoint(x: x, y: 0))
    }
}

func getPixelColor(fromCGImage cgImage: CGImage, pos: CGPoint) -> UIColor
{
    let pixelData = cgImage.dataProvider!.data
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(cgImage.width) * Int(pos.y)) + Int(pos.x)) * 4
    let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
    let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
    let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
    print("\(r) \(g) \(b)")
    return UIColor(red: r, green: g, blue: b, alpha: a)
}

func renderImage(ciImage: CIImage) -> UIImage?
{
    var outputImage: UIImage?
    let size = ciImage.extent.size
    UIGraphicsBeginImageContext(size)
    if let context = UIGraphicsGetCurrentContext()
    {
        context.interpolationQuality = .high
        context.setShouldAntialias(true)
        let inputImage = UIImage(ciImage: ciImage)
        inputImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return outputImage
}
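Two things stand out in the code above: CIFilter expects inputExtent as a CIVector rather than a CGRect, and with inputScale at 1.0 the normalized bin values of a 256-bin histogram are mostly smaller than 1/255, so an 8-bit readback rounds them to zero, which would explain the rows of 0.0 (raising inputScale may help). Below is a minimal sketch under those assumptions; the function name is mine. It renders the 256x1 output straight into a byte buffer with CIContext, skipping the UIKit round trip:

func histogramValues(for inputImage: CIImage) -> [UInt8]?
{
    // Sketch, not a definitive fix: pass the extent as a CIVector and read the
    // 256x1 histogram output directly, without going through UIImage/CGImage.
    guard let filter = CIFilter(name: "CIAreaHistogram",
                                parameters: [kCIInputImageKey: inputImage,
                                             kCIInputExtentKey: CIVector(cgRect: inputImage.extent),
                                             "inputScale": NSNumber(value: 1.0),
                                             "inputCount": NSNumber(value: 256)]),
          let output = filter.outputImage else { return nil }
    var bitmap = [UInt8](repeating: 0, count: 256 * 4) // 256 RGBA8 pixels
    let context = CIContext(options: [.workingColorSpace: kCFNull])
    context.render(output,
                   toBitmap: &bitmap,
                   rowBytes: 256 * 4,
                   bounds: CGRect(x: 0, y: 0, width: 256, height: 1),
                   format: .RGBA8,
                   colorSpace: nil)
    return bitmap
}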

Related

Detect Colour in a UIImage and save CGPoints

I have a UIImage and I want all the CGPoints of a specific colour in the image. For example, given an image containing a red line, I want to get the red-coloured CGPoints from this UIImage. The red may be a straight horizontal/vertical line, or a curved/zigzag one.
This is what I have tried, but I am unable to detect the required red-coloured CGPoints.
// Loop through image pixels to detect colour
var requiredPointsInImage = [CGPoint]()
let testImage = UIImage.init(named: "imgToTest1")
for heightIteration in 0..<Int(testImage!.size.height) {
for widthIteration in 0..<Int(testImage!.size.width) {
let colorOfPoints = testImage!.getPixelColor(pos: CGPoint(x:CGFloat(widthIteration), y:CGFloat(heightIteration)), withFrameSize: testImage!.size)
if colorOfPoints == MYColor {
print(colorOfPoints)
requiredPointsInImage.append(CGPoint(x:CGFloat(widthIteration), y:CGFloat(heightIteration)))
}
}
}
let newImage = drawShapesOnImage(image: testImage!, points: requiredPointsInImage)
// Colour Detection
extension UIImage {
func getPixelColor(pos: CGPoint, withFrameSize size: CGSize) -> UIColor {
let x: CGFloat = (self.size.width) * pos.x / size.width
let y: CGFloat = (self.size.height) * pos.y / size.height
let pixelPoint: CGPoint = CGPoint(x: x, y: y)
let pixelData = self.cgImage!.dataProvider!.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelInfo: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 3 //4
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
// let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: 1)
}
}
// Drawing on detected points
func drawShapesOnImage(image: UIImage, points:[CGPoint]) -> UIImage {
UIGraphicsBeginImageContext(image.size)
image.draw(at: CGPoint.zero)
let context = UIGraphicsGetCurrentContext()
context!.setLineWidth(2.0)
context!.setStrokeColor(UIColor.green.cgColor)
let radius: CGFloat = 5.0
for point in points {
let center = CGPoint(x: point.x, y: point.y);
context!.addArc(center: CGPoint(x: center.x, y: center.y), radius: radius, startAngle: 0, endAngle: .pi * 2, clockwise: true) // angles are in radians
context!.strokePath()
}
let resultImage = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return resultImage!
}
This is what I have got so far (result screenshot omitted).
Note: the background may not always be white, but of course it is not red.
Using the same getPixelColor() function I can get the exact colour of any CGPoint from a simpler test image, but only with y constant (say y: 1).
If you think there is any other better approach to detect all these points please suggest.
So I have found the solution. Actually the problem was that I was ignoring the alpha channel in the UIImage extension getPixelColor(); using alpha solved my problem.
Here is the updated code:
extension UIImage {
func getPixelColor(pos: CGPoint, withFrameSize size: CGSize) -> UIColor {
let x: CGFloat = (self.size.width) * pos.x / size.width
let y: CGFloat = (self.size.height) * pos.y / size.height
let pixelPoint: CGPoint = CGPoint(x: x, y: y)
let pixelData = self.cgImage!.dataProvider!.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelInfo: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 4
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
And the result is as expected (screenshot omitted).
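A further note, beyond the original post: the exact equality test colorOfPoints == MYColor is fragile against anti-aliased edges and colour-space differences, so a tolerance-based comparison may be more robust. Here is a sketch with a hypothetical isClose(to:tolerance:) helper (not a UIKit API):

extension UIColor {
    // Hypothetical helper: compare colours within a tolerance instead of exactly.
    func isClose(to other: UIColor, tolerance: CGFloat = 0.1) -> Bool {
        var r1: CGFloat = 0, g1: CGFloat = 0, b1: CGFloat = 0, a1: CGFloat = 0
        var r2: CGFloat = 0, g2: CGFloat = 0, b2: CGFloat = 0, a2: CGFloat = 0
        guard getRed(&r1, green: &g1, blue: &b1, alpha: &a1),
              other.getRed(&r2, green: &g2, blue: &b2, alpha: &a2) else { return false }
        return abs(r1 - r2) <= tolerance && abs(g1 - g2) <= tolerance
            && abs(b1 - b2) <= tolerance && abs(a1 - a2) <= tolerance
    }
}

In the loop above, the test would then become if colorOfPoints.isClose(to: MYColor).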

reading colour of pixel on sprite

I am using the below extension to read the colour of a pixel on touch in SpriteKit.
It's working for r, g, b, but alpha always returns 1, even though my sprites have alpha channels.
I've tried setting the background of the view in the extension to .clear, but this doesn't help.
Is there somewhere in the code I can ensure the alpha is preserved?
extension SKTexture {
func getPixelColorFromTexture(position: CGPoint) -> SKColor {
let view = SKView(frame: CGRectMake(0, 0, 1, 1))
let scene = SKScene(size: CGSize(width: 1, height: 1))
let sprite = SKSpriteNode(texture: self)
sprite.anchorPoint = CGPointZero
sprite.position = CGPoint(x: -floor(position.x), y: -floor(position.y))
scene.anchorPoint = CGPointZero
scene.addChild(sprite)
view.presentScene(scene)
var pixel: [UInt8] = [0, 0, 0, 0]
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue).rawValue
let context = CGBitmapContextCreate(&pixel, 1, 1, 8, 4, CGColorSpaceCreateDeviceRGB(), bitmapInfo)
UIGraphicsPushContext(context!)
view.drawViewHierarchyInRect(view.bounds, afterScreenUpdates: true)
UIGraphicsPopContext()
return SKColor(red: CGFloat(pixel[0]) / 255.0, green: CGFloat(pixel[1]) / 255.0, blue: CGFloat(pixel[2]) / 255.0, alpha: CGFloat(pixel[3]) / 255.0)
}
}
This is the alternative code, which sometimes crashes:
extension UIImage {
    func getPixelColor(tex: SKTexture, pos: CGPoint) -> CGFloat? {
        guard let provider = cgImage?.dataProvider else {
            return nil
        }
        let pixelData = provider.data
        var posi = CGPoint()
        let width = (tex.size().width)
        let height = (tex.size().height)
        // sometimes crashes at this point --- CFDataGetBytePtr(pixelData)
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        // let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        // let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        // let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return (a)
    }
}
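One likely reason the snapshot path reports alpha as 1 is that drawViewHierarchyInRect composites the scene against the view's opaque background. A sketch of an alternative in modern Swift that reads straight from the texture via SKTexture's cgImage(); the method name is mine, and it assumes the texture's pixel data is in RGBA order (check bitmapInfo in real code):

import SpriteKit

extension SKTexture {
    // Sketch: read one pixel directly from the texture's CGImage, avoiding the
    // SKView snapshot that flattens alpha against the view background.
    func pixelColorDirect(at position: CGPoint) -> SKColor? {
        let cgImage = self.cgImage() // SKTexture -> CGImage
        guard let pixelData = cgImage.dataProvider?.data,
              let data = CFDataGetBytePtr(pixelData) else { return nil }
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let offset = Int(position.y) * cgImage.bytesPerRow + Int(position.x) * bytesPerPixel
        // Assumes RGBA component order.
        return SKColor(red: CGFloat(data[offset]) / 255,
                       green: CGFloat(data[offset + 1]) / 255,
                       blue: CGFloat(data[offset + 2]) / 255,
                       alpha: CGFloat(data[offset + 3]) / 255)
    }
}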

Why do I get the wrong color of a pixel with following code?

I create a UIImage with background colour red:
let theimage:UIImage=imageWithColor(UIColor(red: 1, green: 0, blue: 0, alpha: 1) );
func imageWithColor(color: UIColor) -> UIImage {
let rect = CGRectMake(0.0, 0.0, 200.0, 200.0)
UIGraphicsBeginImageContext(rect.size)
let context = UIGraphicsGetCurrentContext()
CGContextSetFillColorWithColor(context, color.CGColor)
CGContextFillRect(context, rect)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
return image
}
I am retrieving the color in the middle of the image as follows:
let h:CGFloat=theimage.size.height;
let w:CGFloat=theimage.size.width;
let test:UIColor=theimage.getPixelColor(CGPoint(x: 100, y: 100))
var rvalue:CGFloat = 0;
var gvalue:CGFloat = 0;
var bvalue:CGFloat = 0;
var alfaval:CGFloat = 0;
test.getRed(&rvalue, green: &gvalue, blue: &bvalue, alpha: &alfaval);
print("Blue Value : " + String(bvalue));
print("Red Value : " + String(rvalue));
extension UIImage {
func getPixelColor(pos: CGPoint) -> UIColor {
let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
As a result I get:
Blue Value : 1.0
Red Value : 0.0
Why is this? I can't find the mistake.
The problem is not the built-in getRed function, but rather the function that builds the UIColor object from the individual color components in the provider data. Your code is assuming that the provider data is stored in RGBA format, but it apparently is not. It would appear to be in ARGB format. Also, I'm not sure you have the byte order right, either.
When you have an image, there are a variety of ways of packing those pixels into the provider data. A few examples are shown in the Quartz 2D Programming Guide.
If you're going to have a getPixelColor routine that is hard-coded for a particular format, I might check the alphaInfo and bitmapInfo like so (in Swift 4.2):
extension UIImage {
func getPixelColor(point: CGPoint) -> UIColor? {
guard let cgImage = cgImage,
let pixelData = cgImage.dataProvider?.data
else { return nil }
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let alphaInfo = cgImage.alphaInfo
assert(alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst, "This routine expects alpha to be first component")
let byteOrderInfo = cgImage.byteOrderInfo
assert(byteOrderInfo == .order32Little || byteOrderInfo == .orderDefault, "This routine expects little-endian 32bit format")
let bytesPerRow = cgImage.bytesPerRow
let pixelInfo = Int(point.y) * bytesPerRow + Int(point.x) * 4;
let a: CGFloat = CGFloat(data[pixelInfo+3]) / 255
let r: CGFloat = CGFloat(data[pixelInfo+2]) / 255
let g: CGFloat = CGFloat(data[pixelInfo+1]) / 255
let b: CGFloat = CGFloat(data[pixelInfo ]) / 255
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
And if you were to always build this image programmatically for code that is dependent upon the bitmap info, I'd explicitly specify these details when creating the image:
func image(with color: UIColor, size: CGSize) -> UIImage? {
let rect = CGRect(origin: .zero, size: size)
let colorSpace = CGColorSpaceCreateDeviceRGB()
guard let context = CGContext(data: nil,
width: Int(rect.width),
height: Int(rect.height),
bitsPerComponent: 8,
bytesPerRow: Int(rect.width) * 4,
space: colorSpace,
bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue) else {
return nil
}
context.setFillColor(color.cgColor)
context.fill(rect)
return context.makeImage().flatMap { UIImage(cgImage: $0) }
}
Perhaps even better, as shown in Technical Q&A 1509, you might want to have getPixelColor explicitly create its own context of a predetermined format, draw the image to that context, and now the code is not contingent upon the format of the original image to which you are applying this.
extension UIImage {
func getPixelColor(point: CGPoint) -> UIColor? {
guard let cgImage = cgImage else { return nil }
let width = Int(size.width)
let height = Int(size.height)
let colorSpace = CGColorSpaceCreateDeviceRGB()
guard let context = CGContext(data: nil,
width: width,
height: height,
bitsPerComponent: 8,
bytesPerRow: width * 4,
space: colorSpace,
bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue)
else {
return nil
}
context.draw(cgImage, in: CGRect(origin: .zero, size: size))
guard let pixelBuffer = context.data else { return nil }
let pointer = pixelBuffer.bindMemory(to: UInt32.self, capacity: width * height)
let pixel = pointer[Int(point.y) * width + Int(point.x)]
let r: CGFloat = CGFloat(red(for: pixel)) / 255
let g: CGFloat = CGFloat(green(for: pixel)) / 255
let b: CGFloat = CGFloat(blue(for: pixel)) / 255
let a: CGFloat = CGFloat(alpha(for: pixel)) / 255
return UIColor(red: r, green: g, blue: b, alpha: a)
}
private func alpha(for pixelData: UInt32) -> UInt8 {
return UInt8((pixelData >> 24) & 255)
}
private func red(for pixelData: UInt32) -> UInt8 {
return UInt8((pixelData >> 16) & 255)
}
private func green(for pixelData: UInt32) -> UInt8 {
return UInt8((pixelData >> 8) & 255)
}
private func blue(for pixelData: UInt32) -> UInt8 {
return UInt8((pixelData >> 0) & 255)
}
private func rgba(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) -> UInt32 {
return (UInt32(alpha) << 24) | (UInt32(red) << 16) | (UInt32(green) << 8) | (UInt32(blue) << 0)
}
}
Clearly, if you're going to check a bunch of pixels, you'll want to refactor this (decouple the creation of the standardized pixel buffer from the code that checks the color), but hopefully this illustrates the idea.
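As a sketch of that refactor (the name standardizedPixelBuffer and the tuple shape are mine, not from the answer above): render once into a known little-endian ARGB buffer, then sample as many pixels as you like from the array:

extension UIImage {
    // Sketch: one draw into a known format, then cheap repeated lookups.
    func standardizedPixelBuffer() -> (pixels: [UInt32], width: Int, height: Int)? {
        guard let cgImage = cgImage else { return nil }
        let width = Int(size.width)
        let height = Int(size.height)
        var pixels = [UInt32](repeating: 0, count: width * height)
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue
        let drawn = pixels.withUnsafeMutableBytes { buffer -> Bool in
            guard let context = CGContext(data: buffer.baseAddress,
                                          width: width,
                                          height: height,
                                          bitsPerComponent: 8,
                                          bytesPerRow: width * 4,
                                          space: colorSpace,
                                          bitmapInfo: bitmapInfo) else { return false }
            context.draw(cgImage, in: CGRect(origin: .zero, size: size))
            return true
        }
        return drawn ? (pixels, width, height) : nil
    }
}

Each UInt32 in pixels can then be unpacked with the same alpha(for:)/red(for:)/green(for:)/blue(for:) helpers shown above, indexed as pixels[y * width + x].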
For earlier versions of Swift, see previous revision of this answer.

Color detection at a distance (Swift)

Can someone point me in the right direction with color detection at a distance? I have used the code below and it grabs RGB values of an image properly if an object or point of interest is less than 10 feet away. When the object is at a distance the code returns the wrong values. I want to take a picture of an object at a distance greater than 10 feet and detect the color of that image.
//On the top of your swift
extension UIImage {
func getPixelColor(pos: CGPoint) -> UIColor {
let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
I am a photographer and what you are trying to do is very similar to setting a white balance in post processing or using the color picker in PS.
Digital cameras don't have pixels that capture the full spectrum of light at once; they have triplets of pixels for RGB. The captured information is interpolated, and this can give very bad results. Setting the white balance in post on an image taken at night is almost impossible.
Reasons for bad interpolation:
Pixels are bigger than the smallest discernible object in the scene. (moiré artifacts)
Low light situation where Digital Gain increases color differences. (color noise artifacts)
Image was converted to a low-quality jpg but has lots of edges. (jpg artifacts)
If it is a low quality jpg, get a better source img.
Fix
All you have to do to get a more accurate reading is blur the image.
The smallest acceptable blur is 3 pixels, because this will undo some of the interpolation. Bigger blurs might be better.
Since blurs are expensive it is best to crop the image to a multiple of the blur radius. You can't take a precise fit because it will also blur the edges and beyond the edges the image is black. This will influence your reading.
It might be best if you also enforce an upper limit on the blur radius.
A shortcut to get the center of something with a size:
extension CGSize {
var center : CGPoint {
get {
return CGPoint(x: width / 2, y: height / 2)
}
}
}
The UIImage stuff
extension UIImage {
func blur(radius: CGFloat) -> UIImage? {
// extensions of UIImage don't know what a CIImage is...
typealias CIImage = CoreImage.CIImage
// blur of your choice
guard let blurFilter = CIFilter(name: "CIBoxBlur") else {
return nil
}
blurFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
blurFilter.setValue(radius, forKey: kCIInputRadiusKey)
let ciContext = CIContext(options: nil)
guard let result = blurFilter.valueForKey(kCIOutputImageKey) as? CIImage else {
return nil
}
let blurRect = CGRect(x: -radius, y: -radius, width: self.size.width + (radius * 2), height: self.size.height + (radius * 2))
let cgImage = ciContext.createCGImage(result, fromRect: blurRect)
return UIImage(CGImage: cgImage)
}
func crop(cropRect : CGRect) -> UIImage? {
guard let imgRef = CGImageCreateWithImageInRect(self.CGImage, cropRect) else {
return nil
}
return UIImage(CGImage: imgRef)
}
func getPixelColor(atPoint point: CGPoint, radius:CGFloat) -> UIColor? {
var pos = point
var image = self
// if the radius is too small -> skip
if radius > 1 {
let cropRect = CGRect(x: point.x - (radius * 4), y: point.y - (radius * 4), width: radius * 8, height: radius * 8)
guard let cropImg = self.crop(cropRect) else {
return nil
}
guard let blurImg = cropImg.blur(radius) else {
return nil
}
pos = blurImg.size.center
image = blurImg
}
let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelInfo: Int = ((Int(image.size.width) * Int(pos.y)) + Int(pos.x)) * 4
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
Side note: your problem might not be the color-grabbing function but how you set the point. If you are doing it by touch and the object is farther away, and thus smaller on screen, you might not set it accurately enough.
To read the average color of a UIImage with CIAreaAverage, see https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage :
extension UIImage {
var averageColor: UIColor? {
guard let inputImage = CIImage(image: self) else { return nil }
let extentVector = CIVector(x: inputImage.extent.origin.x, y: inputImage.extent.origin.y, z: inputImage.extent.size.width, w: inputImage.extent.size.height)
guard let filter = CIFilter(name: "CIAreaAverage", parameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: extentVector]) else { return nil }
guard let outputImage = filter.outputImage else { return nil }
var bitmap = [UInt8](repeating: 0, count: 4)
let context = CIContext(options: [.workingColorSpace: kCFNull])
context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: .RGBA8, colorSpace: nil)
return UIColor(red: CGFloat(bitmap[0]) / 255, green: CGFloat(bitmap[1]) / 255, blue: CGFloat(bitmap[2]) / 255, alpha: CGFloat(bitmap[3]) / 255)
}
}
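Tying this back to the question: CIAreaAverage can also average just a small patch around the point of interest by passing a smaller extent, which gives the same robustness as the blur approach without a separate blur pass. A sketch (the function is mine; note that CIImage coordinates have a bottom-left origin, so convert your point accordingly):

func averageColor(of image: UIImage, around point: CGPoint, radius: CGFloat = 8) -> UIColor? {
    guard let inputImage = CIImage(image: image) else { return nil }
    // Clamp the sampling region to the image extent.
    let region = CGRect(x: point.x - radius, y: point.y - radius,
                        width: radius * 2, height: radius * 2)
        .intersection(inputImage.extent)
    guard !region.isEmpty,
          let filter = CIFilter(name: "CIAreaAverage",
                                parameters: [kCIInputImageKey: inputImage,
                                             kCIInputExtentKey: CIVector(cgRect: region)]),
          let outputImage = filter.outputImage else { return nil }
    var bitmap = [UInt8](repeating: 0, count: 4)
    let context = CIContext(options: [.workingColorSpace: kCFNull])
    context.render(outputImage, toBitmap: &bitmap, rowBytes: 4,
                   bounds: CGRect(x: 0, y: 0, width: 1, height: 1),
                   format: .RGBA8, colorSpace: nil)
    return UIColor(red: CGFloat(bitmap[0]) / 255, green: CGFloat(bitmap[1]) / 255,
                   blue: CGFloat(bitmap[2]) / 255, alpha: CGFloat(bitmap[3]) / 255)
}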

How do I get the color of a pixel in a UIImage with Swift?

I'm trying to get the color of a pixel in a UIImage with Swift, but it seems to always return 0. Here is the code, translated from @Minas' answer on this thread:
func getPixelColor(pos: CGPoint) -> UIColor {
var pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
var data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
var pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
var r = CGFloat(data[pixelInfo])
var g = CGFloat(data[pixelInfo+1])
var b = CGFloat(data[pixelInfo+2])
var a = CGFloat(data[pixelInfo+3])
return UIColor(red: r, green: g, blue: b, alpha: a)
}
Thanks in advance!
A bit of searching led me here, since I was facing a similar problem.
Your code works fine. The problem might arise from your image.
Code:
//On the top of your swift
extension UIImage {
func getPixelColor(pos: CGPoint) -> UIColor {
let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
What happens is this method picks the pixel colour from the image's CGImage, so make sure you are picking from the right image. E.g. if your UIImage is 200x200, but the original image file from Images.xcassets (or wherever it came from) is 400x400, and you are picking point (100,100), you are actually picking a point in the upper-left section of the image, instead of the middle.
Two solutions:
1. Use an image from Images.xcassets, and only put one @1x image in the 1x field. Leave the @2x, @3x fields blank. Make sure you know the image size, and pick a point that is within its range.
//Make sure only 1x image is set
let image : UIImage = UIImage(named:"imageName")!
//Make sure point is within the image
let color : UIColor = image.getPixelColor(CGPointMake(xValue, yValue))
2. Scale your CGPoint up/down in proportion to match the UIImage. E.g., with let point = CGPoint(x: 100, y: 100) in the example above:
let xCoordinate : Float = Float(point.x) * (400.0/200.0)
let yCoordinate : Float = Float(point.y) * (400.0/200.0)
let newCoordinate : CGPoint = CGPointMake(CGFloat(xCoordinate), CGFloat(yCoordinate))
let image : UIImage = largeImage
let color : UIColor = image.getPixelColor(newCoordinate)
I've only tested the first method, and I am using it to get a colour off a colour palette. Both should work.
Happy coding :)
Swift 3, Xcode 8: tested and working
extension UIImage {
func getPixelColor(pos: CGPoint) -> UIColor {
let pixelData = self.cgImage!.dataProvider!.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
If you are calling the function from the answer above more than once, you should not rebuild the data for every pixel, because you would be recreating the same buffer each time. If you want all of the colors in an image, do something more like this:
func findColors(_ image: UIImage) -> [UIColor] {
let pixelsWide = Int(image.size.width)
let pixelsHigh = Int(image.size.height)
guard let pixelData = image.cgImage?.dataProvider?.data else { return [] }
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
var imageColors: [UIColor] = []
for x in 0..<pixelsWide {
for y in 0..<pixelsHigh {
let point = CGPoint(x: x, y: y)
let pixelInfo: Int = ((pixelsWide * Int(point.y)) + Int(point.x)) * 4
let color = UIColor(red: CGFloat(data[pixelInfo]) / 255.0,
green: CGFloat(data[pixelInfo + 1]) / 255.0,
blue: CGFloat(data[pixelInfo + 2]) / 255.0,
alpha: CGFloat(data[pixelInfo + 3]) / 255.0)
imageColors.append(color)
}
}
return imageColors
}
Here is an Example Project
As a side note, this function is significantly faster than the accepted answer, but it gives a less precise result. I just put the UIImageView in the sourceView parameter.
func getPixelColorAtPoint(point: CGPoint, sourceView: UIView) -> UIColor {
let pixel = UnsafeMutablePointer<CUnsignedChar>.allocate(capacity: 4)
let colorSpace = CGColorSpaceCreateDeviceRGB()
let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
let context = CGContext(data: pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
context!.translateBy(x: -point.x, y: -point.y)
sourceView.layer.render(in: context!)
let color: UIColor = UIColor(red: CGFloat(pixel[0])/255.0,
green: CGFloat(pixel[1])/255.0,
blue: CGFloat(pixel[2])/255.0,
alpha: CGFloat(pixel[3])/255.0)
pixel.deallocate(capacity: 4)
return color
}
I was getting swapped colors for red and blue.
The original function also did not account for the actual bytes per row and bytes per pixel.
I also avoid unwrapping optionals whenever possible.
Here's an updated function.
import UIKit
extension UIImage {
/// Get the pixel color at a point in the image
func pixelColor(atLocation point: CGPoint) -> UIColor? {
guard let cgImage = cgImage, let pixelData = cgImage.dataProvider?.data else { return nil }
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let bytesPerPixel = cgImage.bitsPerPixel / 8
let pixelInfo: Int = ((cgImage.bytesPerRow * Int(point.y)) + (Int(point.x) * bytesPerPixel))
let b = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let r = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
Swift 3 (iOS 10.3)
Important: this will only work for @1x images.
Request: if you have a solution for @2x and @3x images, please share. Thank you :)
extension UIImage {
func getPixelColor(atLocation location: CGPoint, withFrameSize size: CGSize) -> UIColor {
let x: CGFloat = (self.size.width) * location.x / size.width
let y: CGFloat = (self.size.height) * location.y / size.height
let pixelPoint: CGPoint = CGPoint(x: x, y: y)
let pixelData = self.cgImage!.dataProvider!.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let pixelIndex: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 4
let r = CGFloat(data[pixelIndex]) / CGFloat(255.0)
let g = CGFloat(data[pixelIndex+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelIndex+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelIndex+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
Usage
print(yourImageView.image!.getPixelColor(atLocation: location, withFrameSize: yourImageView.frame.size))
You can use tapGestureRecognizer for location.
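Regarding the @2x/@3x request: a sketch that folds the image's scale factor into the lookup. The method name is mine; it assumes RGBA component order like the snippet above, but uses the CGImage's own width, height and bytesPerRow so that padded rows and scaled assets index correctly:

extension UIImage {
    func getPixelColorScaled(atLocation location: CGPoint) -> UIColor? {
        guard let cgImage = cgImage,
              let pixelData = cgImage.dataProvider?.data,
              let data = CFDataGetBytePtr(pixelData) else { return nil }
        // Convert from the point coordinate system to pixel coordinates.
        let x = Int(location.x * scale)
        let y = Int(location.y * scale)
        guard x >= 0, x < cgImage.width, y >= 0, y < cgImage.height else { return nil }
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let pixelIndex = y * cgImage.bytesPerRow + x * bytesPerPixel
        // Assumes RGBA component order.
        let r = CGFloat(data[pixelIndex]) / 255.0
        let g = CGFloat(data[pixelIndex + 1]) / 255.0
        let b = CGFloat(data[pixelIndex + 2]) / 255.0
        let a = CGFloat(data[pixelIndex + 3]) / 255.0
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}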
Your code works fine for me, as an extension to UIImage. How are you testing your colour? Here's my example:
let green = UIImage(named: "green.png")
let topLeft = CGPoint(x: 0, y: 0)
// Use your extension
let greenColour = green.getPixelColor(topLeft)
// Dump RGBA values
var redval: CGFloat = 0
var greenval: CGFloat = 0
var blueval: CGFloat = 0
var alphaval: CGFloat = 0
greenColour.getRed(&redval, green: &greenval, blue: &blueval, alpha: &alphaval)
println("Green is r: \(redval) g: \(greenval) b: \(blueval) a: \(alphaval)")
This prints:
Green is r: 0.0 g: 1.0 b: 0.0 a: 1.0
...which is correct, given that my image is a solid green square.
(What do you mean by "it always seems to return 0"? You don't happen to be testing on a black pixel, do you?)
I'm getting backwards colours, in terms of R and B being swapped; not sure why, as I thought the order was RGBA.
func testGeneratedColorImage() {
let color = UIColor(red: 0.5, green: 0, blue: 1, alpha: 1)
let size = CGSize(width: 10, height: 10)
let image = UIImage.image(fromColor: color, size: size)
XCTAssert(image.size == size)
XCTAssertNotNil(image.cgImage)
XCTAssertNotNil(image.cgImage!.dataProvider)
let pixelData = image.cgImage!.dataProvider!.data
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let position = CGPoint(x: 1, y: 1)
let pixelInfo: Int = ((Int(size.width) * Int(position.y)) + Int(position.x)) * 4
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
let testColor = UIColor(red: r, green: g, blue: b, alpha: a)
XCTAssert(testColor == color, "Colour: \(testColor) does not match: \(color)")
}
Where color is the purple defined above (r: 0.5, g: 0, b: 1), but testColor comes back with red and blue swapped.
(I can understand that the blue value might be off a little bit and be 0.502 with floating point inaccuracy.)
With the code switched to:
let b = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let r = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
I get a testColor that matches the original color.
I think you need to divide each component by 255:
var r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
var g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
var b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
var a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
I was trying to find the colors of all four corners of an image and was getting unexpected results, including UIColor.clear.
The issue is that the pixels start at 0, so requesting a pixel at the width of the image would actually wrap back around and give me the first pixel of the second row.
For example, the top right pixel of a 640 x 480 image would actually be x: 639, y: 0, and the bottom right pixel would be x: 639, y: 479.
Here's my implementation of the UIImage extension with this adjustment:
func getPixelColor(pos: CGPoint) -> UIColor {
guard let cgImage = cgImage, let pixelData = cgImage.dataProvider?.data else { return UIColor.clear }
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let bytesPerPixel = cgImage.bitsPerPixel / 8
// adjust the pixels to constrain to be within the width/height of the image
let y = pos.y > 0 ? pos.y - 1 : 0
let x = pos.x > 0 ? pos.x - 1 : 0
let pixelInfo = ((Int(self.size.width) * Int(y)) + Int(x)) * bytesPerPixel
let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
I found no answer anywhere on the internet that supplied:
Simple code
HDR support
Color profile support for bgr etc.
Scale support for @2x @3x
So here it is, the definitive solution as far as I can tell:
Swift 5
import UIKit
public extension CGBitmapInfo {
// https://stackoverflow.com/a/60247693/2585092
enum ComponentLayout {
case bgra
case abgr
case argb
case rgba
case bgr
case rgb
var count: Int {
switch self {
case .bgr, .rgb: return 3
default: return 4
}
}
}
var componentLayout: ComponentLayout? {
guard let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue) else { return nil }
let isLittleEndian = contains(.byteOrder32Little)
if alphaInfo == .none {
return isLittleEndian ? .bgr : .rgb
}
let alphaIsFirst = alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst
if isLittleEndian {
return alphaIsFirst ? .bgra : .abgr
} else {
return alphaIsFirst ? .argb : .rgba
}
}
var chromaIsPremultipliedByAlpha: Bool {
let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue)
return alphaInfo == .premultipliedFirst || alphaInfo == .premultipliedLast
}
}
extension UIImage {
// https://stackoverflow.com/a/68103748/2585092
subscript(_ point: CGPoint) -> UIColor? {
guard
let cgImage = cgImage,
let space = cgImage.colorSpace,
let pixelData = cgImage.dataProvider?.data,
let layout = cgImage.bitmapInfo.componentLayout
else {
return nil
}
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let comp = CGFloat(layout.count)
let isHDR = CGColorSpaceUsesITUR_2100TF(space)
let hdr = CGFloat(isHDR ? 2 : 1)
let pixelInfo = Int((size.width * point.y * scale + point.x * scale) * comp * hdr)
let i = Array(0 ... Int(comp - 1)).map {
CGFloat(data[pixelInfo + $0 * Int(hdr)]) / CGFloat(255)
}
switch layout {
case .bgra:
return UIColor(red: i[2], green: i[1], blue: i[0], alpha: i[3])
case .abgr:
return UIColor(red: i[3], green: i[2], blue: i[1], alpha: i[0])
case .argb:
return UIColor(red: i[1], green: i[2], blue: i[3], alpha: i[0])
case .rgba:
return UIColor(red: i[0], green: i[1], blue: i[2], alpha: i[3])
case .bgr:
return UIColor(red: i[2], green: i[1], blue: i[0], alpha: 1)
case .rgb:
return UIColor(red: i[0], green: i[1], blue: i[2], alpha: 1)
}
}
}
Swift 5, includes a solution for @2x & @3x images
extension UIImage {
subscript(_ point: CGPoint) -> UIColor? {
guard let pixelData = self.cgImage?.dataProvider?.data else { return nil }
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
// Convert both the row stride and the point from points to pixels via scale.
let pixelInfo: Int = Int(((size.width * scale) * (point.y * scale) + point.x * scale) * 4.0)
let i = Array(0 ... 3).map { CGFloat(data[pixelInfo + $0]) / CGFloat(255) }
return UIColor(red: i[0], green: i[1], blue: i[2], alpha: i[3])
}
}
I use this extension:
public extension UIImage {
var pixelWidth: Int {
return cgImage?.width ?? 0
}
var pixelHeight: Int {
return cgImage?.height ?? 0
}
func pixelColor(x: Int, y: Int) -> UIColor {
// Bail out early if the pixel coordinates are out of bounds.
guard 0..<pixelWidth ~= x && 0..<pixelHeight ~= y else {
return .black
}
guard
let cgImage = cgImage,
let data = cgImage.dataProvider?.data,
let dataPtr = CFDataGetBytePtr(data),
let colorSpaceModel = cgImage.colorSpace?.model,
let componentLayout = cgImage.bitmapInfo.componentLayout
else {
assertionFailure("Could not get a pixel of an image")
return .clear
}
assert(
colorSpaceModel == .rgb,
"The only supported color space model is RGB")
assert(
cgImage.bitsPerPixel == 32 || cgImage.bitsPerPixel == 24,
"A pixel is expected to be either 4 or 3 bytes in size")
let bytesPerRow = cgImage.bytesPerRow
let bytesPerPixel = cgImage.bitsPerPixel/8
let pixelOffset = y*bytesPerRow + x*bytesPerPixel
if componentLayout.count == 4 {
let components = (
dataPtr[pixelOffset + 0],
dataPtr[pixelOffset + 1],
dataPtr[pixelOffset + 2],
dataPtr[pixelOffset + 3]
)
var alpha: UInt8 = 0
var red: UInt8 = 0
var green: UInt8 = 0
var blue: UInt8 = 0
switch componentLayout {
case .bgra:
alpha = components.3
red = components.2
green = components.1
blue = components.0
case .abgr:
alpha = components.0
red = components.3
green = components.2
blue = components.1
case .argb:
alpha = components.0
red = components.1
green = components.2
blue = components.3
case .rgba:
alpha = components.3
red = components.0
green = components.1
blue = components.2
default:
return .clear
}
// If chroma components are premultiplied by alpha and the alpha is `0`,
// keep the chroma components to their current values.
if cgImage.bitmapInfo.chromaIsPremultipliedByAlpha && alpha != 0 {
let invUnitAlpha = 255/CGFloat(alpha)
red = UInt8((CGFloat(red)*invUnitAlpha).rounded())
green = UInt8((CGFloat(green)*invUnitAlpha).rounded())
blue = UInt8((CGFloat(blue)*invUnitAlpha).rounded())
}
// UIColor has no UInt8 initializer, so convert the components to CGFloat first.
return UIColor(red: CGFloat(red)/255, green: CGFloat(green)/255, blue: CGFloat(blue)/255, alpha: CGFloat(alpha)/255)
} else if componentLayout.count == 3 {
let components = (
dataPtr[pixelOffset + 0],
dataPtr[pixelOffset + 1],
dataPtr[pixelOffset + 2]
)
var red: UInt8 = 0
var green: UInt8 = 0
var blue: UInt8 = 0
switch componentLayout {
case .bgr:
red = components.2
green = components.1
blue = components.0
case .rgb:
red = components.0
green = components.1
blue = components.2
default:
return .clear
}
return UIColor(red: CGFloat(red)/255, green: CGFloat(green)/255, blue: CGFloat(blue)/255, alpha: 1)
} else {
assertionFailure("Unsupported number of pixel components")
return .clear
}
}
}
But for the right pixel colour you need to use only a 1x image from the xcassets; otherwise your reference is wrong, and you need to use let correctedImage = UIImage(data: image.pngData()!) to retrieve the correct origin for your point.
The solution at https://stackoverflow.com/a/40237504/3286489 only works for images in an sRGB colorspace; for a different colorspace (extended sRGB?), it doesn't work.
So to make it work, we need to convert the image to a normal sRGB type first, before getting the color from the cgImage. Note that we need to add padding to the calculation to ensure the row width is always a multiple of 8.
public extension UIImage {
func getPixelColor(pos: CGPoint) -> UIColor {
// convert to standard sRGB image
guard let cgImage = cgImage,
let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
let context = CGContext(data: nil,
width: Int(size.width), height: Int(size.height),
bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace,
bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
else { return .white }
context.draw(cgImage, in: CGRect(origin: .zero, size: size))
// Get the newly converted cgImage
guard let newCGImage = context.makeImage(),
let newDataProvider = newCGImage.dataProvider,
let data = newDataProvider.data
else { return .white }
let pixelData: UnsafePointer<UInt8> = CFDataGetBytePtr(data)
// Calculate the pixel position based on point given
let remaining = 8 - ((Int(size.width)) % 8)
let padding = (remaining < 8) ? remaining : 0
let pixelInfo: Int = (((Int(size.width) + padding) * Int(pos.y)) + Int(pos.x)) * 4
let r = CGFloat(pixelData[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(pixelData[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(pixelData[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(pixelData[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
Optionally, if one doesn't want to create the intermediate CGImage, just replace
// Get the newly converted cgImage
guard let newCGImage = context.makeImage(),
let newDataProvider = newCGImage.dataProvider,
let newData = newDataProvider.data
else { return .white }
let pixelData: UnsafePointer<UInt8> = CFDataGetBytePtr(newData)
With
// Get the data and bind it from UnsafeMutableRawPointer to UInt8
guard let data = context.data else { return .white }
let pixelData = data.bindMemory(
to: UInt8.self, capacity: Int(size.width * size.height * 4))
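An alternative to the padding arithmetic, assuming the CGContext is still in scope: the context reports the row stride it actually chose, so the pixel offset can be computed from context.bytesPerRow instead of guessing at the alignment:

// Sketch: use the context's real row stride instead of computing padding.
let bytesPerRow = context.bytesPerRow
let pixelInfo: Int = bytesPerRow * Int(pos.y) + Int(pos.x) * 4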
Updated
To get even more concise code, we can do the sRGB conversion with UIGraphicsImageRenderer directly. The calculation changes a bit, because the renderer redraws at 2x scale, so every pixel coordinate is doubled.
func getPixelColor(pos: CGPoint) -> UIColor {
let newImage = UIGraphicsImageRenderer(size: size).image { _ in
draw(in: CGRect(origin: .zero, size: size))
}
guard let cgImage = newImage.cgImage,
let dataProvider = cgImage.dataProvider,
let data = dataProvider.data else { return .white }
let pixelData: UnsafePointer<UInt8> = CFDataGetBytePtr(data)
let remaining = 8 - ((Int(size.width) * 2) % 8)
let padding = (remaining < 8) ? remaining : 0
let pixelInfo: Int = (((Int(size.width * 2) + padding) * Int(pos.y * 2)) + Int(pos.x * 2)) * 4
let r = CGFloat(pixelData[pixelInfo]) / CGFloat(255.0)
let g = CGFloat(pixelData[pixelInfo+1]) / CGFloat(255.0)
let b = CGFloat(pixelData[pixelInfo+2]) / CGFloat(255.0)
let a = CGFloat(pixelData[pixelInfo+3]) / CGFloat(255.0)
return UIColor(red: r, green: g, blue: b, alpha: a)
}
This follows the convert-to-sRGB solution in https://stackoverflow.com/a/64538344/3286489
As usual, late to the party, but I wanted to mention that the method indicated above doesn't always work. If the image is not RGBA, it can crash. In my experience, release (optimized) builds can crash where the debug code works fine.
I tend to use a lot of vector images in my apps, and iOS can sometimes render them in monochrome color spaces. I have experienced a number of crashes with the code given here.
Also, we should use bytesPerRow when skipping on the vertical. Apple tends to add padding to bitmaps, and a simple 4-byte pixel offset may not work.
I draw the image into an offscreen context, then take the sample from there.
Here's what I did. It works, but is not exactly performant. In my case that's fine, because I only use it once, at startup:
extension UIImage {
/* ################################################################## */
/**
This returns the RGB color (as a UIColor) of the pixel in the image, at the given point. It is restricted to 32-bit (RGBA/8-bit pixel) values.
This was inspired by several of the answers [in this StackOverflow Question](https://stackoverflow.com/questions/25146557/how-do-i-get-the-color-of-a-pixel-in-a-uiimage-with-swift).
**NOTE:** This is unlikely to be highly performant!
- parameter at: The point in the image to sample (NOTE: Must be within image bounds, or nil is returned).
- returns: A UIColor (or nil).
*/
func getRGBColorOfThePixel(at inPoint: CGPoint) -> UIColor? {
guard (0..<size.width).contains(inPoint.x),
(0..<size.height).contains(inPoint.y)
else { return nil }
// We draw the image into a context, in order to be sure that we are accessing image data in our required format (RGBA).
UIGraphicsBeginImageContextWithOptions(size, false, 0)
draw(at: .zero)
let imageData = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
guard let cgImage = imageData?.cgImage,
let pixelData = cgImage.dataProvider?.data
else { return nil }
let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
let bytesPerPixel = (cgImage.bitsPerPixel + 7) / 8
let pixelByteOffset: Int = (cgImage.bytesPerRow * Int(inPoint.y)) + (Int(inPoint.x) * bytesPerPixel)
let divisor = CGFloat(255.0)
let r = CGFloat(data[pixelByteOffset]) / divisor
let g = CGFloat(data[pixelByteOffset + 1]) / divisor
let b = CGFloat(data[pixelByteOffset + 2]) / divisor
let a = CGFloat(data[pixelByteOffset + 3]) / divisor
return UIColor(red: r, green: g, blue: b, alpha: a)
}
}
If you use an image from Images.xcassets, only put one @1x image in the 1x field, and leave the @2x and @3x fields blank.
