Color detection at a distance (Swift) - iOS

Can someone point me in the right direction with color detection at a distance? I have used the code below, and it grabs the RGB values of an image properly if the object or point of interest is less than 10 feet away. When the object is farther away, the code returns the wrong values. I want to take a picture of an object at a distance greater than 10 feet and detect its color.
//On the top of your swift
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}

I am a photographer, and what you are trying to do is very similar to setting a white balance in post-processing or using the color picker in PS.
Digital cameras don't have pixels that capture the full spectrum of light at once; they have triplets of pixels for RGB. The captured information is interpolated, and this can give very bad results. Setting the white balance in post on an image taken at night is almost impossible.
Reasons for bad interpolation:
- Pixels are bigger than the smallest discernible object in the scene (moiré artifacts).
- Low-light situations where digital gain increases color differences (color noise artifacts).
- The image was converted to a low-quality JPG but has lots of edges (JPG artifacts). If it is a low-quality JPG, get a better source image.
Fix
All you have to do to get a more accurate reading is blur the image.
The smallest acceptable blur is 3 pixels, because this will undo some of the interpolation. Bigger blurs might be better.
Since blurs are expensive, it is best to crop the image to a multiple of the blur radius first. You can't take a precise fit, because the blur also pulls in the edges, and beyond the edges the image is black; that would skew your reading.
It might also be best to enforce an upper limit on the blur radius.
A shortcut to get the center of something that has a size:
extension CGSize {
    var center: CGPoint {
        return CGPoint(x: width / 2, y: height / 2)
    }
}
The UIImage stuff
extension UIImage {
    func blur(radius: CGFloat) -> UIImage? {
        // extensions of UIImage don't know what a CIImage is...
        typealias CIImage = CoreImage.CIImage
        // blur of your choice
        guard let blurFilter = CIFilter(name: "CIBoxBlur") else {
            return nil
        }
        blurFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        blurFilter.setValue(radius, forKey: kCIInputRadiusKey)
        let ciContext = CIContext(options: nil)
        guard let result = blurFilter.valueForKey(kCIOutputImageKey) as? CIImage else {
            return nil
        }
        // the blur spills past the original bounds, so grab the expanded rect
        let blurRect = CGRect(x: -radius, y: -radius, width: self.size.width + (radius * 2), height: self.size.height + (radius * 2))
        let cgImage = ciContext.createCGImage(result, fromRect: blurRect)
        return UIImage(CGImage: cgImage)
    }

    func crop(cropRect: CGRect) -> UIImage? {
        guard let imgRef = CGImageCreateWithImageInRect(self.CGImage, cropRect) else {
            return nil
        }
        return UIImage(CGImage: imgRef)
    }

    func getPixelColor(atPoint point: CGPoint, radius: CGFloat) -> UIColor? {
        var pos = point
        var image = self
        // if the radius is too small -> skip the crop + blur
        if radius > 1 {
            let cropRect = CGRect(x: point.x - (radius * 4), y: point.y - (radius * 4), width: radius * 8, height: radius * 8)
            guard let cropImg = self.crop(cropRect) else {
                return nil
            }
            guard let blurImg = cropImg.blur(radius) else {
                return nil
            }
            pos = blurImg.size.center
            image = blurImg
        }
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(image.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
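A minimal usage sketch (the image name and sample point are my own placeholders, not from the original answer):

if let image = UIImage(named: "distantObject") {
    // sample around the point of interest, averaged over a 3-pixel blur
    if let color = image.getPixelColor(atPoint: CGPoint(x: 120, y: 80), radius: 3) {
        print("sampled color: \(color)")
    }
}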
Side note:
Your problem might not be the color-grabbing function, but how you set the point. If you are doing it by touch and the object is farther away, and thus smaller on screen, you might not be setting it accurately enough.

To read the average color of a UIImage using CIAreaAverage, see https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage
extension UIImage {
    var averageColor: UIColor? {
        guard let inputImage = CIImage(image: self) else { return nil }
        let extentVector = CIVector(x: inputImage.extent.origin.x, y: inputImage.extent.origin.y, z: inputImage.extent.size.width, w: inputImage.extent.size.height)
        guard let filter = CIFilter(name: "CIAreaAverage", parameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: extentVector]) else { return nil }
        guard let outputImage = filter.outputImage else { return nil }
        var bitmap = [UInt8](repeating: 0, count: 4)
        let context = CIContext(options: [.workingColorSpace: kCFNull!])
        context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: .RGBA8, colorSpace: nil)
        return UIColor(red: CGFloat(bitmap[0]) / 255, green: CGFloat(bitmap[1]) / 255, blue: CGFloat(bitmap[2]) / 255, alpha: CGFloat(bitmap[3]) / 255)
    }
}
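A quick usage sketch (the image name is a placeholder):

if let image = UIImage(named: "distantObject"), let average = image.averageColor {
    print("average color: \(average)")
}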

Related

CIAreaHistogram does not return expected result

I want to get the CIAreaHistogram of a UIImage.
Below is the code I run. The print() statement currently prints 255 lines of 0.0 0.0 0.0, followed by a last line of 0.0 0.0 0.058823529.
My UIImage input is also displayed on screen, so I'm sure there is an image.
I've also tried this filter in the CIFilter.io app. Strangely enough, given a colourful input image, the output image was a single solid grey line. Of course I expected a line, but not solid grey.
What could I be doing wrong?
let inputImage = CIImage(cgImage: outputImage!.cgImage!)
let histogramFilter = CIFilter(name: "CIAreaHistogram",
                               parameters: [kCIInputImageKey: inputImage,
                                            "inputExtent": inputImage.extent,
                                            "inputScale": NSNumber(value: 1.0),
                                            "inputCount": NSNumber(value: 256)])!

func processHistogram(ciImage: CIImage) {
    let image = renderImage(ciImage: ciImage)
    for x in 0 ... 255 {
        getPixelColor(fromCGImage: (image?.cgImage!)!, pos: CGPoint(x: x, y: 0))
    }
}
func getPixelColor(fromCGImage cgImage: CGImage, pos: CGPoint) -> UIColor {
    let pixelData = cgImage.dataProvider!.data
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(cgImage.width) * Int(pos.y)) + Int(pos.x)) * 4
    let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
    let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
    let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
    print("\(r) \(g) \(b)")
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
func renderImage(ciImage: CIImage) -> UIImage? {
    var outputImage: UIImage?
    let size = ciImage.extent.size
    UIGraphicsBeginImageContext(size)
    if let context = UIGraphicsGetCurrentContext() {
        context.interpolationQuality = .high
        context.setShouldAntialias(true)
        let inputImage = UIImage(ciImage: ciImage)
        inputImage.draw(in: CGRect(x: 0, y: 0, width: size.width, height: size.height))
        outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
    }
    return outputImage
}
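One thing worth checking (a hedged sketch, not a confirmed fix): drawing the 256 x 1 output through UIGraphicsBeginImageContext re-renders it with scaling, interpolation, and antialiasing, and also quantizes the histogram's float values down to 8 bits, any of which can wipe out the small bin counts. Rendering the filter output straight into a float buffer with CIContext avoids that re-render entirely:

// Sketch: read the CIAreaHistogram output directly as floats,
// bypassing the UIGraphics re-render that may smear the bins.
let context = CIContext(options: [.workingColorSpace: NSNull()])
var bins = [Float](repeating: 0, count: 256 * 4) // 256 RGBA float pixels
if let output = histogramFilter.outputImage {
    context.render(output,
                   toBitmap: &bins,
                   rowBytes: 256 * 4 * MemoryLayout<Float>.size,
                   bounds: CGRect(x: 0, y: 0, width: 256, height: 1),
                   format: .RGBAf,
                   colorSpace: nil)
}
// bins[4 * i], bins[4 * i + 1], bins[4 * i + 2] now hold the R, G, B values for bin i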

Detect Colour in a UIImage and save CGPoints

I have a UIImage, and I want all the CGPoints in this image that have a specific colour. For example, I have this image:
Now I want to get the RED-coloured CGPoints from this UIImage. The red colour may be a straight horizontal/vertical line; it may also be a curved/zigzag line.
This is what I have tried, but I am unable to detect the required RED-coloured CGPoints.
// Loop through the image's pixels to detect the colour
var requiredPointsInImage = [CGPoint]()
let testImage = UIImage.init(named: "imgToTest1")
for heightIteration in 0..<Int(testImage!.size.height) {
    for widthIteration in 0..<Int(testImage!.size.width) {
        let colorOfPoints = testImage!.getPixelColor(pos: CGPoint(x: CGFloat(widthIteration), y: CGFloat(heightIteration)), withFrameSize: testImage!.size)
        if colorOfPoints == MYColor {
            print(colorOfPoints)
            requiredPointsInImage.append(CGPoint(x: CGFloat(widthIteration), y: CGFloat(heightIteration)))
        }
    }
}
let newImage = drawShapesOnImage(image: testImage!, points: requiredPointsInImage)
// Colour detection
extension UIImage {
    func getPixelColor(pos: CGPoint, withFrameSize size: CGSize) -> UIColor {
        let x: CGFloat = (self.size.width) * pos.x / size.width
        let y: CGFloat = (self.size.height) * pos.y / size.height
        let pixelPoint: CGPoint = CGPoint(x: x, y: y)
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 3 //4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        // let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: 1)
    }
}
// Drawing on detected points
func drawShapesOnImage(image: UIImage, points: [CGPoint]) -> UIImage {
    UIGraphicsBeginImageContext(image.size)
    image.draw(at: CGPoint.zero)
    let context = UIGraphicsGetCurrentContext()
    context!.setLineWidth(2.0)
    context!.setStrokeColor(UIColor.green.cgColor)
    let radius: CGFloat = 5.0
    for point in points {
        // angles are in radians, so a full circle is 2 * pi
        context!.addArc(center: point, radius: radius, startAngle: 0, endAngle: CGFloat.pi * 2, clockwise: true)
        context!.strokePath()
    }
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return resultImage!
}
This is what I have got:
Note: the background may not always be white, but of course it will not be RED.
Using the same getPixelColor() function I can get the exact color of any CGPoint from the image below, but only with y constant (say y: 1).
If you think there is any other better approach to detect all these points, please suggest it.
So I have found the solution. The problem was that I was ignoring the alpha channel in the UIImage extension getPixelColor(); stepping 4 bytes per pixel (including alpha) solved my problem.
Here is the updated code!
extension UIImage {
    func getPixelColor(pos: CGPoint, withFrameSize size: CGSize) -> UIColor {
        let x: CGFloat = (self.size.width) * pos.x / size.width
        let y: CGFloat = (self.size.height) * pos.y / size.height
        let pixelPoint: CGPoint = CGPoint(x: x, y: y)
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
And the result is:
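A side note on the comparison itself: colorOfPoints == MYColor requires an exact match, which is fragile once JPEG compression or color-space conversion nudges component values. A hedged sketch of a tolerance-based match (the matches helper and the 0.1 threshold are my own, not from the original answer):

// Hypothetical helper: compare two colors component-wise with a tolerance.
func matches(_ lhs: UIColor, _ rhs: UIColor, tolerance: CGFloat = 0.1) -> Bool {
    var r1: CGFloat = 0, g1: CGFloat = 0, b1: CGFloat = 0, a1: CGFloat = 0
    var r2: CGFloat = 0, g2: CGFloat = 0, b2: CGFloat = 0, a2: CGFloat = 0
    lhs.getRed(&r1, green: &g1, blue: &b1, alpha: &a1)
    rhs.getRed(&r2, green: &g2, blue: &b2, alpha: &a2)
    return abs(r1 - r2) <= tolerance
        && abs(g1 - g2) <= tolerance
        && abs(b1 - b2) <= tolerance
}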

Get the coordinates of dominant colours from UIImageView

I'm not able to find the coordinates of the pixels in a UIImageView that have the dominant colours. This is the code:
for yCo in 0 ..< Int(imageView.frame.height) {
    for xCo in 0 ..< Int(imageView.frame.width) where image.getPixelColor(pos: CGPoint(x: xCo, y: yCo)) == dominantColorFirst {
        print(CGPoint(x: xCo, y: yCo)) // uses 99% CPU -> leads to the app hanging
    }
}
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor { // get pixel color
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Due to the large number of iterations, I'm not able to find the coordinates. It uses a lot of memory and hangs the app. Is there any other way to work around this?
I did an image-processing project before, where I got the alpha of all pixels and cached it (my image was only 1200x1200 px).
I don't know why, but when I used print inside the for loop, it took too much memory and slowed the app.
I had to work around it with:
var cachePointArray = [CGPoint]()
for yCo in 0 ..< Int(imageView.frame.height) {
    for xCo in 0 ..< Int(imageView.frame.width) {
        let point = CGPoint(x: xCo, y: yCo)
        if image.getPixelColor(pos: point) == dominantColorFirst {
            cachePointArray.append(point)
        }
    }
}
print(cachePointArray)
and it ran much faster.
You can try this as a test.
(Sorry, I don't have enough reputation to comment; if it doesn't help, ignore it.)
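The bigger win, though, is not calling getPixelColor(pos:) per pixel at all, since it copies the provider data on every call. A sketch that grabs the byte pointer once and scans it (this assumes a 4-byte RGBA layout with no row padding, as the extension above does):

// Sketch: one data copy for the whole scan instead of one per pixel.
var matchedPoints = [CGPoint]()
if let cgImage = image.cgImage, let pixelData = cgImage.dataProvider?.data {
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let width = cgImage.width
    let height = cgImage.height
    for y in 0..<height {
        for x in 0..<width {
            let i = (width * y + x) * 4 // assumes RGBA, no row padding
            let color = UIColor(red: CGFloat(data[i]) / 255,
                                green: CGFloat(data[i + 1]) / 255,
                                blue: CGFloat(data[i + 2]) / 255,
                                alpha: CGFloat(data[i + 3]) / 255)
            if color == dominantColorFirst {
                matchedPoints.append(CGPoint(x: x, y: y))
            }
        }
    }
}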

How do I get the RGB Value of a pixel using CGContext?

I'm trying to edit images by changing the pixels.
I have the following code:
let imageRect = CGRectMake(0, 0, self.image.image!.size.width, self.image.image!.size.height)
UIGraphicsBeginImageContext(self.image.image!.size)
let context = UIGraphicsGetCurrentContext()
CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, self.image.image!.CGImage)
for x in 0...Int(self.image.image!.size.width) {
    for y in 0...Int(self.image.image!.size.height) {
        var red = 0
        if y % 2 == 0 {
            red = 255
        }
        CGContextSetRGBFillColor(context, CGFloat(red/255), 0.5, 0.5, 1)
        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}
CGContextRestoreGState(context)
self.image.image = UIGraphicsGetImageFromCurrentImageContext()
I'm looping through all the pixels, changing the value of each pixel, then converting it back to an image. What I want to do is somehow get the value of the current pixel (in the y for-loop) and do something with that data. I have not found anything on the internet about this particular problem.
Under the covers, UIGraphicsBeginImageContext creates a CGBitmapContext. You can get access to the context's pixel storage using CGBitmapContextGetData. The problem with this approach is that the UIGraphicsBeginImageContext function chooses the byte order and color space used to store the pixel data. Those choices (particularly the byte order) could change in future versions of iOS (or even on different devices).
So instead, let's create the context directly with CGBitmapContextCreate, so we can be sure of the byte order and color space.
In my playground, I've added a test image named pic@2x.jpeg.
import XCPlayground
import UIKit

let image = UIImage(named: "pic.jpeg")!
XCPCaptureValue("image", value: image)
Here's how we create the bitmap context, taking the image scale into account (which you didn't do in your question):
let rowCount = Int(image.size.height * image.scale)
let columnCount = Int(image.size.width * image.scale)
let stride = 64 * ((columnCount * 4 + 63) / 64)
let context = CGBitmapContextCreate(nil, columnCount, rowCount, 8, stride,
    CGColorSpaceCreateDeviceRGB(),
    CGBitmapInfo.ByteOrder32Little.rawValue |
    CGImageAlphaInfo.PremultipliedLast.rawValue)
Next, we adjust the coordinate system to match what UIGraphicsBeginImageContextWithOptions would do, so that we can draw the image correctly and easily:
CGContextTranslateCTM(context, 0, CGFloat(rowCount))
CGContextScaleCTM(context, image.scale, -image.scale)
UIGraphicsPushContext(context!)
image.drawAtPoint(CGPointZero)
UIGraphicsPopContext()
Note that UIImage.drawAtPoint takes image.orientation into account. CGContextDrawImage does not.
Now let's get a pointer to the raw pixel data from the context. The code is clearer if we define a structure to access the individual components of each pixel:
struct Pixel {
    var a: UInt8
    var b: UInt8
    var g: UInt8
    var r: UInt8
}

let pixels = UnsafeMutablePointer<Pixel>(CGBitmapContextGetData(context))
Note that the order of the Pixel members is defined to match the specific bits I set in the bitmapInfo argument to CGBitmapContextCreate.
Now we can loop over the pixels. Note that we use rowCount and columnCount, computed above, to visit all the pixels, regardless of the image scale:
for y in 0 ..< rowCount {
    if y % 2 == 0 {
        for x in 0 ..< columnCount {
            let pixel = pixels.advancedBy(y * stride / sizeof(Pixel.self) + x)
            pixel.memory.r = 255
        }
    }
}
Finally, we get a new image from the context:
let newImage = UIImage(CGImage: CGBitmapContextCreateImage(context)!, scale: image.scale, orientation: UIImageOrientation.Up)
XCPCaptureValue("newImage", value: newImage)
The result, in my playground's timeline:
Finally, note that if your images are large, going through pixel by pixel can be slow. If you can find a way to perform your image manipulation using Core Image or GPUImage, it'll be a lot faster. Failing that, using Objective-C and manually vectorizing it (using NEON intrinsics) may provide a big boost.
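For readers on current Swift, the same byte-order-safe context can be sketched with the modern CGContext initializer (my adaptation, reusing the rowCount, columnCount, and stride values computed above; not part of the original answer):

// Sketch: modern equivalent of the CGBitmapContextCreate call above.
let context = CGContext(data: nil,
                        width: columnCount,
                        height: rowCount,
                        bitsPerComponent: 8,
                        bytesPerRow: stride,
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGBitmapInfo.byteOrder32Little.rawValue |
                                    CGImageAlphaInfo.premultipliedLast.rawValue)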
OK, I think I have a solution that should work for you in Swift 2.
Credit goes to this answer for the UIColor extension below.
Since I needed an image to test this on, I chose a slice (50 x 50, the top-left corner) of your gravatar...
So the code below converts this:
To this:
This works for me in a playground; all you should have to do is copy and paste it into a playground to see the result:
//: Playground - noun: a place where people can play
import UIKit
import XCPlayground

extension CALayer {
    func colorOfPoint(point: CGPoint) -> UIColor {
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue)
        let context = CGBitmapContextCreate(&pixel, 1, 1, 8, 4, colorSpace, bitmapInfo.rawValue)
        CGContextTranslateCTM(context, -point.x, -point.y)
        self.renderInContext(context!)
        let red: CGFloat = CGFloat(pixel[0]) / 255.0
        let green: CGFloat = CGFloat(pixel[1]) / 255.0
        let blue: CGFloat = CGFloat(pixel[2]) / 255.0
        let alpha: CGFloat = CGFloat(pixel[3]) / 255.0
        //println("point color - red:\(red) green:\(green) blue:\(blue)")
        let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)
        return color
    }
}
extension UIColor {
    var components: (red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
        var r: CGFloat = 0
        var g: CGFloat = 0
        var b: CGFloat = 0
        var a: CGFloat = 0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return (r, g, b, a)
    }
}
//get an image we can work on
var imageFromURL = UIImage(data: NSData(contentsOfURL: NSURL(string: "https://www.gravatar.com/avatar/ba4178644a33a51e928ffd820269347c?s=328&d=identicon&r=PG&f=1")!)!)
//only use a small area of that image - 50 x 50 square
let imageSliceArea = CGRectMake(0, 0, 50, 50)
let imageSlice = CGImageCreateWithImageInRect(imageFromURL?.CGImage, imageSliceArea)
//we'll work on this image
var image = UIImage(CGImage: imageSlice!)
let imageView = UIImageView(image: image)
//test out the extension above on the point (0,0) - returns r 0.541 g 0.78 b 0.227 a 1.0
var pointColor = imageView.layer.colorOfPoint(CGPoint(x: 0, y: 0))

let imageRect = CGRectMake(0, 0, image.size.width, image.size.height)
UIGraphicsBeginImageContext(image.size)
let context = UIGraphicsGetCurrentContext()
CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, image.CGImage)
for x in 0...Int(image.size.width) {
    for y in 0...Int(image.size.height) {
        var pointColor = imageView.layer.colorOfPoint(CGPoint(x: x, y: y))
        //I used my own creativity here - change this to whatever logic you want
        if y % 2 == 0 {
            CGContextSetRGBFillColor(context, pointColor.components.red, 0.5, 0.5, 1)
        } else {
            //fill components are in the 0...1 range, so full red is 1.0 (not 255)
            CGContextSetRGBFillColor(context, 1.0, 0.5, 0.5, 1)
        }
        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}
CGContextRestoreGState(context)
image = UIGraphicsGetImageFromCurrentImageContext()
I hope this works for you. I had fun playing around with this!
This answer assumes you already have a CGContext of the image created. An important part of the answer is rounding the row offset up to a multiple of 8 so that it works for any image size, which I haven't seen in other solutions online.
Swift 5
func colorAt(x: Int, y: Int) -> UIColor {
    let capacity = context.width * context.height
    let widthMultiple = 8
    let rowOffset = ((context.width + widthMultiple - 1) / widthMultiple) * widthMultiple // round up to a multiple of 8
    let data: UnsafeMutablePointer<UInt8> = context.data!.bindMemory(to: UInt8.self, capacity: capacity)
    let offset = 4 * ((y * rowOffset) + x)
    let red = data[offset + 2]
    let green = data[offset + 1]
    let blue = data[offset]
    let alpha = data[offset + 3]
    return UIColor(red: CGFloat(red)/255.0, green: CGFloat(green)/255.0, blue: CGFloat(blue)/255.0, alpha: CGFloat(alpha)/255.0)
}
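If you don't already have such a context, a sketch of one way it might be created (the setup below is my assumption, not part of the original answer; `image` is a placeholder UIImage). A little-endian BGRA layout matches the reads above, where blue sits at the base offset and red at offset + 2:

// Sketch: a bitmap context whose memory layout matches the reads above
// (BGRA in memory = byteOrder32Little + premultipliedFirst).
let cgImage = image.cgImage!
let context = CGContext(data: nil,
                        width: cgImage.width,
                        height: cgImage.height,
                        bitsPerComponent: 8,
                        bytesPerRow: 0, // CoreGraphics picks the row stride
                        space: CGColorSpaceCreateDeviceRGB(),
                        bitmapInfo: CGImageAlphaInfo.premultipliedFirst.rawValue |
                                    CGBitmapInfo.byteOrder32Little.rawValue)!
context.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
// Note: context.bytesPerRow is the authoritative stride; the rounding in
// colorAt(x:y:) assumes it equals the width rounded up to a multiple of 8.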

How do I get the color of a pixel in a UIImage with Swift?

I'm trying to get the color of a pixel in a UIImage with Swift, but it seems to always return 0. Here is the code, translated from @Minas' answer on this thread:
func getPixelColor(pos: CGPoint) -> UIColor {
    var pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
    var data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    var pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
    var r = CGFloat(data[pixelInfo])
    var g = CGFloat(data[pixelInfo+1])
    var b = CGFloat(data[pixelInfo+2])
    var a = CGFloat(data[pixelInfo+3])
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
Thanks in advance!
A bit of searching led me here, since I was facing a similar problem.
Your code works fine. The problem might come from your image.
Code:
//On the top of your swift
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
What happens is that this method picks the pixel colour from the image's CGImage, so make sure you are picking from the right image. E.g. if your UIImage is 200x200, but the original image file from Images.xcassets (or wherever it came from) is 400x400, and you pick the point (100, 100), you are actually picking a point in the upper-left quarter of the image, not the middle.
Two solutions:
1. Use an image from Images.xcassets, and only put one @1x image in the 1x field. Leave the @2x and @3x fields blank. Make sure you know the image size, and pick a point within its range.
//Make sure only the 1x image is set
let image: UIImage = UIImage(named: "imageName")!
//Make sure the point is within the image
let color: UIColor = image.getPixelColor(CGPointMake(xValue, yValue))
2. Scale your CGPoint up/down in proportion to match the UIImage. E.g. for let point = CGPoint(100, 100) in the example above:
let xCoordinate: Float = Float(point.x) * (400.0/200.0)
let yCoordinate: Float = Float(point.y) * (400.0/200.0)
let newCoordinate: CGPoint = CGPointMake(CGFloat(xCoordinate), CGFloat(yCoordinate))
let image: UIImage = largeImage
let color: UIColor = image.getPixelColor(newCoordinate)
I've only tested the first method, and I am using it to get a colour off a colour palette. Both should work.
Happy coding :)
Swift 3, Xcode 8 — tested and working
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
If you are sampling more than one pixel, you should not call the function above for each one, because it recreates the same data every time. If you want all of the colors in an image, do something more like this:
func findColors(_ image: UIImage) -> [UIColor] {
    let pixelsWide = Int(image.size.width)
    let pixelsHigh = Int(image.size.height)
    guard let pixelData = image.cgImage?.dataProvider?.data else { return [] }
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    var imageColors: [UIColor] = []
    for x in 0..<pixelsWide {
        for y in 0..<pixelsHigh {
            let point = CGPoint(x: x, y: y)
            let pixelInfo: Int = ((pixelsWide * Int(point.y)) + Int(point.x)) * 4
            let color = UIColor(red: CGFloat(data[pixelInfo]) / 255.0,
                                green: CGFloat(data[pixelInfo + 1]) / 255.0,
                                blue: CGFloat(data[pixelInfo + 2]) / 255.0,
                                alpha: CGFloat(data[pixelInfo + 3]) / 255.0)
            imageColors.append(color)
        }
    }
    return imageColors
}
Here is an Example Project
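Usage might look like this (the image name is a placeholder):

if let image = UIImage(named: "photo") {
    let colors = findColors(image)
    print("found \(colors.count) pixel colors")
}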
As a side note, this function is significantly faster than the accepted answer, but it gives a less defined result. I just put the UIImageView in the sourceView parameter.
func getPixelColorAtPoint(point: CGPoint, sourceView: UIView) -> UIColor {
    let pixel = UnsafeMutablePointer<CUnsignedChar>.allocate(capacity: 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
    let context = CGContext(data: pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
    context!.translateBy(x: -point.x, y: -point.y)
    sourceView.layer.render(in: context!)
    let color: UIColor = UIColor(red: CGFloat(pixel[0]) / 255.0,
                                 green: CGFloat(pixel[1]) / 255.0,
                                 blue: CGFloat(pixel[2]) / 255.0,
                                 alpha: CGFloat(pixel[3]) / 255.0)
    pixel.deallocate()
    return color
}
I was getting swapped colors for red and blue.
The original function also did not account for the actual bytes per row and bytes per pixel.
I also avoid force-unwrapping optionals whenever possible.
Here's an updated function:
import UIKit

extension UIImage {
    /// Get the pixel color at a point in the image
    func pixelColor(atLocation point: CGPoint) -> UIColor? {
        guard let cgImage = cgImage, let pixelData = cgImage.dataProvider?.data else { return nil }
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let pixelInfo: Int = (cgImage.bytesPerRow * Int(point.y)) + (Int(point.x) * bytesPerPixel)
        let b = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let r = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Swift 3 (iOS 10.3)
Important: this will only work for @1x images.
Request: if you have a solution for @2x and @3x images, please share it. Thank you :)
extension UIImage {
    func getPixelColor(atLocation location: CGPoint, withFrameSize size: CGSize) -> UIColor {
        let x: CGFloat = (self.size.width) * location.x / size.width
        let y: CGFloat = (self.size.height) * location.y / size.height
        let pixelPoint: CGPoint = CGPoint(x: x, y: y)
        let pixelData = self.cgImage!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelIndex: Int = ((Int(self.size.width) * Int(pixelPoint.y)) + Int(pixelPoint.x)) * 4
        let r = CGFloat(data[pixelIndex]) / CGFloat(255.0)
        let g = CGFloat(data[pixelIndex+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelIndex+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelIndex+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Usage:
print(yourImageView.image!.getPixelColor(atLocation: location, withFrameSize: yourImageView.frame.size))
You can use a tap gesture recognizer to obtain the location, as shown below.
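A sketch of that wiring, assuming a view controller with a yourImageView outlet (all names are placeholders):

// In viewDidLoad (or similar):
let tap = UITapGestureRecognizer(target: self, action: #selector(handleTap(_:)))
yourImageView.isUserInteractionEnabled = true
yourImageView.addGestureRecognizer(tap)

// The handler samples the pixel under the finger:
@objc func handleTap(_ recognizer: UITapGestureRecognizer) {
    let location = recognizer.location(in: yourImageView)
    if let image = yourImageView.image {
        print(image.getPixelColor(atLocation: location, withFrameSize: yourImageView.frame.size))
    }
}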
Your code works fine for me, as an extension to UIImage. How are you testing your colour? Here's my example:
let green = UIImage(named: "green.png")!
let topLeft = CGPoint(x: 0, y: 0)

// Use your extension
let greenColour = green.getPixelColor(topLeft)

// Dump RGBA values
var redval: CGFloat = 0
var greenval: CGFloat = 0
var blueval: CGFloat = 0
var alphaval: CGFloat = 0
greenColour.getRed(&redval, green: &greenval, blue: &blueval, alpha: &alphaval)
println("Green is r: \(redval) g: \(greenval) b: \(blueval) a: \(alphaval)")
This prints:
Green is r: 0.0 g: 1.0 b: 0.0 a: 1.0
...which is correct, given that my image is a solid green square.
(What do you mean by "it always seems to return 0"? You don't happen to be testing on a black pixel, do you?)
I'm getting backwards colours, in terms of R and B being swapped; not sure why, as I thought the order was RGBA.
func testGeneratedColorImage() {
    let color = UIColor(red: 0.5, green: 0, blue: 1, alpha: 1)
    let size = CGSize(width: 10, height: 10)
    // image(fromColor:size:) is a custom helper that renders a solid-color image
    let image = UIImage.image(fromColor: color, size: size)
    XCTAssert(image.size == size)
    XCTAssertNotNil(image.cgImage)
    XCTAssertNotNil(image.cgImage!.dataProvider)
    let pixelData = image.cgImage!.dataProvider!.data
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let position = CGPoint(x: 1, y: 1)
    let pixelInfo: Int = ((Int(size.width) * Int(position.y)) + Int(position.x)) * 4
    let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
    let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
    let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
    let testColor = UIColor(red: r, green: g, blue: b, alpha: a)
    XCTAssert(testColor == color, "Colour: \(testColor) does not match: \(color)")
}
In the failing run, color is the expected purple swatch, but testColor comes back with the red and blue channels swapped. (I can understand that the blue value might be off a little bit at 0.502, with floating-point inaccuracy.)
With the code switched to:
    let b = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
    let r = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
    let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
I get a testColor that matches the original color.
I think you need to divide each component by 255:
var r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
var g = CGFloat(data[pixelInfo + 1]) / CGFloat(255.0)
var b = CGFloat(data[pixelInfo + 2]) / CGFloat(255.0)
var a = CGFloat(data[pixelInfo + 3]) / CGFloat(255.0)
I was trying to find the colors of all four corners of an image and was getting unexpected results, including UIColor.clear.
The issue is that the pixels start at 0, so requesting a pixel at the width of the image would actually wrap back around and give me the first pixel of the second row.
For example, the top right pixel of a 640 x 480 image would actually be x: 639, y: 0, and the bottom right pixel would be x: 639, y: 479.
Here's my implementation of the UIImage extension with this adjustment:
func getPixelColor(pos: CGPoint) -> UIColor {
    guard let cgImage = cgImage, let pixelData = cgImage.dataProvider?.data else { return UIColor.clear }
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let bytesPerPixel = cgImage.bitsPerPixel / 8
    // adjust the coordinates to constrain them within the width/height of the image
    let y = pos.y > 0 ? pos.y - 1 : 0
    let x = pos.x > 0 ? pos.x - 1 : 0
    let pixelInfo = ((Int(self.size.width) * Int(y)) + Int(x)) * bytesPerPixel
    let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
    let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
    let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
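A quick check with the 640 x 480 example from above (`image` is a placeholder):

// Sample all four corners; the adjustment above keeps edge
// coordinates from wrapping to the next row.
let corners = [CGPoint(x: 0, y: 0),     // top left
               CGPoint(x: 640, y: 0),   // top right (reads pixel x: 639)
               CGPoint(x: 0, y: 480),   // bottom left (reads pixel y: 479)
               CGPoint(x: 640, y: 480)] // bottom right
for corner in corners {
    print(image.getPixelColor(pos: corner))
}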
I found no answer anywhere on the internet that supplied:
- simple code
- HDR support
- color-profile support for BGR etc.
- scale support for @2x and @3x
So here it is, the (as far as I can tell) definitive solution:
Swift 5
import UIKit

public extension CGBitmapInfo {
    // https://stackoverflow.com/a/60247693/2585092
    enum ComponentLayout {
        case bgra
        case abgr
        case argb
        case rgba
        case bgr
        case rgb

        var count: Int {
            switch self {
            case .bgr, .rgb: return 3
            default: return 4
            }
        }
    }

    var componentLayout: ComponentLayout? {
        guard let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue) else { return nil }
        let isLittleEndian = contains(.byteOrder32Little)
        if alphaInfo == .none {
            return isLittleEndian ? .bgr : .rgb
        }
        let alphaIsFirst = alphaInfo == .premultipliedFirst || alphaInfo == .first || alphaInfo == .noneSkipFirst
        if isLittleEndian {
            return alphaIsFirst ? .bgra : .abgr
        } else {
            return alphaIsFirst ? .argb : .rgba
        }
    }

    var chromaIsPremultipliedByAlpha: Bool {
        let alphaInfo = CGImageAlphaInfo(rawValue: rawValue & Self.alphaInfoMask.rawValue)
        return alphaInfo == .premultipliedFirst || alphaInfo == .premultipliedLast
    }
}
extension UIImage {
    // https://stackoverflow.com/a/68103748/2585092
    subscript(_ point: CGPoint) -> UIColor? {
        guard
            let cgImage = cgImage,
            let space = cgImage.colorSpace,
            let pixelData = cgImage.dataProvider?.data,
            let layout = cgImage.bitmapInfo.componentLayout
        else {
            return nil
        }
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let comp = CGFloat(layout.count)
        let isHDR = CGColorSpaceUsesITUR_2100TF(space)
        let hdr = CGFloat(isHDR ? 2 : 1)
        let pixelInfo = Int((size.width * point.y * scale + point.x * scale) * comp * hdr)
        let i = Array(0 ... Int(comp - 1)).map {
            CGFloat(data[pixelInfo + $0 * Int(hdr)]) / CGFloat(255)
        }
        switch layout {
        case .bgra:
            return UIColor(red: i[2], green: i[1], blue: i[0], alpha: i[3])
        case .abgr:
            return UIColor(red: i[3], green: i[2], blue: i[1], alpha: i[0])
        case .argb:
            return UIColor(red: i[1], green: i[2], blue: i[3], alpha: i[0])
        case .rgba:
            return UIColor(red: i[0], green: i[1], blue: i[2], alpha: i[3])
        case .bgr:
            return UIColor(red: i[2], green: i[1], blue: i[0], alpha: 1)
        case .rgb:
            return UIColor(red: i[0], green: i[1], blue: i[2], alpha: 1)
        }
    }
}
Swift 5, including a solution for @2x and @3x images:
extension UIImage {
    subscript(_ point: CGPoint) -> UIColor? {
        guard let pixelData = self.cgImage?.dataProvider?.data else { return nil }
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo = Int((size.width * point.y + point.x) * 4.0 * scale * scale)
        let i = Array(0 ... 3).map { CGFloat(data[pixelInfo + $0]) / CGFloat(255) }
        return UIColor(red: i[0], green: i[1], blue: i[2], alpha: i[3])
    }
}
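Usage is then a plain subscript (the image name and point are placeholders):

if let image = UIImage(named: "photo"), let color = image[CGPoint(x: 10, y: 10)] {
    print(color)
}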
I use this extension:

public extension UIImage {

    var pixelWidth: Int {
        return cgImage?.width ?? 0
    }

    var pixelHeight: Int {
        return cgImage?.height ?? 0
    }

    func pixelColor(x: Int, y: Int) -> UIColor {
        if 0..<pixelWidth ~= x && 0..<pixelHeight ~= y {
            print("Pixel coordinates are in bounds")
        } else {
            print("Pixel coordinates are out of bounds")
            return .black
        }
        guard
            let cgImage = cgImage,
            let data = cgImage.dataProvider?.data,
            let dataPtr = CFDataGetBytePtr(data),
            let colorSpaceModel = cgImage.colorSpace?.model,
            let componentLayout = cgImage.bitmapInfo.componentLayout
        else {
            assertionFailure("Could not get a pixel of an image")
            return .clear
        }
        assert(
            colorSpaceModel == .rgb,
            "The only supported color space model is RGB")
        assert(
            cgImage.bitsPerPixel == 32 || cgImage.bitsPerPixel == 24,
            "A pixel is expected to be either 4 or 3 bytes in size")
        let bytesPerRow = cgImage.bytesPerRow
        let bytesPerPixel = cgImage.bitsPerPixel / 8
        let pixelOffset = y * bytesPerRow + x * bytesPerPixel
        if componentLayout.count == 4 {
            let components = (
                dataPtr[pixelOffset + 0],
                dataPtr[pixelOffset + 1],
                dataPtr[pixelOffset + 2],
                dataPtr[pixelOffset + 3]
            )
            var alpha: UInt8 = 0
            var red: UInt8 = 0
            var green: UInt8 = 0
            var blue: UInt8 = 0
            switch componentLayout {
            case .bgra:
                alpha = components.3
                red = components.2
                green = components.1
                blue = components.0
            case .abgr:
                alpha = components.0
                red = components.3
                green = components.2
                blue = components.1
            case .argb:
                alpha = components.0
                red = components.1
                green = components.2
                blue = components.3
            case .rgba:
                alpha = components.3
                red = components.0
                green = components.1
                blue = components.2
            default:
                return .clear
            }
            // If the chroma components are premultiplied by alpha and alpha is not 0,
            // un-premultiply them (clamping to 255 to avoid overflow).
            if cgImage.bitmapInfo.chromaIsPremultipliedByAlpha && alpha != 0 {
                let invUnitAlpha = 255 / CGFloat(alpha)
                red = UInt8(min(255, (CGFloat(red) * invUnitAlpha).rounded()))
                green = UInt8(min(255, (CGFloat(green) * invUnitAlpha).rounded()))
                blue = UInt8(min(255, (CGFloat(blue) * invUnitAlpha).rounded()))
            }
            return UIColor(red: CGFloat(red) / 255.0,
                           green: CGFloat(green) / 255.0,
                           blue: CGFloat(blue) / 255.0,
                           alpha: CGFloat(alpha) / 255.0)
        } else if componentLayout.count == 3 {
            let components = (
                dataPtr[pixelOffset + 0],
                dataPtr[pixelOffset + 1],
                dataPtr[pixelOffset + 2]
            )
            var red: UInt8 = 0
            var green: UInt8 = 0
            var blue: UInt8 = 0
            switch componentLayout {
            case .bgr:
                red = components.2
                green = components.1
                blue = components.0
            case .rgb:
                red = components.0
                green = components.1
                blue = components.2
            default:
                return .clear
            }
            return UIColor(red: CGFloat(red) / 255.0,
                           green: CGFloat(green) / 255.0,
                           blue: CGFloat(blue) / 255.0,
                           alpha: 1.0)
        } else {
            assertionFailure("Unsupported number of pixel components")
            return .clear
        }
    }
}
But to get the right pixel color, you need to use only an image from xcassets at 1x; otherwise your reference is wrong, and you need to use this: let correctedImage = UIImage(data: image.pngData()!) to retrieve the correct origin for your point.
The solution in https://stackoverflow.com/a/40237504/3286489 only works on images in the sRGB colorspace; for a different colorspace (e.g. extended sRGB), it doesn't work.
So to make it work, we need to convert the image to a standard sRGB type first, before getting the color from the cgImage. Note that we need to add padding to the index calculation to ensure the effective row width is always a multiple of 8.
public extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        // convert to a standard sRGB image
        guard let cgImage = cgImage,
              let colorSpace = CGColorSpace(name: CGColorSpace.sRGB),
              let context = CGContext(data: nil,
                                      width: Int(size.width), height: Int(size.height),
                                      bitsPerComponent: 8, bytesPerRow: 0, space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return .white }
        context.draw(cgImage, in: CGRect(origin: .zero, size: size))

        // Get the newly converted cgImage
        guard let newCGImage = context.makeImage(),
              let newDataProvider = newCGImage.dataProvider,
              let data = newDataProvider.data
        else { return .white }
        let pixelData: UnsafePointer<UInt8> = CFDataGetBytePtr(data)

        // Calculate the pixel position based on the point given
        let remaining = 8 - ((Int(size.width)) % 8)
        let padding = (remaining < 8) ? remaining : 0
        let pixelInfo: Int = (((Int(size.width) + padding) * Int(pos.y)) + Int(pos.x)) * 4
        let r = CGFloat(pixelData[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(pixelData[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(pixelData[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(pixelData[pixelInfo+3]) / CGFloat(255.0)
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Optionally, if one doesn't want to make a new cgImage from the context, just replace
// Get the newly converted cgImage
guard let newCGImage = context.makeImage(),
      let newDataProvider = newCGImage.dataProvider,
      let newData = newDataProvider.data
else { return .white }
let pixelData: UnsafePointer<UInt8> = CFDataGetBytePtr(newData)
With
// Get the data and bind it from UnsafeMutableRawPointer to UInt8
guard let data = context.data else { return .white }
let pixelData = data.bindMemory(
    to: UInt8.self, capacity: Int(size.width * size.height * 4))
Updated
To get even more concise code, we can do the sRGB conversion with UIGraphicsImageRenderer directly. The calculation changes a bit, because that redrawing renders the pixels at 2x.
func getPixelColor(pos: CGPoint) -> UIColor {
    let newImage = UIGraphicsImageRenderer(size: size).image { _ in
        draw(in: CGRect(origin: .zero, size: size))
    }
    guard let cgImage = newImage.cgImage,
          let dataProvider = cgImage.dataProvider,
          let data = dataProvider.data else { return .white }
    let pixelData: UnsafePointer<UInt8> = CFDataGetBytePtr(data)
    let remaining = 8 - ((Int(size.width) * 2) % 8)
    let padding = (remaining < 8) ? remaining : 0
    let pixelInfo: Int = (((Int(size.width * 2) + padding) * Int(pos.y * 2)) + Int(pos.x * 2)) * 4
    let r = CGFloat(pixelData[pixelInfo]) / CGFloat(255.0)
    let g = CGFloat(pixelData[pixelInfo+1]) / CGFloat(255.0)
    let b = CGFloat(pixelData[pixelInfo+2]) / CGFloat(255.0)
    let a = CGFloat(pixelData[pixelInfo+3]) / CGFloat(255.0)
    return UIColor(red: r, green: g, blue: b, alpha: a)
}
This follows the convert-to-sRGB approach in https://stackoverflow.com/a/64538344/3286489.
As usual, late to the party, but I wanted to mention that the methods indicated above don't always work. If the image is not RGBA, they can crash. In my experience, release (optimized) code can crash where debug code works fine.
I tend to use a lot of vector images in my apps, and iOS can sometimes render them in monochrome color spaces. I have experienced a number of crashes with the code given here.
Also, we should use bytesPerRow when stepping vertically. Apple tends to add padding to bitmaps, and a simple 4-byte pixel offset may not work.
I draw the image into an offscreen context, then take the sample from there.
Here's what I did. It works, but is not exactly performant. In my case, it's fine, because I only use it once, at startup:
extension UIImage {
    /* ################################################################## */
    /**
     This returns the RGB color (as a UIColor) of the pixel in the image at the given point. It is restricted to 32-bit (RGBA/8-bit-per-pixel) values.
     This was inspired by several of the answers [in this StackOverflow question](https://stackoverflow.com/questions/25146557/how-do-i-get-the-color-of-a-pixel-in-a-uiimage-with-swift).
     **NOTE:** This is unlikely to be highly performant!
     - parameter at: The point in the image to sample (NOTE: must be within the image bounds, or nil is returned).
     - returns: A UIColor (or nil).
     */
    func getRGBColorOfThePixel(at inPoint: CGPoint) -> UIColor? {
        guard (0..<size.width).contains(inPoint.x),
              (0..<size.height).contains(inPoint.y)
        else { return nil }
        // We draw the image into a context in order to be sure that we are accessing image data in our required format (RGBA).
        UIGraphicsBeginImageContextWithOptions(size, false, 0)
        draw(at: .zero)
        let imageData = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        guard let cgImage = imageData?.cgImage,
              let pixelData = cgImage.dataProvider?.data
        else { return nil }
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let bytesPerPixel = (cgImage.bitsPerPixel + 7) / 8
        let pixelByteOffset: Int = (cgImage.bytesPerRow * Int(inPoint.y)) + (Int(inPoint.x) * bytesPerPixel)
        let divisor = CGFloat(255.0)
        let r = CGFloat(data[pixelByteOffset]) / divisor
        let g = CGFloat(data[pixelByteOffset + 1]) / divisor
        let b = CGFloat(data[pixelByteOffset + 2]) / divisor
        let a = CGFloat(data[pixelByteOffset + 3]) / divisor
        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
