How do I get the RGB Value of a pixel using CGContext? - ios

I'm trying to edit images by changing the pixels.
I have the following code:
let imageRect = CGRectMake(0, 0, self.image.image!.size.width, self.image.image!.size.height)
UIGraphicsBeginImageContext(self.image.image!.size)
let context = UIGraphicsGetCurrentContext()

CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, self.image.image!.CGImage)

for x in 0...Int(self.image.image!.size.width) {
    for y in 0...Int(self.image.image!.size.height) {
        var red = 0
        if y % 2 == 0 {
            red = 255
        }
        CGContextSetRGBFillColor(context, CGFloat(red/255), 0.5, 0.5, 1)
        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}

CGContextRestoreGState(context)
self.image.image = UIGraphicsGetImageFromCurrentImageContext()
I'm looping through all the pixels and changing the value of each pixel, then converting it back to an image. What I want to do is somehow get the value of the current pixel (in the inner y loop) and do something with that data. I have not found anything on the internet about this particular problem.

Under the covers, UIGraphicsBeginImageContext creates a CGBitmapContext. You can get access to the context's pixel storage using CGBitmapContextGetData. The problem with this approach is that the UIGraphicsBeginImageContext function chooses the byte order and color space used to store the pixel data. Those choices (particularly the byte order) could change in future versions of iOS (or even on different devices).
So instead, let's create the context directly with CGBitmapContextCreate, so we can be sure of the byte order and color space.
In my playground, I've added a test image named pic@2x.jpeg.
import XCPlayground
import UIKit
let image = UIImage(named: "pic.jpeg")!
XCPCaptureValue("image", value: image)
Here's how we create the bitmap context, taking the image scale into account (which you didn't do in your question):
let rowCount = Int(image.size.height * image.scale)
let columnCount = Int(image.size.width * image.scale)
let stride = 64 * ((columnCount * 4 + 63) / 64)
let context = CGBitmapContextCreate(nil, columnCount, rowCount, 8, stride,
    CGColorSpaceCreateDeviceRGB(),
    CGBitmapInfo.ByteOrder32Little.rawValue |
        CGImageAlphaInfo.PremultipliedLast.rawValue)
Next, we adjust the coordinate system to match what UIGraphicsBeginImageContextWithOptions would do, so that we can draw the image correctly and easily:
CGContextTranslateCTM(context, 0, CGFloat(rowCount))
CGContextScaleCTM(context, image.scale, -image.scale)
UIGraphicsPushContext(context!)
image.drawAtPoint(CGPointZero)
UIGraphicsPopContext()
Note that UIImage.drawAtPoint takes image.orientation into account. CGContextDrawImage does not.
Now let's get a pointer to the raw pixel data from the context. The code is clearer if we define a structure to access the individual components of each pixel:
struct Pixel {
    var a: UInt8
    var b: UInt8
    var g: UInt8
    var r: UInt8
}
let pixels = UnsafeMutablePointer<Pixel>(CGBitmapContextGetData(context))
Note that the order of the Pixel members is defined to match the specific bits I set in the bitmapInfo argument to CGBitmapContextCreate.
Now we can loop over the pixels. Note that we use rowCount and columnCount, computed above, to visit all the pixels, regardless of the image scale:
for y in 0 ..< rowCount {
    if y % 2 == 0 {
        for x in 0 ..< columnCount {
            let pixel = pixels.advancedBy(y * stride / sizeof(Pixel.self) + x)
            pixel.memory.r = 255
        }
    }
}
Finally, we get a new image from the context:
let newImage = UIImage(CGImage: CGBitmapContextCreateImage(context)!, scale: image.scale, orientation: UIImageOrientation.Up)
XCPCaptureValue("newImage", value: newImage)
The result, in my playground's timeline:
Finally, note that if your images are large, going through pixel by pixel can be slow. If you can find a way to perform your image manipulation using Core Image or GPUImage, it'll be a lot faster. Failing that, using Objective-C and manually vectorizing it (using NEON intrinsics) may provide a big boost.
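For instance, here is a rough, untested sketch (in current Swift, so it won't paste directly into the Swift 2 playground above) of a whole-image adjustment done with Core Image instead of a per-pixel loop; the filter choice (CIColorMatrix) and its values are purely illustrative:
import CoreImage
import UIKit

// Illustrative only: boost the red channel across the whole image on the GPU,
// rather than touching each pixel from the CPU.
func boostedRed(from image: UIImage) -> UIImage? {
    guard let input = CIImage(image: image),
          let filter = CIFilter(name: "CIColorMatrix") else { return nil }
    filter.setValue(input, forKey: kCIInputImageKey)
    // Scale the red channel by 1.5; the other channels keep their default vectors.
    filter.setValue(CIVector(x: 1.5, y: 0, z: 0, w: 0), forKey: "inputRVector")
    guard let output = filter.outputImage else { return nil }

    let context = CIContext()
    guard let cgImage = context.createCGImage(output, from: output.extent) else { return nil }
    return UIImage(cgImage: cgImage, scale: image.scale, orientation: image.imageOrientation)
}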

Ok, I think I have a solution that should work for you in Swift 2.
Credit goes to this answer for the UIColor extension below.
Since I needed an image to test this on I chose a slice (50 x 50 - top left corner) of your gravatar...
So the code below converts this:
To this:
This works for me in a playground - all you should have to do is copy and paste into a playground to see the result:
//: Playground - noun: a place where people can play
import UIKit
import XCPlayground
extension CALayer {

    func colorOfPoint(point: CGPoint) -> UIColor {
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]

        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue)
        let context = CGBitmapContextCreate(&pixel, 1, 1, 8, 4, colorSpace, bitmapInfo.rawValue)

        CGContextTranslateCTM(context, -point.x, -point.y)

        self.renderInContext(context!)

        let red: CGFloat = CGFloat(pixel[0]) / 255.0
        let green: CGFloat = CGFloat(pixel[1]) / 255.0
        let blue: CGFloat = CGFloat(pixel[2]) / 255.0
        let alpha: CGFloat = CGFloat(pixel[3]) / 255.0

        //println("point color - red:\(red) green:\(green) blue:\(blue)")

        let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)
        return color
    }
}
extension UIColor {
    var components: (red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
        var r: CGFloat = 0
        var g: CGFloat = 0
        var b: CGFloat = 0
        var a: CGFloat = 0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return (r, g, b, a)
    }
}
//get an image we can work on
var imageFromURL = UIImage(data: NSData(contentsOfURL: NSURL(string:"https://www.gravatar.com/avatar/ba4178644a33a51e928ffd820269347c?s=328&d=identicon&r=PG&f=1")!)!)
//only use a small area of that image - 50 x 50 square
let imageSliceArea = CGRectMake(0, 0, 50, 50);
let imageSlice = CGImageCreateWithImageInRect(imageFromURL?.CGImage, imageSliceArea);
//we'll work on this image
var image = UIImage(CGImage: imageSlice!)
let imageView = UIImageView(image: image)
//test out the extension above on the point (0,0) - returns r 0.541 g 0.78 b 0.227 a 1.0
var pointColor = imageView.layer.colorOfPoint(CGPoint(x: 0, y: 0))
let imageRect = CGRectMake(0, 0, image.size.width, image.size.height)
UIGraphicsBeginImageContext(image.size)
let context = UIGraphicsGetCurrentContext()
CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, image.CGImage)
for x in 0...Int(image.size.width) {
    for y in 0...Int(image.size.height) {
        var pointColor = imageView.layer.colorOfPoint(CGPoint(x: x, y: y))

        //I used my own creativity here - change this to whatever logic you want
        if y % 2 == 0 {
            CGContextSetRGBFillColor(context, pointColor.components.red, 0.5, 0.5, 1)
        }
        else {
            CGContextSetRGBFillColor(context, 255, 0.5, 0.5, 1)
        }
        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}
CGContextRestoreGState(context)
image = UIGraphicsGetImageFromCurrentImageContext()
I hope this works for you. I had fun playing around with this!

This answer assumes you have a CGContext of the image created. An important part of the answer is rounding up the row offset to a multiple of 8 to ensure this works on any image size, which I haven't seen in other solutions online.
Swift 5
func colorAt(x: Int, y: Int) -> UIColor {
    let capacity = context.width * context.height
    let widthMultiple = 8
    let rowOffset = ((context.width + widthMultiple - 1) / widthMultiple) * widthMultiple // Round up to multiple of 8
    let data: UnsafeMutablePointer<UInt8> = context.data!.bindMemory(to: UInt8.self, capacity: capacity)
    let offset = 4 * ((y * rowOffset) + x)

    let red = data[offset+2]
    let green = data[offset+1]
    let blue = data[offset]
    let alpha = data[offset+3]

    return UIColor(red: CGFloat(red)/255.0, green: CGFloat(green)/255.0, blue: CGFloat(blue)/255.0, alpha: CGFloat(alpha)/255.0)
}
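If you don't already have such a context, here is a hedged sketch (the helper name makeContext is mine, not from the answer) of one way to create one whose BGRA, premultiplied-alpha layout matches the byte offsets read above:
func makeContext(for cgImage: CGImage) -> CGContext? {
    let width = cgImage.width
    let height = cgImage.height
    // BGRA in memory: blue at offset 0, green at 1, red at 2, alpha at 3.
    let bitmapInfo = CGImageAlphaInfo.premultipliedFirst.rawValue | CGBitmapInfo.byteOrder32Little.rawValue
    guard let context = CGContext(data: nil,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0, // let Core Graphics choose the row alignment
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: bitmapInfo) else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    return context
}
Reading context.bytesPerRow directly is an alternative to assuming the 8-pixel row alignment that colorAt(x:y:) uses.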

Related

How to apply CIFilter on Image Pixel

I know it is possible to change the color of a pixel using a pixel buffer, like in the code below, but I just want to blur a pixel using a 'CIFilter' rather than changing its color. I don't want to apply a 'CIFilter' to the whole image.
//data pointer – stores an array of the pixel components. For example (r0, b0, g0, a0, r1, g1, b1, a1 .... rn, gn, bn, an)
let data : UnsafeMutablePointer<UInt8> = calloc(bytesPerRow, height)!.assumingMemoryBound(to: UInt8.self)
//get the index of the pixel (4 components times the x position plus the y position times the row width)
let pixelIndex = 4 * (location.x + (location.y * width))
//set the pixel components to the color components
data[pixelIndex] = red
data[pixelIndex+1] = green
data[pixelIndex+2] = blue
data[pixelIndex+3] = alpha
Also can we use below code for applying CIFilter on Pixel?
if let pixelData = self.cgImage?.dataProvider?.data {
    let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
    let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4

    let red = CGFloat(data[pixelInfo]) / CGFloat(255.0)
    let green = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
    let blue = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
    let alpha = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
}
Check out the CIBlendWithMask filter. It allows you to create a mask of any shape (even a single pixel) and use that to blend the input with another input. If you make inputBackgroundImage be the original image, and make inputImage be the original image with your desired filter applied, the inputImageMask is an all-black image with only the single pixel you want to change in white.
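As a rough, untested sketch of that recipe (mine, not part of the original answer; CIBoxBlur is just one possible filter), the mask can be built with CIConstantColorGenerator and the blend done with CIBlendWithMask like so:
import CoreImage

// point and original are assumed inputs; names here are illustrative.
func blurPixel(at point: CGPoint, in original: CIImage) -> CIImage? {
    // 1. The whole image with the desired filter applied.
    guard let blurFilter = CIFilter(name: "CIBoxBlur") else { return nil }
    blurFilter.setValue(original, forKey: kCIInputImageKey)
    guard let blurred = blurFilter.outputImage else { return nil }

    // 2. An all-black mask with a single white pixel at the point of interest.
    guard let whiteFilter = CIFilter(name: "CIConstantColorGenerator"),
          let blackFilter = CIFilter(name: "CIConstantColorGenerator") else { return nil }
    whiteFilter.setValue(CIColor.white, forKey: kCIInputColorKey)
    blackFilter.setValue(CIColor.black, forKey: kCIInputColorKey)
    guard let whitePixel = whiteFilter.outputImage?
            .cropped(to: CGRect(x: point.x, y: point.y, width: 1, height: 1)),
          let blackBackground = blackFilter.outputImage?.cropped(to: original.extent) else { return nil }
    let mask = whitePixel.composited(over: blackBackground)

    // 3. Blend: filtered image where the mask is white, original everywhere else.
    guard let blend = CIFilter(name: "CIBlendWithMask") else { return nil }
    blend.setValue(blurred, forKey: kCIInputImageKey)
    blend.setValue(original, forKey: kCIInputBackgroundImageKey)
    blend.setValue(mask, forKey: kCIInputMaskImageKey)
    return blend.outputImage
}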
I typed this up pretty quick without testing code - could be a few errors. I have done something very similar recently, so shouldn't be too off. I'd like to know whatcha get and if this doesn't work, I bet it's close.
/*
Implementations
Notes:
- I don't know what kind of `CIFilter` blur you'd like to apply, so I'm just using one from here
- https://developer.apple.com/library/archive/documentation/GraphicsImaging/Reference/CoreImageFilterReference/#//apple_ref/doc/filter/ci/CIBoxBlur
*/
//Normal Image
let inputImage: UIImage = originalImage

//Blurred Image of only the BLURRED PIXELS -- we change the rest of the pixels to clear -- thus we can use this as the backgroundImage and the maskedImage
let unblurredImage = getBackgroundImage(ciimage: CIImage(image: inputImage)!)!
let filter = CIFilter(name: "CIBoxBlur")
filter?.setValue(CIImage(image: unblurredImage), forKey: kCIInputImageKey)
let blurredImage = filter?.outputImage

//Now we can blend the 2 images
let blendFilter = CIFilter(name: "CIBlendWithAlphaMask")
blendFilter?.setValue(CIImage(image: inputImage), forKey: kCIInputImageKey)
blendFilter?.setValue(blurredImage, forKey: "inputBackgroundImage")
blendFilter?.setValue(blurredImage, forKey: "inputMaskImage")
let finalCIImage = blendFilter?.outputImage
let finalImage = UIImage(ciImage: finalCIImage!)
/*
Functions used in the process
*/
//Create an image of only the pixels we want to blur
func getBackgroundImage(ciimage: CIImage) -> UIImage? {
    let inputCGImage = ciimage.convertCIImageToCGImage()!
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let width = inputCGImage.width
    let height = inputCGImage.height
    let bytesPerPixel = 4
    let bitsPerComponent = 8
    let bytesPerRow = bytesPerPixel * width
    let bitmapInfo = RGBA32.bitmapInfo

    guard let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: bitsPerComponent, bytesPerRow: bytesPerRow, space: colorSpace, bitmapInfo: bitmapInfo) else {
        print("Couldn't create CGContext")
        return nil
    }

    context.draw(inputCGImage, in: CGRect(x: 0, y: 0, width: width, height: height))

    let pixelBuffer = context.data!.bindMemory(to: RGBA32.self, capacity: width * height)

    for row in 0 ..< height {
        for column in 0 ..< width {
            let offset = row * width + column
            /*
             You need to define aPixelIWantBlurred however you desire
             Also, we don't edit the pixels we want to blur - we edit the other pixels to a transparent value. This allows us to use this as the background image and the masked image
             */
            if pixelBuffer[offset] != aPixelIWantBlurred {
                pixelBuffer[offset] = RGBA32(red: 0, green: 0, blue: 0, alpha: 0)
            }
        }
    }

    let outputCGImage = context.makeImage()!
    let outputImage = UIImage(cgImage: outputCGImage)
    return outputImage
}
extension CIImage {
    func convertCIImageToCGImage() -> CGImage! {
        let context = CIContext(options: nil)
        return context.createCGImage(self, from: self.extent)
    }
}
struct RGBA32: Equatable {
    private var color: UInt32

    var redComponent: UInt8 {
        return UInt8((color >> 24) & 255)
    }

    var greenComponent: UInt8 {
        return UInt8((color >> 16) & 255)
    }

    var blueComponent: UInt8 {
        return UInt8((color >> 8) & 255)
    }

    var alphaComponent: UInt8 {
        return UInt8((color >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        let red = UInt32(red)
        let green = UInt32(green)
        let blue = UInt32(blue)
        let alpha = UInt32(alpha)
        color = (red << 24) | (green << 16) | (blue << 8) | (alpha << 0)
    }

    static let red = RGBA32(red: 255, green: 0, blue: 0, alpha: 255)
    static let green = RGBA32(red: 0, green: 255, blue: 0, alpha: 255)
    static let blue = RGBA32(red: 0, green: 0, blue: 255, alpha: 255)
    static let white = RGBA32(red: 255, green: 255, blue: 255, alpha: 255)
    static let black = RGBA32(red: 0, green: 0, blue: 0, alpha: 255)
    static let magenta = RGBA32(red: 255, green: 0, blue: 255, alpha: 255)
    static let yellow = RGBA32(red: 255, green: 255, blue: 0, alpha: 255)
    static let cyan = RGBA32(red: 0, green: 255, blue: 255, alpha: 255)

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: RGBA32, rhs: RGBA32) -> Bool {
        return lhs.color == rhs.color
    }
}

Getting pixel color at point takes too long for HEIC images

The method below gets the pixel color at a specific point in the selected photo. When a JPG is selected, the image process below is fast.
When a HEIC photo is selected, the image process below is slow. Can be approx 40x slower (7 secs vs 5 mins).
I was wondering why that is and what I can do to fix it? Just to be clear the code below works, just takes a long time for HEIC to be processed.
class func getPixelColorAtPoint(point: CGPoint, sourceView: UIView) -> UIColor {
    let pixel = UnsafeMutablePointer<CUnsignedChar>.allocate(capacity: 4)
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
    let context = CGContext(data: pixel, width: 1, height: 1, bitsPerComponent: 8, bytesPerRow: 4, space: colorSpace, bitmapInfo: bitmapInfo.rawValue)
    var color: UIColor? = nil

    if let context = context {
        context.translateBy(x: -point.x, y: -point.y)
        sourceView.layer.render(in: context)

        color = UIColor(red: CGFloat(pixel[0])/255.0,
                        green: CGFloat(pixel[1])/255.0,
                        blue: CGFloat(pixel[2])/255.0,
                        alpha: CGFloat(pixel[3])/255.0)
        pixel.deallocate(capacity: 4)
    }

    return color!
}
If it’s taking 5 minutes, I assume you must be calling this routine repeatedly, rendering this sourceView each time. You should render the whole view to a complete pixel buffer once and only once, and then you can efficiently retrieve the color of multiple pixels from this buffer.
For example:
var data: Data?
var width: Int!
var height: Int!

func pixelColor(at point: CGPoint, for sourceView: UIView) -> UIColor? {
    if data == nil { fillBuffer(for: sourceView) }

    return data?.withUnsafeBytes { rawBufferPointer in
        let pixels = rawBufferPointer.bindMemory(to: Pixel.self)
        let pixel = pixels[Int(point.y) * width + Int(point.x)]
        return pixel.color
    }
}
func fillBuffer(for sourceView: UIView) {
    // get image snapshot
    let bounds = sourceView.bounds
    let format = UIGraphicsImageRendererFormat()
    format.scale = 1
    let image = UIGraphicsImageRenderer(bounds: bounds, format: format).image { _ in
        sourceView.drawHierarchy(in: bounds, afterScreenUpdates: false)
    }
    guard let cgImage = image.cgImage else { return }

    // prepare to get pixel buffer
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.premultipliedLast.rawValue)
    width = Int(bounds.width)
    height = Int(bounds.height)

    // populate and save pixel buffer
    if let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: 4 * width, space: colorSpace, bitmapInfo: bitmapInfo.rawValue) {
        context.draw(cgImage, in: bounds)
        data = context.data.flatMap { Data(bytes: $0, count: 4 * width * height) }
    }
}
Clearly, render this to the buffer however you want, but the basic idea is to save the pixel buffer to some Data from which you can quickly retrieve color values.
By the way, rather than navigating the RGBA bytes separately, I think it's useful to use a struct, like the one below, to make the code slightly more intuitive:
struct Pixel {
    let red: UInt8
    let green: UInt8
    let blue: UInt8
    let alpha: UInt8
}

extension Pixel {
    var color: UIColor {
        return UIColor(red: CGFloat(red) / 255,
                       green: CGFloat(green) / 255,
                       blue: CGFloat(blue) / 255,
                       alpha: CGFloat(alpha) / 255)
    }
}
The more interesting question, IMHO, is why you are retrieving the color of multiple pixels. Do you really need the color of individual pixels, or are you doing some calculation based upon the results (e.g. histograms, average color, etc.)? If you really only need colors pixel by pixel, that's fine, but if there's a broader problem you're trying to solve, we might be able to show you more efficient approaches than processing pixel by pixel, ranging from parallelized routines to vImage high-performance image processing. But we can't say more without knowing what you're doing with these repeated calls to retrieve the pixel colors.

How to draw a gradient color wheel using CAGradientLayer?

I got some references from these links:
What is the Algorithm behind a color wheel?
Math behind the Colour Wheel
Basic color schemes
How to fill a bezier path with gradient color
I have gone through the concept of "HSV colour space". But I want to draw a color wheel using RGB with the help of CAGradientLayer.
Here is the code snippet for making a color wheel using a simple RGB color array and UIBezierPath:
func drawColorWheel() {
    context?.saveGState()

    range = CGFloat(100.00 / CGFloat(colorArray.count))

    for k in 0 ..< colorArray.count {
        drawSlice(startPercent: CGFloat(k) * range, endPercent: CGFloat(CGFloat(k + 1) * range), color: colorArray.object(at: k) as! UIColor)
    }

    context?.restoreGState()
}

func drawSlice(startPercent: CGFloat, endPercent: CGFloat, color: UIColor) {
    let startAngle = getAngleAt(percentage: startPercent)
    let endAngle = getAngleAt(percentage: endPercent)
    let path = getArcPath(startAngle: startAngle, endAngle: endAngle)

    color.setFill()
    path.fill()
}
Where getAngleAt() and getArcPath() are the private functions to draw the path with an angle.
Here is the final output of my code -
Now, my question is: how do I give these colors a gradient effect so that adjacent colors blend into each other?
One approach is to build an image and manipulate the pixel buffer manually:
Create CGContext of certain size and certain type;
Access its data buffer via data property;
Rebind that to something that makes it easy to manipulate that buffer (I use a Pixel, a struct for the 32-bit representation of a pixel);
Loop through the pixels, one by one, converting that to an angle and radius for the circle within this image; and
Create a pixel of the appropriate color if it's inside the circle; make it a zero-alpha pixel if not.
So in Swift 3:
func buildHueCircle(in rect: CGRect, radius: CGFloat, scale: CGFloat = UIScreen.main.scale) -> UIImage? {
    let width = Int(rect.size.width * scale)
    let height = Int(rect.size.height * scale)
    let center = CGPoint(x: width / 2, y: height / 2)

    let space = CGColorSpaceCreateDeviceRGB()
    let context = CGContext(data: nil, width: width, height: height, bitsPerComponent: 8, bytesPerRow: width * 4, space: space, bitmapInfo: Pixel.bitmapInfo)!
    let buffer = context.data!
    let pixels = buffer.bindMemory(to: Pixel.self, capacity: width * height)
    var pixel: Pixel

    for y in 0 ..< height {
        for x in 0 ..< width {
            let angle = fmod(atan2(CGFloat(x) - center.x, CGFloat(y) - center.y) + 2 * .pi, 2 * .pi)
            let distance = hypot(CGFloat(x) - center.x, CGFloat(y) - center.y)

            let value = UIColor(hue: angle / 2 / .pi, saturation: 1, brightness: 1, alpha: 1)

            var red: CGFloat = 0
            var green: CGFloat = 0
            var blue: CGFloat = 0
            var alpha: CGFloat = 0
            value.getRed(&red, green: &green, blue: &blue, alpha: &alpha)

            if distance <= (radius * scale) {
                pixel = Pixel(red: UInt8(red * 255),
                              green: UInt8(green * 255),
                              blue: UInt8(blue * 255),
                              alpha: UInt8(alpha * 255))
            } else {
                pixel = Pixel(red: 255, green: 255, blue: 255, alpha: 0)
            }

            pixels[y * width + x] = pixel
        }
    }

    let cgImage = context.makeImage()!
    return UIImage(cgImage: cgImage, scale: scale, orientation: .up)
}
Where
struct Pixel: Equatable {
    private var rgba: UInt32

    var red: UInt8 {
        return UInt8((rgba >> 24) & 255)
    }

    var green: UInt8 {
        return UInt8((rgba >> 16) & 255)
    }

    var blue: UInt8 {
        return UInt8((rgba >> 8) & 255)
    }

    var alpha: UInt8 {
        return UInt8((rgba >> 0) & 255)
    }

    init(red: UInt8, green: UInt8, blue: UInt8, alpha: UInt8) {
        rgba = (UInt32(red) << 24) | (UInt32(green) << 16) | (UInt32(blue) << 8) | (UInt32(alpha) << 0)
    }

    static let bitmapInfo = CGImageAlphaInfo.premultipliedLast.rawValue | CGBitmapInfo.byteOrder32Little.rawValue

    static func ==(lhs: Pixel, rhs: Pixel) -> Bool {
        return lhs.rgba == rhs.rgba
    }
}
That yields:
Or you can tweak the brightness or saturation based upon the radius:
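As a minimal sketch of that variant (my illustration, not the answer's exact code), you could derive the saturation from the pixel's distance inside the loop instead of using the fixed value of 1:
// Fade saturation from 0 at the center to 1 at the rim.
let saturation = min(distance / (radius * scale), 1)
let value = UIColor(hue: angle / 2 / .pi, saturation: saturation, brightness: 1, alpha: 1)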

iOS CGFloat for Image drawing loop

I want to draw a grid that fills the whole screen of an iOS device with squares forming a gradient, and this is the method I wrote:
static func getGradientImage(fromHue first_hue: CGFloat, toHue second_hue: CGFloat) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(CGSize(width: SystemDefaults.screen.width, height: SystemDefaults.screen.height), false, 0)
    let context = UIGraphicsGetCurrentContext()
    let pixel: CGFloat = SystemDefaults.screen.height / 16
    let raw_cols: CGFloat = SystemDefaults.screen.width / pixel
    let cols = Int(raw_cols + 0.4) //Because for iPhone 4 this number is 10.666 and converting it to Int gives 10, which is not enough. It does not affect other screens.
    let width: CGFloat = SystemDefaults.screen.width / CGFloat(cols)

    for index in 0..<16 {
        let mlt_row = CGFloat(16 - index)
        for idx in 0..<cols {
            let mlt_col = CGFloat((idx + 5) * (16 / cols))
            let rct = CGRect(x: CGFloat(idx) * width, y: CGFloat(index) * pixel, width: width, height: pixel) //That's where I think the problem is. Coordinates should be more even
            let hue = (first_hue * mlt_row + second_hue * mlt_col) / (mlt_row + mlt_col)
            let clr = UIColor(hue: hue, saturation: 0.85, brightness: 1, alpha: 1)
            CGContextSetFillColorWithColor(context, clr.CGColor)
            CGContextFillRect(context, rct)
        }
    }

    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}
The problem is that the output image has tiny lines between columns, and I assume it's because the width parameter is a non-integral CGFloat, so the system fails to "move" the next rct by exactly that distance. And that's because, for example, the iPhone 5's screen isn't exactly 16:9, but rather 16:9.(something). How should I draw the image so that I fill the screen with exactly 144 squares (for the iPhone 5) with no lines between them?
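One common way to avoid such hairline gaps (a hedged sketch, not from the original post, reusing the question's variable names) is to compute each column's edges from rounded cumulative positions so that adjacent rects always share an exact boundary:
for idx in 0..<cols {
    // Round the running left/right edges so neighbouring rects meet exactly.
    let left = round(CGFloat(idx) * SystemDefaults.screen.width / CGFloat(cols))
    let right = round(CGFloat(idx + 1) * SystemDefaults.screen.width / CGFloat(cols))
    let rct = CGRect(x: left, y: CGFloat(index) * pixel, width: right - left, height: pixel)
    // ... compute the hue and fill rct as before
}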

Color detection at a distance (Swift)

Can someone point me in the right direction with color detection at a distance? I have used the code below and it grabs the RGB values of an image properly if an object or point of interest is less than 10 feet away. When the object is at a distance, the code returns the wrong values. I want to take a picture of an object at a distance greater than 10 feet and detect the color of that image.
//On the top of your Swift file
extension UIImage {
    func getPixelColor(pos: CGPoint) -> UIColor {
        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(self.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(self.size.width) * Int(pos.y)) + Int(pos.x)) * 4

        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
I am a photographer and what you are trying to do is very similar to setting a white balance in post processing or using the color picker in PS.
Digital cameras don't have pixels that capture the full spectrum of light at once; they have triplets of pixels for RGB. The captured information is interpolated, and this can give very bad results. Setting the white balance in post on an image taken at night is almost impossible.
Reasons for bad interpolation:
Pixels are bigger than the smallest discernible object in the scene. (moiré artifacts)
Low light situation where digital gain increases color differences. (color noise artifacts)
Image was converted to low quality jpg but has lots of edges. (jpg artifacts)
If it is a low quality jpg, get a better source img.
Fix
All you have to do to get a more accurate reading, is blur the image.
The smallest acceptable blur is 3 pixels, because this will undo some of the interpolation. Bigger blurs might be better.
Since blurs are expensive it is best to crop the image to a multiple of the blur radius. You can't take a precise fit because it will also blur the edges and beyond the edges the image is black. This will influence your reading.
It might be best if you also enforce an upper limit on the blur radius.
Shortcut to get the center of something with a size.
extension CGSize {
    var center: CGPoint {
        get {
            return CGPoint(x: width / 2, y: height / 2)
        }
    }
}
The UIImage stuff
extension UIImage {

    func blur(radius: CGFloat) -> UIImage? {
        // extensions of UIImage don't know what a CIImage is...
        typealias CIImage = CoreImage.CIImage

        // blur of your choice
        guard let blurFilter = CIFilter(name: "CIBoxBlur") else {
            return nil
        }
        blurFilter.setValue(CIImage(image: self), forKey: kCIInputImageKey)
        blurFilter.setValue(radius, forKey: kCIInputRadiusKey)

        let ciContext = CIContext(options: nil)
        guard let result = blurFilter.valueForKey(kCIOutputImageKey) as? CIImage else {
            return nil
        }

        let blurRect = CGRect(x: -radius, y: -radius, width: self.size.width + (radius * 2), height: self.size.height + (radius * 2))
        let cgImage = ciContext.createCGImage(result, fromRect: blurRect)
        return UIImage(CGImage: cgImage)
    }

    func crop(cropRect: CGRect) -> UIImage? {
        guard let imgRef = CGImageCreateWithImageInRect(self.CGImage, cropRect) else {
            return nil
        }
        return UIImage(CGImage: imgRef)
    }

    func getPixelColor(atPoint point: CGPoint, radius: CGFloat) -> UIColor? {
        var pos = point
        var image = self

        // if the radius is too small -> skip
        if radius > 1 {
            let cropRect = CGRect(x: point.x - (radius * 4), y: point.y - (radius * 4), width: radius * 8, height: radius * 8)
            guard let cropImg = self.crop(cropRect) else {
                return nil
            }
            guard let blurImg = cropImg.blur(radius) else {
                return nil
            }
            pos = blurImg.size.center
            image = blurImg
        }

        let pixelData = CGDataProviderCopyData(CGImageGetDataProvider(image.CGImage))
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = ((Int(image.size.width) * Int(pos.y)) + Int(pos.x)) * 4

        let r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        let g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        let b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        let a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)

        return UIColor(red: r, green: g, blue: b, alpha: a)
    }
}
Side note:
Your problem might not be the color-grabbing function but how you set the point. If you are doing it by touch and the object is farther away and thus smaller on screen, you might not set the point accurately enough.
Read the average color of a UIImage (from https://www.hackingwithswift.com/example-code/media/how-to-read-the-average-color-of-a-uiimage-using-ciareaaverage):
extension UIImage {
    var averageColor: UIColor? {
        guard let inputImage = CIImage(image: self) else { return nil }
        let extentVector = CIVector(x: inputImage.extent.origin.x, y: inputImage.extent.origin.y, z: inputImage.extent.size.width, w: inputImage.extent.size.height)

        guard let filter = CIFilter(name: "CIAreaAverage", parameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: extentVector]) else { return nil }
        guard let outputImage = filter.outputImage else { return nil }

        var bitmap = [UInt8](repeating: 0, count: 4)
        let context = CIContext(options: [.workingColorSpace: kCFNull])
        context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: .RGBA8, colorSpace: nil)

        return UIColor(red: CGFloat(bitmap[0]) / 255, green: CGFloat(bitmap[1]) / 255, blue: CGFloat(bitmap[2]) / 255, alpha: CGFloat(bitmap[3]) / 255)
    }
}
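Usage is then a one-liner (my example; "photo" is a placeholder asset name):
// Returns nil if the image can't be loaded or the filter fails.
let average = UIImage(named: "photo")?.averageColor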
