iOS CGFloat for Image drawing loop

I want to draw a grid which would fill the whole screen of an iOS device with squares which form a gradient, and this is the method I wrote:
static func getGradientImage(fromHue first_hue: CGFloat, toHue second_hue: CGFloat) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(CGSize(width: SystemDefaults.screen.width, height: SystemDefaults.screen.height), false, 0)
    let context = UIGraphicsGetCurrentContext()
    let pixel: CGFloat = SystemDefaults.screen.height / 16
    let raw_cols: CGFloat = SystemDefaults.screen.width / pixel
    let cols = Int(raw_cols + 0.4) // Because for iPhone 4 this number is 10.666 and converting it to Int gives 10, which is not enough. It does not affect other screens.
    let width: CGFloat = SystemDefaults.screen.width / CGFloat(cols)
    for index in 0..<16 {
        let mlt_row = CGFloat(16 - index)
        for idx in 0..<cols {
            let mlt_col = CGFloat((idx + 5) * (16 / cols))
            let rct = CGRect(x: CGFloat(idx) * width, y: CGFloat(index) * pixel, width: width, height: pixel) // That's where I think the problem is. Coordinates should be more even.
            let hue = (first_hue * mlt_row + second_hue * mlt_col) / (mlt_row + mlt_col)
            let clr = UIColor(hue: hue, saturation: 0.85, brightness: 1, alpha: 1)
            CGContextSetFillColorWithColor(context, clr.CGColor)
            CGContextFillRect(context, rct)
        }
    }
    let result = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    return result
}
The problem is that the output image has tiny lines between columns, and I assume it's because the width parameter is a fractional CGFloat and the system fails to "move" the next rct by exactly that distance. And that's because, for example, the iPhone 5's screen isn't exactly 16:9 but rather 16:9.(something). How should I draw the image so that I fill the screen with exactly 144 squares (for the iPhone 5) with no lines in between them?
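One way to get rid of the hairlines (a sketch in current Swift, not the poster's code; gradientImage is a hypothetical variant, and UIScreen stands in for the poster's SystemDefaults type): pixel-align each cell's edges rather than its origin and width, so neighbouring cells share exactly the same boundary coordinate and no sub-pixel gap can appear.
import UIKit

func gradientImage(fromHue firstHue: CGFloat, toHue secondHue: CGFloat,
                   size: CGSize = UIScreen.main.bounds.size) -> UIImage? {
    let scale = UIScreen.main.scale
    // Snap a coordinate to the nearest device pixel.
    func aligned(_ v: CGFloat) -> CGFloat { return (v * scale).rounded() / scale }

    UIGraphicsBeginImageContextWithOptions(size, false, 0)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }

    let rows = 16
    let cellSide = size.height / CGFloat(rows)
    let cols = Int((size.width / cellSide).rounded())

    for row in 0..<rows {
        let top = aligned(CGFloat(row) * cellSide)
        let bottom = aligned(CGFloat(row + 1) * cellSide)
        for col in 0..<cols {
            // Edges, not origin + width: the right edge of cell N is
            // computed the same way as the left edge of cell N + 1.
            let left = aligned(CGFloat(col) * size.width / CGFloat(cols))
            let right = aligned(CGFloat(col + 1) * size.width / CGFloat(cols))
            let weightRow = CGFloat(rows - row)
            let weightCol = CGFloat((col + 5) * (rows / cols)) // the poster's hue weighting
            let hue = (firstHue * weightRow + secondHue * weightCol) / (weightRow + weightCol)
            context.setFillColor(UIColor(hue: hue, saturation: 0.85, brightness: 1, alpha: 1).cgColor)
            context.fill(CGRect(x: left, y: top, width: right - left, height: bottom - top))
        }
    }
    return UIGraphicsGetImageFromCurrentImageContext()
}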

Related

How to get the color of a pixel of a NSImage?

I'm trying to get the CGColor of a specified point from my NSImage (a PNG file). This function is called on NSViews which can be dragged over my NSImageView. The function should then set a variable (defaultColor) to the color at exactly the position of the NSView on the NSImageView. For testing, I then colored each NSView with the color stored in the variable (so with the color where the NSView is positioned on the NSImageView).
As you can see in the screenshots, for example, I displayed a 300x300 image containing four different colors in the NSImageView. The colors are detected, but they seem to be swapped vertically: the colors on the top are measured when the NSViews are on the bottom, and the colors on the bottom are measured when the NSViews are on the top.
Is the byte order wrong? How can I swap this? I already read How do I get the color of a pixel in a UIImage with Swift? and Why do I get the wrong color of a pixel with following code?. That's where I got the code I use, which I changed a little bit:
func setDefaulColor(image: NSImage)
{
    let posX = self.frame.origin.x + (self.frame.width / 2)
    let posY = self.frame.origin.y + (self.frame.height / 2)
    var r: CGFloat = 0
    var g: CGFloat = 0
    var b: CGFloat = 0
    var a: CGFloat = 1
    if posX <= image.size.width && posY <= image.size.height
    {
        var imageRect = CGRect(x: 0, y: 0, width: image.size.width, height: image.size.height)
        let imageRef = image.cgImage(forProposedRect: &imageRect, context: nil, hints: nil)
        let pixelData = imageRef!.dataProvider!.data
        let data: UnsafePointer<UInt8> = CFDataGetBytePtr(pixelData)
        let pixelInfo: Int = Int(posY) * imageRef!.bytesPerRow + Int(posX) * 4
        r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
        g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
        b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
        a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
    }
    self.defaultColor = CGColor(red: r, green: g, blue: b, alpha: a)
    setNeedsDisplay(NSRect(x: 0, y: 0, width: self.frame.width, height: self.frame.height))
}
Here are some screenshots:
NSViews on the top
NSViews on the bottom
The PNG file displayed by the NSImageView should be in RGBA format. As you can see, I think the colors are extracted correctly from the pixel data:
r = CGFloat(data[pixelInfo]) / CGFloat(255.0)
g = CGFloat(data[pixelInfo+1]) / CGFloat(255.0)
b = CGFloat(data[pixelInfo+2]) / CGFloat(255.0)
a = CGFloat(data[pixelInfo+3]) / CGFloat(255.0)
but it seems the pixel data is loaded in the wrong order?
Do you know how to change the data order or why this is happening?
The coordinate system of an image and the coordinate system of your view are not the same; a conversion is needed between them.
It is hard to say how
let posX = self.frame.origin.x + (self.frame.width / 2)
let posY = self.frame.origin.y + (self.frame.height / 2)
relate to your image as you did not specify any additional information.
If you have an image view and you would like to extract a pixel at a certain position (x, y) then you need to take into consideration the scaling and content mode.
The image itself is usually placed in the byte buffer so that the top-left pixel is first, followed by the pixel to its right. The coordinate system of NSView is not, though: it starts at the bottom left.
To begin with, it makes the most sense to get a relative position, i.e. a point with coordinates within [0, 1]. For your view it should be:
func getRelativePositionInView(_ view: NSView, absolutePosition: (x: CGFloat, y: CGFloat)) -> (x: CGFloat, y: CGFloat) {
    return ((absolutePosition.x - view.frame.origin.x) / view.frame.width,
            (absolutePosition.y - view.frame.origin.y) / view.frame.height)
}
Now this point should be converted to the image coordinate system, where a vertical flip needs to be done and scaling applied.
If content mode is simply "scale" (whole image is shown) then the solution is simple:
func pointOnImage(_ image: NSImage, relativePositionInView: (x: CGFloat, y: CGFloat)) -> (x: CGFloat, y: CGFloat)? {
    let convertedCoordinates: (x: CGFloat, y: CGFloat) = (
        relativePositionInView.x * image.size.width,
        (1.0 - relativePositionInView.y) * image.size.height
    )
    guard convertedCoordinates.x >= 0.0 else { return nil }
    guard convertedCoordinates.y >= 0.0 else { return nil }
    guard convertedCoordinates.x < image.size.width else { return nil }
    guard convertedCoordinates.y < image.size.height else { return nil }
    return convertedCoordinates
}
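For example, the two helpers combine like this (a sketch, not from the original answer; draggedView and imageView are placeholder names, and image, imageRef, and data are assumed to be in scope as in the question):
let center = (x: draggedView.frame.midX, y: draggedView.frame.midY)
let relative = getRelativePositionInView(imageView, absolutePosition: center)
if let point = pointOnImage(image, relativePositionInView: relative) {
    // Index into the buffer exactly as the question does, but with the flipped coordinates.
    let pixelInfo = Int(point.y) * imageRef!.bytesPerRow + Int(point.x) * 4
    let r = CGFloat(data[pixelInfo]) / 255.0
}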
Some other more common modes are scale-aspect-fill and scale-aspect-fit. Those need extra computations when converting points, but that does not seem to be part of your issue (for now).
So the two methods will most likely fix your issue. But you can also just apply a very short fix:
let posY = whateverViewTheImageIsOn.frame.height - (self.frame.origin.y + (self.frame.height / 2))
Personally, I think this is very messy, but you be the judge of that.
There are also some other considerations which may or may not be valid for your case. When displaying an image, the colors of pixels may appear different than they are in your buffer, mostly due to scaling. For instance, a pure black-and-white image may show gray areas on some pixels. If this is something you would like to capture when finding a color, it makes more sense to look into creating an image from the NSView. This approach could also remove a lot of mathematical problems for you.

How to resize a UIImage without antialiasing?

I am developing an iOS board game. I am trying to give the board a kind of "texture".
What I did was I created this very small image (really small, be sure to look carefully):
And I passed this image to the UIColor.init(patternImage:) initializer to create a UIColor that is this image. I used this UIColor to fill some square UIBezierPaths, and the result looks like this:
All copies of that image line up perfectly and form many diagonal straight lines. So far so good.
Now on the iPad, the squares that I draw will be larger, and the borders of those squares will be larger too. I have successfully calculated what the stroke width and size of the squares should be, so that is not a problem.
However, since the squares are larger on an iPad, there will be more diagonal lines per square. I do not want that. I need to resize the very small image to a bigger one, and that the size depends on the stroke width of the squares. Specifically, the width of the resized image should be twice as much as the stroke width.
I wrote this extension to resize the image, adapted from this post:
extension UIImage {
    func resized(toWidth newWidth: CGFloat) -> UIImage {
        let scale = newWidth / size.width
        let newHeight = size.height * scale
        UIGraphicsBeginImageContextWithOptions(CGSize(width: newWidth, height: newHeight), false, 0)
        self.draw(in: CGRect(x: 0, y: 0, width: newWidth, height: newHeight))
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
And called it like this:
// this is the code I used to draw a single square
let path = UIBezierPath(rect: CGRect(origin: point(for: Position(x, y)), size: CGSize(width: squareLength, height: squareLength)))
UIColor.black.setStroke()
path.lineWidth = strokeWidth
// this is the line that's important!
UIColor(patternImage: #imageLiteral(resourceName: "texture").resized(toWidth: strokeWidth * 2)).setFill()
path.fill()
path.stroke()
Now the game board looks like this on an iPhone:
You might need to zoom in on the webpage a bit to see what I mean. The board now looks extremely ugly. You can see the "borders" of each copy of the image. I don't want this. On an iPad, though, the board looks fine. I suspect that this only happens when I downsize the image.
I figured that this might be due to the antialiasing that happens when I use the extension. I found this post and this post about removing antialiasing, but the former seems to be doing this in an image view while I am doing this in the draw(_:) method of my custom GameBoardView. The latter's solution seems to be exactly the same as what I am using.
How can I resize without antialiasing? Or, at a higher level of abstraction, how can I make my board look pretty?
class Ruled: UIView {
    override func draw(_ rect: CGRect) {
        let T: CGFloat = 15 // desired thickness of lines
        let G: CGFloat = 30 // desired gap between lines
        let W = rect.size.width
        let H = rect.size.height
        guard let c = UIGraphicsGetCurrentContext() else { return }
        c.setStrokeColor(UIColor.orange.cgColor)
        c.setLineWidth(T)
        var p = -(W > H ? W : H) - T
        while p <= W {
            c.move(to: CGPoint(x: p - T, y: -T))
            c.addLine(to: CGPoint(x: p + T + H, y: T + H))
            c.strokePath()
            p += G + T + T
        }
    }
}
Enjoy.
Note that you would, obviously, clip that view.
If you want to have a number of them on the screen or in a pattern, just do that.
To clip to a given rectangle:
The class above simply draws at the "size of the UIView". Often, however, you want to draw a number of "boxes" within the view, at different coordinates. (A good example is a calendar.)
Furthermore, this example explicitly draws "both stripes" rather than drawing one stripe over the background color:
func simpleStripes(x: CGFloat, y: CGFloat, width: CGFloat, height: CGFloat) {
    let stripeWidth: CGFloat = 20.0 // whatever you want
    let m = stripeWidth / 2.0
    guard let c = UIGraphicsGetCurrentContext() else { return }
    c.setLineWidth(stripeWidth)
    let r = CGRect(x: x, y: y, width: width, height: height)
    let longerSide = width > height ? width : height
    c.saveGState()
    c.clip(to: r)
    var p = x - longerSide
    while p <= x + width {
        // "pale blue" — the original left the color as prose; this value is a guess
        c.setStrokeColor(UIColor(red: 0.85, green: 0.92, blue: 1.0, alpha: 1).cgColor)
        c.move(to: CGPoint(x: p - m, y: y - m))
        c.addLine(to: CGPoint(x: p + m + height, y: y + m + height))
        c.strokePath()
        p += stripeWidth
        // "pale gray" — likewise a guessed concrete value
        c.setStrokeColor(UIColor(white: 0.9, alpha: 1).cgColor)
        c.move(to: CGPoint(x: p - m, y: y - m))
        c.addLine(to: CGPoint(x: p + m + height, y: y + m + height))
        c.strokePath()
        p += stripeWidth
    }
    c.restoreGState()
}
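For instance, a view might tile a few of these boxes from its draw(_:) (a tiny usage sketch; positions and sizes are arbitrary):
override func draw(_ rect: CGRect) {
    // three striped boxes side by side, as in a calendar row
    for i in 0..<3 {
        simpleStripes(x: CGFloat(i) * 110.0, y: 40.0, width: 100.0, height: 60.0)
    }
}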
extension UIImage {
    func ResizeImage(targetSize: CGSize) -> UIImage {
        let size = self.size
        let widthRatio = targetSize.width / self.size.width
        let heightRatio = targetSize.height / self.size.height
        // Figure out what our orientation is, and use that to form the rectangle
        var newSize: CGSize
        if widthRatio > heightRatio {
            newSize = CGSize(width: size.width * heightRatio, height: size.height * heightRatio)
        } else {
            newSize = CGSize(width: size.width * widthRatio, height: size.height * widthRatio)
        }
        // This is the rect that we've calculated out and this is what is actually used below
        let rect = CGRect(x: 0, y: 0, width: newSize.width, height: newSize.height)
        // Actually do the resizing to the rect using the ImageContext stuff
        UIGraphicsBeginImageContextWithOptions(newSize, false, 1.0)
        self.draw(in: rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage!
    }
}
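As for the antialiasing itself: if the goal is to stop the pattern tile from being smoothed when scaled, one option is to disable Core Graphics interpolation while redrawing. A sketch (my variant of the poster's resized(toWidth:), not the accepted fix):
extension UIImage {
    func resizedWithoutSmoothing(toWidth newWidth: CGFloat) -> UIImage? {
        let newSize = CGSize(width: newWidth, height: size.height * newWidth / size.width)
        UIGraphicsBeginImageContextWithOptions(newSize, false, 0)
        defer { UIGraphicsEndImageContext() }
        // .none = nearest-neighbour sampling, so pixels stay hard-edged
        UIGraphicsGetCurrentContext()?.interpolationQuality = .none
        draw(in: CGRect(origin: .zero, size: newSize))
        return UIGraphicsGetImageFromCurrentImageContext()
    }
}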

iOS Out of memory error

I recently needed to implement a music visualizer in iOS Swift, as shown in the image/gif.
I used TempiFFT to process the audio input and modified the UI for the required visual effects.
the main features of the visual effects are:
1) the gradient of the bars
2) the dashed effect in the bars
3) fading out the previous bars
For the fading effect I used this How To Make A Simple Drawing App with UIKit and Swift tutorial, where it initially draws on a temporary image and then copies it to the main image with reduced opacity (if required).
I somehow successfully implemented the features. The bars work properly, but "didReceiveMemoryWarning" is called after some time (around 3 minutes) and the app stops a few seconds after that.
So how do I resolve the memory warning and the app stopping?
Following is the code for drawing bars:
func drawBars() {
    // start drawing on the temp image
    UIGraphicsBeginImageContext(self.tempImage.frame.size)
    // get context
    let context = UIGraphicsGetCurrentContext()
    tempImage.image?.draw(in: CGRect(x: 0, y: 0, width: tempImage.frame.width, height: tempImage.frame.height))
    let height = Double(self.tempImage.frame.size.height)
    let width = Double(self.tempImage.frame.size.width)
    let barWidth = width / bars
    // number of bands
    let count = self.fft?.numberOfBands ?? 0
    // band magnitudes
    let fft = self.fft?.bandMagnitudes ?? [0, 0, 0]
    if count == 0 {
        return
    }
    // Draw the spectrum.
    let maxDB: Float = 64.0
    let minDB: Float = -32.0
    let headroom = maxDB - minDB
    // need to display only a few (10) bars, so only consider every (count/10)th band
    let divider = count / Int(bars) - 1
    var barCount = 0
    for i in 0..<count {
        // checking whether to display this bar or not
        if i % divider != 0 || i / divider > 9 {
            continue
        }
        let magnitude = fft[i]
        // Incoming magnitudes are linear, making it impossible to see very low or very high values. Decibels to the rescue!
        var magnitudeDB = TempiFFT.toDB(magnitude)
        // Normalize the incoming magnitude so that -Inf = 0
        magnitudeDB = max(0, magnitudeDB + abs(minDB))
        let dbRatio = min(1.0, magnitudeDB / headroom)
        let magnitudeNorm = CGFloat(dbRatio) * CGFloat(height)
        let x = Double(barCount) * barWidth + 0.5 * barWidth
        // clipping the bar area for the dash and gradient effect
        context?.saveGState()
        context?.addRect(CGRect(x: x - width / bars * 0.4, y: height - Double(magnitudeNorm), width: width / bars * 0.8, height: Double(magnitudeNorm)))
        context?.clip()
        // draw the gradient first
        context?.drawLinearGradient(CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(), colors: [UIColor.white.cgColor, UIColor.cyan.cgColor, UIColor.magenta.cgColor, UIColor.magenta.cgColor] as CFArray, locations: [0, 0.5, 0.85, 1])!, start: CGPoint(x: x, y: height + 5), end: CGPoint(x: x, y: 0), options: CGGradientDrawingOptions(rawValue: 0))
        context?.restoreGState()
        // now draw the dashed line
        context?.move(to: CGPoint(x: x, y: height + 5))
        context?.addLine(to: CGPoint(x: x, y: height - Double(magnitudeNorm)))
        context?.setLineCap(CGLineCap.butt)
        context?.setLineWidth(CGFloat(width / bars * 0.85))
        context?.setStrokeColor(red: 0, green: 0, blue: 0, alpha: 1)
        context?.setBlendMode(CGBlendMode.normal)
        context?.setLineDash(phase: 1, lengths: [5, 10])
        context?.strokePath()
        barCount += 1
    }
    // save the image
    tempImage.image = UIGraphicsGetImageFromCurrentImageContext()
    tempImage.alpha = 1
    UIGraphicsEndImageContext()
    tempi_dispatch_main { () -> () in
        // merge the image with the previous image, on the main thread
        self.refresh()
    }
}
func refresh() {
    UIGraphicsBeginImageContext(mainImage.frame.size)
    // draw the previous image with 0.9 opacity (transparency) for a fade-out effect
    mainImage.image?.draw(in: CGRect(x: 0, y: 0, width: mainImage.frame.width, height: mainImage.frame.height), blendMode: CGBlendMode.normal, alpha: 0.9)
    // draw the new bars over the previous ones
    tempImage.image?.draw(in: CGRect(x: 0, y: 0, width: mainImage.frame.width, height: mainImage.frame.height), blendMode: CGBlendMode.normal, alpha: 1)
    // finally show that new image on the image view
    mainImage.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    tempImage.image = nil
}
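A plausible cause, though only an assumption: every pass through drawBars() and refresh() creates intermediate autoreleased UIImage objects, and if the FFT callback runs off the main thread, the autorelease pool may drain rarely, so memory climbs until the warning fires. Wrapping each pass in an explicit autoreleasepool is a cheap first thing to try (a sketch; drawBarsWrapped is a hypothetical name):
func drawBarsWrapped() {
    // Drain each frame's throwaway images immediately instead of letting them pile up.
    autoreleasepool {
        drawBars()
    }
}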

Drawing color picker/wheel with CoreGraphics creates black lines between colors

The Swift code below creates a color picker/wheel. It mostly works, but there are black bars between the different color bands, as illustrated by the attachment.
Adjusting for the device scale did not help.
Any ideas?
// Constants
let Saturation = CGFloat(0.88)
let Brightness = CGFloat(0.88)
let MaxHueValues = 360
let Hues = (0...359).map { $0 }

override func drawRect(rect: CGRect) {
    // Set context & adapt to screen resolution
    let context = UIGraphicsGetCurrentContext()
    //let scale = UIScreen.mainScreen().scale
    //CGContextScaleCTM(context, scale, scale)
    // Get height for each row
    rowHeight = frame.height / CGFloat(Hues.count)
    // Draw one hue per row
    for hue in Hues {
        let hueValue = CGFloat(hue) / CGFloat(MaxHueValues)
        let color = UIColor(hue: hueValue, saturation: Saturation, brightness: Brightness, alpha: 1.0)
        CGContextSetFillColorWithColor(context, color.CGColor)
        let yPos = CGFloat(hue) * rowHeight
        CGContextFillRect(context, CGRect(x: 0, y: yPos, width: frame.width, height: rowHeight))
    }
}
It's probably rounding errors. If your y position and height values are not integers, then the system will try to interpolate your drawing to simulate sub-pixel rendering, and it really can't. Try adjusting your frame so it is an even multiple of the number of hues (so if you have 360 hues, make your frame 360 points high, or 720 points).
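Alternatively, instead of constraining the frame height, each row's edges can be pixel-aligned individually. A sketch in the question's Swift 2 style (untested; it reuses the question's Hues, MaxHueValues, Saturation, and Brightness constants): rounding each row's top and bottom edges to device pixels means consecutive rows meet at exactly the same y, so no hairline can show.
override func drawRect(rect: CGRect) {
    let context = UIGraphicsGetCurrentContext()
    let scale = UIScreen.mainScreen().scale
    let exactRowHeight = frame.height / CGFloat(Hues.count)
    for hue in Hues {
        // Round top and bottom edges to device pixels independently:
        // this row's bottom is computed exactly like the next row's top.
        let top = round(CGFloat(hue) * exactRowHeight * scale) / scale
        let bottom = round(CGFloat(hue + 1) * exactRowHeight * scale) / scale
        let color = UIColor(hue: CGFloat(hue) / CGFloat(MaxHueValues), saturation: Saturation, brightness: Brightness, alpha: 1.0)
        CGContextSetFillColorWithColor(context, color.CGColor)
        CGContextFillRect(context, CGRect(x: 0, y: top, width: frame.width, height: bottom - top))
    }
}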

How do I get the RGB Value of a pixel using CGContext?

I'm trying to edit images by changing the pixels.
I have the following code:
let imageRect = CGRectMake(0, 0, self.image.image!.size.width, self.image.image!.size.height)
UIGraphicsBeginImageContext(self.image.image!.size)
let context = UIGraphicsGetCurrentContext()
CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, self.image.image!.CGImage)
for x in 0...Int(self.image.image!.size.width) {
    for y in 0...Int(self.image.image!.size.height) {
        var red = 0
        if y % 2 == 0 {
            red = 255
        }
        CGContextSetRGBFillColor(context, CGFloat(red/255), 0.5, 0.5, 1)
        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}
CGContextRestoreGState(context)
self.image.image = UIGraphicsGetImageFromCurrentImageContext()
I'm looping through all the pixels and changing the value of each pixel, then converting it back to an image. What I want to do is somehow get the value of the current pixel (in the y for-loop) and do something with that data. I have not found anything on the internet about this particular problem.
Under the covers, UIGraphicsBeginImageContext creates a CGBitmapContext. You can get access to the context's pixel storage using CGBitmapContextGetData. The problem with this approach is that the UIGraphicsBeginImageContext function chooses the byte order and color space used to store the pixel data. Those choices (particularly the byte order) could change in future versions of iOS (or even on different devices).
So instead, let's create the context directly with CGBitmapContextCreate, so we can be sure of the byte order and color space.
In my playground, I've added a test image named pic@2x.jpeg.
import XCPlayground
import UIKit
let image = UIImage(named: "pic.jpeg")!
XCPCaptureValue("image", value: image)
Here's how we create the bitmap context, taking the image scale into account (which you didn't do in your question):
let rowCount = Int(image.size.height * image.scale)
let columnCount = Int(image.size.width * image.scale)
let stride = 64 * ((columnCount * 4 + 63) / 64)
let context = CGBitmapContextCreate(nil, columnCount, rowCount, 8, stride,
                                    CGColorSpaceCreateDeviceRGB(),
                                    CGBitmapInfo.ByteOrder32Little.rawValue |
                                    CGImageAlphaInfo.PremultipliedLast.rawValue)
Next, we adjust the coordinate system to match what UIGraphicsBeginImageContextWithOptions would do, so that we can draw the image correctly and easily:
CGContextTranslateCTM(context, 0, CGFloat(rowCount))
CGContextScaleCTM(context, image.scale, -image.scale)
UIGraphicsPushContext(context!)
image.drawAtPoint(CGPointZero)
UIGraphicsPopContext()
Note that UIImage.drawAtPoint takes image.orientation into account. CGContextDrawImage does not.
Now let's get a pointer to the raw pixel data from the context. The code is clearer if we define a structure to access the individual components of each pixel:
struct Pixel {
    var a: UInt8
    var b: UInt8
    var g: UInt8
    var r: UInt8
}
let pixels = UnsafeMutablePointer<Pixel>(CGBitmapContextGetData(context))
Note that the order of the Pixel members is defined to match the specific bits I set in the bitmapInfo argument to CGBitmapContextCreate.
Now we can loop over the pixels. Note that we use rowCount and columnCount, computed above, to visit all the pixels, regardless of the image scale:
for y in 0 ..< rowCount {
    if y % 2 == 0 {
        for x in 0 ..< columnCount {
            let pixel = pixels.advancedBy(y * stride / sizeof(Pixel.self) + x)
            pixel.memory.r = 255
        }
    }
}
Finally, we get a new image from the context:
let newImage = UIImage(CGImage: CGBitmapContextCreateImage(context)!, scale: image.scale, orientation: UIImageOrientation.Up)
XCPCaptureValue("newImage", value: newImage)
The result, in my playground's timeline:
Finally, note that if your images are large, going through pixel by pixel can be slow. If you can find a way to perform your image manipulation using Core Image or GPUImage, it'll be a lot faster. Failing that, using Objective-C and manually vectorizing it (using NEON intrinsics) may provide a big boost.
Ok, I think I have a solution that should work for you in Swift 2.
Credit goes to this answer for the UIColor extension below.
Since I needed an image to test this on I chose a slice (50 x 50 - top left corner) of your gravatar...
So the code below converts this:
To this:
This works for me in a playground - all you should have to do is copy and paste into a playground to see the result:
//: Playground - noun: a place where people can play
import UIKit
import XCPlayground

extension CALayer {
    func colorOfPoint(point: CGPoint) -> UIColor {
        var pixel: [CUnsignedChar] = [0, 0, 0, 0]
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(rawValue: CGImageAlphaInfo.PremultipliedLast.rawValue)
        let context = CGBitmapContextCreate(&pixel, 1, 1, 8, 4, colorSpace, bitmapInfo.rawValue)
        CGContextTranslateCTM(context, -point.x, -point.y)
        self.renderInContext(context!)
        let red: CGFloat = CGFloat(pixel[0]) / 255.0
        let green: CGFloat = CGFloat(pixel[1]) / 255.0
        let blue: CGFloat = CGFloat(pixel[2]) / 255.0
        let alpha: CGFloat = CGFloat(pixel[3]) / 255.0
        //println("point color - red:\(red) green:\(green) blue:\(blue)")
        let color = UIColor(red: red, green: green, blue: blue, alpha: alpha)
        return color
    }
}

extension UIColor {
    var components: (red: CGFloat, green: CGFloat, blue: CGFloat, alpha: CGFloat) {
        var r: CGFloat = 0
        var g: CGFloat = 0
        var b: CGFloat = 0
        var a: CGFloat = 0
        getRed(&r, green: &g, blue: &b, alpha: &a)
        return (r, g, b, a)
    }
}

//get an image we can work on
var imageFromURL = UIImage(data: NSData(contentsOfURL: NSURL(string: "https://www.gravatar.com/avatar/ba4178644a33a51e928ffd820269347c?s=328&d=identicon&r=PG&f=1")!)!)
//only use a small area of that image - 50 x 50 square
let imageSliceArea = CGRectMake(0, 0, 50, 50)
let imageSlice = CGImageCreateWithImageInRect(imageFromURL?.CGImage, imageSliceArea)
//we'll work on this image
var image = UIImage(CGImage: imageSlice!)
let imageView = UIImageView(image: image)
//test out the extension above on the point (0,0) - returns r 0.541 g 0.78 b 0.227 a 1.0
var pointColor = imageView.layer.colorOfPoint(CGPoint(x: 0, y: 0))

let imageRect = CGRectMake(0, 0, image.size.width, image.size.height)
UIGraphicsBeginImageContext(image.size)
let context = UIGraphicsGetCurrentContext()
CGContextSaveGState(context)
CGContextDrawImage(context, imageRect, image.CGImage)
for x in 0...Int(image.size.width) {
    for y in 0...Int(image.size.height) {
        var pointColor = imageView.layer.colorOfPoint(CGPoint(x: x, y: y))
        //I used my own creativity here - change this to whatever logic you want
        if y % 2 == 0 {
            CGContextSetRGBFillColor(context, pointColor.components.red, 0.5, 0.5, 1)
        } else {
            // color components are in 0...1, not 0...255
            CGContextSetRGBFillColor(context, 1.0, 0.5, 0.5, 1)
        }
        CGContextFillRect(context, CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
    }
}
CGContextRestoreGState(context)
image = UIGraphicsGetImageFromCurrentImageContext()
I hope this works for you. I had fun playing around with this!
This answer assumes you have a CGContext of the image created. An important part of the answer is rounding up the row offset to a multiple of 8 to ensure this works on any image size, which I haven't seen in other solutions online.
Swift 5
func colorAt(x: Int, y: Int) -> UIColor {
    let capacity = context.width * context.height
    let widthMultiple = 8
    let rowOffset = ((context.width + widthMultiple - 1) / widthMultiple) * widthMultiple // Round up to multiple of 8
    let data: UnsafeMutablePointer<UInt8> = context.data!.bindMemory(to: UInt8.self, capacity: capacity)
    let offset = 4 * ((y * rowOffset) + x)
    let red = data[offset + 2]
    let green = data[offset + 1]
    let blue = data[offset]
    let alpha = data[offset + 3]
    return UIColor(red: CGFloat(red) / 255.0, green: CGFloat(green) / 255.0, blue: CGFloat(blue) / 255.0, alpha: CGFloat(alpha) / 255.0)
}
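For completeness, a sketch of one way to create that context from a UIImage (an assumption about the setup, not part of the original answer; makeContext is a hypothetical helper). The byteOrder32Little + premultipliedFirst combination stores pixels as B, G, R, A in memory, which matches the offsets colorAt(x:y:) reads.
import UIKit

func makeContext(for image: UIImage) -> CGContext? {
    guard let cgImage = image.cgImage else { return nil }
    let widthMultiple = 8
    // Same round-up-to-8 row stride that colorAt(x:y:) assumes.
    let rowOffset = ((cgImage.width + widthMultiple - 1) / widthMultiple) * widthMultiple
    let bitmapInfo = CGBitmapInfo.byteOrder32Little.rawValue | CGImageAlphaInfo.premultipliedFirst.rawValue
    guard let context = CGContext(data: nil, width: cgImage.width, height: cgImage.height,
                                  bitsPerComponent: 8, bytesPerRow: 4 * rowOffset,
                                  space: CGColorSpaceCreateDeviceRGB(), bitmapInfo: bitmapInfo) else {
        return nil
    }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: cgImage.width, height: cgImage.height))
    return context
}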
