Get average color of UIImage - ios

I'm trying to do something similar to what Twitter and many other apps do: set the background to the average color of an image. The problem is that, with the array of images I have, the background color of the UIScrollView that contains the images gets set to the average color of the last image and never changes after that. I'm not sure why.
This is the code that I'm using to extract the average color (PS: I found it here)
import UIKit

extension UIImage {
    var averageColor: UIColor? {
        guard let inputImage = CIImage(image: self) else { return nil }
        let extentVector = CIVector(x: inputImage.extent.origin.x, y: inputImage.extent.origin.y, z: inputImage.extent.size.width, w: inputImage.extent.size.height)
        guard let filter = CIFilter(name: "CIAreaAverage", withInputParameters: [kCIInputImageKey: inputImage, kCIInputExtentKey: extentVector]) else { return nil }
        guard let outputImage = filter.outputImage else { return nil }
        var bitmap = [UInt8](repeating: 0, count: 4)
        let context = CIContext(options: [kCIContextWorkingColorSpace: kCFNull])
        context.render(outputImage, toBitmap: &bitmap, rowBytes: 4, bounds: CGRect(x: 0, y: 0, width: 1, height: 1), format: kCIFormatRGBA8, colorSpace: nil)
        return UIColor(red: CGFloat(bitmap[0]) / 255, green: CGFloat(bitmap[1]) / 255, blue: CGFloat(bitmap[2]) / 255, alpha: CGFloat(bitmap[3]) / 255)
    }
}
And here's the code for the function that's being called in the viewDidLoad():
func setScrollView() {
    for i in stride(from: 0, to: imagelist.count, by: 1) {
        var frame = CGRect.zero
        frame.origin.x = self.scrollView.frame.size.width * CGFloat(i)
        frame.origin.y = 0
        frame.size = self.scrollView.frame.size
        scrollView.isPagingEnabled = true

        let newUIImageView = UIImageView()
        let myImage: UIImage = UIImage(named: imagelist[i])!
        let bgColorFromImage = myImage.averageColor
        newUIImageView.image = myImage
        newUIImageView.frame = frame
        newUIImageView.contentMode = UIViewContentMode.scaleAspectFit
        scrollView.backgroundColor = bgColorFromImage // changes the color to the average color of the image
        scrollView.addSubview(newUIImageView)

        self.scrollView.contentSize = CGSize(width: self.scrollView.frame.size.width * CGFloat(imagelist.count), height: self.scrollView.frame.size.height)
        pageControl.addTarget(self, action: #selector(changePage), for: UIControlEvents.valueChanged)
    }
}

I figured it out. I needed to create an array of UIColors to store all of the colors of the images:
var colors = [UIColor]()
Then in setScrollView() append the color like so:
colors.append(myImage.averageColor!)
And lastly, in scrollViewDidEndDecelerating(_:), set the background like so:
scrollView.backgroundColor = colors[Int(pageNumber)]
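For completeness, here's a minimal sketch of that delegate method (assuming pageNumber is derived from the content offset, which is the usual approach for a paging scroll view):

func scrollViewDidEndDecelerating(_ scrollView: UIScrollView) {
    // Which page just settled, based on the horizontal offset.
    let pageNumber = round(scrollView.contentOffset.x / scrollView.frame.size.width)
    pageControl.currentPage = Int(pageNumber)
    scrollView.backgroundColor = colors[Int(pageNumber)] // per-page average color
}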

Related

CoreImageContext's CreateCGImage producing wrong CGRect

Code:
enum GradientDirection {
    case up
    case left
    case upLeft
    case upRight
}

extension SKTexture {
    convenience init(size: CGSize, color1: CIColor, color2: CIColor, direction: GradientDirection = .up) {
        let coreImageContext = CIContext(options: nil)
        let gradientFilter = CIFilter(name: "CILinearGradient")
        gradientFilter!.setDefaults()
        var startVector: CIVector
        var endVector: CIVector
        switch direction {
        case .up:
            startVector = CIVector(x: size.width/2, y: 0)
            endVector = CIVector(x: size.width/2, y: size.height)
        case .left:
            startVector = CIVector(x: size.width, y: size.height/2)
            endVector = CIVector(x: 0, y: size.height/2)
        case .upLeft:
            startVector = CIVector(x: size.width, y: 0)
            endVector = CIVector(x: 0, y: size.height)
        case .upRight:
            startVector = CIVector(x: 0, y: 0)
            endVector = CIVector(x: size.width, y: size.height)
        }
        gradientFilter!.setValue(startVector, forKey: "inputPoint0")
        gradientFilter!.setValue(endVector, forKey: "inputPoint1")
        gradientFilter!.setValue(color1, forKey: "inputColor0")
        gradientFilter!.setValue(color2, forKey: "inputColor1")
        let imgRect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        let cgimg = coreImageContext.createCGImage(gradientFilter!.outputImage!, from: imgRect)!
        print("cgimg: ", cgimg) // *** Observe this output: 103.0 width and height ***
        self.init(cgImage: cgimg)
    }
}
Calling Initializer:
// e.g. CGSize(width: 102.69999694824219, height: 102.69999694824219)
let textureSize = CGSize(width: self.frame.width, height: self.frame.height)
let shapeTexture = SKTexture(size: textureSize, color1: bottomColor, color2: topColor, direction: .upRight)
Passing width/height 102.69999694824219 produces a shapeTexture with width/height 103.
It seems like coreImageContext.createCGImage is rounding 102.69999694824219 up to 103.0.
This results in minor unexpected output. How can I bypass this rounding? Or is there any other method to generate a gradient image for nodes?
More Code:
class BubbleNode: SKShapeNode {
    private var backgroundNode: SKCropNode!
    var label: SKLabelNode!
    private var state: BubbleNodeState!
    private let BubbleAnimationDuration = 0.2
    private let BubbleIconPercentualInset = 0.4

    var model: BubbleModel! {
        didSet {
            self.label.text = model.name
        }
    }

    override init() {
        super.init()
    }

    convenience init(withRadius radius: CGFloat) {
        // Delegate once: a convenience initializer cannot call self.init twice.
        self.init(circleOfRadius: radius)
        state = .normal
        self.configure()
    }

    private func configure() {
        self.name = "mybubble"
        physicsBody = SKPhysicsBody(circleOfRadius: 4 + self.path!.boundingBox.size.width / 2.0)
        physicsBody!.isDynamic = true
        physicsBody!.affectedByGravity = false
        physicsBody!.allowsRotation = false
        physicsBody!.mass = 0.3
        physicsBody!.friction = 0.0
        physicsBody!.linearDamping = 3

        backgroundNode = SKCropNode()
        backgroundNode.isUserInteractionEnabled = false
        backgroundNode.position = CGPoint.zero
        backgroundNode.zPosition = 0
        self.addChild(backgroundNode)

        label = SKLabelNode(fontNamed: "")
        label.preferredMaxLayoutWidth = self.frame.size.width - 16
        label.numberOfLines = 0
        label.position = CGPoint.zero
        label.fontColor = .white
        label.fontSize = 10
        label.isUserInteractionEnabled = false
        label.verticalAlignmentMode = .center
        label.horizontalAlignmentMode = .center
        label.zPosition = 2
        self.addChild(label)
    }

    func addGradientNode(withRadius radius: CGFloat) {
        let gradientNode = SKShapeNode(path: self.path!)
        gradientNode.zPosition = 1
        gradientNode.fillColor = .white
        gradientNode.strokeColor = .clear

        let bottomColor = CIColor(red: 0.922, green: 0.256, blue: 0.523, alpha: 1)
        let topColor = CIColor(red: 0.961, green: 0.364, blue: 0.155, alpha: 1)
        let textureSize = CGSize(width: self.frame.width, height: self.frame.height)
        let shapeTexture = SKTexture(size: textureSize, color1: bottomColor, color2: topColor, direction: .upRight)
        gradientNode.fillTexture = shapeTexture
        self.addChild(gradientNode)

        print("path: ", self.path!)
        print("textureSize: ", textureSize)
        print("shapeTexture: ", shapeTexture)
    }
}
Any image/texture will always have integer sizes since there are no sub-pixels in memory. So frameworks like Core Image will always round the given size up to the next integer value.
In contrast, the frame of a view is given in points, which need to be multiplied by the view's contentScaleFactor to get the actual pixel size (that you should use to generate your gradient). UIKit also allows for sub-pixel frame sizes, but under the hood, it will also round up when rendering the views to the screen.
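To illustrate, here is a minimal sketch (assuming the scene is rendered at the main screen's scale) that converts the node's point size to whole pixels before generating the gradient, so Core Image has nothing to round:

let scale = UIScreen.main.scale // assumption: scene rendered at screen scale
let pixelSize = CGSize(width: ceil(self.frame.width * scale),
                       height: ceil(self.frame.height * scale))
// Generate at integral pixel dimensions; SpriteKit scales it back onto the node.
let shapeTexture = SKTexture(size: pixelSize, color1: bottomColor, color2: topColor, direction: .upRight)
gradientNode.fillTexture = shapeTexture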

CALayer live blur inside AVVideoCompositionCoreAnimationTool

I am trying to come up with a way to perform a live blur during an AVVideoCompositionCoreAnimationTool export. I have tried using a UIVisualEffectView and stealing the layer of the underlying view. That works in preview, but as soon as you use it inside AVVideoCompositionCoreAnimationTool the layer renders black. So I started building a CALayer that does the blur itself, but it is not updating often enough. What can I do to make it draw more often, or what might work for combining AVVideoCompositionCoreAnimationTool with a live blur on iOS? Here is the layer I built.
class CABlurLayer: CALayer {
    let maxBlurRadius: CGFloat = 20
    var currentImageIndex: Float = 0
    var blur: Int = 10
    var context: CGContext?
    var link: Timer?
    var snap: UIImage?
    var targetLayer: CALayer?

    override init() {
        super.init()
    }

    convenience init(targetLayer: CALayer?) {
        self.init()
        self.targetLayer = targetLayer
        self.drawsAsynchronously = true
        if let tl = targetLayer {
            self.masksToBounds = tl.masksToBounds
        }
        updateSnapShots()
        link = Timer.scheduledTimer(timeInterval: 1/60, target: self, selector: #selector(updateBlur), userInfo: nil, repeats: true)
    }

    @objc func updateBlur() {
        updateSnapShots()
        DispatchQueue.main.async {
            self.setNeedsDisplay()
        }
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    func updateSnapShots() {
        guard let tl = targetLayer else { return }
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, 0)
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        tl.render(in: ctx)
        let snapshot = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext() // balance the begin call to avoid leaking contexts
        snap = snapshot?.applyBlurWithRadius(CGFloat(blur), tintColor: UIColor().withAlphaComponent(0), saturationDeltaFactor: 1.4)
    }

    override func draw(in ctx: CGContext) {
        guard let blurredImage = snap,
              let tl = targetLayer else { return }
        var origin = tl.frame.origin
        if let pres = tl.presentation() {
            origin = pres.frame.origin
        }
        UIGraphicsPushContext(ctx)
        blurredImage.draw(at: origin)
        UIGraphicsPopContext()
    }
}
import UIKit
import PlaygroundSupport

class MyViewController: UIViewController {
    override func loadView() {
        let view = UIView()
        view.backgroundColor = .white
        self.view = view
        let ur = URL(string: "https://images.pexels.com/photos/457882/pexels-photo-457882.jpeg?auto=compress&cs=tinysrgb&dpr=2&w=500")
        URLSession.shared.dataTask(with: ur!) { (dt, response, error) in
            if let data = dt {
                print("we have a response")
                let img = UIImage(data: data)
                DispatchQueue.main.async {
                    let layer = CALayer()
                    layer.frame = CGRect(x: 0, y: 0, width: 500, height: 500)
                    view.layer.addSublayer(layer)

                    let imageLayer = CALayer()
                    imageLayer.masksToBounds = true
                    imageLayer.frame = CGRect(x: 0, y: 150, width: 400, height: 300)
                    imageLayer.contentsGravity = .resizeAspectFill
                    imageLayer.contents = img?.cgImage
                    layer.addSublayer(imageLayer)

                    let blur = CABlurLayer(targetLayer: imageLayer)
                    blur.frame = layer.bounds
                    layer.addSublayer(blur)
                    blur.blur = 20

                    let pos = CABasicAnimation(keyPath: "position.x")
                    pos.toValue = imageLayer.position.x
                    pos.fromValue = imageLayer.position.x - 100
                    pos.duration = 2
                    pos.repeatCount = 100
                    pos.autoreverses = true
                    imageLayer.add(pos, forKey: nil)
                }
            }
        }.resume()
    }
}

// Present the view controller in the Live View window
PlaygroundPage.current.liveView = MyViewController()
PlaygroundPage.current.needsIndefiniteExecution = true
UIImage Extensions
import UIKit
import Accelerate

public extension UIImage {

    public func applyLightEffect() -> UIImage? {
        return applyBlurWithRadius(30, tintColor: UIColor(white: 1.0, alpha: 0.3), saturationDeltaFactor: 1.8)
    }

    public func applyExtraLightEffect() -> UIImage? {
        return applyBlurWithRadius(20, tintColor: UIColor(white: 0.97, alpha: 0.82), saturationDeltaFactor: 1.8)
    }

    public func applyDarkEffect() -> UIImage? {
        return applyBlurWithRadius(20, tintColor: UIColor(white: 0.11, alpha: 0.73), saturationDeltaFactor: 1.8)
    }

    public func applyTintEffectWithColor(_ tintColor: UIColor) -> UIImage? {
        let effectColorAlpha: CGFloat = 0.6
        var effectColor = tintColor
        let componentCount = tintColor.cgColor.numberOfComponents
        if componentCount == 2 {
            var b: CGFloat = 0
            if tintColor.getWhite(&b, alpha: nil) {
                effectColor = UIColor(white: b, alpha: effectColorAlpha)
            }
        } else {
            var red: CGFloat = 0
            var green: CGFloat = 0
            var blue: CGFloat = 0
            if tintColor.getRed(&red, green: &green, blue: &blue, alpha: nil) {
                effectColor = UIColor(red: red, green: green, blue: blue, alpha: effectColorAlpha)
            }
        }
        return applyBlurWithRadius(10, tintColor: effectColor, saturationDeltaFactor: -1.0, maskImage: nil)
    }

    public func applyBlurWithRadius(_ blurRadius: CGFloat, tintColor: UIColor?, saturationDeltaFactor: CGFloat, maskImage: UIImage? = nil) -> UIImage? {
        // Check pre-conditions.
        if (size.width < 1 || size.height < 1) {
            print("*** error: invalid size: \(size.width) x \(size.height). Both dimensions must be >= 1: \(self)")
            return nil
        }
        guard let cgImage = self.cgImage else {
            print("*** error: image must be backed by a CGImage: \(self)")
            return nil
        }
        if maskImage != nil && maskImage!.cgImage == nil {
            print("*** error: maskImage must be backed by a CGImage: \(String(describing: maskImage))")
            return nil
        }

        let __FLT_EPSILON__ = CGFloat(Float.ulpOfOne)
        let screenScale = UIScreen.main.scale
        let imageRect = CGRect(origin: CGPoint.zero, size: size)
        var effectImage = self
        let hasBlur = blurRadius > __FLT_EPSILON__
        let hasSaturationChange = fabs(saturationDeltaFactor - 1.0) > __FLT_EPSILON__

        if hasBlur || hasSaturationChange {
            func createEffectBuffer(_ context: CGContext) -> vImage_Buffer {
                let data = context.data
                let width = vImagePixelCount(context.width)
                let height = vImagePixelCount(context.height)
                let rowBytes = context.bytesPerRow
                return vImage_Buffer(data: data, height: height, width: width, rowBytes: rowBytes)
            }

            UIGraphicsBeginImageContextWithOptions(size, false, screenScale)
            guard let effectInContext = UIGraphicsGetCurrentContext() else { return nil }
            effectInContext.scaleBy(x: 1.0, y: -1.0)
            effectInContext.translateBy(x: 0, y: -size.height)
            effectInContext.draw(cgImage, in: imageRect)
            var effectInBuffer = createEffectBuffer(effectInContext)

            UIGraphicsBeginImageContextWithOptions(size, false, screenScale)
            guard let effectOutContext = UIGraphicsGetCurrentContext() else { return nil }
            var effectOutBuffer = createEffectBuffer(effectOutContext)

            if hasBlur {
                // A description of how to compute the box kernel width from the Gaussian
                // radius (aka standard deviation) appears in the SVG spec:
                // http://www.w3.org/TR/SVG/filters.html#feGaussianBlurElement
                //
                // For larger values of 's' (s >= 2.0), an approximation can be used: Three
                // successive box-blurs build a piece-wise quadratic convolution kernel, which
                // approximates the Gaussian kernel to within roughly 3%.
                //
                // let d = floor(s * 3*sqrt(2*pi)/4 + 0.5)
                //
                // ... if d is odd, use three box-blurs of size 'd', centered on the output pixel.
                //
                let inputRadius = blurRadius * screenScale
                // Parenthesized to match the formula above: the + 0.5 is added
                // outside the multiplication, not inside it.
                let d = floor(inputRadius * 3.0 * sqrt(2 * CGFloat.pi) / 4 + 0.5)
                var radius = UInt32(d)
                if radius % 2 != 1 {
                    radius += 1 // force radius to be odd so that the three box-blur methodology works.
                }
                let imageEdgeExtendFlags = vImage_Flags(kvImageEdgeExtend)
                vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer, nil, 0, 0, radius, radius, nil, imageEdgeExtendFlags)
                vImageBoxConvolve_ARGB8888(&effectOutBuffer, &effectInBuffer, nil, 0, 0, radius, radius, nil, imageEdgeExtendFlags)
                vImageBoxConvolve_ARGB8888(&effectInBuffer, &effectOutBuffer, nil, 0, 0, radius, radius, nil, imageEdgeExtendFlags)
            }

            var effectImageBuffersAreSwapped = false
            if hasSaturationChange {
                let s: CGFloat = saturationDeltaFactor
                let floatingPointSaturationMatrix: [CGFloat] = [
                    0.0722 + 0.9278 * s, 0.0722 - 0.0722 * s, 0.0722 - 0.0722 * s, 0,
                    0.7152 - 0.7152 * s, 0.7152 + 0.2848 * s, 0.7152 - 0.7152 * s, 0,
                    0.2126 - 0.2126 * s, 0.2126 - 0.2126 * s, 0.2126 + 0.7873 * s, 0,
                    0, 0, 0, 1
                ]
                let divisor: CGFloat = 256
                let matrixSize = floatingPointSaturationMatrix.count
                var saturationMatrix = [Int16](repeating: 0, count: matrixSize)
                for i: Int in 0 ..< matrixSize {
                    saturationMatrix[i] = Int16(round(floatingPointSaturationMatrix[i] * divisor))
                }
                if hasBlur {
                    vImageMatrixMultiply_ARGB8888(&effectOutBuffer, &effectInBuffer, saturationMatrix, Int32(divisor), nil, nil, vImage_Flags(kvImageNoFlags))
                    effectImageBuffersAreSwapped = true
                } else {
                    vImageMatrixMultiply_ARGB8888(&effectInBuffer, &effectOutBuffer, saturationMatrix, Int32(divisor), nil, nil, vImage_Flags(kvImageNoFlags))
                }
            }

            if !effectImageBuffersAreSwapped {
                effectImage = UIGraphicsGetImageFromCurrentImageContext()!
            }
            UIGraphicsEndImageContext()

            if effectImageBuffersAreSwapped {
                effectImage = UIGraphicsGetImageFromCurrentImageContext()!
            }
            UIGraphicsEndImageContext()
        }

        // Set up output context.
        UIGraphicsBeginImageContextWithOptions(size, false, screenScale)
        guard let outputContext = UIGraphicsGetCurrentContext() else { return nil }
        outputContext.scaleBy(x: 1.0, y: -1.0)
        outputContext.translateBy(x: 0, y: -size.height)

        // Draw base image.
        outputContext.draw(cgImage, in: imageRect)

        // Draw effect image.
        if hasBlur {
            outputContext.saveGState()
            if let maskCGImage = maskImage?.cgImage {
                outputContext.clip(to: imageRect, mask: maskCGImage)
            }
            outputContext.draw(effectImage.cgImage!, in: imageRect)
            outputContext.restoreGState()
        }

        // Add in color tint.
        if let color = tintColor {
            outputContext.saveGState()
            outputContext.setFillColor(color.cgColor)
            outputContext.fill(imageRect)
            outputContext.restoreGState()
        }

        // Output image is ready.
        let outputImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return outputImage
    }

    public func blurImage() -> UIImage? {
        return self.applyBlurWithRadius(20, tintColor: UIColor().withAlphaComponent(0), saturationDeltaFactor: 1.4)
    }
}
This has been around a while, so I thought I would share my solution. I stole the CABackdropLayer from a UIVisualEffectView to achieve a live blur. You can init a layer of this type, but it is private. However, since a public view uses this layer and I am just taking it from that view, I am not accessing a private API in a super direct way.
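Here's a rough sketch of that idea. It is an illustration under assumptions, not a supported API: it fishes the private backdrop layer out of a stock UIVisualEffectView by class name, so it can break on any iOS release (hostLayer is a placeholder for whatever layer you are compositing into):

let effectView = UIVisualEffectView(effect: UIBlurEffect(style: .regular))
effectView.frame = CGRect(x: 0, y: 0, width: 400, height: 300)
// Look for a subview layer whose class name mentions "Backdrop"; the subview
// layout is an implementation detail and may change between iOS versions.
let backdropLayer = effectView.subviews
    .map { $0.layer }
    .first { String(describing: type(of: $0)).contains("Backdrop") }
if let backdrop = backdropLayer {
    hostLayer.addSublayer(backdrop) // re-parent the live-blurring layer
}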

UIView bounds.applying but with rotation

I'd like to create a dashed border around a view that can be moved/rotated/scaled.
Here's my code:
func addBorder() {
    let f = selectedObject.bounds.applying(selectedObject.transform)
    borderView.backgroundColor = UIColor(red: 1, green: 0, blue: 0, alpha: 0.5) // just for testing
    borderView.frame = f
    borderView.center = selectedObject.center
    borderView.transform = CGAffineTransform(translationX: selectedObject.transform.tx, y: selectedObject.transform.ty)
    removeBorder() // remove old border
    let f2 = CGRect(x: 0, y: 0, width: borderView.frame.width, height: borderView.frame.height)
    let dashedBorder = CAShapeLayer()
    dashedBorder.strokeColor = UIColor.black.cgColor
    dashedBorder.lineDashPattern = [2, 2]
    dashedBorder.frame = f2
    dashedBorder.fillColor = nil
    dashedBorder.path = UIBezierPath(rect: f2).cgPath
    dashedBorder.name = "border"
    borderView.layer.addSublayer(dashedBorder)
}
And it looks like this:
It's not bad, but I want the border to be rotated as well; otherwise it may be misleading for the user, since the touch area is only on the image.
I've tried to apply rotation to the transform:
func addBorder() {
    let f = selectedObject.bounds.applying(selectedObject.transform)
    borderView.backgroundColor = UIColor(red: 1, green: 0, blue: 0, alpha: 0.5) // just for testing
    borderView.frame = f
    borderView.center = selectedObject.center
    let rotation = atan2(selectedObject.transform.b, selectedObject.transform.a)
    borderView.transform = CGAffineTransform(rotationAngle: rotation).translatedBy(x: selectedObject.transform.tx, y: selectedObject.transform.ty)
    removeBorder() // remove old border
    let f2 = CGRect(x: 0, y: 0, width: borderView.frame.width, height: borderView.frame.height)
    let dashedBorder = CAShapeLayer()
    dashedBorder.strokeColor = UIColor.black.cgColor
    dashedBorder.lineDashPattern = [2, 2]
    dashedBorder.frame = f2
    dashedBorder.fillColor = nil
    dashedBorder.path = UIBezierPath(rect: f2).cgPath
    dashedBorder.name = "border"
    borderView.layer.addSublayer(dashedBorder)
}
But after rotating it looks like this:
How can I fix this?
Here is a sample based on your code that should do it:
// initial transforms
selectedObject.transform = CGAffineTransform.init(rotationAngle: .pi / 4).translatedBy(x: 150, y: 15)

func addBorder() {
    let borderView = UIView.init(frame: selectedObject.bounds)
    self.view.addSubview(borderView)
    borderView.backgroundColor = UIColor(red: 1, green: 0, blue: 0, alpha: 0.5) // just for testing
    borderView.center = selectedObject.center
    borderView.transform = selectedObject.transform
    removeBorder() // remove old border
    let dashedBorder = CAShapeLayer()
    dashedBorder.strokeColor = UIColor.black.cgColor
    dashedBorder.lineDashPattern = [2, 2]
    dashedBorder.fillColor = nil
    dashedBorder.path = UIBezierPath(rect: borderView.bounds).cgPath
    dashedBorder.name = "border"
    borderView.layer.addSublayer(dashedBorder)
}
Here is the solution for the problem:
func addBorder() {
    borderView.backgroundColor = UIColor(red: 1, green: 0, blue: 0, alpha: 0.5) // just for testing
    let degrees: CGFloat = 20.0 // the rotation in degrees
    let radians: CGFloat = degrees * (.pi / 180)
    borderView.transform = CGAffineTransform(rotationAngle: radians)
    removeBorder()
    let dashedBorder = CAShapeLayer()
    dashedBorder.strokeColor = UIColor.black.cgColor
    dashedBorder.lineDashPattern = [2, 2]
    dashedBorder.frame = borderView.bounds
    dashedBorder.fillColor = nil
    dashedBorder.path = UIBezierPath(roundedRect: borderView.bounds, cornerRadius: 0).cgPath
    dashedBorder.name = "border"
    borderView.layer.addSublayer(dashedBorder)
}
The above code is tested in Xcode 10 with Swift 4.2
Even though I've accepted the answer because it helped me understand the issue, I'm posting the final answer, because there's more to it. And I think it can be helpful for someone else, because I couldn't find this solution on Stack Overflow or anywhere else.
The idea is to create a borderView with the same bounds as selectedObject. This was the solution from @Incredible_dev; however, there was one issue: the line itself stretches as the borderView is scaled in any direction, and I want the line width to stay constant with the border simply surrounding selectedObject. So I multiply selectedObject's bounds by the scale extracted from selectedObject.transform, then copy the translation and rotation from selectedObject.
Here's the final code:
var borderView: UIView!
var selectedObject: UIView?

extension CGAffineTransform { // helper extension
    func getScale() -> CGFloat {
        return (self.a * self.a + self.c * self.c).squareRoot()
    }
    func getRotation() -> CGFloat {
        return atan2(self.b, self.a)
    }
}

func removeBorder() { // remove the older border
    if borderView != nil {
        borderView.removeFromSuperview()
    }
}

func addBorder() {
    guard let selectedObject = selectedObject else { return }
    removeBorder() // remove old border
    let t = selectedObject.transform
    let s = t.getScale()
    let r = t.getRotation()
    borderView = UIView(frame: CGRect(x: 0, y: 0, width: selectedObject.bounds.width * s, height: selectedObject.bounds.height * s)) // multiply bounds by selectedObject's scale
    dividerImageView.addSubview(borderView) // add borderView to the "scene"
    borderView.transform = CGAffineTransform(translationX: t.tx, y: t.ty).rotated(by: r) // copy translation and rotation; order is important
    borderView.center = selectedObject.center
    let dashedBorder = CAShapeLayer() // create a 2-point-wide dashed line
    dashedBorder.lineWidth = 2
    dashedBorder.strokeColor = UIColor.black.cgColor
    dashedBorder.lineDashPattern = [2, 2]
    dashedBorder.fillColor = nil
    dashedBorder.path = UIBezierPath(rect: borderView.bounds).cgPath
    borderView.layer.addSublayer(dashedBorder)
}
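For illustration, a hypothetical call site (the transform values are made up; dividerImageView hosts the scene, as in the code above):

// Give the object a combined translate/rotate/scale transform, then
// rebuild the border so it tracks the object's size and rotation.
selectedObject?.transform = CGAffineTransform(translationX: 40, y: 20)
    .rotated(by: .pi / 6)
    .scaledBy(x: 1.5, y: 1.5)
addBorder() // the dashed line stays 2 points wide regardless of scale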

how to add colored border to uiimage in swift

It is pretty easy to add a border to a UIImageView using layers (borderWidth, borderColor, etc.). Is there any possibility to add the border to the image itself, not to the image view? Does somebody know?
Update:
I tried to follow the suggestion below and used an extension. Thank you for that, but I did not get the desired result. Here is my code. What is wrong?
import UIKit

class ViewController: UIViewController {
    var imageView: UIImageView!
    var sizeW = CGFloat()
    var sizeH = CGFloat()

    override func viewDidLoad() {
        super.viewDidLoad()
        sizeW = view.frame.width
        sizeH = view.frame.height
        setImage()
    }

    func setImage() {
        // add image view
        imageView = UIImageView(frame: CGRect(x: 0, y: 0, width: sizeW/2, height: sizeH/2))
        imageView.center = view.center
        imageView.tintColor = UIColor.orange
        imageView.contentMode = UIViewContentMode.scaleAspectFit
        let imgOriginal = UIImage(named: "plum")!.withRenderingMode(.alwaysTemplate)
        let borderImage = imgOriginal.imageWithBorder(width: 2, color: UIColor.blue)
        imageView.image = borderImage
        view.addSubview(imageView)
    }
}
extension UIImage {
    func imageWithBorder(width: CGFloat, color: UIColor) -> UIImage? {
        let square = CGSize(width: min(size.width, size.height) + width * 2, height: min(size.width, size.height) + width * 2)
        let imageView = UIImageView(frame: CGRect(origin: CGPoint(x: 0, y: 0), size: square))
        imageView.contentMode = .center
        imageView.image = self
        imageView.layer.borderWidth = width
        imageView.layer.borderColor = color.cgColor
        UIGraphicsBeginImageContextWithOptions(imageView.bounds.size, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        imageView.layer.render(in: context)
        let result = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return result
    }
}
The second image with the red border is more or less what I need:
Strongly inspired by @herme5, refactored into more compact Swift 5/iOS 12+ code as follows (the vertical flip issue is fixed as well):
public extension UIImage {
    /**
     Returns the flat colorized version of the image, or self when something went wrong.
     - Parameters:
       - color: The color to use. Defaults to `UIColor.white`.
     - Returns: The flat colorized version of the image, or self if something went wrong.
     */
    func colorized(with color: UIColor = .white) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(size, false, scale)
        defer {
            UIGraphicsEndImageContext()
        }
        guard let context = UIGraphicsGetCurrentContext(), let cgImage = cgImage else { return self }
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        color.setFill()
        context.translateBy(x: 0, y: size.height)
        context.scaleBy(x: 1.0, y: -1.0)
        context.clip(to: rect, mask: cgImage)
        context.fill(rect)
        guard let colored = UIGraphicsGetImageFromCurrentImageContext() else { return self }
        return colored
    }

    /**
     Returns the stroked version of the transparent image, with the given stroke color and thickness.
     - Parameters:
       - color: The color to use. Defaults to `UIColor.white`.
       - thickness: The thickness of the border. Defaults to `2`.
       - quality: The number of degrees (out of 360) between redraws: the smaller the better, but the slower. Defaults to `10`.
     - Returns: The stroked version of the image, or self if something went wrong.
     */
    func stroked(with color: UIColor = .white, thickness: CGFloat = 2, quality: CGFloat = 10) -> UIImage {
        guard let cgImage = cgImage else { return self }

        // Colorize the stroke image to reflect the border color.
        let strokeImage = colorized(with: color)
        guard let strokeCGImage = strokeImage.cgImage else { return self }

        /// Rendering quality of the stroke.
        let step = quality == 0 ? 10 : abs(quality)
        let oldRect = CGRect(x: thickness, y: thickness, width: size.width, height: size.height).integral
        let newSize = CGSize(width: size.width + 2 * thickness, height: size.height + 2 * thickness)
        let translationVector = CGPoint(x: thickness, y: 0)

        UIGraphicsBeginImageContextWithOptions(newSize, false, scale)
        guard let context = UIGraphicsGetCurrentContext() else { return self }
        defer {
            UIGraphicsEndImageContext()
        }
        context.translateBy(x: 0, y: newSize.height)
        context.scaleBy(x: 1.0, y: -1.0)
        context.interpolationQuality = .high

        for angle: CGFloat in stride(from: 0, to: 360, by: step) {
            let vector = translationVector.rotated(around: .zero, byDegrees: angle)
            let transform = CGAffineTransform(translationX: vector.x, y: vector.y)
            context.concatenate(transform)
            context.draw(strokeCGImage, in: oldRect)
            let resetTransform = CGAffineTransform(translationX: -vector.x, y: -vector.y)
            context.concatenate(resetTransform)
        }

        context.draw(cgImage, in: oldRect)
        guard let stroked = UIGraphicsGetImageFromCurrentImageContext() else { return self }
        return stroked
    }
}
extension CGPoint {
    /**
     Rotates the point around the center `origin` by `byDegrees` degrees along the Z axis.
     - Parameters:
       - origin: The center of the rotation.
       - byDegrees: Amount of degrees to rotate around the Z axis.
     - Returns: The rotated point.
     */
    func rotated(around origin: CGPoint, byDegrees: CGFloat) -> CGPoint {
        let dx = x - origin.x
        let dy = y - origin.y
        let radius = sqrt(dx * dx + dy * dy)
        let azimuth = atan2(dy, dx) // in radians
        let newAzimuth = azimuth + byDegrees * .pi / 180.0 // to radians
        let x = origin.x + radius * cos(newAzimuth)
        let y = origin.y + radius * sin(newAzimuth)
        return CGPoint(x: x, y: y)
    }
}
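For reference, a usage sketch (the image name and the image view are assumptions):

// Stroke a transparent-background PNG with a 4-point blue border.
let plum = UIImage(named: "plum")
imageView.image = plum?.stroked(with: .blue, thickness: 4, quality: 10)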
Here is a UIImage extension I wrote in Swift 4. As IOSDealBreaker said, this is all about image processing, and some particular cases may occur. You should have a PNG image with a transparent background, and you may need to manage the size if the result is larger than the original.
First, get a colorized "shade" version of your image.
Then draw and redraw the shade image all around a given origin point (in our case, around (0,0), at a distance equal to the border thickness).
Draw your source image at the origin point so that it appears in the foreground.
You may have to enlarge your image if the borders go outside the original rect.
My method uses a lot of util methods and class extensions. Here is some maths to rotate a vector (which is actually a point) around another point: Rotating a CGPoint around another CGPoint
extension CGPoint {
    func rotated(around origin: CGPoint, byDegrees: CGFloat) -> CGPoint {
        let dx = self.x - origin.x
        let dy = self.y - origin.y
        let radius = sqrt(dx * dx + dy * dy)
        let azimuth = atan2(dy, dx) // in radians
        let newAzimuth = azimuth + (byDegrees * CGFloat.pi / 180.0) // convert it to radians
        let x = origin.x + radius * cos(newAzimuth)
        let y = origin.y + radius * sin(newAzimuth)
        return CGPoint(x: x, y: y)
    }
}
I wrote a custom CIFilter to colorize an image that has a transparent background: Colorize a UIImage in Swift
class ColorFilter: CIFilter {
    var inputImage: CIImage?
    var inputColor: CIColor?

    private let kernel: CIColorKernel = {
        let kernelString =
        """
        kernel vec4 colorize(__sample pixel, vec4 color) {
            pixel.rgb = pixel.a * color.rgb;
            pixel.a *= color.a;
            return pixel;
        }
        """
        return CIColorKernel(source: kernelString)!
    }()

    override var outputImage: CIImage? {
        guard let inputImage = inputImage, let inputColor = inputColor else { return nil }
        let inputs = [inputImage, inputColor] as [Any]
        return kernel.apply(extent: inputImage.extent, arguments: inputs)
    }
}
extension UIImage {
    func colorized(with color: UIColor) -> UIImage {
        guard let cgInput = self.cgImage else {
            return self
        }
        let colorFilter = ColorFilter()
        colorFilter.inputImage = CIImage(cgImage: cgInput)
        colorFilter.inputColor = CIColor(color: color)
        if let ciOutputImage = colorFilter.outputImage {
            let context = CIContext(options: nil)
            let cgImg = context.createCGImage(ciOutputImage, from: ciOutputImage.extent)
            return UIImage(cgImage: cgImg!, scale: self.scale, orientation: self.imageOrientation).alpha(color.rgba.alpha).withRenderingMode(self.renderingMode)
        } else {
            return self
        }
    }
}
At this point you should have everything to make this work:
extension UIImage {
    func stroked(with color: UIColor, size: CGFloat) -> UIImage {
        let strokeImage = self.colorized(with: color)
        let oldRect = CGRect(x: size, y: size, width: self.size.width, height: self.size.height).integral
        let newSize = CGSize(width: self.size.width + (2 * size), height: self.size.height + (2 * size))
        let translationVector = CGPoint(x: size, y: 0)
        UIGraphicsBeginImageContextWithOptions(newSize, false, self.scale)
        if let context = UIGraphicsGetCurrentContext() {
            context.interpolationQuality = .high
            let step = 10 // reduce the step to increase quality
            for angle in stride(from: 0, to: 360, by: step) {
                let vector = translationVector.rotated(around: .zero, byDegrees: CGFloat(angle))
                let transform = CGAffineTransform(translationX: vector.x, y: vector.y)
                context.concatenate(transform)
                context.draw(strokeImage.cgImage!, in: oldRect)
                let resetTransform = CGAffineTransform(translationX: -vector.x, y: -vector.y)
                context.concatenate(resetTransform)
            }
            context.draw(self.cgImage!, in: oldRect)
            let newImage = UIImage(cgImage: context.makeImage()!, scale: self.scale, orientation: self.imageOrientation)
            UIGraphicsEndImageContext()
            return newImage.withRenderingMode(self.renderingMode)
        }
        UIGraphicsEndImageContext()
        return self
    }
}
Adding borders to images belongs to the image-processing area of iOS. It's not as easy as adding a border to a UIView; it's pretty deep. But if you're willing to go the distance, here is a library and a hint for the journey:
https://github.com/BradLarson/GPUImage
try using GPUImageThresholdEdgeDetectionFilter
or try OpenCV https://docs.opencv.org/2.4/doc/tutorials/ios/image_manipulation/image_manipulation.html
Use this simple extension for UIImage
extension UIImage {
    func outline() -> UIImage? {
        UIGraphicsBeginImageContext(size)
        let rect = CGRect(x: 0, y: 0, width: size.width, height: size.height)
        self.draw(in: rect, blendMode: .normal, alpha: 1.0)
        let context = UIGraphicsGetCurrentContext()
        context?.setStrokeColor(red: 1.0, green: 0.5, blue: 1.0, alpha: 1.0)
        context?.setLineWidth(5.0)
        context?.stroke(rect)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return newImage
    }
}
It will give you an image with a pink border.
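Usage is a one-liner (the image name and the image view are assumptions):

imageView.image = UIImage(named: "plum")?.outline() // 5pt pink frame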

Filling Undefined forms with Gradient color SWIFT

I am new to programming and I have no idea how I can fill an undefined geometrical form with a gradient color...
I managed to do it with a plain color, like this:
func fillRegion(pixelX: Int, pixelY: Int, withColor color: UIColor) {
    var red: CGFloat = 0, green: CGFloat = 0, blue: CGFloat = 0, alpha: CGFloat = 0
    color.getRed(&red, green: &green, blue: &blue, alpha: &alpha)
    var newColor = (UInt32)(alpha*255)<<24 | (UInt32)(red*255)<<16 | (UInt32)(green*255)<<8 | (UInt32)(blue*255)<<0
    let pixelColor = regionsData.advanced(by: (pixelY * imageHeight) + pixelX).pointee
    if pixelColor == blackColor { return }
    var pointerRegionsData: UnsafeMutablePointer<UInt32> = regionsData
    var pointerImageData: UnsafeMutablePointer<UInt32> = imageData
    var pixelsChanged = false
    for i in 0...(imageHeight * imageHeight - 1) {
        if pointerRegionsData.pointee == pixelColor {
            pointerImageData = imageData.advanced(by: i)
            if pointerImageData.pointee != newColor {
                // newColor = newColor + 1
                pointerImageData.pointee = newColor
                pixelsChanged = true
            }
        }
        pointerRegionsData = pointerRegionsData.successor()
    }
    if pixelsChanged {
        self.image = UIImage(cgImage: imageContext.makeImage()!)
        DispatchQueue.main.async {
            CATransaction.setDisableActions(true)
            self.layer.contents = self.image.cgImage
            self.onImageDraw?(self.image)
        }
        self.playTapSound()
    }
}
It fills the region pixel by pixel (ignoring the black outline). Any ideas how to do the same with a gradient color? Thanks!
You can make a gradient layer and apply an image or a shape layer as its mask. Here is a playground.
import PlaygroundSupport
import UIKit

class V: UIView {
    private lazy var gradientLayer: CAGradientLayer = {
        let gradientLayer = CAGradientLayer()
        gradientLayer.colors = [UIColor.red.cgColor,
                                UIColor.purple.cgColor,
                                UIColor.blue.cgColor,
                                UIColor.white.cgColor]
        gradientLayer.locations = [0, 0.3, 0.9, 1]
        gradientLayer.startPoint = CGPoint(x: 0, y: 0)
        gradientLayer.endPoint = CGPoint(x: 0, y: 1)
        gradientLayer.mask = self.strokeLayer
        self.layer.addSublayer(gradientLayer)
        return gradientLayer
    }()

    private lazy var strokeLayer: CAShapeLayer = {
        let strokeLayer = CAShapeLayer()
        strokeLayer.path = UIBezierPath(ovalIn: CGRect(x: 0, y: 0, width: 100, height: 100)).cgPath
        return strokeLayer
    }()

    override func layoutSubviews() {
        super.layoutSubviews()
        strokeLayer.path = UIBezierPath(ovalIn: bounds).cgPath
        gradientLayer.frame = bounds
        layer.addSublayer(gradientLayer)
    }
}

let v = V(frame: CGRect(x: 0, y: 0, width: 200, height: 200))
PlaygroundPage.current.liveView = v
I'm not 100% sure I understand the question, but it seems like you want to fill an arbitrary shape with a gradient, right? If so, there are a couple of ways to do that, but the easiest is to make a gradient that's the same size as the boundary of the shape and then apply that as its fill. I'm typing this on my PC, so there may be syntax errors, but here goes...
let size = CGSize(width: width, height: height)
UIGraphicsBeginImageContextWithOptions(size, false, 0)
let colors = [tColour.cgColor, bColour.cgColor] as CFArray
let colorSpace = CGColorSpaceCreateDeviceRGB()
let gradient = CGGradient(colorsSpace: colorSpace, colors: colors, locations: nil)
Set the colors array as needed, then draw the gradient into the image and send that into the UIImage. The orientation comes from the start and end points you pass when drawing; locations: only controls where the color stops sit along that axis.
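If it helps, here's a fuller, self-contained sketch of the same idea, assuming an arbitrary UIBezierPath called shape outlining the region to fill:

import UIKit

// Clip to the path, then draw a vertical linear gradient across its bounds.
func gradientImage(for shape: UIBezierPath, from top: UIColor, to bottom: UIColor) -> UIImage {
    let bounds = shape.bounds
    let renderer = UIGraphicsImageRenderer(size: bounds.size)
    return renderer.image { ctx in
        let cg = ctx.cgContext
        cg.translateBy(x: -bounds.origin.x, y: -bounds.origin.y) // path coords -> image coords
        cg.addPath(shape.cgPath)
        cg.clip()
        let colors = [top.cgColor, bottom.cgColor] as CFArray
        let colorSpace = CGColorSpaceCreateDeviceRGB()
        if let gradient = CGGradient(colorsSpace: colorSpace, colors: colors, locations: nil) {
            cg.drawLinearGradient(gradient,
                                  start: CGPoint(x: bounds.midX, y: bounds.minY),
                                  end: CGPoint(x: bounds.midX, y: bounds.maxY),
                                  options: [])
        }
    }
}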
