Programmatic "fuzzy" style background for UIView - ios

Of course, it's trivial to set a plain color for a background:
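For example, in a view controller (Swift 2-era syntax, matching the answers below):
view.backgroundColor = UIColor.lightGrayColor()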
These days, instead of plain gray, it is popular to use a "fuzzy" or "cloudy" background as a design feature in apps.
For example, here are a couple of "fuzzy" backgrounds - each is just a plain color with some noise and perhaps a blur applied on top.
You can see backgrounds like this all over; consider the popular feed apps (WhatsApp, etc.). It's a "fad" of our day.
It occurred to me that it would be fantastic to be able to do this in code, in Swift.
Note: starting with a PNG is not an elegant solution. Hopefully it is possible to generate everything programmatically from scratch.
It would be great if the Inspector had an IBDesignable-style slider - "Add faddish 'grainy' background..." - it should be possible in the new era!

This will get you started, based on something I wrote a long time ago:
@IBInspectable properties:
noiseColor: the noise/grain color; this is applied over the view's backgroundColor
noiseMinAlpha: the minimum alpha the randomized noise can have
noiseMaxAlpha: the maximum alpha the randomized noise can have
noisePasses: how many times to apply the noise; more passes are slower but can produce a better noise effect
noiseSpacing: how often the randomized noise occurs; higher spacing means the noise is less frequent
Explanation:
When any of the designable noise properties changes, the view is flagged for redraw. In the draw function, the UIImage is generated (or pulled from NSCache if available).
The generation method iterates over every pixel and, when a pixel should be noise (depending on the spacing parameter), applies the noise color with a randomized alpha. This is repeated once per pass.
// NoiseView.swift
import UIKit

let noiseImageCache = NSCache()

@IBDesignable class NoiseView: UIView {

    let noiseImageSize = CGSizeMake(128, 128)

    @IBInspectable var noiseColor: UIColor = UIColor.blackColor() {
        didSet { setNeedsDisplay() }
    }

    @IBInspectable var noiseMinAlpha: CGFloat = 0 {
        didSet { setNeedsDisplay() }
    }

    @IBInspectable var noiseMaxAlpha: CGFloat = 1 {
        didSet { setNeedsDisplay() }
    }

    @IBInspectable var noisePasses: Int = 1 {
        didSet {
            noisePasses = max(0, noisePasses)
            setNeedsDisplay()
        }
    }

    @IBInspectable var noiseSpacing: Int = 1 {
        didSet {
            noiseSpacing = max(1, noiseSpacing)
            setNeedsDisplay()
        }
    }

    override func drawRect(rect: CGRect) {
        super.drawRect(rect)

        UIColor(patternImage: currentUIImage()).set()
        UIRectFillUsingBlendMode(bounds, .Normal)
    }

    private func currentUIImage() -> UIImage {
        // Key based on all parameters (noiseSpacing included, so changing it
        // doesn't serve a stale image from the cache)
        let cacheKey = "\(noiseImageSize),\(noiseColor),\(noiseMinAlpha),\(noiseMaxAlpha),\(noisePasses),\(noiseSpacing)"

        var image = noiseImageCache.objectForKey(cacheKey) as? UIImage
        if image == nil {
            image = generatedUIImage()

            #if !TARGET_INTERFACE_BUILDER
                noiseImageCache.setObject(image!, forKey: cacheKey)
            #endif
        }

        return image!
    }

    private func generatedUIImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(noiseImageSize, false, 0)

        let accuracy: CGFloat = 1000.0

        for _ in 0..<noisePasses {
            for y in 0..<Int(noiseImageSize.height) {
                for x in 0..<Int(noiseImageSize.width) {
                    if random() % noiseSpacing == 0 {
                        // Note: noiseMaxAlpha must stay greater than noiseMinAlpha,
                        // otherwise the modulo below divides by zero
                        let alpha = (CGFloat(random() % Int((noiseMaxAlpha - noiseMinAlpha) * accuracy)) / accuracy) + noiseMinAlpha
                        noiseColor.colorWithAlphaComponent(alpha).set()
                        UIRectFill(CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
                    }
                }
            }
        }

        let image = UIGraphicsGetImageFromCurrentImageContext() as UIImage
        UIGraphicsEndImageContext()

        return image
    }
}

In Swift 3:
import UIKit

let noiseImageCache = NSCache<AnyObject, AnyObject>()

@IBDesignable class NoiseView: UIView {

    let noiseImageSize = CGSize(width: 128.0, height: 128.0)

    @IBInspectable var noiseColor: UIColor = UIColor.black {
        didSet { setNeedsDisplay() }
    }

    @IBInspectable var noiseMinAlpha: CGFloat = 0 {
        didSet { setNeedsDisplay() }
    }

    @IBInspectable var noiseMaxAlpha: CGFloat = 0.5 {
        didSet { setNeedsDisplay() }
    }

    @IBInspectable var noisePasses: Int = 3 {
        didSet {
            noisePasses = max(0, noisePasses)
            setNeedsDisplay()
        }
    }

    @IBInspectable var noiseSpacing: Int = 1 {
        didSet {
            noiseSpacing = max(1, noiseSpacing)
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        super.draw(rect)

        UIColor(patternImage: currentUIImage()).set()
        UIRectFillUsingBlendMode(bounds, .normal)
    }

    private func currentUIImage() -> UIImage {
        // Key based on all parameters (noiseSpacing included, so changing it
        // doesn't serve a stale image from the cache)
        let cacheKey = "\(noiseImageSize),\(noiseColor),\(noiseMinAlpha),\(noiseMaxAlpha),\(noisePasses),\(noiseSpacing)"

        var image = noiseImageCache.object(forKey: cacheKey as AnyObject) as? UIImage
        if image == nil {
            image = generatedUIImage()

            #if !TARGET_INTERFACE_BUILDER
                noiseImageCache.setObject(image!, forKey: cacheKey as AnyObject)
            #endif
        }

        return image!
    }

    private func generatedUIImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(noiseImageSize, false, 0)

        let accuracy: CGFloat = 1000.0

        for _ in 0..<noisePasses {
            for y in 0..<Int(noiseImageSize.height) {
                for x in 0..<Int(noiseImageSize.width) {
                    if Int(arc4random()) % noiseSpacing == 0 {
                        // Note: noiseMaxAlpha must stay greater than noiseMinAlpha,
                        // otherwise the modulo below divides by zero
                        let alpha = (CGFloat(arc4random() % UInt32((noiseMaxAlpha - noiseMinAlpha) * accuracy)) / accuracy) + noiseMinAlpha
                        noiseColor.withAlphaComponent(alpha).set()
                        UIRectFill(CGRect(x: x, y: y, width: 1, height: 1))
                    }
                }
            }
        }

        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        return image!
    }
}
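For example, to put a noisy gray background behind everything, a minimal usage sketch (assuming this runs inside a view controller, so view is its root view):
let noiseView = NoiseView(frame: view.bounds)
noiseView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
noiseView.backgroundColor = UIColor(white: 0.9, alpha: 1) // the base color under the grain
noiseView.noiseColor = .darkGray
noiseView.noisePasses = 3
view.addSubview(noiseView)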

You could easily build something up using GPUImage. It comes with a huge set of blurs, noise generators and filters, and you can connect them in sequence to build up complex GPU-accelerated effects.
To give you a good starting point, here's a quick-and-dirty prototype of a function that uses GPUImage to do something like what you want. If you set orUseNoise to YES, it will create a blurred image based on Perlin noise INSTEAD of the image. Tweak the values pointed out to change the desired effect.
- (UIImage *)blurWithGPUImage:(UIImage *)sourceImage orUseNoise:(bool)useNoise {

    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:sourceImage];

    GPUImageGaussianBlurFilter *gaussFilter = [[GPUImageGaussianBlurFilter alloc] init];
    [gaussFilter setBlurRadiusInPixels:6]; //<<-------TWEAK
    [gaussFilter setBlurPasses:1];         //<<-------TWEAK

    if (useNoise) {
        GPUImagePerlinNoiseFilter *perlinNoise = [[GPUImagePerlinNoiseFilter alloc] init];
        [perlinNoise setColorStart:(GPUVector4){1.0, 1.0, 1.0, 1.0}];  //<<-------TWEAK
        [perlinNoise setColorFinish:(GPUVector4){0.5, 0.5, 0.5, 1.0}]; //<<-------TWEAK
        [perlinNoise setScale:200];                                    //<<-------TWEAK
        [stillImageSource addTarget:perlinNoise];
        [perlinNoise addTarget:gaussFilter];
    } else {
        [stillImageSource addTarget:gaussFilter];
    }

    [gaussFilter useNextFrameForImageCapture];
    [stillImageSource processImage];

    UIImage *outputImage = [gaussFilter imageFromCurrentFramebuffer];

    // Set up the output context.
    UIGraphicsBeginImageContext(self.view.frame.size);
    CGContextRef outputContext = UIGraphicsGetCurrentContext();

    // Invert image coordinates.
    CGContextScaleCTM(outputContext, 1.0, -1.0);
    CGContextTranslateCTM(outputContext, 0, -self.view.frame.size.height);

    // Draw the base image.
    CGContextDrawImage(outputContext, self.view.frame, outputImage.CGImage);

    // Apply the tint.
    CGContextSaveGState(outputContext);
    UIColor *tint = [UIColor colorWithWhite:1.0f alpha:0.6]; //<<-------TWEAK
    CGContextSetFillColorWithColor(outputContext, tint.CGColor);
    CGContextFillRect(outputContext, self.view.frame);
    CGContextRestoreGState(outputContext);

    // Output image.
    outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    return outputImage;
}
This is a simple stack of:
GPUImagePicture -> GPUImagePerlinNoiseFilter -> GPUImageGaussianBlurFilter
...with a bit of handling code to turn the result into an image properly.
You can try changing the stack to use some of the many other filters.
NOTE: even if you use the noise instead of the image, you still need to provide an image until you cut that part out.
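The same stack can also be driven from Swift. A rough sketch, assuming the Obj-C GPUImage framework is linked and bridged (method and property names as in its headers):
let source = GPUImagePicture(image: baseImage) // baseImage: any placeholder UIImage, per the note above
let noise = GPUImagePerlinNoiseFilter()
noise.scale = 200                              // tweak
let blur = GPUImageGaussianBlurFilter()
blur.blurRadiusInPixels = 6                    // tweak
source.addTarget(noise)
noise.addTarget(blur)
blur.useNextFrameForImageCapture()
source.processImage()
let fuzzyBackground = blur.imageFromCurrentFramebuffer()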

We use the great component KGNoise. It is really easy to use, and I think it can help you.
KGNoise generates random black and white pixels into a static 128x128 image that is then tiled to fill the space. The random pixels are seeded with a value chosen to look the most random; this also means the noise will look consistent between app launches.
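A sketch of typical usage from Swift (assuming the KGNoise pod is linked; KGNoiseView and its noiseOpacity property are per the library's README):
let noiseView = KGNoiseView(frame: view.bounds)
noiseView.backgroundColor = UIColor(white: 0.85, alpha: 1.0) // base color under the grain
noiseView.noiseOpacity = 0.1                                 // strength of the grain
view.addSubview(noiseView)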

I agree with the answer about GPUImage, and since you don't want to provide an image, you can create a blank image like this:
func createNoiseImage(size: CGSize, color: UIColor) -> UIImage {
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()

    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height))

    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    let filter = GPUImagePerlinNoiseFilter()
    return filter.imageByFilteringImage(image)
}
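For example (same Swift 2-era syntax; someView is whatever view gets the background):
let noise = createNoiseImage(CGSizeMake(128, 128), color: UIColor.grayColor())
someView.backgroundColor = UIColor(patternImage: noise)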
The main advantage of using GPUImage is speed.

While the question asks for a "programmatic" solution, it comes to mind that what you are trying to do and refer to as "fuzzy" sounds a lot like UIBlurEffect, UIVisualEffectView and UIVibrancyEffect, which were introduced in iOS 8.
In order to use these, you can drag a UIVisualEffectView onto your Storyboard scene to add a blur or vibrancy effect to a specific part of the screen.
If you would like an entire scene to appear with the visual effect on top of the previous scene, you should configure the following:
Set either the View Controller or the presentation segue to Presentation = Over Current Context.
Set the background color of the presented view controller to clearColor.
Embed the entire content of the presented view controller inside a UIVisualEffectView.
With that, you can get effects like this:
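And if you'd rather do it entirely in code than in the Storyboard, a minimal sketch (standard UIKit API, iOS 8+; Swift 2-era syntax):
let blurView = UIVisualEffectView(effect: UIBlurEffect(style: .Light))
blurView.frame = view.bounds
blurView.autoresizingMask = [.FlexibleWidth, .FlexibleHeight]
view.addSubview(blurView)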

Related

CIImage display MTKView vs GLKView performance

I have a series of UIImages (made from incoming JPEG data from a server) that I wish to render using MTKView. The problem is that it is too slow compared to GLKView: there is a lot of buffering and delay when I have a series of images to display in MTKView, but no delay with GLKView.
Here is the MTKView display code:
private lazy var context: CIContext = {
    return CIContext(mtlDevice: self.device!, options: [CIContextOption.workingColorSpace: NSNull()])
}()

var ciImg: CIImage? {
    didSet {
        syncQueue.sync {
            internalCoreImage = ciImg
        }
    }
}

func displayCoreImage(_ ciImage: CIImage) {
    self.ciImg = ciImage
}

override func draw(_ rect: CGRect) {
    var ciImage: CIImage?
    syncQueue.sync {
        ciImage = internalCoreImage
    }
    drawCIImage(ciImage) // use the copy read under the lock, not the property
}

func drawCIImage(_ ciImage: CIImage?) {
    guard let image = ciImage,
          let currentDrawable = currentDrawable,
          let commandBuffer = commandQueue?.makeCommandBuffer()
    else {
        return
    }

    let currentTexture = currentDrawable.texture
    let drawingBounds = CGRect(origin: .zero, size: drawableSize)

    let scaleX = drawableSize.width / image.extent.width
    let scaleY = drawableSize.height / image.extent.height
    let scaledImage = image.transformed(by: CGAffineTransform(scaleX: scaleX, y: scaleY))

    context.render(scaledImage, to: currentTexture, commandBuffer: commandBuffer, bounds: drawingBounds, colorSpace: CGColorSpaceCreateDeviceRGB())

    commandBuffer.present(currentDrawable)
    commandBuffer.commit()
}
And here is the code for GLKView, which is lag-free and fast:
private var videoPreviewView: GLKView!
private var eaglContext: EAGLContext!
private var context: CIContext!

override init(frame: CGRect) {
    super.init(frame: frame)
    initCommon()
}

required init?(coder: NSCoder) {
    super.init(coder: coder)
    initCommon()
}

func initCommon() {
    eaglContext = EAGLContext(api: .openGLES3)!
    videoPreviewView = GLKView(frame: self.bounds, context: eaglContext)
    context = CIContext(eaglContext: eaglContext, options: nil)
    self.addSubview(videoPreviewView)
    videoPreviewView.bindDrawable()
    videoPreviewView.clipsToBounds = true
    videoPreviewView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
}

func displayCoreImage(_ ciImage: CIImage) {
    let sourceExtent = ciImage.extent
    let sourceAspect = sourceExtent.size.width / sourceExtent.size.height
    let videoPreviewWidth = CGFloat(videoPreviewView.drawableWidth)
    let videoPreviewHeight = CGFloat(videoPreviewView.drawableHeight)
    let previewAspect = videoPreviewWidth / videoPreviewHeight

    // We want to maintain the aspect ratio of the screen size, so we clip the video image.
    var drawRect = sourceExtent
    if sourceAspect > previewAspect {
        // Use the full height of the video image, and center-crop the width.
        drawRect.origin.x = drawRect.origin.x + (drawRect.size.width - drawRect.size.height * previewAspect) / 2.0
        drawRect.size.width = drawRect.size.height * previewAspect
    } else {
        // Use the full width of the video image, and center-crop the height.
        drawRect.origin.y = drawRect.origin.y + (drawRect.size.height - drawRect.size.width / previewAspect) / 2.0
        drawRect.size.height = drawRect.size.width / previewAspect
    }

    var videoRect = CGRect(x: 0, y: 0, width: videoPreviewWidth, height: videoPreviewHeight)
    if sourceAspect < previewAspect {
        // Use the full height of the drawable, and center-crop the width.
        videoRect.origin.x += (videoRect.size.width - videoRect.size.height * sourceAspect) / 2.0
        videoRect.size.width = videoRect.size.height * sourceAspect
    } else {
        // Use the full width of the drawable, and center-crop the height.
        videoRect.origin.y += (videoRect.size.height - videoRect.size.width / sourceAspect) / 2.0
        videoRect.size.height = videoRect.size.width / sourceAspect
    }

    videoPreviewView.bindDrawable()

    if eaglContext != EAGLContext.current() {
        EAGLContext.setCurrent(eaglContext)
    }

    // Clear the GL view to black.
    glClearColor(0, 0, 0, 1)
    glClear(GLbitfield(GL_COLOR_BUFFER_BIT))

    glEnable(GLenum(GL_BLEND))
    glBlendFunc(GLenum(GL_ONE), GLenum(GL_ONE_MINUS_SRC_ALPHA))

    context.draw(ciImage, in: videoRect, from: sourceExtent)
    videoPreviewView.display()
}
I really want to find out where the bottleneck is in the Metal code. Is Metal not capable of displaying 640x360 UIImages 20 times per second?
EDIT: Setting the colorPixelFormat of the MTKView to rgba16Float solves the delay issue, but the reproduced colors are not accurate, so this seems to be a color-space conversion issue with Core Image. But how does GLKView render so fast without delay, while MTKView doesn't?
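For reference, that change is a one-liner where the MTKView is configured (sketch; mtkView stands for the view above):
mtkView.colorPixelFormat = .rgba16Float
mtkView.framebufferOnly = false // needed so CIContext can render into the drawable's texture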
EDIT 2: Setting the colorPixelFormat of the MTKView to bgra10_xr mostly solves the delay issue. But the problem is that we cannot use the CIRenderDestination API with this pixel format.
I am still wondering how GLKView/CIContext renders the images so quickly without any delay, while with MTKView we need to set colorPixelFormat to bgra10_xr to get the performance up. And setting bgra10_xr on an iPad mini 2 causes a crash:
-[MTLRenderPipelineDescriptorInternal validateWithDevice:], line 2590: error 'pixelFormat, for color render target(0), is not a valid MTLPixelFormat.

How to fix the Gradient on text when using an image in swift, as the gradient restarts

I'm trying to create a gradient on text; I have used UIGraphics to build a gradient image for this. The problem I'm having is that the gradient restarts. Does anyone know how I can scale the gradient to stretch across the text?
The text is on a wireframe and will be altered a couple of times. Sometimes it will be perfect, but other times it is not.
The gradient should go yellow to blue, but it restarts - see the photo below:
import UIKit

func colourTextWithGrad(label: UILabel) {
    UIGraphicsBeginImageContext(label.frame.size)
    UIImage(named: "testt.png")?.drawInRect(label.bounds)
    let myGradient: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    label.textColor = UIColor(patternImage: myGradient)
}
You'll have to redraw the image each time the label size changes.
This is because a patterned UIColor is only ever tiled. From the documentation:
During drawing, the image in the pattern color is tiled as necessary to cover the given area.
Therefore, you'll need to regenerate the image yourself when the bounds of the label change, as pattern images don't support stretching. To do this, you can subclass UILabel and override the layoutSubviews method. Something like this should achieve the desired result:
class GradientLabel: UILabel {

    let gradientImage = UIImage(named: "gradient.png")

    override func layoutSubviews() {
        guard let grad = gradientImage else { // skip re-drawing the gradient if it doesn't exist
            return
        }

        // Redraw your gradient image at the new size.
        UIGraphicsBeginImageContext(frame.size)
        grad.drawInRect(bounds)
        let myGradient = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        // Update the text color.
        textColor = UIColor(patternImage: myGradient)
    }
}
Although it's worth noting that I'd always prefer to draw the gradient myself, as you get much more flexibility (say you want to add another color later). Also, the quality of your image might degrade when you redraw it at different sizes (although due to the nature of gradients, this should be fairly minimal).
You can draw your own gradient fairly simply by overriding drawRect in your UILabel subclass. For example:
override func drawRect(rect: CGRect) {
    // Begin a new image context for the superclass to draw the text in (so we can use it as a mask).
    UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
    do {
        // Get the image context.
        let ctx = UIGraphicsGetCurrentContext()

        // Flip the context.
        CGContextScaleCTM(ctx, 1, -1)
        CGContextTranslateCTM(ctx, 0, -bounds.size.height)

        // Get the superclass to draw the text.
        super.drawRect(rect)
    }

    // Get the image and end the context.
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()

    // Get the drawRect context.
    let ctx = UIGraphicsGetCurrentContext()

    // Clip the context to the text image.
    CGContextClipToMask(ctx, bounds, img.CGImage)

    // Define your colors and locations.
    let colors = [UIColor.orangeColor().CGColor, UIColor.redColor().CGColor, UIColor.purpleColor().CGColor, UIColor.blueColor().CGColor]
    let locs: [CGFloat] = [0.0, 0.3, 0.6, 1.0]

    // Create your gradient.
    let grad = CGGradientCreateWithColors(CGColorSpaceCreateDeviceRGB(), colors, locs)

    // Draw the gradient.
    CGContextDrawLinearGradient(ctx, grad, CGPoint(x: 0, y: bounds.size.height * 0.5), CGPoint(x: bounds.size.width, y: bounds.size.height * 0.5), CGGradientDrawingOptions(rawValue: 0))
}
Output:
Swift 4, as a subclass:
class GradientLabel: UILabel {

    // MARK: - Colors to create gradient from
    @IBInspectable open var gradientFrom: UIColor?
    @IBInspectable open var gradientTo: UIColor?

    override func draw(_ rect: CGRect) {
        // Begin a new image context for the superclass to draw the text in (so we can use it as a mask).
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
        do {
            // Get the image context.
            guard let ctx = UIGraphicsGetCurrentContext() else {
                UIGraphicsEndImageContext() // end the context even on failure so it doesn't leak
                super.draw(rect)
                return
            }

            // Flip the context.
            ctx.scaleBy(x: 1, y: -1)
            ctx.translateBy(x: 0, y: -bounds.size.height)

            // Get the superclass to draw the text.
            super.draw(rect)
        }

        // Get the image and end the context.
        let textImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        guard let mask = textImage?.cgImage else { return }

        // Get the drawRect context.
        guard let ctx = UIGraphicsGetCurrentContext() else { return }

        // Clip the context to the text image.
        ctx.clip(to: bounds, mask: mask)

        // Define your colors and locations (gradientFrom/gradientTo are exposed for IB; wire them in here as desired).
        let colors: [CGColor] = [UIColor.orange.cgColor, UIColor.red.cgColor, UIColor.purple.cgColor, UIColor.blue.cgColor]
        let locs: [CGFloat] = [0.0, 0.3, 0.6, 1.0]

        // Create your gradient.
        guard let grad = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(), colors: colors as CFArray, locations: locs) else { return }

        // Draw the gradient.
        ctx.drawLinearGradient(grad, start: CGPoint(x: 0, y: bounds.size.height * 0.5), end: CGPoint(x: bounds.size.width, y: bounds.size.height * 0.5), options: CGGradientDrawingOptions(rawValue: 0))
    }
}
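Usage is the same as any UILabel; for example (sketch, inside a view controller):
let label = GradientLabel(frame: CGRect(x: 0, y: 0, width: 300, height: 60))
label.font = UIFont.boldSystemFont(ofSize: 48)
label.text = "Gradient"
view.addSubview(label)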

Draw a shape over a high res image?

I want to be able to draw a line over a high-resolution photo (e.g. an 8-megapixel image) in a specific place.
That is a simple enough thing, and there are many posts about it already, but my problem is that the CGContext "drawing space" doesn't seem to be the same as the high-res image.
I can draw the line and save an image, but my problem is drawing the line in a specific location. My coordinate spaces seem to differ from each other. I think there must be a scale factor that I am missing, or my understanding is just messed up.
So my question is:
How do I draw onto an image that is "aspect fit" to the screen (but is much higher resolution) and have the drawing (in this case a line) end up in the same position on the screen and in the final full-resolution composited image?
Example image:
The red line is the line I am drawing. It should go from the center of the start target (theTarget) to the center of the end target (theEnd).
I have simplified my drawing function for posting here, but I suspect my whole thinking/approach is wrong.
import UIKit

class ViewController: UIViewController {

    @IBOutlet weak var imageView: UIImageView!
    @IBOutlet weak var theTarget: UIImageView!
    @IBOutlet weak var theEnd: UIImageView!

    var lineColor = UIColor.redColor()
    var targetPos: CGPoint!
    var endPos: CGPoint!
    var originalImage: UIImage!

    override func viewDidLoad() {
        super.viewDidLoad()

        imageView.image = UIImage(named: "reference.jpg")
        originalImage = imageView.image
        drawTheLine()
    }

    func drawTheLine() {
        UIGraphicsBeginImageContext(originalImage!.size)

        // Draw the original image as the background.
        originalImage?.drawAtPoint(CGPointMake(0, 0))

        // Pass 2: draw the line on top of the original image.
        let context = UIGraphicsGetCurrentContext()
        CGContextSetLineWidth(context, 10.0)

        targetPos = theTarget.frame.origin
        endPos = theEnd.frame.origin

        CGContextMoveToPoint(context, targetPos.x, targetPos.y)
        CGContextAddLineToPoint(context, endPos.x, endPos.y)
        CGContextSetStrokeColorWithColor(context, lineColor.CGColor)
        CGContextStrokePath(context)

        imageView.image = UIGraphicsGetImageFromCurrentImageContext()
    }

    @IBAction func saveButton(sender: AnyObject) {
        // Create the new image.
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()

        UIImageWriteToSavedPhotosAlbum(newImage!, nil, nil, nil)
    }

    override func didReceiveMemoryWarning() {
        super.didReceiveMemoryWarning()
        // Dispose of any resources that can be recreated.
    }

    @IBAction func handlePan(recognizer: UIPanGestureRecognizer) {
        let translation = recognizer.translationInView(self.view)
        if let view = recognizer.view {
            view.center = CGPoint(x: view.center.x + translation.x,
                                  y: view.center.y + translation.y)
        }
        recognizer.setTranslation(CGPointZero, inView: self.view)

        // Redraw the line.
        drawTheLine()
        print("the start pos of the line is: ", theTarget.frame.origin, " and end pos is: ", theEnd.frame.origin)
    }
}
I had this exact problem a while ago, so I wrote a UIImageView extension that maps the image view's coordinates into the image's coordinates when the content mode is .ScaleAspectFit.
extension UIImageView {
    func pointInAspectScaleFitImageCoordinates(point: CGPoint) -> CGPoint {
        if let img = image {
            let imageSize = img.size
            let imageViewSize = frame.size

            let imgRatio = imageSize.width / imageSize.height              // The aspect ratio of the image itself
            let imgViewRatio = imageViewSize.width / imageViewSize.height  // The aspect ratio of the image view

            // The scale factor from view points to image pixels.
            let ratio = (imgRatio > imgViewRatio) ? imageSize.width / imageViewSize.width : imageSize.height / imageViewSize.height

            let xOffset = (imageSize.width - (imageViewSize.width * ratio)) * 0.5   // The x-offset of the image within the view (as it gets centered)
            let yOffset = (imageSize.height - (imageViewSize.height * ratio)) * 0.5 // The y-offset of the image within the view (as it gets centered)

            let subImgOrigin = CGPoint(x: point.x * ratio, y: point.y * ratio) // The point scaled into image coordinates

            return CGPoint(x: subImgOrigin.x + xOffset, y: subImgOrigin.y + yOffset)
        }
        return CGPointZero
    }
}
You should be able to use this in your drawTheLine function quite easily:
func drawTheLine() {
    UIGraphicsBeginImageContext(originalImage!.size)

    // Draw the original image as the background.
    originalImage?.drawAtPoint(CGPointMake(0, 0))

    // Pass 2: draw the line on top of the original image.
    let context = UIGraphicsGetCurrentContext()
    CGContextSetLineWidth(context, 10.0)

    // Map the view coordinates of the targets into image coordinates.
    targetPos = imageView.pointInAspectScaleFitImageCoordinates(theTarget.frame.origin)
    endPos = imageView.pointInAspectScaleFitImageCoordinates(theEnd.frame.origin)

    CGContextMoveToPoint(context, targetPos.x, targetPos.y)
    CGContextAddLineToPoint(context, endPos.x, endPos.y)
    CGContextSetStrokeColorWithColor(context, lineColor.CGColor)
    CGContextStrokePath(context)

    imageView.image = UIGraphicsGetImageFromCurrentImageContext()
}

How To Create in Swift a Circular Profile Picture or Rounded Corner Image with a border which does not leak?

Based on the source code below:
@IBOutlet var myUIImageView: UIImageView!

override func viewDidLoad() {
    super.viewDidLoad()
    self.makingRoundedImageProfileWithRoundedBorder()
}

private func makingRoundedImageProfileWithRoundedBorder() {

    // Making a circular image profile:
    // self.myUIImageView.layer.cornerRadius = self.myUIImageView.frame.size.width / 2

    // Making a rounded image profile:
    self.myUIImageView.layer.cornerRadius = 20.0
    self.myUIImageView.clipsToBounds = true

    // Adding a border to the image profile:
    self.myUIImageView.layer.borderWidth = 10.0
    self.myUIImageView.layer.borderColor = UIColor.whiteColor().CGColor
}
Indeed I am able to render a circular or rounded UIImageView, but the problem is that when we add the border, the image leaks a bit. It's way worse with a circular UIImageView: it leaks wherever the border is bent, so it LEAKS EVERYWHERE! You can find a screenshot of the result below:
Is there any way to fix that in Swift? Any sample code which answers this question will be highly appreciated.
Note: as far as possible, the solution should be compatible with iOS 7 and 8+.
First Solution
Based on @Jesper Schläger's suggestion:
"If I may suggest a quick and dirty solution:
Instead of adding a border to the image view, you could just add another white view below the image view. Make the view extend 10 points in either direction and give it a corner radius of 20.0. Give the image view a corner radius of 10.0."
Please find the Swift implementation below:
import UIKit

class ViewController: UIViewController {

    @IBOutlet var myUIImageView: UIImageView!
    @IBOutlet var myUIViewBackground: UIView!

    override func viewDidLoad() {
        super.viewDidLoad()

        // Making a circular UIView: cornerRadius = self.myUIImageView.frame.size.width / 2
        // Making a rounded UIView: cornerRadius = 10.0
        self.roundingUIView(self.myUIImageView, cornerRadiusParam: 10)
        self.roundingUIView(self.myUIViewBackground, cornerRadiusParam: 20)
    }

    private func roundingUIView(aView: UIView, cornerRadiusParam: CGFloat) {
        aView.clipsToBounds = true
        aView.layer.cornerRadius = cornerRadiusParam
    }
}
Second Solution
The idea is to set a circular mask over a CALayer.
Please find the Objective-C implementation of this second solution below:
CALayer *maskedLayer = [CALayer layer];
[maskedLayer setFrame:CGRectMake(50, 50, 100, 100)];
[maskedLayer setBackgroundColor:[UIColor blackColor].CGColor];

UIBezierPath *maskingPath = [UIBezierPath bezierPath];
[maskingPath addArcWithCenter:maskedLayer.position
                       radius:40
                   startAngle:0
                     endAngle:2 * M_PI // the angle is in radians, not degrees
                    clockwise:TRUE];

CAShapeLayer *maskingLayer = [CAShapeLayer layer];
[maskingLayer setPath:maskingPath.CGPath];

[maskedLayer setMask:maskingLayer];
[self.view.layer addSublayer:maskedLayer];
If you comment out the lines from UIBezierPath *maskingPath = [UIBezierPath bezierPath]; through [maskedLayer setMask:maskingLayer]; you will see that the layer is a square. When these lines are left in, the layer is a circle.
Note: I have neither tested this second solution nor provided the Swift implementation, so feel free to test it and let me know whether it works through the comment section below. Also feel free to edit this post to add a Swift implementation of this second solution.
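A Swift translation might look like this (untested sketch, Swift 2-era syntax; note that the mask is laid out in the layer's own coordinate space, so the circle is centered on the layer's bounds):
let maskedLayer = CALayer()
maskedLayer.frame = CGRect(x: 50, y: 50, width: 100, height: 100)
maskedLayer.backgroundColor = UIColor.blackColor().CGColor

let maskingPath = UIBezierPath(arcCenter: CGPoint(x: maskedLayer.bounds.midX, y: maskedLayer.bounds.midY),
                               radius: 40,
                               startAngle: 0,
                               endAngle: 2 * CGFloat(M_PI),
                               clockwise: true)

let maskingLayer = CAShapeLayer()
maskingLayer.path = maskingPath.CGPath

maskedLayer.mask = maskingLayer
view.layer.addSublayer(maskedLayer)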
If I may suggest a quick and dirty solution:
Instead of adding a border to the image view, you could just add another white view below the image view. Make the view extend 10 points in either direction and give it a corner radius of 20.0. Give the image view a corner radius of 10.0.
I worked on improving the code but it kept crashing. I'll keep working on it, but I appear to have a (rough) version working:
Edit: updated with a slightly nicer version. I don't like the init:coder method, but maybe that can be factored out/improved.
class RoundedImageView: UIView {

    var image: UIImage? {
        didSet {
            if let image = image {
                // Size the view to the image's point size (note: height uses image.size.height).
                self.frame = CGRect(x: 0, y: 0, width: image.size.width / image.scale, height: image.size.height / image.scale)
            }
        }
    }

    var cornerRadius: CGFloat?

    private class func frameForImage(image: UIImage) -> CGRect {
        return CGRect(x: 0, y: 0, width: image.size.width / image.scale, height: image.size.height / image.scale)
    }

    override func drawRect(rect: CGRect) {
        if let image = self.image {
            image.drawInRect(rect)

            let cornerRadius = self.cornerRadius ?? rect.size.width / 10
            let path = UIBezierPath(roundedRect: rect, cornerRadius: cornerRadius)

            UIColor.whiteColor().setStroke()
            path.lineWidth = cornerRadius
            path.stroke()
        }
    }
}

let image = UIImage(named: "big-teddy-bear.jpg")
let imageView = RoundedImageView()
imageView.image = image
Let me know if that's the sort of thing you're looking for.
A little explanation:
As I'm sure you've found, the "border" that iOS can apply isn't perfect and shows the corners for some reason. I found a few other solutions, but none seemed to work. The reason this is a subclass of UIView, and not UIImageView, is that drawRect: is not called for subclasses of UIImageView. I'm not sure about the performance of this code, but it seems fine from my (limited) testing.
Original code:
class RoundedImageView: UIView {

    var image: UIImage? {
        didSet {
            if let image = image {
                self.frame = CGRect(x: 0, y: 0, width: image.size.width / image.scale, height: image.size.height / image.scale)
            }
        }
    }

    private class func frameForImage(image: UIImage) -> CGRect {
        return CGRect(x: 0, y: 0, width: image.size.width / image.scale, height: image.size.height / image.scale)
    }

    override func drawRect(rect: CGRect) {
        if let image = self.image {
            image.drawInRect(rect)

            let path = UIBezierPath(roundedRect: rect, cornerRadius: 50)
            UIColor.whiteColor().setStroke()
            path.lineWidth = 10
            path.stroke()
        }
    }
}

let image = UIImage(named: "big-teddy-bear.jpg")
let imageView = RoundedImageView()
imageView.image = image
imageView.layer.cornerRadius = 50
imageView.clipsToBounds = true

How do I get pixel color on touch from inside a SKScene?

I have a SpriteKit application written in Swift, and I want to get the color of the pixel that my finger is touching.
I have seen multiple posts regarding this and tried them all out, but can't seem to get it to work for me. According to other posts it should be possible to get the color from a UIView, and as an SKScene has an SKView that inherits from UIView, it should be possible to get the color from there.
So to make the question easy to understand, I have an example.
Create a new SpriteKit application and add an image to it.
In my case I created a PNG image, 200x200 pixels, with a lot of different colors in it.
This is the GameScene.swift file; it is the only file I have changed from the auto-generated project:
import SpriteKit
extension UIView {
    func getColorFromPoint(point: CGPoint) -> SKColor {
        var pixelData: [UInt8] = [0, 0, 0, 0]

        let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.toRaw())

        let context = CGBitmapContextCreate(&pixelData, 1, 1, 8, 4, colorSpace, bitmapInfo)
        CGContextTranslateCTM(context, -point.x, -point.y)
        self.layer.renderInContext(context)

        let red = CGFloat(pixelData[0]) / CGFloat(255.0)
        let green = CGFloat(pixelData[1]) / CGFloat(255.0)
        let blue = CGFloat(pixelData[2]) / CGFloat(255.0)
        let alpha = CGFloat(pixelData[3]) / CGFloat(255.0)

        return SKColor(red: red, green: green, blue: blue, alpha: alpha)
    }
}

class GameScene: SKScene {

    var myColorWheel: SKSpriteNode!

    override func didMoveToView(view: SKView) {
        let recognizerTap = UITapGestureRecognizer(target: self, action: Selector("handleTap:"))
        view.addGestureRecognizer(recognizerTap)

        myColorWheel = SKSpriteNode(imageNamed: "ColorWheel.png")
        myColorWheel.anchorPoint = CGPoint(x: 0, y: 0)
        myColorWheel.position = CGPoint(x: 200, y: 200)
        self.addChild(myColorWheel)
    }

    func handleTap(recognizer: UITapGestureRecognizer) {
        let location: CGPoint = self.convertPointFromView(recognizer.locationInView(self.view))
        if myColorWheel.containsPoint(location) {
            let color = self.view?.getColorFromPoint(location)
            println(color)
        }
    }
}
It doesn't matter where I press on the image on the display; the result is always:
Optional(UIDeviceRGBColorSpace 0 0 0 0)
Have you tried taking a snapshot first using:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
and then picking the colors from that view?
I'm not sure how the system renders the layer of an SKView.
Hope that helps.
Cheers
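Building on that idea, here is a sketch in current Swift that renders the view hierarchy into a 1x1 bitmap positioned under the touch and reads the pixel back. It uses drawHierarchy(in:afterScreenUpdates:) rather than snapshotView(afterScreenUpdates:), since a snapshot view can't be rendered into a CGContext, but it does capture SpriteKit's GPU-rendered content, unlike layer.render(in:):
extension UIView {
    func color(atPoint point: CGPoint) -> UIColor? {
        // Draw the hierarchy shifted so `point` lands on the 1x1 context.
        UIGraphicsBeginImageContextWithOptions(CGSize(width: 1, height: 1), false, 1)
        defer { UIGraphicsEndImageContext() }
        guard let ctx = UIGraphicsGetCurrentContext() else { return nil }
        ctx.translateBy(x: -point.x, y: -point.y)
        drawHierarchy(in: bounds, afterScreenUpdates: true)

        // Read the single pixel back (channel order assumed RGBA here;
        // check the cgImage's alphaInfo/bitmapInfo if colors come out swapped).
        guard let cgImage = UIGraphicsGetImageFromCurrentImageContext()?.cgImage,
              let data = cgImage.dataProvider?.data,
              let bytes = CFDataGetBytePtr(data) else { return nil }
        return UIColor(red: CGFloat(bytes[0]) / 255,
                       green: CGFloat(bytes[1]) / 255,
                       blue: CGFloat(bytes[2]) / 255,
                       alpha: CGFloat(bytes[3]) / 255)
    }
}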
