CGBitmapContextCreate: unsupported parameter combination. How to pass kCGImageAlphaNoneSkipFirst [duplicate] - ios

This question already has answers here:
kCGImageAlphaNone unresolved identifier in swift
(2 answers)
Closed 7 years ago.
I originally wrote this app (GitHub) in Obj-C, but need to convert it to Swift. While converting, I've been having trouble creating the bitmap context.
Error Message:
Whiteboard[2833] <Error>: CGBitmapContextCreate: unsupported parameter combination: 8 integer bits/component; 24 bits/pixel; 3-component color space; kCGImageAlphaNone; 1500 bytes/row.
Originally I had this:
self.cacheContext = CGBitmapContextCreate (self.cacheBitmap, size.width, size.height, 8, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), kCGImageAlphaNoneSkipFirst);
And now I have:
self.cacheContext = CGBitmapContextCreate(self.cacheBitmap!, UInt(size.width), UInt(size.height), 8, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), CGBitmapInfo.ByteOrder32Little);
I believe the issue has to do with CGBitmapInfo.ByteOrder32Little, but I'm not sure what to pass. Is there a way to pass kCGImageAlphaNoneSkipFirst as a CGBitmapInfo?
Full Source:
//
// WhiteBoard.swift
// Whiteboard
//
import Foundation
import UIKit
class WhiteBoard: UIView {
    var hue: CGFloat
    var cacheBitmap: UnsafeMutablePointer<Void>?
    var cacheContext: CGContextRef?

    override init(frame: CGRect) {
        self.hue = 0.0;
        // Create a UIView with the size of the parent view
        super.init(frame: frame);
        // Initialize the cache context of the bitmap
        self.initContext(frame);
        // Set the background color of the view to white
        self.backgroundColor = UIColor.whiteColor();
        // Add a Save button to the bottom right corner of the screen
        let buttonFrame = CGRectMake(frame.size.width - 50, frame.size.height - 30, 40, 25);
        let button = UIButton();
        button.frame = buttonFrame;
        button.setTitle("Save", forState: .Normal);
        button.setTitleColor(UIColor.blueColor(), forState: .Normal);
        button.addTarget(self, action: "downloadImage", forControlEvents: .TouchUpInside);
        // Add the button to the view
        self.addSubview(button);
    }

    required init(coder aDecoder: NSCoder) {
        self.hue = 0.0;
        super.init(coder: aDecoder)
    }

    func initContext(frame: CGRect) -> Bool {
        let size = frame.size; // Get the size of the UIView
        var bitmapByteCount: UInt!
        var bitmapBytesPerRow: UInt!
        // Calculate the number of bytes per row. 4 bytes per pixel: red, green, blue, alpha
        bitmapBytesPerRow = UInt(size.width * 4);
        // Total bytes in the bitmap
        bitmapByteCount = UInt(CGFloat(bitmapBytesPerRow) * size.height);
        // Allocate memory for image data. This is the destination in memory where any
        // drawing to the bitmap context will be rendered
        self.cacheBitmap = malloc(bitmapByteCount);
        // Create the cache context from the bitmap
        self.cacheContext = CGBitmapContextCreate(self.cacheBitmap!, UInt(size.width), UInt(size.height), 8, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), CGBitmapInfo.ByteOrder32Little);
        // Set the background as white
        CGContextSetRGBFillColor(self.cacheContext, 1.0, 1.0, 1.0, 1.0);
        CGContextFillRect(self.cacheContext, frame);
        CGContextSaveGState(self.cacheContext);
        return true;
    }

    // Fired every time a touch event is dragged
    override func touchesMoved(touches: NSSet, withEvent event: UIEvent) {
        let touch = touches.anyObject() as UITouch;
        self.drawToCache(touch);
    }

    // Draw the new touch event to the cached bitmap
    func drawToCache(touch: UITouch) {
        self.hue += 0.005;
        if (self.hue > 1.0) {
            self.hue = 0.0;
        }
        // Create a color object of the line color
        let color = UIColor(hue: CGFloat(self.hue), saturation: CGFloat(0.7), brightness: CGFloat(1.0), alpha: CGFloat(1.0));
        // Set the line size, type, and color
        CGContextSetStrokeColorWithColor(self.cacheContext, color.CGColor);
        CGContextSetLineCap(self.cacheContext, kCGLineCapRound);
        CGContextSetLineWidth(self.cacheContext, CGFloat(15));
        // Get the current and last touch point
        let lastPoint = touch.previousLocationInView(self) as CGPoint;
        let newPoint = touch.locationInView(self) as CGPoint;
        // Draw the line
        CGContextMoveToPoint(self.cacheContext, lastPoint.x, lastPoint.y);
        CGContextAddLineToPoint(self.cacheContext, newPoint.x, newPoint.y);
        CGContextStrokePath(self.cacheContext);
        // Calculate the dirty pixels that need to be updated
        let dirtyPoint1 = CGRectMake(lastPoint.x - 10, lastPoint.y - 10, 20, 20);
        let dirtyPoint2 = CGRectMake(newPoint.x - 10, newPoint.y - 10, 20, 20);
        self.setNeedsDisplay();
        // Only update the dirty pixels to improve performance
        //self.setNeedsDisplayInRect(dirtyPoint1);
        //self.setNeedsDisplayInRect(dirtyPoint2);
    }

    // Draw the cached bitmap to the UIView
    override func drawRect(rect: CGRect) {
        // Get the current graphics context
        let context = UIGraphicsGetCurrentContext();
        // Get the image to draw
        let cacheImage = CGBitmapContextCreateImage(self.cacheContext);
        // Draw the image context to the screen
        CGContextDrawImage(context, self.bounds, cacheImage);
    }

    // Download the image to the camera roll
    func downloadImage() {
        // Get the image from the CGContext
        let image = UIImage(CGImage: CGBitmapContextCreateImage(self.cacheContext));
        // Save the image to the camera roll
        UIImageWriteToSavedPhotosAlbum(image, self, "image:didFinishSavingWithError:contextInfo:", nil);
    }

    func image(image: UIImage, didFinishSavingWithError error: NSError, contextInfo: UnsafeMutablePointer<Void>) {
        if (!error.localizedDescription.isEmpty) {
            UIAlertView(title: "Error", message: "Error Saving Photo", delegate: nil, cancelButtonTitle: "Ok").show();
        }
    }
}

In Objective-C, you would simply cast to the other enum type, like this:
(CGBitmapInfo)kCGImageAlphaNoneSkipFirst
In Swift, you have to do it like this:
CGBitmapInfo(CGImageAlphaInfo.NoneSkipFirst.rawValue)
Welcome to the wild and wacky world of Swift numerics. You have to pull the numeric value out of the original CGImageAlphaInfo enumeration with rawValue; now you can use that numeric value in the initializer of the CGBitmapInfo enumeration.
EDIT It's much simpler in iOS 9 / Swift 2.0, where you can just pass CGImageAlphaInfo.NoneSkipFirst.rawValue directly into CGBitmapContextCreate, which now just expects an integer at this spot.
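Applied to the line from the question, that looks something like this (a sketch against the question's own variables, not compiled against either SDK):

// Swift 1.x: wrap the alpha info's raw value in a CGBitmapInfo
self.cacheContext = CGBitmapContextCreate(self.cacheBitmap!, UInt(size.width), UInt(size.height), 8, bitmapBytesPerRow, CGColorSpaceCreateDeviceRGB(), CGBitmapInfo(CGImageAlphaInfo.NoneSkipFirst.rawValue));

// iOS 9 / Swift 2: the parameter is a plain UInt32, so pass the raw value directly
self.cacheContext = CGBitmapContextCreate(self.cacheBitmap, Int(size.width), Int(size.height), 8, Int(bitmapBytesPerRow), CGColorSpaceCreateDeviceRGB(), CGImageAlphaInfo.NoneSkipFirst.rawValue);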

Related

swift - speed improvement in UIView pixel per pixel drawing

is there a way to improve the speed / performance of drawing pixel by pixel into a UIView?
The current implementation, for a 500x500 pixel UIView, is terribly slow.
class CustomView: UIView {
    public var context = UIGraphicsGetCurrentContext()
    public var redvalues = [[CGFloat]](repeating: [CGFloat](repeating: 1.0, count: 500), count: 500)
    public var start = 0 {
        didSet {
            self.setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        super.draw(rect)
        context = UIGraphicsGetCurrentContext()
        for yindex in 0...499 {
            for xindex in 0...499 {
                context?.setStrokeColor(UIColor(red: redvalues[xindex][yindex], green: 0.0, blue: 0.0, alpha: 1.0).cgColor)
                context?.setLineWidth(2)
                context?.beginPath()
                context?.move(to: CGPoint(x: CGFloat(xindex), y: CGFloat(yindex)))
                context?.addLine(to: CGPoint(x: CGFloat(xindex)+1.0, y: CGFloat(yindex)))
                context?.strokePath()
            }
        }
    }
}
Thank you very much
When drawing individual pixels, you can use a bitmap context. A bitmap context takes raw pixel data as its input.
The context works directly from your raw pixel data, so you don't have to stroke paths, which is likely much slower. You can then get a CGImage by using context.makeImage().
The image can then be used in an image view, which eliminates the need to redraw the whole thing every frame.
If you don't want to manually create a bitmap context, you can use
UIGraphicsBeginImageContext(size)
let context = UIGraphicsGetCurrentContext()
// draw everything into the context
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
Then you can use a UIImageView to display the rendered image.
It is also possible to draw into a CALayer, which does not need to be redrawn every frame but only when resized.
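For illustration, a minimal sketch of that hand-off (Swift 3 syntax; the bitmapContext and imageView names are assumptions, not from the original answer):

if let cgImage = bitmapContext?.makeImage() {
    // Hand the rendered bitmap to an image view; no per-frame draw(_:) is needed
    imageView.image = UIImage(cgImage: cgImage)
}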
That's how it looks now; are there any optimizations possible or not?
public struct rgba {
    var r: UInt8
    var g: UInt8
    var b: UInt8
    var a: UInt8
}

public let imageview = UIImageView()

override func viewDidLoad() {
    super.viewDidLoad()
    let width_input = 500
    let height_input = 500
    let redPixel = rgba(r: 255, g: 0, b: 0, a: 255)
    let greenPixel = rgba(r: 0, g: 255, b: 0, a: 255)
    let bluePixel = rgba(r: 0, g: 0, b: 255, a: 255)
    var pixelData = [rgba](repeating: redPixel, count: Int(width_input * height_input))
    pixelData[1] = greenPixel
    pixelData[3] = bluePixel
    self.view.addSubview(imageview)
    imageview.frame = CGRect(x: 100, y: 100, width: 600, height: 600)
    imageview.image = draw(pixel: pixelData, width: width_input, height: height_input)
}

func draw(pixel: [rgba], width: Int, height: Int) -> UIImage {
    let colorSpace = CGColorSpaceCreateDeviceRGB()
    let data = UnsafeMutableRawPointer(mutating: pixel)
    let bitmapContext = CGContext(data: data,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: 4 * width,
                                  space: colorSpace,
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
    let image = bitmapContext?.makeImage()
    return UIImage(cgImage: image!)
}
I took the answer from Manuel and got it working in Swift 5. The main sticking point was clearing the dangling-pointer warning that now appears in Xcode 12.
var image: CGImage?
pixelData.withUnsafeMutableBytes({ (rawBufferPtr: UnsafeMutableRawBufferPointer) in
    if let rawPtr = rawBufferPtr.baseAddress {
        let bitmapContext = CGContext(data: rawPtr,
                                      width: width,
                                      height: height,
                                      bitsPerComponent: 8,
                                      bytesPerRow: 4 * width,
                                      space: colorSpace,
                                      bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        image = bitmapContext?.makeImage()
    }
})
I did have to move away from the rgba struct approach for front-loading the data and moved to direct UInt32 values derived from the enum's raw values. The 'append' or 'replaceInRange' approach to updating an existing array took hours (my bitmap was LARGE) and ended up exhausting swap space on my computer.
enum Color: UInt32 { // All 4 bytes long with full opacity
    case red = 4278190335 // 0xFF0000FF
    case yellow = 4294902015
    case orange = 4291559679
    case pink = 4290825215
    case violet = 4001558271
    case purple = 2147516671
    case green = 16711935
    case blue = 65535 // 0x0000FFFF
}
With this approach I was able to quickly build a Data buffer of the required size via:
func prepareColorBlock(c: Color) -> Data {
    var rawData = withUnsafeBytes(of: c.rawValue) { Data($0) }
    rawData.reverse() // Byte order is reversed when defined
    var dataBlock = Data()
    dataBlock.reserveCapacity(100)
    for _ in stride(from: 0, to: 100, by: 1) {
        dataBlock.append(rawData)
    }
    return dataBlock
}
With that I just appended each of these blocks into my mutable Data instance 'pixelData' and we are off. You can tweak how the data is assembled, as I just wanted to generate some color bars in a UIImageView to validate the work. For a 800x600 view, it took about 2.3 seconds to generate and render the whole thing.
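To make that concrete, here is a hypothetical assembly of the 800x600 buffer as color bars (the bar layout is invented here; prepareColorBlock and Color come from the snippets above):

var pixelData = Data()
pixelData.reserveCapacity(800 * 600 * 4) // 4 bytes per RGBA pixel
let bars: [Color] = [.red, .orange, .yellow, .green, .blue, .violet, .purple, .pink]
for row in 0..<600 {
    let block = prepareColorBlock(c: bars[row * bars.count / 600]) // 100 pixels of one color
    for _ in 0..<8 { // 8 blocks x 100 pixels = one 800-pixel row
        pixelData.append(block)
    }
}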
Again, hats off to Manuel for pointing me in the right direction.

How to fix the Gradient on text when using an image in swift, as the gradient restarts

I'm trying to create a gradient on text. I have used UIGraphics to draw a gradient image and apply it to the text. The problem I'm having is that the gradient restarts. Does anyone know how I can scale the gradient so that it stretches across the text?
The text is on a wireframe and will be altered a couple of times. Sometimes it will be perfect, but other times it is not.
The gradient should go yellow to blue, but it restarts.
import UIKit
func colourTextWithGrad(label: UILabel) {
    UIGraphicsBeginImageContext(label.frame.size)
    UIImage(named: "testt.png")?.drawInRect(label.bounds)
    let myGradient: UIImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    label.textColor = UIColor(patternImage: myGradient)
}
You'll have to redraw the image each time the label size changes.
This is because a patterned UIColor is only ever tiled. From the documentation:
During drawing, the image in the pattern color is tiled as necessary to cover the given area.
Therefore, you'll need to change the image size yourself when the bounds of the label change, as pattern images don't support stretching. To do this, you can subclass UILabel and override the layoutSubviews method. Something like this should achieve the desired result:
class GradientLabel: UILabel {
    let gradientImage = UIImage(named: "gradient.png")

    override func layoutSubviews() {
        super.layoutSubviews() // let UILabel lay itself out first
        guard let grad = gradientImage else { // skip re-drawing gradient if it doesn't exist
            return
        }
        // redraw your gradient image
        UIGraphicsBeginImageContext(frame.size)
        grad.drawInRect(bounds)
        let myGradient = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        // update text color
        textColor = UIColor(patternImage: myGradient)
    }
}
Although it's worth noting that I'd always prefer to draw a gradient myself, as you have much more flexibility (say you want to add another color later). Also, the quality of your image might be degraded when you redraw it at different sizes (although due to the nature of gradients, this should be fairly minimal).
You can draw your own gradient fairly simply by overriding the drawRect of your UILabel subclass. For example:
override func drawRect(rect: CGRect) {
    // begin new image context to let the superclass draw the text in (so we can use it as a mask)
    UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
    do {
        // get your image context
        let ctx = UIGraphicsGetCurrentContext()
        // flip context
        CGContextScaleCTM(ctx, 1, -1)
        CGContextTranslateCTM(ctx, 0, -bounds.size.height)
        // get the superclass to draw text
        super.drawRect(rect)
    }
    // get image and end context
    let img = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    // get drawRect context
    let ctx = UIGraphicsGetCurrentContext()
    // clip context to image
    CGContextClipToMask(ctx, bounds, img.CGImage)
    // define your colors and locations
    let colors = [UIColor.orangeColor().CGColor, UIColor.redColor().CGColor, UIColor.purpleColor().CGColor, UIColor.blueColor().CGColor]
    let locs: [CGFloat] = [0.0, 0.3, 0.6, 1.0]
    // create your gradient
    let grad = CGGradientCreateWithColors(CGColorSpaceCreateDeviceRGB(), colors, locs)
    // draw gradient
    CGContextDrawLinearGradient(ctx, grad, CGPoint(x: 0, y: bounds.size.height*0.5), CGPoint(x: bounds.size.width, y: bounds.size.height*0.5), CGGradientDrawingOptions(rawValue: 0))
}
Swift 4, as a subclass:
class GradientLabel: UILabel {
    // MARK: - Colors to create gradient from
    @IBInspectable open var gradientFrom: UIColor?
    @IBInspectable open var gradientTo: UIColor?

    override func draw(_ rect: CGRect) {
        // begin new image context to let the superclass draw the text in (so we can use it as a mask)
        UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
        do {
            // get your image context
            guard let ctx = UIGraphicsGetCurrentContext() else { super.draw(rect); return }
            // flip context
            ctx.scaleBy(x: 1, y: -1)
            ctx.translateBy(x: 0, y: -bounds.size.height)
            // get the superclass to draw text
            super.draw(rect)
        }
        // get image and end context
        guard let img = UIGraphicsGetImageFromCurrentImageContext(), img.cgImage != nil else { return }
        UIGraphicsEndImageContext()
        // get drawRect context
        guard let ctx = UIGraphicsGetCurrentContext() else { return }
        // clip context to image
        ctx.clip(to: bounds, mask: img.cgImage!)
        // define your colors and locations
        let colors: [CGColor] = [UIColor.orange.cgColor, UIColor.red.cgColor, UIColor.purple.cgColor, UIColor.blue.cgColor]
        let locs: [CGFloat] = [0.0, 0.3, 0.6, 1.0]
        // create your gradient
        guard let grad = CGGradient(colorsSpace: CGColorSpaceCreateDeviceRGB(), colors: colors as CFArray, locations: locs) else { return }
        // draw gradient
        ctx.drawLinearGradient(grad, start: CGPoint(x: 0, y: bounds.size.height*0.5), end: CGPoint(x: bounds.size.width, y: bounds.size.height*0.5), options: CGGradientDrawingOptions(rawValue: 0))
    }
}
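A hypothetical usage of the subclass above, for anyone wiring it up in code (inside a view controller) rather than Interface Builder:

let label = GradientLabel(frame: CGRect(x: 20, y: 60, width: 280, height: 44))
label.text = "Stretched gradient"
label.font = UIFont.boldSystemFont(ofSize: 32)
view.addSubview(label) // draw(_:) runs automatically and masks the gradient to the text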

iOS How to Save Image from UIView without lose quality of image

I have images that are drawn in a UIView, and then lines are drawn on them. How can I save the image with the painted lines to a UIImage object with the same image size it began with, without losing any image quality?
For example, I have an image of size (3264, 2448).
It is drawn in a UIView (size 375, 281) that aspect-fits the image,
and then a line is painted on the image. Finally, how can I save the image from the UIView to a UIImage with size (3264, 2448) without losing image quality?
If this is not the best approach, please recommend a better way to accomplish this.
class DrawingView: UIImageView {
    private var pts = [CGPoint](count: 5, repeatedValue: CGPoint())
    private var ctr: UInt!
    var lineWidth: CGFloat = 4.0
    var lineColor: UIColor = UIColor.darkGrayColor()

    required init(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    init(frame: CGRect, image: UIImage) {
        super.init(frame: frame)
        self.image = image
        beginDrawingView()
    }

    private func beginDrawingView() {
        userInteractionEnabled = true
    }

    override func touchesBegan(touches: Set<NSObject>, withEvent event: UIEvent) {
        ctr = 0
        if let touch = touches.first as? UITouch {
            pts[0] = touch.locationInView(self) as CGPoint
        }
    }

    override func touchesMoved(touches: Set<NSObject>, withEvent event: UIEvent) {
        if let touch = touches.first as? UITouch {
            let p: CGPoint = touch.locationInView(self)
            ctr = ctr + 1
            pts[Int(ctr)] = p
            if ctr == 4 {
                pts[3] = CGPointMake((pts[2].x + pts[4].x)/2.0, (pts[2].y + pts[4].y)/2.0)
                UIGraphicsBeginImageContextWithOptions(bounds.size, false, 0.0)
                let context = UIGraphicsGetCurrentContext()
                image!.drawInRect(CGRect(x: 0, y: 0, width: bounds.width, height: bounds.height))
                CGContextSaveGState(context)
                CGContextSetShouldAntialias(context, true)
                CGContextSetLineCap(context, kCGLineCapRound)
                CGContextSetLineWidth(context, lineWidth)
                CGContextSetStrokeColorWithColor(context, lineColor.CGColor)
                let path = CGPathCreateMutable()
                CGPathMoveToPoint(path, nil, pts[0].x, pts[0].y)
                CGPathAddCurveToPoint(path, nil, pts[1].x, pts[1].y, pts[2].x, pts[2].y, pts[3].x, pts[3].y)
                CGContextSetBlendMode(context, kCGBlendModeNormal)
                CGContextAddPath(context, path)
                CGContextStrokePath(context)
                image = UIGraphicsGetImageFromCurrentImageContext()
                CGContextRestoreGState(context)
                UIGraphicsEndImageContext()
                pts[0] = pts[3]
                pts[1] = pts[4]
                ctr = 1
            }
        }
    }
}
I'm not sure if this is exactly what you are looking for, but if you already have a UIImage object, you can just save it as a Base64String. Here is some code that I use for this (you'll notice that I also perform compression on any JPG images because I'm saving the string to a database, but of course you wouldn't need that component since you don't want to lose image quality):
// Convert the image to a Base64 string, with optional JPG compression
- (NSString *)convertImageToBase64String:(UIImage *)image withImageType:(NSString *)imageType
{
    NSData *data;
    if ([imageType isEqualToString:@"PNG"]) {
        data = UIImagePNGRepresentation(image);
    } else if ([imageType isEqualToString:@"JPG"] || [imageType isEqualToString:@"JPEG"]) {
        data = UIImageJPEGRepresentation(image, 0.9);
    }
    return [data base64EncodedStringWithOptions:NSDataBase64Encoding64CharacterLineLength];
}
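A hypothetical call site for the helper above, using PNG so no quality is lost:

NSString *encoded = [self convertImageToBase64String:image withImageType:@"PNG"];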
If you don't have the UIImage object yet, though, you could get it by taking a screenshot like this:
+ (UIImage *)imageWithView:(UIView *)view
{
    UIGraphicsBeginImageContextWithOptions(view.bounds.size, view.opaque, 0.0);
    [view.layer renderInContext:UIGraphicsGetCurrentContext()];
    UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return img;
}
I haven't tested the last block of code, but I suspect it will give you a screenshot of the ENTIRE screen, so you would need to adjust the view.bounds.size component to capture the exact area you are looking for. Hope that helps!

Programmatic "fuzzy" style background for UIView

Of course, it's trivial to set a plain color for a background.
These days, instead of using plain gray, it is popular to use a "fuzzy" or "cloudy" background as a design feature in apps.
For example, here are a couple of "fuzzy" backgrounds: just a plain color with perhaps some noise and maybe blur on top.
You can see backgrounds like this all over; consider popular feed apps (WhatsApp etc.). It's a "fad" of our day.
It occurred to me that it would be fantastic if you could do this in code, in Swift.
Note: starting with a PNG is not an elegant solution.
Hopefully it is possible to generate everything programmatically from scratch.
It would be great if the Inspector had a slider in the IBDesignable style, "Add faddish 'grainy' background..." - should be possible in the new era!
This will get you started, based on something I wrote a long time ago:
@IBInspectable properties:
noiseColor: the noise/grain color; this is applied over the view's backgroundColor
noiseMinAlpha: the minimum alpha the randomized noise can be
noiseMaxAlpha: the maximum alpha the randomized noise can be
noisePasses: how many times to apply the noise; more passes will be slower but can result in a better noise effect
noiseSpacing: how common the randomized noise occurs; higher spacing means the noise will be less frequent
Explanation:
When any of the designable noise properties change, the view is flagged for redraw. In the draw function the UIImage is generated (or pulled from NSCache if available).
In the generation method, each pixel is iterated over and, if the pixel should be noise (depending on the spacing parameter), the noise color is applied with a randomized alpha channel. This is done as many times as the number of passes.
// NoiseView.swift
import UIKit

let noiseImageCache = NSCache()

@IBDesignable class NoiseView: UIView {
    let noiseImageSize = CGSizeMake(128, 128)

    @IBInspectable var noiseColor: UIColor = UIColor.blackColor() {
        didSet { setNeedsDisplay() }
    }
    @IBInspectable var noiseMinAlpha: CGFloat = 0 {
        didSet { setNeedsDisplay() }
    }
    @IBInspectable var noiseMaxAlpha: CGFloat = 1 {
        didSet { setNeedsDisplay() }
    }
    @IBInspectable var noisePasses: Int = 1 {
        didSet {
            noisePasses = max(0, noisePasses)
            setNeedsDisplay()
        }
    }
    @IBInspectable var noiseSpacing: Int = 1 {
        didSet {
            noiseSpacing = max(1, noiseSpacing)
            setNeedsDisplay()
        }
    }

    override func drawRect(rect: CGRect) {
        super.drawRect(rect)
        UIColor(patternImage: currentUIImage()).set()
        UIRectFillUsingBlendMode(bounds, .Normal)
    }

    private func currentUIImage() -> UIImage {
        // Key based on all parameters
        let cacheKey = "\(noiseImageSize),\(noiseColor),\(noiseMinAlpha),\(noiseMaxAlpha),\(noisePasses)"
        var image = noiseImageCache.objectForKey(cacheKey) as! UIImage!
        if image == nil {
            image = generatedUIImage()
            #if !TARGET_INTERFACE_BUILDER
                noiseImageCache.setObject(image, forKey: cacheKey)
            #endif
        }
        return image
    }

    private func generatedUIImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(noiseImageSize, false, 0)
        let accuracy: CGFloat = 1000.0
        for _ in 0..<noisePasses {
            for y in 0..<Int(noiseImageSize.height) {
                for x in 0..<Int(noiseImageSize.width) {
                    if random() % noiseSpacing == 0 {
                        let alpha = (CGFloat(random() % Int((noiseMaxAlpha - noiseMinAlpha) * accuracy)) / accuracy) + noiseMinAlpha
                        noiseColor.colorWithAlphaComponent(alpha).set()
                        UIRectFill(CGRectMake(CGFloat(x), CGFloat(y), 1, 1))
                    }
                }
            }
        }
        let image = UIGraphicsGetImageFromCurrentImageContext() as UIImage
        UIGraphicsEndImageContext()
        return image
    }
}
In Swift 3:
import UIKit

let noiseImageCache = NSCache<AnyObject, AnyObject>()

@IBDesignable class NoiseView: UIView {
    let noiseImageSize = CGSize(width: 128.0, height: 128.0)

    @IBInspectable var noiseColor: UIColor = UIColor.black {
        didSet { setNeedsDisplay() }
    }
    @IBInspectable var noiseMinAlpha: CGFloat = 0 {
        didSet { setNeedsDisplay() }
    }
    @IBInspectable var noiseMaxAlpha: CGFloat = 0.5 {
        didSet { setNeedsDisplay() }
    }
    @IBInspectable var noisePasses: Int = 3 {
        didSet {
            noisePasses = max(0, noisePasses)
            setNeedsDisplay()
        }
    }
    @IBInspectable var noiseSpacing: Int = 1 {
        didSet {
            noiseSpacing = max(1, noiseSpacing)
            setNeedsDisplay()
        }
    }

    override func draw(_ rect: CGRect) {
        super.draw(rect)
        UIColor(patternImage: currentUIImage()).set()
        UIRectFillUsingBlendMode(bounds, .normal)
    }

    private func currentUIImage() -> UIImage {
        // Key based on all parameters
        let cacheKey = "\(noiseImageSize),\(noiseColor),\(noiseMinAlpha),\(noiseMaxAlpha),\(noisePasses)"
        var image = noiseImageCache.object(forKey: cacheKey as AnyObject) as? UIImage
        if image == nil {
            image = generatedUIImage()
            #if !TARGET_INTERFACE_BUILDER
                noiseImageCache.setObject(image!, forKey: cacheKey as AnyObject)
            #endif
        }
        return image!
    }

    private func generatedUIImage() -> UIImage {
        UIGraphicsBeginImageContextWithOptions(noiseImageSize, false, 0)
        let accuracy: CGFloat = 1000.0
        for _ in 0..<noisePasses {
            for y in 0..<Int(noiseImageSize.height) {
                for x in 0..<Int(noiseImageSize.width) {
                    if Int(arc4random()) % noiseSpacing == 0 {
                        let alpha = (CGFloat(arc4random() % UInt32((noiseMaxAlpha - noiseMinAlpha) * accuracy)) / accuracy) + noiseMinAlpha
                        noiseColor.withAlphaComponent(alpha).set()
                        UIRectFill(CGRect(x: x, y: y, width: 1, height: 1))
                    }
                }
            }
        }
        let image = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return image!
    }
}
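A hypothetical usage in code (the class is equally usable from Interface Builder, where the @IBDesignable/@IBInspectable annotations surface the properties in the inspector):

let noise = NoiseView(frame: view.bounds)
noise.backgroundColor = .lightGray // the noise is drawn over this color
noise.noisePasses = 3
noise.noiseSpacing = 2
view.addSubview(noise)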
You could easily build something up using GPUImage. It comes with a huge set of blurs, noise generators and filters. You can connect them together in sequence and build up complex GPU-accelerated effects.
To give you a good starting point, here's a quick-and-dirty prototype of a function that uses GPUImage to do something like what you want. If you set 'orUseNoise' to YES, it will create a blurred image based on Perlin noise INSTEAD of the image. Tweak the values pointed out to change the desired effect.
- (UIImage *)blurWithGPUImage:(UIImage *)sourceImage orUseNoise:(bool)useNoise {
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithImage:sourceImage];
    GPUImageGaussianBlurFilter *gaussFilter = [[GPUImageGaussianBlurFilter alloc] init];
    [gaussFilter setBlurRadiusInPixels:6]; //<<-------TWEAK
    [gaussFilter setBlurPasses:1]; //<<-------TWEAK
    if (useNoise) {
        GPUImagePerlinNoiseFilter *perlinNoise = [[GPUImagePerlinNoiseFilter alloc] init];
        [perlinNoise setColorStart:(GPUVector4){1.0, 1.0, 1.0f, 1.0}]; //<<-------TWEAK
        [perlinNoise setColorFinish:(GPUVector4){0.5, 0.5, 0.5f, 1.0}]; //<<-------TWEAK
        [perlinNoise setScale:200]; //<<-------TWEAK
        [stillImageSource addTarget:perlinNoise];
        [perlinNoise addTarget:gaussFilter];
    } else {
        [stillImageSource addTarget:gaussFilter];
    }
    [gaussFilter useNextFrameForImageCapture];
    [stillImageSource processImage];
    UIImage *outputImage = [gaussFilter imageFromCurrentFramebuffer];
    // Set up output context.
    UIGraphicsBeginImageContext(self.view.frame.size);
    CGContextRef outputContext = UIGraphicsGetCurrentContext();
    // Invert image coordinates
    CGContextScaleCTM(outputContext, 1.0, -1.0);
    CGContextTranslateCTM(outputContext, 0, -self.view.frame.size.height);
    // Draw base image.
    CGContextDrawImage(outputContext, self.view.frame, outputImage.CGImage);
    // Apply tint
    CGContextSaveGState(outputContext);
    UIColor *tint = [UIColor colorWithWhite:1.0f alpha:0.6]; //<<-------TWEAK
    CGContextSetFillColorWithColor(outputContext, tint.CGColor);
    CGContextFillRect(outputContext, self.view.frame);
    CGContextRestoreGState(outputContext);
    // Output image
    outputImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return outputImage;
}
This is a simple stack of:
GPUImagePicture -> GPUImagePerlinNoiseFilter -> GPUImageGaussianBlurFilter
...with a bit of handling code to turn the result into an image properly.
You can try changing the stack to use some of the many other filters.
NOTE: even if you use the noise instead of the image, you will still need to provide a source image until you cut that part out.
We use the great component KGNoise. It is really easy to use, and I think it can help you.
KGNoise generates random black and white pixels into a static 128x128 image that is then tiled to fill the space. The random pixels are seeded with a value that has been chosen to look the most random; this also means that the noise will look consistent between app launches.
I agree with the answer about GPUImage, and since you don't want to provide an image, you could create a blank image like this:
func createNoiseImage(size: CGSize, color: UIColor) -> UIImage {
    UIGraphicsBeginImageContext(size)
    let context = UIGraphicsGetCurrentContext()
    CGContextSetFillColorWithColor(context, color.CGColor)
    CGContextFillRect(context, CGRectMake(0, 0, size.width, size.height))
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    let filter = GPUImagePerlinNoiseFilter()
    return filter.imageByFilteringImage(image)
}
The main advantage of using GPUImage is speed.
While the question asks for a "programmatic" solution, it comes to mind that what you are trying to do and refer to as "fuzzy" sounds a lot like UIBlurEffect, UIVisualEffectView and UIVibrancyEffect, which were introduced in iOS 8.
In order to use these, you can drag a UIVisualEffectView onto your Storyboard scene to add a blur or vibrancy effect to a specific part of the screen.
If you would like an entire scene to appear with the visual effect on top of the previous scene, you should configure the following:
Set either the View Controller or the presentation segue to Presentation = Over Current Context.
Set the background color of the presented "fuzzy" view controller to clearColor.
Embed the entire content of the presented view controller inside a UIVisualEffectView.
With that, you can get the "fuzzy" effect over the previous scene.
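For completeness, a minimal code-only sketch of the same idea (Swift 3 syntax; iOS 8+; the effectView name is a placeholder):

let blur = UIBlurEffect(style: .light)
let effectView = UIVisualEffectView(effect: blur)
effectView.frame = view.bounds
effectView.autoresizingMask = [.flexibleWidth, .flexibleHeight]
view.addSubview(effectView) // everything behind the effect view appears blurred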

How do I get pixel color on touch from inside a SKScene?

I have a SpriteKit application written in Swift and I want to get the color of the pixel that my finger is touching.
I have seen multiple posts regarding this and tried them all out, but can't seem to get it to work for me. According to other posts it should be possible to get the color from a UIView, and as an SKScene is presented by an SKView, which inherits from UIView, it should be possible to get the color from there.
So to make the question easy and understandable, here is an example.
Create a new SpriteKit application and add an image to it.
In my case I created a 200x200 pixel PNG image with a lot of different colors in it.
This is the GameScene.swift file; it is the only file I have changed from the auto-generated project:
import SpriteKit
extension UIView {
    func getColorFromPoint(point: CGPoint) -> SKColor {
        var pixelData: [UInt8] = [0, 0, 0, 0]
        let colorSpace: CGColorSpace = CGColorSpaceCreateDeviceRGB()
        let bitmapInfo = CGBitmapInfo(CGImageAlphaInfo.PremultipliedLast.toRaw())
        let context = CGBitmapContextCreate(&pixelData, 1, 1, 8, 4, colorSpace, bitmapInfo)
        CGContextTranslateCTM(context, -point.x, -point.y)
        self.layer.renderInContext(context)
        var red: CGFloat = CGFloat(pixelData[0])/CGFloat(255.0)
        var green: CGFloat = CGFloat(pixelData[1])/CGFloat(255.0)
        var blue: CGFloat = CGFloat(pixelData[2])/CGFloat(255.0)
        var alpha: CGFloat = CGFloat(pixelData[3])/CGFloat(255.0)
        var color: SKColor = SKColor(red: red, green: green, blue: blue, alpha: alpha)
        return color
    }
}

class GameScene: SKScene {
    var myColorWheel: SKSpriteNode!

    override func didMoveToView(view: SKView) {
        let recognizerTap = UITapGestureRecognizer(target: self, action: Selector("handleTap:"))
        view.addGestureRecognizer(recognizerTap)
        myColorWheel = SKSpriteNode(imageNamed: "ColorWheel.png")
        myColorWheel.anchorPoint = CGPoint(x: 0, y: 0)
        myColorWheel.position = CGPoint(x: 200, y: 200)
        self.addChild(myColorWheel)
    }

    func handleTap(recognizer: UITapGestureRecognizer) {
        let location: CGPoint = self.convertPointFromView(recognizer.locationInView(self.view))
        if (myColorWheel.containsPoint(location)) {
            let color = self.view?.getColorFromPoint(location)
            println(color)
        }
    }
}
It doesn't matter where I press on the image on the display, the result is always:
Optional(UIDeviceRGBColorSpace 0 0 0 0)
Have you tried taking a snapshot first using:
- (UIView *)snapshotViewAfterScreenUpdates:(BOOL)afterUpdates
and then picking the colors from that view?
Not sure how the system renders the .layer in an SKView.
Hope that helps.
Cheers
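If it helps, here is an untested sketch of one way to act on that idea (early-Swift syntax to match the question). SpriteKit renders on the GPU, so layer.renderInContext tends to produce blank pixels, whereas drawing the view hierarchy may capture the actual frame; using drawViewHierarchyInRect is my own suggestion, not something the answer above verified:

// Inside getColorFromPoint(point:), instead of self.layer.renderInContext(context):
UIGraphicsPushContext(context)
self.drawViewHierarchyInRect(self.bounds, afterScreenUpdates: true)
UIGraphicsPopContext()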
