How to invert colors of a specific area of a UIView? - ios

It's easy to blur a portion of a view, and if the contents of the views behind it change, the blur updates in real time too.
My questions:
How can I make an invert effect that I can put over a view, so the contents behind it appear with inverted colors?
How can I add an effect that knows the average color of the pixels behind it?
In general, how can I access the pixels behind a view and manipulate them?
My question is not about UIImageView; I'm asking about UIView in general.
There are libraries that do something similar, but they are slow and don't run as smoothly as blur!
Thanks.

If you know how to code a CIColorKernel, you'll have what you need.
Core Image has several blur filters, all of which use the GPU, which will give you the performance you need.
The CIAreaAverage filter will give you the average color for a specified rectangular area.
Core Image Filters
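For the average-color part, a minimal sketch using the built-in CIAreaAverage filter (my own example, not from the original answer; myUiImage is a placeholder):
let input = CIImage(image: myUiImage)!
// Average over the whole image; pass a smaller rect to sample a sub-area.
let filter = CIFilter(name: "CIAreaAverage", withInputParameters: [
    kCIInputImageKey: input,
    kCIInputExtentKey: CIVector(cgRect: input.extent)
])!
// The output is a 1x1 image whose single pixel is the average color;
// render it into a 4-byte RGBA buffer with CIContext's render method to read it.
let averageColorImage = filter.outputImage!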
Here is about the simplest CIColorKernel you can write. It swaps the red and green values for every pixel in an image (note the "grba" instead of "rgba"):
kernel vec4 swapRedAndGreenAmount(__sample s) {
    return s.grba;
}
To put this into a CIColorKernel, just use this line of code:
let swapKernel = CIColorKernel(string:
    "kernel vec4 swapRedAndGreenAmount(__sample s) {" +
    "    return s.grba;" +
    "}"
)!
@tww003 has good code to convert a view's layer into a UIImage. Assuming you call your image myUiImage, to execute this swapKernel, you can:
let myInputCi = CIImage(image: myUiImage)!
let myOutputCi = swapKernel.apply(withExtent: myInputCi.extent, arguments: [myInputCi])!
let myNewImage = UIImage(ciImage: myOutputCi)
That's about it. You can do a lot more (including using CoreGraphics, etc.), but this is a good start.
One last note: you can chain individual filters (including hand-written color, warp, and general kernels). If you want, you can chain your color average over the underlying view with a blur and apply whatever kind of inversion you wish as a single filter/effect.
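For the inversion part of the question, a minimal sketch in the same style (my own example, not from the original answer): a color kernel that inverts RGB and leaves alpha untouched.
let invertKernel = CIColorKernel(string:
    "kernel vec4 invertColor(__sample s) {" +
    "    return vec4(1.0 - s.r, 1.0 - s.g, 1.0 - s.b, s.a);" +
    "}"
)!
let invertedCi = invertKernel.apply(withExtent: myInputCi.extent, arguments: [myInputCi])!
let invertedImage = UIImage(ciImage: invertedCi)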

I don't think I can fully answer your question, but maybe I can point you in the right direction.
Apple has some documentation on accessing the pixel data of a CGImage, but of course that requires that you have an image to work with in the first place. Fortunately, you can create an image from a UIView like this:
// Render the view's layer into a bitmap context and grab the result
UIGraphicsBeginImageContext(view.frame.size)
view.layer.render(in: UIGraphicsGetCurrentContext()!)
let image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
From this image you created, you'll be able to manipulate the pixel data how you want to. It may not be the cleanest way to solve your problem, but maybe it's something worth exploring.
Unfortunately, the link I provided is written in Objective-C and is a few years old, but maybe you can figure out how to make good use of it.
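To sketch what that pixel access might look like in Swift (my own example, assuming an 8-bit RGBA layout):
import UIKit

func pixelData(of image: UIImage) -> [UInt8]? {
    guard let cgImage = image.cgImage else { return nil }
    let width = cgImage.width
    let height = cgImage.height
    var pixels = [UInt8](repeating: 0, count: width * height * 4)
    // Draw the image into a context whose backing store we own,
    // so the buffer is filled with RGBA bytes we can read and modify.
    guard let context = CGContext(data: &pixels,
                                  width: width,
                                  height: height,
                                  bitsPerComponent: 8,
                                  bytesPerRow: width * 4,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue)
        else { return nil }
    context.draw(cgImage, in: CGRect(x: 0, y: 0, width: width, height: height))
    // pixels[(y * width + x) * 4] is the red component of pixel (x, y)
    return pixels
}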

First, I'd recommend subclassing UIView for this purpose. ref
Good ref by Joe
You have to override the drawRect method:
import UIKit
@IBDesignable class PortholeView: UIView {
    @IBInspectable var innerCornerRadius: CGFloat = 10.0
    @IBInspectable var inset: CGFloat = 20.0
    @IBInspectable var fillColor: UIColor = UIColor.grayColor()
    @IBInspectable var strokeWidth: CGFloat = 5.0
    @IBInspectable var strokeColor: UIColor = UIColor.blackColor()
    override func drawRect(rect: CGRect) {
        super.drawRect(rect)
        // Prep constants
        let roundRectWidth = rect.width - (2 * inset)
        let roundRectHeight = rect.height - (2 * inset)
        // Use the even-odd rule to subtract portalRect from the outer fill
        // (See https://stackoverflow.com/questions/14141081/uiview-drawrect-draw-the-inverted-pixels-make-a-hole-a-window-negative-space)
        let outerFill = UIBezierPath(rect: rect)
        let portalRect = CGRectMake(
            rect.origin.x + inset,
            rect.origin.y + inset,
            roundRectWidth,
            roundRectHeight)
        fillColor.setFill()
        let portal = UIBezierPath(roundedRect: portalRect, cornerRadius: innerCornerRadius)
        outerFill.appendPath(portal)
        outerFill.usesEvenOddFillRule = true
        outerFill.fill()
        strokeColor.setStroke()
        portal.lineWidth = strokeWidth
        portal.stroke()
    }
}

Related

Bezier Path Not Drawing on context

I'm trying to give my user fine selection of the point they touch on a UIImage.
I have a magnifying square in the top left corner that shows where they're touching at 2x zoom. It works well.
I'm trying to add a "crosshair" in the center of the magnifying area to make selection clearer.
With the code below no line is visible.
// Bulk of the magnifying code
public override func drawRect(rect: CGRect) {
    let context: CGContextRef = UIGraphicsGetCurrentContext()!
    CGContextScaleCTM(context, 2, 2)
    CGContextTranslateCTM(context, -self.touchPoint.x, -self.touchPoint.y)
    drawLine(context)
    self.viewToMagnify.layer.renderInContext(context)
}
// Code for drawing the horizontal line of the crosshair
private func drawLine(ctx: CGContext) {
    let lineHeight: CGFloat = 3.0
    let lineWidth: CGFloat = min(bounds.width, bounds.height) * 0.3
    let horizontalPath = UIBezierPath()
    horizontalPath.lineWidth = lineHeight
    let hStart = CGPoint(x: bounds.width/2 - lineWidth/2, y: bounds.height/2)
    let hEnd = CGPoint(x: bounds.width/2 + lineWidth/2, y: bounds.height/2)
    horizontalPath.moveToPoint(hStart)
    horizontalPath.addLineToPoint(hEnd)
    UIColor.whiteColor().setStroke()
    horizontalPath.stroke()
}
It's possible that the line is being drawn but too small or not where I expect it to be.
I've tried other ways of drawing the line, like using CGContextAddPath.
I think the issue might be that the renderInContext call isn't taking my drawing into account, or that I'm not adding the path to the context correctly.
The magnification code is based on GJNeilson's work; all I've done is pin the centre point of the magnifying glass to the top left and remove the mask.
I think you're drawing the line then drawing the image over it. Try calling drawLine last.
Also, the scale and translate are still active when you draw the line, which may be positioning it offscreen. You may have to reset them using CGContextSaveGState and CGContextRestoreGState.
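A minimal sketch of both suggestions applied to the question's code (my own reordering, untested; same Swift 2-era API):
public override func drawRect(rect: CGRect) {
    let context: CGContextRef = UIGraphicsGetCurrentContext()!

    // Draw the magnified content with the transform active...
    CGContextSaveGState(context)
    CGContextScaleCTM(context, 2, 2)
    CGContextTranslateCTM(context, -self.touchPoint.x, -self.touchPoint.y)
    self.viewToMagnify.layer.renderInContext(context)
    CGContextRestoreGState(context)

    // ...then draw the crosshair last, in untransformed view coordinates.
    drawLine(context)
}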

SKEffectNode to an SKTexture?

SKEffectNodes have a shouldRasterize "switch" that bakes them into a bitmap and doesn't update them until the underlying nodes affected by the effect change.
However, I can't find a way to create an SKTexture from this rasterised "image".
Is it possible to get an SKTexture from an SKEffectNode?
I think you could try code like this (it's just an example):
if let effect = SKEffectNode.init(fileNamed: "myeffect") {
    effect.shouldRasterize = true
    self.addChild(effect)
    ...
    let texture = SKView().texture(from: self)
}
Update:
After your answer, I hope I've understood better what you want to achieve.
This is my point of view: if you want to make a shadow of a texture, you could simply create an SKSpriteNode with this texture:
let shadow = SKSpriteNode.init(texture: <yourTexture>)
shadow.blendMode = SKBlendMode.alpha
shadow.colorBlendFactor = 1
shadow.color = SKColor.black
shadow.alpha = 0.25
What I want to say is that you could proceed step by step:
get your texture
process your texture (add filters, make some other effect...)
get the shadow
This way of working produces a series of useful methods you could use in your project to build other kinds of elements.
Maybe, by separating the tasks, you won't need to use texture(from:) at all.
I've figured this out, in a way that solves my problems, using a Factory.
Read more on how to make a factory, from BenMobile's patient and clear articulation, here: Factory creation and use for making Sprites and Shapes
There's an issue with blurring an SKTexture or SKSpriteNode: the blur/glow extends beyond the edges of the sprite and runs out of space. To solve this, in the code below I've created a "framer" object: simply an empty SKSpriteNode that's double the size of the texture to be blurred, with the texture to be blurred added to it as a child.
It works, regardless of how hacky this is ;)
Inside a static factory class file:
import SpriteKit
class Factory {
    private static let view: SKView = SKView() // the magic. This is the rendering space
    static func makeShadow(from source: SKTexture, rgb: SKColor, a: CGFloat) -> SKSpriteNode {
        let shadowNode = SKSpriteNode(texture: source)
        shadowNode.colorBlendFactor = 0.5 // near 1 makes following line more effective
        shadowNode.color = SKColor.gray // makes for a darker shadow. White for "glow" shadow
        let textureSize = source.size()
        let doubleTextureSize = CGSize(width: textureSize.width * 2, height: textureSize.height * 2)
        let framer = SKSpriteNode(color: UIColor.clear, size: doubleTextureSize)
        framer.addChild(shadowNode)
        let blurAmount = 10
        let filter = CIFilter(name: "CIGaussianBlur")
        filter?.setValue(blurAmount, forKey: kCIInputRadiusKey)
        let fxNode = SKEffectNode()
        fxNode.filter = filter
        fxNode.blendMode = .alpha
        fxNode.addChild(framer)
        fxNode.shouldRasterize = true
        let tex = view.texture(from: fxNode) // 'view' refers to the magic first line
        let shadow = SKSpriteNode(texture: tex) // WHOOPEE!!! TEXTURE!!!
        shadow.colorBlendFactor = 0.5
        shadow.color = rgb
        shadow.alpha = a
        shadow.zPosition = -1
        return shadow
    }
}
Inside anywhere you can access the Sprite you want to make a shadow or glow texture for:
shadowSprite = Factory.makeShadow(from: button, rgb: myColor, a: 0.33)
shadowSprite.position = CGPoint(x: self.frame.midX, y: self.frame.midY - 5)
addChild(shadowSprite)
-
button is a texture of the button to be given a shadow. a: is an alpha setting (actually a transparency level, 0.0 to 1.0, where 1.0 is fully opaque); the lower this is, the lighter the shadow will be.
The positioning serves to drop the shadow slightly below the button so it looks like light is coming from the top, casting shadows down and onto the background.

How to drawInRect asynchronously

I'm trying to load a huge image (131,072×131,072 pixels), tiled up nicely into a 512×512 grid of 256×256-pixel tiles, from a bunch of URLs.
Once my function returns an image, I want to draw it in a rect at the proper position.
Since this process takes a while, I want to run the whole thing asynchronously.
Below is what I've tried so far:
override func drawRect(rect: CGRect) {
    let firstColumn = Int(CGRectGetMinX(rect) / sideLength)
    let lastColumn = Int(CGRectGetMaxX(rect) / sideLength)
    let firstRow = Int(CGRectGetMinY(rect) / sideLength)
    let lastRow = Int(CGRectGetMaxY(rect) / sideLength)
    let qos = Int(QOS_CLASS_USER_INITIATED.rawValue)
    dispatch_async(dispatch_get_global_queue(qos, 0)) { () -> Void in
        for row in firstRow...lastRow {
            for column in firstColumn...lastColumn {
                let url = NSURL(string: "https://someURL/\(row)/\(column).jpg")
                let tile = UIImage(data: NSData(contentsOfURL: url!)!)!
                let x = self.sideLength * CGFloat(column)
                let y = self.sideLength * CGFloat(row)
                let point = CGPoint(x: x, y: y)
                let size = CGSize(width: self.sideLength, height: self.sideLength)
                var tileRect = CGRect(origin: point, size: size)
                tileRect = CGRectIntersection(self.bounds, tileRect)
                dispatch_async(dispatch_get_main_queue()) {
                    tile.drawInRect(tileRect)
                }
            }
        }
    }
}
And I'm getting this error:
<Error>: CGContextRestoreGState: invalid context 0x0. Backtrace:
<-[UIImage drawInRect:]+66>
<_TFFFC6H1Z1DB15MyClass8drawRectFS0_FVSC6CGRectT_U_FT_T_U_FT_T_+122>
<_TTRXFo__dT__XFdCb__dT__+39>
<_dispatch_call_block_and_release+12>
<_dispatch_client_callout+8>
<_dispatch_main_queue_callback_4CF+1738>
<__CFRUNLOOP_IS_SERVICING_THE_MAIN_DISPATCH_QUEUE__+9>
<__CFRunLoopRun+2073>
<CFRunLoopRunSpecific+488>
<GSEventRunModal+161>
<UIApplicationMain+171>
<main+109>
Can anybody give me a hint on how to retrieve and draw the image asynchronously?
Don't use drawRect. There's no reason to use drawRect in this scenario. Simply use views, or layers, or SpriteKit, or OpenGL ES (there are probably more possible choices). In the first two cases, you'll probably have to add/remove bits and pieces based on the part of the view which is visible on screen, but using standard views/layers will get you much better performance. Apple strongly recommends against using drawRect.
If you do use drawRect, certainly don't load data while in there. Apple clearly states that you should be drawing, not doing anything else, while in there. And you certainly don't want to start asynchronous tasks while in there; this will just lead to a catastrophe. Load the data beforehand, store it somewhere, and just do the drawing while in drawRect. If you load data as the user moves around, do the loading as the user moves, not when you draw. You'll probably need to invalidate rects when the image has actually been loaded so that drawRect is then called. But again, don't use drawRect. Just add/remove views/layers.
Also, I recommend not using NSData(contentsOfURL:). Use an NSURLSession dataTask* with the appropriate completion handler. This way, all of your loads will happen simultaneously (up to the set limits), not one after the other.
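A minimal sketch of that approach (my own illustration in the question's Swift 2-era style; the tile URL scheme and sideLength come from the question, the layer-per-tile structure is an assumption):
func loadTile(row row: Int, column column: Int) {
    let url = NSURL(string: "https://someURL/\(row)/\(column).jpg")!
    NSURLSession.sharedSession().dataTaskWithURL(url) { data, _, _ in
        guard let data = data, image = UIImage(data: data) else { return }
        dispatch_async(dispatch_get_main_queue()) {
            // One layer per tile; no drawRect needed.
            let tileLayer = CALayer()
            tileLayer.frame = CGRect(x: self.sideLength * CGFloat(column),
                                     y: self.sideLength * CGFloat(row),
                                     width: self.sideLength,
                                     height: self.sideLength)
            tileLayer.contents = image.CGImage
            self.layer.addSublayer(tileLayer)
        }
    }.resume()
}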

Adding semi-transparent images as textures in Scenekit

When I add a semi-transparent image (sample) as a texture for an SCNNode, how can I specify a color attribute for the node where the image is transparent? Since I can specify either a color or an image as a material property, but not both, I am unable to also give the node a color value. Is there a way to specify both a color and an image for the material property, or is there a workaround for this problem?
If you are assigning the image to the contents of the transparent material property, you can change the material's transparencyMode to be either .AOne or .RGBZero.
.AOne means that transparency is derived from the image's alpha channel.
.RGBZero means that transparency is derived from the luminance (the total red, green, and blue) in the image.
You cannot configure an arbitrary color to be treated as transparency without a custom shader.
However, from the looks of your sample image, I would think that assigning the sample image to the transparent material property's contents and using the .AOne transparency mode would give you the result you are looking for.
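A minimal sketch of that setup (my own illustration; sampleImage and node are placeholder names):
let material = SCNMaterial()
material.diffuse.contents = UIColor.redColor()  // the node's base color
material.transparent.contents = sampleImage     // the semi-transparent image
material.transparencyMode = .AOne               // image alpha drives transparency
node.geometry?.firstMaterial = material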
I'm posting this as a new answer because it's different from the other answer.
Based on your comment, I understand that you want to use an image with transparency as the diffuse content of a material, but with a background color wherever the image is transparent. In other words, you want to use a composite of the image over a color as the diffuse contents.
Using UIImage
There are a few different ways you can achieve this composited image. The easiest and likely most familiar solution is to create a new UIImage that draws the image over the color. This new image will have the same size and scale as your image, but can be opaque since it has a solid background color.
func imageByComposing(image: UIImage, over color: UIColor) -> UIImage {
    UIGraphicsBeginImageContextWithOptions(image.size, true, image.scale)
    defer {
        UIGraphicsEndImageContext()
    }
    let imageRect = CGRect(origin: .zero, size: image.size)
    // fill with background color
    color.set()
    UIRectFill(imageRect)
    // draw image on top
    image.drawInRect(imageRect)
    return UIGraphicsGetImageFromCurrentImageContext()
}
Using this image as the contents of the diffuse material property will give you the effect that you're after.
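For instance (hypothetical usage; the image, color, and material names are placeholders):
let composited = imageByComposing(spriteImage, over: UIColor.redColor())
material.diffuse.contents = composited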
Using Shader Modifiers
If you find yourself having to change the color very frequently (possibly animating it), you could also use custom shaders or shader modifiers to composite the image over the color.
In that case, you want to composite the image A over the color B, so that the output color C_out is:
C_out = C_A + C_B * (1 - α_A)
By passing the image as the diffuse contents, and assigning the output to the diffuse content, the expression can be simplified as:
C_diffuse = C_diffuse + C_color * (1 - α_diffuse)
C_diffuse += C_color * (1 - α_diffuse)
Generally the output alpha would depend on the alpha of A and B, but since B (the color) is opaque (1), the output alpha is also 1.
This can be written as a small shader modifier. Since the motivation for this solution was to be able to change the color, the color is declared as a uniform variable which can be updated from code.
// Define a color that can be set/changed from code
uniform vec3 backgroundColor;
#pragma body
// Composite A (the image) over B (the color):
// output = image + color * (1-alpha_image)
float alpha = _surface.diffuse.a;
_surface.diffuse.rgb += backgroundColor * (1.0 - alpha);
// make fully opaque (since the color is fully opaque)
_surface.diffuse.a = 1.0;
This shader modifier would then be read from the file and set in the material's shader modifiers dictionary:
enum ShaderLoadingError: ErrorType {
    case FileNotFound, FailedToLoad
}

func shaderModifier(named shaderName: String, fileExtension: String = "glsl") throws -> String {
    guard let url = NSBundle.mainBundle().URLForResource(shaderName, withExtension: fileExtension) else {
        throw ShaderLoadingError.FileNotFound
    }
    do {
        return try String(contentsOfURL: url)
    } catch {
        throw ShaderLoadingError.FailedToLoad
    }
}
// later, in the code that configures the material ...
do {
    let modifier = try shaderModifier(named: "Composit") // the name of the shader modifier file (assuming 'glsl' file extension)
    theMaterial.shaderModifiers = [SCNShaderModifierEntryPointSurface: modifier]
} catch {
    // Handle the error here
    print(error)
}
You would then be able to change the color by setting a new value for the "backgroundColor" of the material. Note that there is no initial value, so one would have to be set.
let backgroundColor = SCNVector3Make(1.0, 0.0, 0.7) // r, g, b
// Set the color components as an SCNVector3 wrapped in an NSValue
// for the same key as the name of the uniform variable in the shader modifier
theMaterial.setValue(NSValue(SCNVector3: backgroundColor), forKey: "backgroundColor")
As you can see, the first solution is simpler and the one I would recommend if it suits your needs. The second solution is more complicated, but enables the background color to be animated.
Just in case someone comes across this in the future... for some tasks, rickster's solution is likely the easiest. In my case, I wanted to display a grid on top of an image that was mapped to a sphere. I originally composited the images into one and applied that, but over time I got fancier and this started getting complex. So I made two spheres, one inside the other: the grid on the inner one and the image on the outer one, and presto...
let outSphereGeometry = SCNSphere(radius: 20)
outSphereGeometry.segmentCount = 100
let outSphereMaterial = SCNMaterial()
outSphereMaterial.diffuse.contents = topImage
outSphereMaterial.isDoubleSided = true
outSphereGeometry.materials = [outSphereMaterial]
outSphere = SCNNode(geometry: outSphereGeometry)
outSphere.position = SCNVector3(x: 0, y: 0, z: 0)
let sphereGeometry = SCNSphere(radius: 10)
sphereGeometry.segmentCount = 100
let sphereMaterial = SCNMaterial()
sphereMaterial.diffuse.contents = gridImage
sphereMaterial.isDoubleSided = true
sphereGeometry.materials = [sphereMaterial]
sphere = SCNNode(geometry: sphereGeometry)
sphere.position = SCNVector3(x: 0, y: 0, z: 0)
I was surprised that I didn't need to set sphereMaterial.transparency; it seems to get this automatically.

Subtract UIView from another UIView in Swift

I'm sure this is a very simple thing to do, but I can't seem to wrap my head around the logic.
I have two UIViews: one black, semi-transparent, and "full-screen" ("overlayView"); the other on top, smaller and resizable ("cropView"). It's pretty much a crop-view setup, where I want to "dim" out the areas of an underlying image that are not being cropped.
My question is: How do I go about this? I'm sure my approach should be with CALayers and masks, but no matter what I try, I can't get behind the logic.
This is what I have right now:
This is what I would want it to look like:
How do I achieve this result in Swift?
Although you won't find a method such as subtract(...), you can easily build a screen with an overlay and a transparent cut with the following code:
Swift 4.2
private func addOverlayView() {
    let overlayView = UIView(frame: self.bounds)
    let targetMaskLayer = CAShapeLayer()
    let squareSide = frame.width / 1.6
    let squareSize = CGSize(width: squareSide, height: squareSide)
    let squareOrigin = CGPoint(x: CGFloat(center.x) - (squareSide / 2),
                               y: CGFloat(center.y) - (squareSide / 2))
    let square = UIBezierPath(roundedRect: CGRect(origin: squareOrigin, size: squareSize), cornerRadius: 16)
    let path = UIBezierPath(rect: self.bounds)
    path.append(square)
    targetMaskLayer.path = path.cgPath
    // Exclude intersected paths
    targetMaskLayer.fillRule = CAShapeLayerFillRule.evenOdd
    overlayView.layer.mask = targetMaskLayer
    overlayView.clipsToBounds = true
    overlayView.alpha = 0.6
    overlayView.backgroundColor = UIColor.black
    addSubview(overlayView)
}
Just call this method inside your custom view's constructor or inside your ViewController's viewDidLoad().
Walkthrough
First I create a raw overlayView, then a CAShapeLayer which I called "targetMaskLayer". The ultimate goal is to draw a square with the help of UIBezierPath inside that overlayView. After defining the square's dimensions, I set its cgPath as the targetMaskLayer's path.
Now comes an important part:
targetMaskLayer.fillRule = CAShapeLayerFillRule.evenOdd
Here I basically configure the fill rule to exclude the intersection.
Finally, I provide some styling to the overlayView and add it as a subview.
P.S.: don't forget to import UIKit.
There might be another drawing solution, but basically you have four areas that need to be handled: take full-width areas above and below the hole, and add left and right strips between them, constrained to each other.
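A minimal sketch of that four-view approach (my own illustration; view and cropView are assumed names, and the 0.6-alpha dimming color matches the answer above):
let dimColor = UIColor(white: 0, alpha: 0.6)
let top = UIView(), bottom = UIView(), left = UIView(), right = UIView()
for dimView in [top, bottom, left, right] {
    dimView.backgroundColor = dimColor
    dimView.translatesAutoresizingMaskIntoConstraints = false
    view.addSubview(dimView)
}
NSLayoutConstraint.activate([
    // Full-width band above the crop view
    top.topAnchor.constraint(equalTo: view.topAnchor),
    top.leadingAnchor.constraint(equalTo: view.leadingAnchor),
    top.trailingAnchor.constraint(equalTo: view.trailingAnchor),
    top.bottomAnchor.constraint(equalTo: cropView.topAnchor),
    // Full-width band below the crop view
    bottom.topAnchor.constraint(equalTo: cropView.bottomAnchor),
    bottom.leadingAnchor.constraint(equalTo: view.leadingAnchor),
    bottom.trailingAnchor.constraint(equalTo: view.trailingAnchor),
    bottom.bottomAnchor.constraint(equalTo: view.bottomAnchor),
    // Side strips between the two bands
    left.topAnchor.constraint(equalTo: cropView.topAnchor),
    left.bottomAnchor.constraint(equalTo: cropView.bottomAnchor),
    left.leadingAnchor.constraint(equalTo: view.leadingAnchor),
    left.trailingAnchor.constraint(equalTo: cropView.leadingAnchor),
    right.topAnchor.constraint(equalTo: cropView.topAnchor),
    right.bottomAnchor.constraint(equalTo: cropView.bottomAnchor),
    right.leadingAnchor.constraint(equalTo: cropView.trailingAnchor),
    right.trailingAnchor.constraint(equalTo: view.trailingAnchor)
])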
