Reverse a CALayer mask - iOS

I am trying to use a CALayer with an image as its contents for masking a UIView. For the mask I have a complex PNG image. If I apply the image as view.layer.mask I get the opposite of the behaviour I want.
Is there a way to reverse the CALayer? Here is my code:
layerMask = CALayer()
guard let layerMask = layerMask else { return }
layerMask.contents = #imageLiteral(resourceName: "mask").cgImage
view.layer.mask = layerMask
// What I would like to do is
view.layer.mask = layerMask.inverse // <---
I have seen several posts on reversing CAShapeLayers and mutable paths, but nothing on reversing a plain CALayer.
What I could do is invert the image in Photoshop so that the alpha is flipped, but the problem with that is that I can't produce an image at the exact size needed for every screen size. I hope that makes sense.

What I would do is construct the mask in real time. This is easy if you have a black image of the logo. Using standard techniques, you can draw the logo image into an image that you construct in real time, so that you are in charge of the size of the image and the size and placement of the logo within it. Using a "Mask To Alpha" CIFilter, you can then convert the black to transparent for use as a layer mask.
So, to illustrate. Here's the background image: this is what we want to see wherever we punch a hole in the foreground:
Here's the foreground image, lying on top of the background and completely hiding it:
Here's the logo, in black (ignore the grey, which represents transparency):
Here's the logo drawn in code into a white background of the correct size:
And finally, here's that same image converted into a mask with the Mask To Alpha CIFilter and attached to the foreground image view as its mask:
Okay, I could have chosen my images a little better, but this is what I had lying around. You can see that wherever there was black in the logo, we are punching a hole in the foreground image and seeing the background image, which I believe is exactly what you said you wanted to do.
The key step is the last one, namely the conversion of the black-on-white image of the logo (im) to a mask; here's how I did that:
// Wrap the black-on-white logo drawing (im) in a CIImage
let cim = CIImage(image: im)
// CIMaskToAlpha maps gray values to alpha: black becomes transparent, white opaque
let filter = CIFilter(name: "CIMaskToAlpha")!
filter.setValue(cim, forKey: "inputImage")
let out = filter.outputImage!
let cgim = CIContext().createCGImage(out, from: out.extent)
// Attach the result to the foreground image view as its mask
let lay = CALayer()
lay.frame = self.iv.bounds
lay.contents = cgim
self.iv.layer.mask = lay
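The earlier step, drawing the logo into a white background to produce im, isn't shown above. Here's a minimal sketch of how it might look, assuming a black-on-transparent logo image; blackOnWhiteImage, maskSize, and logoFrame are my own names, not from the answer:
import UIKit

func blackOnWhiteImage(logo: UIImage, maskSize: CGSize, logoFrame: CGRect) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: maskSize)
    return renderer.image { ctx in
        // White pixels survive CIMaskToAlpha as opaque (visible) mask pixels
        UIColor.white.setFill()
        ctx.fill(CGRect(origin: .zero, size: maskSize))
        // Black logo pixels become transparent, punching the hole
        logo.draw(in: logoFrame)
    }
}
Because you control maskSize, the same logo asset can produce a correctly sized mask for any screen.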

If you're using a CALayer as a mask for another CALayer, you can invert the mask by creating a large opaque layer and subtracting out the mask shape with the xor blend mode.
For example, this code subtracts a given layer from a large opaque layer to create an inverted mask layer:
// Create a large opaque layer to serve as the inverted mask
let largeOpaqueLayer = CALayer()
largeOpaqueLayer.bounds = .veryLargeRect
largeOpaqueLayer.backgroundColor = UIColor.black.cgColor
// Subtract out the mask shape using the `xor` blend mode
let maskLayer = ...
largeOpaqueLayer.addSublayer(maskLayer)
maskLayer.compositingFilter = "xor"
Then you can use that layer as the mask for some other CALayer. For example, here I'm using it as the mask of a small blue rectangle:
smallBlueRectangle.mask = largeOpaqueLayer
So you can see the mask is inverted! On the other hand, if you just use the un-inverted maskLayer directly as a mask, you can see the mask is not inverted:
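Putting the pieces together, here's a fuller sketch; the sizes, the circular maskLayer, and someLayer are my own illustration, not from the answer:
// Opaque base layer; everything it covers is visible by default
let largeOpaqueLayer = CALayer()
largeOpaqueLayer.frame = CGRect(x: 0, y: 0, width: 2000, height: 2000)
largeOpaqueLayer.backgroundColor = UIColor.black.cgColor

// The shape to subtract: here, a filled circle
let maskLayer = CAShapeLayer()
maskLayer.path = UIBezierPath(ovalIn: CGRect(x: 40, y: 40, width: 120, height: 120)).cgPath
maskLayer.fillColor = UIColor.black.cgColor

// xor knocks the circle out of the opaque layer, inverting the mask
largeOpaqueLayer.addSublayer(maskLayer)
maskLayer.compositingFilter = "xor"

// Everything except the circle is now visible
someLayer.mask = largeOpaqueLayer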

Related

Achieving erase/restore drawing on UIImage in Swift

I'm trying to make a simple image eraser tool, where the user can erase and restore parts of an image by drawing on it, just like in this image:
After many attempts and testing, I have achieved sufficient "erase" functionality with the following code on the UI side:
// Drawing code - on user touch
// `currentPath` is a `UIBezierPath` property of the containing class.
guard let image = pickedImage else { return }
UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0)
if let context = UIGraphicsGetCurrentContext() {
    mainImageView.layer.render(in: context)
    context.addPath(currentPath.cgPath)
    context.setBlendMode(.clear)
    context.setLineWidth(translatedBrushWidth)
    context.setLineCap(.round)
    context.setLineJoin(.round)
    context.setStrokeColor(UIColor.clear.cgColor)
    context.strokePath()
    let capturedImage = UIGraphicsGetImageFromCurrentImageContext()
    imageView.image = capturedImage
}
UIGraphicsEndImageContext()
Upon user touch-up, I apply a scale transform to currentPath so that the image with the cutout part can be rendered at full size while preserving UI performance.
What I'm trying to figure out now is how to approach the "restore" functionality. Essentially, the user should draw on the erased parts to reveal the original image.
I've tried looking at CGContextClipToMask but I'm not sure how to approach the implementation.
I've also looked at other approaches to achieving this "erase/restore" effect before rendering the actual images, such as masking a CAShapeLayer over the image but also in this approach restoring becomes a problem.
Any help will be greatly appreciated, as well as alternative approaches to erase and restore with a path on the UI-level and rendering level.
Thank you!
Yes, I would recommend adding a CALayer to your image's layer as a mask.
You can either make the mask layer a CAShapeLayer and draw geometric shapes into it, or use a simple CALayer as a mask, where the contents property of the mask layer is a CGImage. You'd then draw opaque pixels into the mask to reveal the image contents, or transparent pixels to "erase" the corresponding image pixels.
This approach is hardware accelerated and quite fast.
Handling undo/redo of eraser functions would require you to collect changes to your mask layer as well as the previous state of the mask.
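For the "restore" part specifically, one way to approach it is to keep the mask's current state as a UIImage and re-stroke the user's path into it with different blend modes. A rough sketch, not the poster's code; the function name and brush width are mine:
import UIKit

// `maskImage` holds the mask's current state (fully opaque to start);
// `path` is the user's stroke in mask coordinates.
func updatedMask(from maskImage: UIImage, stroking path: UIBezierPath, erase: Bool) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: maskImage.size)
    return renderer.image { ctx in
        maskImage.draw(at: .zero)              // start from the previous state
        ctx.cgContext.addPath(path.cgPath)
        ctx.cgContext.setLineWidth(40)         // brush width: illustrative
        ctx.cgContext.setLineCap(.round)
        // .clear writes alpha 0 (erase); .normal writes opaque black (restore)
        ctx.cgContext.setBlendMode(erase ? .clear : .normal)
        ctx.cgContext.setStrokeColor(UIColor.black.cgColor)
        ctx.cgContext.strokePath()
    }
}
You would then assign the result to the mask layer, e.g. maskLayer.contents = updatedMask(from: mask, stroking: currentPath, erase: false).cgImage.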
Edit:
I created a small demo app on GitHub that shows how to use a CGImage as a mask on an image view.
Here is the ReadMe file from that project:
MaskableImageView
This project demonstrates how to use a CALayer to mask a UIView.
It defines a custom subclass of UIImageView, MaskableView.
The MaskableView class has a property maskLayer that contains a CALayer.
MaskableView defines a didSet method on its bounds property so that when the view's bounds change, it resizes the mask layer to match the size of the image view.
The MaskableView has a method installSampleMask which builds an image the same size as the image view, mostly filled with opaque black, but with a small rectangle in the center filled with black at an alpha of 0.7. The translucent center rectangle causes the image view to become partly transparent and show the view underneath.
The demo app installs a couple of subviews into the MaskableView, a sample image of Scampers, one of my dogs, and a UILabel. It also installs an image of a checkerboard under the MaskableView so that you can see the translucent parts more easily.
The MaskableView has properties circleRadius, maskDrawingAlpha, and drawingAction that it uses to let the user erase/un-erase the image by tapping on the view to update the mask.
The MaskableView attaches a UIPanGestureRecognizer and a UITapGestureRecognizer to itself, with an action of gestureRecognizerUpdate. The gestureRecognizerUpdate method takes the tap/drag location from the gesture recognizer and uses it to draw a circle onto the image mask that either decreases the image mask's alpha (to partly erase pixels) or increases the image mask's alpha (to make those pixels more opaque).
The MaskableView's mask drawing is crude, and only meant for demonstration purposes. It draws a series of discrete circles instead of rendering a path into the mask based on the user's drag gesture. A better solution would be to connect the points from the gesture recognizer and use them to render a smoothed curve into the mask.
The app's screen looks like this:
Edit #2:
If you want to export the resulting image to a file that preserves the transparency, you can convert the CGImage to a UIImage (using the init(cgImage:) initializer) and then use the UIImage function
func pngData() -> Data?
to convert the image to PNG data. That function returns nil if it is unable to convert the image to PNG data.
If it succeeds, you can then save the data to a file with a .png extension.
I updated the sample project to include the ability to save the resulting image to disk.
First I added an image computed property to the MaskableView. That looks like this:
public var image: UIImage? {
    guard let renderer = renderer else { return nil }
    let result = renderer.image { context in
        return layer.render(in: context.cgContext)
    }
    return result
}
Then I added a save button to the view controller that fetches the image from the MaskableView and saves it to the app's Documents directory:
@IBAction func handleSaveButton(_ sender: UIButton) {
    print("In handleSaveButton")
    if let image = maskableView.image,
       let pngData = image.pngData() {
        print(image.description)
        let imageURL = getDocumentsDirectory().appendingPathComponent("image.png", isDirectory: false)
        do {
            try pngData.write(to: imageURL)
            print("Wrote png to \(imageURL.path)")
        } catch {
            print("Error writing file to \(imageURL.path)")
        }
    }
}
You could also save the image to the user's camera roll. It's been a while since I've done that so I'd have to dig up the steps for that.
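For the camera-roll route, the core call is UIImageWriteToSavedPhotosAlbum; a minimal sketch (note that your app needs an NSPhotoLibraryAddUsageDescription entry in its Info.plist):
if let image = maskableView.image {
    // nil target/selector means we don't get a completion callback
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}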

iOS layer mask from UIImage/CGImage?

I have the following image:
On a UIImageView of the exact same frame size, I want to show everything but the red fill.
let mask = CALayer()
mask.contents = clippingImage.CGImage
self.myImageView.layer.mask = mask
I thought the black color would show through when applied as a mask, but when I set the mask the whole view is cleared. What's happening here?
When creating masks from images, what matters is the alpha channel, not any RGB channel. Even if your mask is black, you need to set its alpha value to 0 wherever you want to hide pixels, as the mask pays attention only to the alpha channel. And by default black is [0, 0, 0, 255] in terms of RGBA. If you load an RGB image, it will of course be converted to RGBA with A = 1 everywhere.
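To make that concrete, here's a sketch of building a mask whose alpha channel does the work; hiddenRegion is a hypothetical CGRect standing in for wherever the red fill is:
let renderer = UIGraphicsImageRenderer(size: myImageView.bounds.size)
let maskImage = renderer.image { ctx in
    UIColor.black.setFill()                // any opaque color reveals pixels
    ctx.fill(myImageView.bounds)
    ctx.cgContext.setBlendMode(.clear)     // alpha 0 hides pixels
    ctx.cgContext.fill(hiddenRegion)       // hypothetical rect covering the red area
}
let mask = CALayer()
mask.frame = myImageView.bounds
mask.contents = maskImage.cgImage
myImageView.layer.mask = mask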

Swift Progress Indicator Image Mask

To start, this project has been built using Swift.
I want to create a custom progress indicator that "fills up" as the script runs. The script will call a JSON feed that is pulled from a remote server.
To better visualize what I'm after, I made this:
My guess would be to have two PNG images; one white and one red, and then simply do some masking based on the progress amount.
Any thoughts on this?
Masking is probably overkill for this. Just redraw the image each time. When you do, you draw the red rectangle to fill the lower portion of the drawing, to whatever height you want it; then you draw the droplet image (a PNG), which has transparency in the middle so the red rectangle shows through. So one PNG is enough, because the red rectangle can be drawn "live" each time you redraw.
I liked your drawing so much that I wanted to bring it to life, so here's my working code (my PNG is called tear.png and iv is a UIImageView in my interface; percent should be a CGFloat between 0 and 1):
func redraw(percent: CGFloat) {
    guard let tear = UIImage(named: "tear") else { return }
    let sz = tear.size
    // Top edge of the red fill: percent == 1 reaches the top of the image
    let top = sz.height * (1 - percent)
    UIGraphicsBeginImageContextWithOptions(sz, false, 0)
    // Fill from `top` down to the bottom with red...
    UIColor.red.setFill()
    UIGraphicsGetCurrentContext()?.fill(CGRect(x: 0, y: top, width: sz.width, height: sz.height))
    // ...then draw the teardrop PNG on top; its transparent middle
    // lets the red show through
    tear.draw(at: .zero)
    self.iv.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}
I also hooked up a UISlider whose action method converts its value to a CGFloat and calls that method, so that moving the slider back and forth moves the red fill up and down in the teardrop. I could play with this for hours!
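The slider hookup isn't shown in the answer; here's a minimal guess, assuming the slider's range is left at its default 0...1:
@IBAction func sliderMoved(_ sender: UISlider) {
    redraw(percent: CGFloat(sender.value))
}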

Image masking is not working properly, the output image is black-and-white, it should be colourful

I'm trying to do image masking; here are my output and code.
This is my masking reference image (image size doesn't matter), mask.png:
This is the image on which I'm performing masking, imggo.png:
This is the output.
I'm using Swift. Here is my code:
override func viewDidLoad() {
    super.viewDidLoad()
    var maskRefImg : UIImage? = UIImage(named: "mask.png")
    var maskImg : UIImage? = UIImage(named: "imggo.png")
    var imgView: UIImageView? = UIImageView(frame: CGRectMake(20, 50, 99, 99))
    imgView?.image = maskImage(maskRefImg!, maskImage: maskImg!)
    self.view.addSubview(imgView!)
}

func maskImage(image: UIImage, maskImage: UIImage) -> UIImage {
    var maskRef: CGImageRef = maskImage.CGImage
    var mask: CGImageRef = CGImageMaskCreate(CGImageGetWidth(maskRef),
                                             CGImageGetHeight(maskRef),
                                             CGImageGetBitsPerComponent(maskRef),
                                             CGImageGetBitsPerPixel(maskRef),
                                             CGImageGetBytesPerRow(maskRef),
                                             CGImageGetDataProvider(maskRef),
                                             nil, true)
    var masked: CGImageRef = CGImageCreateWithMask(image.CGImage, mask)
    return UIImage(CGImage: masked)!
}
So, how can I make the Go! image colourful? Could anyone provide code?
You are calling the function maskImage with the wrong order of arguments:
The maskImage function wants the image to mask first, and then the mask. But when you call maskImage(maskRefImg!, maskImage: maskImg!) you have it exactly swapped.
So you need to call maskImage(maskImg!, maskImage: maskRefImg!)
I'm guessing that what you want to have is the tilted rectangle with the word "Go!" and that the result image should be exactly the same size as the mask image.
When you swap the images (as you must), the mask image is scaled to the "Go!" image size. But instead, you probably want the mask image centered over your "Go!" image. So you need to create a new image with the same size as your "Go!" image and draw the mask centered into that temporary image. You then use the temporary image as the actual mask to apply.
The example image when you swap the arguments also shows that the "outside" is green, too. This is probably because your mask image is transparent there and CGImageMaskCreate converts transparency to black. But the documentation of CGImageCreateWithMask basically tells you that the created image will blend the "Go!" image so that parts where your mask image is black will have the "Go!" image visible, and where your mask image is white it will be transparent.
The step-by-step instructions thus are:
1. Create a new, temporary image that is of the same size as your input image (the "Go!" image).
2. Fill it with white.
3. Draw your mask image centered into the temporary image.
4. Create the actual mask by calling CGImageMaskCreate with the temporary image.
5. Call CGImageCreateWithMask with the "Go!" image as first argument and the actual mask we've just created as second argument.
The result might be too big (have a lot of transparency surrounding it). If you don't want that, you need to crop the result image (e.g. to the size of your original mask image; make sure to crop to the center).
You can probably skip the CGImageMaskCreate part (step 4) if you create the temporary image directly in the DeviceGray color space, since CGImageCreateWithMask also accepts a grayscale image in that color space as its second argument. In that case, I suggest you modify your mask.png so it does not contain any transparency: it should be white where it's transparent now.
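Here's a sketch of those steps in current Swift, following the DeviceGray shortcut from the last paragraph. goImage and maskPNG are stand-ins for imggo.png and mask.png, and details such as scale handling and the final crop are left out:
import UIKit

func maskedGoImage(goImage: UIImage, maskPNG: UIImage) -> UIImage? {
    guard let goCG = goImage.cgImage, let maskCG = maskPNG.cgImage else { return nil }
    let width = goCG.width, height = goCG.height
    // Steps 1-3: a white, gray-colorspace canvas with the mask drawn centered
    guard let ctx = CGContext(data: nil, width: width, height: height,
                              bitsPerComponent: 8, bytesPerRow: 0,
                              space: CGColorSpaceCreateDeviceGray(),
                              bitmapInfo: CGImageAlphaInfo.none.rawValue)
    else { return nil }
    ctx.setFillColor(gray: 1, alpha: 1)    // white = ends up transparent
    ctx.fill(CGRect(x: 0, y: 0, width: width, height: height))
    ctx.draw(maskCG, in: CGRect(x: (width - maskCG.width) / 2,
                                y: (height - maskCG.height) / 2,
                                width: maskCG.width, height: maskCG.height))
    // Steps 4-5 combined: a DeviceGray image can be used as the mask directly
    guard let grayMask = ctx.makeImage(),
          let masked = goCG.masking(grayMask) else { return nil }
    return UIImage(cgImage: masked)
}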

SKCropNode making masked image disappear entirely

I have an image I want to partially mask (wallSprite), an image to act as a mask over it (wallMaskBox), and a node to hold both (wallCropNode). When I simply add both images as children of wallCropNode, both images display correctly:
var wallSprite = SKSpriteNode(imageNamed: "wall.png")
var wallCropNode = SKCropNode()
var wallMaskBox = SKSpriteNode(imageNamed: "blacksquaretiny.png")
wallMaskBox.zPosition = 100
wallCropNode.addChild(wallSprite)
wallCropNode.addChild(wallMaskBox)
gameplayContainerNode.addChild(wallCropNode)
But when I set the mask image as a maskNode property of the crop node:
var wallSprite = SKSpriteNode(imageNamed: "wall.png")
var wallCropNode = SKCropNode()
var wallMaskBox = SKSpriteNode(imageNamed: "blacksquaretiny.png")
wallMaskBox.zPosition = 100
wallCropNode.addChild(wallSprite)
wallCropNode.maskNode = wallMaskBox
gameplayContainerNode.addChild(wallCropNode)
the wallSprite image disappears entirely, instead of being partly cropped. Any ideas?
The issue is your black square image is completely opaque. Some (or all) of its pixels should be transparent (i.e., alpha = 0). The pixels that correspond to the mask node's transparent pixels will be masked out (i.e., not rendered) in the cropped node. To demonstrate this, I used your code to create the following.
Here's the original image:
Here's the mask image that I used for the maskNode. Note that the white regions are transparent (i.e., alpha = 0). From Apple's documentation:
When rendering its children, each pixel is verified against the corresponding pixel in the mask. If the pixel in the mask has an alpha value of less than 0.05, the image pixel is masked out. Any pixel not rendered by the mask node is automatically masked out.
and here's the cropped node. I took a screenshot of the scene from the iPhone 6 simulator.
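For reference, here's a sketch of my own (not from the original answer) of producing a mask sprite that actually contains transparent pixels, by punching a clear circular hole into an opaque rectangle:
import SpriteKit

// Draw an opaque rectangle with a clear circular hole
let maskSize = CGSize(width: 200, height: 200)
let maskImage = UIGraphicsImageRenderer(size: maskSize).image { ctx in
    UIColor.black.setFill()
    ctx.fill(CGRect(origin: .zero, size: maskSize))
    ctx.cgContext.setBlendMode(.clear)     // alpha 0 here = masked out
    ctx.cgContext.fillEllipse(in: CGRect(x: 50, y: 50, width: 100, height: 100))
}

// The wall shows everywhere the mask is opaque, i.e. outside the hole
let wallCropNode = SKCropNode()
wallCropNode.maskNode = SKSpriteNode(texture: SKTexture(image: maskImage))
wallCropNode.addChild(SKSpriteNode(imageNamed: "wall.png"))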
