I was drawing a background image in the Draw method, with several paths (CGPath) over it.
To avoid redrawing the background image every time SetNeedsDisplay is called (whenever a path changes), I've created a layer whose contents is the background image.
This layer is added when the background image is set:
Layer.InsertSublayer(backgroundLayer, 0);
And the paths are drawn in the Draw method:
public override void Draw(RectangleF rect)
{
    base.Draw(rect);

    using (CGContext context = UIGraphics.GetCurrentContext())
    {
        CGPath path = new CGPath();
        // ... build the path and add it to the context ...
        context.DrawPath(CGPathDrawingMode.FillStroke);
    }
}
It seems that the paths are being drawn in the "main" layer, i.e. the superlayer of the background layer I create (this is my assumption; I don't know for sure).
Is it possible to draw the shapes into a different layer? That way I could add a second sublayer to the superlayer and control the stacking order via the sublayers' ZPosition.
Thanks for your help,
L. Pinho
I'm trying to make a simple image eraser tool, where the user can erase and restore parts of an image by drawing on it, just like in this image:
After many attempts and testing, I have achieved sufficient "erase" functionality with the following code on the UI side:
// Drawing code - on user touch.
// `currentPath` is a `UIBezierPath` property of the containing class.
guard let image = pickedImage else { return }

UIGraphicsBeginImageContextWithOptions(imageView.frame.size, false, 0)
if let context = UIGraphicsGetCurrentContext() {
    // Render the current image, then stroke the touch path with the .clear
    // blend mode to punch transparent pixels out of it.
    mainImageView.layer.render(in: context)
    context.addPath(currentPath.cgPath)
    context.setBlendMode(.clear)
    context.setLineWidth(translatedBrushWidth)
    context.setLineCap(.round)
    context.setLineJoin(.round)
    context.setStrokeColor(UIColor.clear.cgColor)
    context.strokePath()

    let capturedImage = UIGraphicsGetImageFromCurrentImageContext()
    imageView.image = capturedImage
}
UIGraphicsEndImageContext()
Upon touch-up, I apply a scale transform to currentPath and re-render the image with the cutout at full size, to preserve UI performance.
What I'm trying to figure out now is how to approach the "restore" functionality. Essentially, the user should draw on the erased parts to reveal the original image.
I've tried looking at CGContextClipToMask but I'm not sure how to approach the implementation.
I've also looked at other approaches to achieving this "erase/restore" effect before rendering the actual images, such as masking a CAShapeLayer over the image, but restoring becomes a problem in that approach as well.
Any help will be greatly appreciated, as well as alternative approaches to erasing and restoring with a path, both at the UI level and at the rendering level.
Thank you!
Yes, I would recommend adding a CALayer to your image's layer as a mask.
You can either make the mask layer a CAShapeLayer and draw geometric shapes into it, or use a simple CALayer as a mask, where the contents property of the mask layer is a CGImage. You'd then draw opaque pixels into the mask to reveal the image contents, or transparent pixels to "erase" the corresponding image pixels.
This approach is hardware accelerated and quite fast.
Handling undo/redo of eraser functions would require you to collect changes to your mask layer as well as the previous state of the mask.
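Here is a minimal sketch of that approach, assuming a UIImageView property named imageView; the brush size and names are illustrative, not from the demo project:

let maskLayer = CALayer()

func updateMask(erasingAt point: CGPoint) {
    let renderer = UIGraphicsImageRenderer(size: imageView.bounds.size)
    let maskImage = renderer.image { ctx in
        // Opaque pixels reveal the image...
        UIColor.black.setFill()
        ctx.fill(imageView.bounds)
        // ...and transparent pixels "erase" it.
        ctx.cgContext.setBlendMode(.clear)
        ctx.cgContext.fillEllipse(in: CGRect(x: point.x - 20, y: point.y - 20, width: 40, height: 40))
    }
    maskLayer.frame = imageView.bounds
    maskLayer.contents = maskImage.cgImage
    imageView.layer.mask = maskLayer
}

A real eraser would accumulate strokes into the mask image rather than rebuilding it from scratch on every touch, but the masking mechanics are the same.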
Edit:
I created a small demo app on GitHub that shows how to use a CGImage as a mask on an image view.
Here is the ReadMe file from that project:
MaskableImageView
This project demonstrates how to use a CALayer to mask a UIView.
It defines a custom subclass of UIImageView, MaskableView.
The MaskableView class has a property maskLayer that contains a CALayer.
MaskableView defines a didSet method on its bounds property so that when the view's bounds change, it resizes the mask layer to match the size of the image view.
The MaskableView has a method installSampleMask which builds an image the same size as the image view, mostly filled with opaque black, but with a small rectangle in the center filled with black at an alpha of 0.7. The translucent center rectangle causes the image view to become partly transparent and show the view underneath.
The demo app installs a couple of subviews into the MaskableView, a sample image of Scampers, one of my dogs, and a UILabel. It also installs an image of a checkerboard under the MaskableView so that you can see the translucent parts more easily.
The MaskableView has properties circleRadius, maskDrawingAlpha, and drawingAction that it uses to let the user erase/un-erase the image by tapping on the view to update the mask.
The MaskableView attaches a UIPanGestureRecognizer and a UITapGestureRecognizer to itself, with an action of gestureRecognizerUpdate. The gestureRecognizerUpdate method takes the tap/drag location from the gesture recognizer and uses it to draw a circle onto the image mask that either decreases the image mask's alpha (to partly erase pixels) or increase the image mask's alpha (to make those pixels more opaque.)
The MaskableView's mask drawing is crude, and only meant for demonstration purposes. It draws a series of discrete circles instead of rendering a path into the mask based on the user's drag gesture. A better solution would be to connect the points from the gesture recognizer and use them to render a smoothed curve into the mask.
The app's screen looks like this:
Edit #2:
If you want to export the resulting image to a file that preserves the transparency, you can convert the CGImage to a UIImage (using the init(cgImage:) initializer) and then use the UIImage function
func pngData() -> Data?
to convert the image to PNG data. That function returns nil if it is unable to convert the image to PNG data.
If it succeeds, you can then save the data to a file with a .png extension.
I updated the sample project to include the ability to save the resulting image to disk.
First I added an image computed property to the MaskableView. That looks like this:
public var image: UIImage? {
    // Renders the view's layer (including its mask) into a UIImage.
    guard let renderer = renderer else { return nil }
    return renderer.image { context in
        layer.render(in: context.cgContext)
    }
}
Then I added a save button to the view controller that fetches the image from the MaskableView and saves it to the app's Documents directory:
@IBAction func handleSaveButton(_ sender: UIButton) {
    print("In handleSaveButton")
    if let image = maskableView.image,
       let pngData = image.pngData() {
        print(image.description)
        // getDocumentsDirectory() is a small helper returning the app's Documents directory URL.
        let imageURL = getDocumentsDirectory().appendingPathComponent("image.png", isDirectory: false)
        do {
            try pngData.write(to: imageURL)
            print("Wrote png to \(imageURL.path)")
        } catch {
            print("Error writing file to \(imageURL.path)")
        }
    }
}
You could also save the image to the user's camera roll. It's been a while since I've done that so I'd have to dig up the steps for that.
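If you do want to go that route, a minimal sketch (assuming the same maskableView as above) is to hand the rendered image to UIImageWriteToSavedPhotosAlbum; note that saving to the photo library requires an NSPhotoLibraryAddUsageDescription entry in Info.plist:

if let image = maskableView.image {
    // nil target/selector: fire-and-forget, with no completion callback.
    UIImageWriteToSavedPhotosAlbum(image, nil, nil, nil)
}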
Say you have this in a UIView,
override func draw(_ rect: CGRect) {
    let c = UIGraphicsGetCurrentContext()
    c?.setLineWidth(10.0)
    c?.move(to: CGPoint(x: 10.0, y: 10.0))
    c?.addLine(to: CGPoint(x: 40.0, y: 40.0))
    // ... lots of complicated drawing here, sundry paths, colors, widths etc ...
    c?.strokePath()
}
Of course, it will draw the hell out of your drawing for you.
But say in the same UIView you do this ...
func setup() { // called from inits, layout
    if thingLayer == nil {
        thingLayer = CAShapeLayer()
        self.layer.insertSublayer(thingLayer, at: 0)
        thingLayer.fillColor = UIColor.yellow.cgColor
    }
    thingLayer.frame = bounds
    let path = ... // some fancy path
    thingLayer.path = path.cgPath
}
Indeed, the new yellow layer is drawn over the drawing in draw#rect.
How do you draw - using core graphics commands - either on to thingLayer, or perhaps on to another layer on top of all??
Core graphics commands:
let c = UIGraphicsGetCurrentContext()
c?.setLineWidth(10.0)
c?.move(to: CGPoint(x: 10.0, y: 10.0))
c?.addLine(to: CGPoint(x: 40.0, y: 40.0))
c?.strokePath()
seem to draw to a place directly above or on the main .layer
(Well that appears to be the case, as far as I can see.)
Surely in draw#rect you can specify which cgLayer to draw to?
In draw#rect, can you make another cgLayer and draw into that cgLayer's context?
Bottom line: draw(_:) renders the backing layer of the UIView subclass itself. If you add sublayers, they will be rendered on top of whatever is done in draw(_:).
If you have a few paths currently stroked in the draw(_:) that you want to render on top of the sublayer you've added, you have a few options:
- Move the paths into their own CAShapeLayer instances and add them above the other sublayer you've already added (see the sketch after this list).
- Treat the main UIView subclass as a container/content view, and add your own private UIView subviews (potentially with the "complicated" draw(_:) methods). You don't have to expose these private subviews if you don't want to.
- Move the complicated drawing to the draw(in:) of a CALayer subclass and, again, add that sublayer above the existing sublayer you've already created.
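A minimal sketch of the first option, with illustrative names (fancyPath stands in for whatever you currently stroke in draw(_:)):

let strokeLayer = CAShapeLayer()
strokeLayer.path = fancyPath.cgPath
strokeLayer.strokeColor = UIColor.black.cgColor
strokeLayer.lineWidth = 10
strokeLayer.fillColor = nil   // stroke only, no fill
layer.insertSublayer(strokeLayer, above: thingLayer)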
Given that the thingLayer is, effectively, only setting the background color, you could just set the background color of the view and not use a layer for this background color at all.
The CALayer hierarchy is a tree, where a root layer comes first (e.g. the layer that forms the background of a window), followed by its sublayers (an array of CALayer objects, stored and drawn in back-to-front order), followed by their sublayers, and so on, recursively.
Your implementation of UIView.draw(_ rect: CGRect) defines how the main layer of your view (self.layer) is drawn.
By adding your CAShapeLayer as a sublayer of self.layer, you're making it draw after self.layer, which has the effect of it being drawn above self.layer, and self.layer is what contains the line you drew in UIView.draw(_ rect: CGRect).
To resolve this, you need to put your stroke in a sublayer inserted above your thingLayer:
self.layer.insertSublayer(thingLayer, at: 0)
self.layer.insertSublayer(lineLayer, above: thingLayer)
By the way, you forgot to call super.draw(rect). It's not necessary for direct subclasses of UIView (since the base implementation doesn't do anything), but it is necessary in every other case, and it's generally a good habit to get into (lest you run into really obscure/frustrating drawing bugs).
If your yellow shape layer doesn't change or move around independently, you could get rid of the CAShapeLayer and just draw the shape yourself at the top of your implementation of draw(_:):
let path = // your fancy UIBezierPath
UIColor.yellow.setFill()
path.fill()
Otherwise, as other commenters have said, you'll need to move your drawing code into additional custom subclasses of UIView or CALayer, and stack them in the order you want.
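For the CALayer-subclass route, a minimal sketch might look like this (StrokeLayer is an illustrative name, and the drawing is the example from the question):

class StrokeLayer: CALayer {
    override func draw(in ctx: CGContext) {
        // Core Graphics commands here target this layer's own backing store.
        ctx.setLineWidth(10.0)
        ctx.move(to: CGPoint(x: 10.0, y: 10.0))
        ctx.addLine(to: CGPoint(x: 40.0, y: 40.0))
        ctx.strokePath()
    }
}

// In the view's setup:
let strokeLayer = StrokeLayer()
strokeLayer.frame = bounds
strokeLayer.contentsScale = UIScreen.main.scale   // keep strokes crisp on Retina screens
layer.insertSublayer(strokeLayer, above: thingLayer)
strokeLayer.setNeedsDisplay()                     // triggers draw(in:)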
I'm implementing a drawing function for an iPad app. I'm using UIBezierPaths on CAShapeLayers for the drawing. By creating a new CAShapeLayer for each touchesBegan event, I'm building an array of 'stacked up' CAShapeLayers that lets me easily implement undo and redo by popping and pushing layers to and from the array. I'm also doing some interesting layer blending techniques using CAShapeLayer compositingFilters. That's all working very well. My challenge is erasing.
I'm attempting to create a second array of CAShapeLayers and use them to mask the first group. I'm able to ADD TO the mask group using the same technique from above while drawing with an opaque color but I am not able to remove opaque areas from the mask group.
I thought I would be able to start the masking technique with a base layer that was opaque (black, white, or whatever). Then I had hoped to draw UIBezierPaths with UIColor.clear.cgColor and combine or composite the drawn clear path with the underlying opaque base mask. This, in effect, should "erase" that area of the mask and hide the stacked-up CAShapeLayers that I draw into. I didn't want to flatten the mask layers into an image, because I would lose the ability to easily undo and redo by popping and pushing on the mask array.
I've included some pseudo code below. Any pointers, help, or strategies for a solution would be much appreciated! I've been working on this for a number of weeks and I'm really stumped; I can't find any info on the strategy I'm working toward. Also, if I'm approaching the drawing functionality incorrectly from the start and there's an easier way to draw while maintaining simple undo/redo and adding erase, please let me know. I'm totally open to adjusting my approach! Thanks in advance for any assistance.
// Set up the layer hierarchy for drawing.
private func setupView() {
    self.mainDrawLayer = CAShapeLayer()
    self.mainDrawLayer.backgroundColor = UIColor.clear.cgColor
    self.layer.addSublayer(self.mainDrawLayer)

    // Set up the mask; add an opaque background so everything shows through to start.
    self.maskLayer = CALayer()
    let p = UIBezierPath(rect: self.bounds)
    self.maskShapeLayer = CAShapeLayer()
    self.maskShapeLayer?.fillColor = UIColor.black.cgColor
    self.maskShapeLayer?.path = p.cgPath
    self.maskLayer?.addSublayer(self.maskShapeLayer!)

    // Apply the mask.
    self.layer.mask = self.maskLayer
}
override public func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    super.touchesBegan(touches, with: event)
    guard let touch = touches.first else {
        return
    }
    var erasing = false

    // Set up the currentDrawLayer, which captures the bezier path.
    self.currentDrawLayer = CAShapeLayer()
    self.currentDrawLayer?.lineCap = .round
    self.currentDrawLayer?.fillColor = nil

    // Set the ink color to use for drawing.
    if let ink = UserDefaults.standard.string(forKey: "ink") {
        self.currentDrawLayer?.strokeColor = UIColor(hex: ink)?.cgColor
    } else {
        self.currentDrawLayer?.strokeColor = UIColor(hex: Constants.inkBlack3)?.cgColor
    }

    if UserDefaults.standard.string(forKey: "ink") == Constants.inkBlack5 {
        // Removing the filter makes white overlay other colors.
        // This is essentially erasing with a white background.
        self.currentDrawLayer?.compositingFilter = nil
    } else {
        // This blend mode adds a whole different feeling to the drawing!
        self.currentDrawLayer?.compositingFilter = "darkenBlendMode"
    }

    // THIS IS THE ERASER COLOR!
    if UserDefaults.standard.string(forKey: "ink") == Constants.inkBlack4 {
        // Trying erasing via drawing with clear.
        self.currentDrawLayer?.strokeColor = UIColor.clear.cgColor
        // Is there a compositingFilter available to 'combine' my clear color
        // with the black opaque mask layer created above?
        self.currentDrawLayer?.compositingFilter = "iDontHaveADamnClueIfTheresOneThatWillWorkIveTriedThemAllItSeems:)"
        erasing = true
    }

    self.currentDrawLayer?.path = self.mainDrawPath.cgPath

    if erasing {
        // Add the layer to the masks.
        self.maskLayer!.addSublayer(self.currentDrawLayer!)
    } else {
        // Add the layer to the drawings.
        self.layer.addSublayer(self.currentDrawLayer!)
    }

    let location = touch.location(in: self)
    self.ctr = 0
    self.pts[0] = location
}
I've only been able to accomplish something like this by appending to the path of the mask's shape layer.
CALayer *penStroke = ...; // the layer that has all the pen strokes

// The mask path is the view's (reversed) bounds plus the appended eraser rects;
// with the even-odd fill rule, each eraser rect becomes a hole in the mask.
UIBezierPath *eraserStroke = [[UIBezierPath bezierPathWithRect:[self bounds]] bezierPathByReversingPath];
[eraserStroke appendPath:[UIBezierPath bezierPathWithRect:CGRectMake(110, 110, 200, 30)]];

CAShapeLayer *maskFill = [CAShapeLayer layer];
maskFill.path = [eraserStroke CGPath];
maskFill.fillRule = kCAFillRuleEvenOdd;
maskFill.fillColor = [[UIColor blackColor] CGColor];
maskFill.backgroundColor = [[UIColor clearColor] CGColor];
penStroke.mask = maskFill;
the above will mask all pen strokes by all eraser strokes. But a drawing app would want to be able to draw over previous eraser strokes. I believe that'd need to be handled by continually wrapping existing penStroke layer with another new penStroke layer, and always adding penStrokes to the outermost layer.
That would give something like:
- Layer C: most recent pen strokes
  - mask: no mask, so that all strokes are pen strokes
- Layer B: sublayer of Layer C, contains some previous pen strokes
  - Layer B's mask: most recent eraser strokes
- Layer A: sublayer of Layer B, contains pen strokes before the most recent eraser strokes
  - Layer A's mask: eraser strokes before Layer B's pen strokes
Hopefully that makes sense. Basically, it'd be nesting layers/masks. Every time the user switches from pen -> eraser or eraser -> pen, a new layer would get wrapped around all the existing pen/eraser stroke layers.
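A minimal sketch of that wrapping step, with illustrative names (hostLayer, outermost), not code from the question:

var outermost = CALayer()   // holds all pen strokes so far, masked by eraser strokes

func userSwitchedEraserToPen(in hostLayer: CALayer) {
    // Wrap everything drawn so far in a fresh layer; new pen strokes get added
    // to the wrapper, so the earlier eraser mask no longer affects them.
    let wrapper = CALayer()
    wrapper.frame = hostLayer.bounds
    outermost.removeFromSuperlayer()
    wrapper.addSublayer(outermost)
    hostLayer.addSublayer(wrapper)
    outermost = wrapper
}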
I'm trying to use a finger tap or drag to erase part of a UIImageView.
Here's what I have so far:
let panErase = UIPanGestureRecognizer(target: self, action: #selector(erase(_:)))
let tapErase = UITapGestureRecognizer(target: self, action: #selector(erase(_:)))
imageBeingEdited.addGestureRecognizer(panErase)
imageBeingEdited.addGestureRecognizer(tapErase)
I'm not quite sure how to debug graphics context modifications, but this erases the whole image:
let erasurePoint = gesture.location(in: imageBeingEdited)
print("\(erasurePoint.x) \(erasurePoint.y)")
let image: UIImage = imageBeingEdited.image!
UIGraphicsBeginImageContext(image.size)
let g = UIGraphicsGetCurrentContext()
g?.beginPath()
g?.addEllipse(in: CGRect(x: erasurePoint.x, y: erasurePoint.y, width: 5, height: 5))
g?.clip(using: .evenOdd)   // everything drawn after this is clipped to the ellipse
image.draw(at: .zero)
imageBeingEdited.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()
The goal is to erase a circle at the tap location. What did I miss here? It looks like the image is cropped into a 5x5 circle, not necessarily at the tapped point.
It looks to me like the code you've posted would result in clipping your image to a 5x5 ellipse, exactly as you describe.
Did you write this code, or copy it from somewhere else?
It needs to be rearranged so it first draws the image into the context, then draws your ellipse using a clear color and the drawing mode where the alpha of the source pixels is written to the destination. Don't muck around with the context's clipping path at all.
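A minimal sketch of that rearrangement, reusing the names from the question (the 5x5 ellipse is kept from the original; a real brush would be larger and centered on the touch):

UIGraphicsBeginImageContext(image.size)
image.draw(at: .zero)                 // 1. draw the image first
let g = UIGraphicsGetCurrentContext()
g?.setBlendMode(.clear)               // 2. then write transparent pixels over it
g?.fillEllipse(in: CGRect(x: erasurePoint.x, y: erasurePoint.y, width: 5, height: 5))
imageBeingEdited.image = UIGraphicsGetImageFromCurrentImageContext()
UIGraphicsEndImageContext()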
This doesn't look like very efficient code to me. On every change in the pan gesture you're creating a context, drawing an image into it, and then copying out a new image. Then, presumably, you're drawing the resulting image. That's not likely to be fast enough to keep up with the user's pan gesture.
Instead, I would probably add a CAShapeLayer as a mask layer to my image view's layer and modify that mask layer's path, appending an ellipse to the mask path for each point the user touches. Even that might not be fast enough for smooth drawing; you might need code that interpolates between the beginning and end touch positions and fills the whole segment.
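A minimal sketch of that mask-layer idea, with an assumed brush radius of 10 points:

let eraseMask = CAShapeLayer()
let maskPath = UIBezierPath(rect: imageBeingEdited.bounds)   // fully visible to start

func erase(at point: CGPoint) {
    // Each appended ellipse becomes a hole in the otherwise-opaque mask,
    // thanks to the even-odd fill rule.
    maskPath.append(UIBezierPath(ovalIn: CGRect(x: point.x - 10, y: point.y - 10, width: 20, height: 20)))
    eraseMask.fillRule = .evenOdd
    eraseMask.path = maskPath.cgPath
    imageBeingEdited.layer.mask = eraseMask
}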
Given:
- a CGContextRef (ctx) with frame {0,0,100,100}
- a rect (r) with frame {25,25,50,50}
It's easy to clip the context to that rect:
CGContextClipToRect(ctx, r);
to mask out the red area below (red == mask):
But I want to invert this clipping rect to convert it into a clipping mask. The desired outcome is to mask the red portion below (red == mask):
I want to do this programmatically at runtime.
I do not want to manually prepare a bitmap image to ship statically with my app.
Given ctx and r, how can this be done at runtime most easily/straightforwardly?
Read about fill rules in the “Filling a Path” section of the Quartz 2D Programming Guide.
In your case, the easiest thing to do is use the even-odd fill rule. Create a path consisting of your small rectangle, and a much larger rectangle:
CGContextBeginPath(ctx);
CGContextAddRect(ctx, CGRectMake(25,25,50,50));
CGContextAddRect(ctx, CGRectInfinite);
Then, intersect this path into the clipping path using the even-odd fill rule:
CGContextEOClip(ctx);
You could also clip the context with CGContextClipToRects(), passing rects that make up the red frame you want.
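For instance, in Swift (where CGContextClipToRects is the clip(to:) method on CGContext), the four strips around r would be:

let b = CGRect(x: 0, y: 0, width: 100, height: 100)   // the context's bounds
ctx.clip(to: [
    CGRect(x: 0, y: 0, width: b.width, height: r.minY),                      // above r
    CGRect(x: 0, y: r.maxY, width: b.width, height: b.height - r.maxY),      // below r
    CGRect(x: 0, y: r.minY, width: r.minX, height: r.height),                // left of r
    CGRect(x: r.maxX, y: r.minY, width: b.width - r.maxX, height: r.height)  // right of r
])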
Can you just do all your painting as normal, and then do:
CGContextClearRect(ctx, r);
after everything has been done?
Here is a helpful extension for implementing rob's answer:
extension UIBezierPath {
    func addClipInverse() {
        // Combine this path with an infinite rect; under the even-odd rule,
        // the clip region becomes everything *outside* this path.
        let paths = UIBezierPath()
        paths.append(self)
        paths.append(.init(rect: .infinite))
        paths.usesEvenOddFillRule = true
        paths.addClip()
    }
}
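Usage, assuming a current graphics context and the rect r from the question:

UIBezierPath(rect: r).addClipInverse()
// Subsequent drawing is confined to the area outside r.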