Erasing a stroke path with Quartz2D is deleting more than just the path - iOS

I'm trying to build an eraser tool for a drawing app. I've been following this tutorial for the basics, but there the drawings happen on a white background, so erasing isn't covered (they simply draw in white to "delete").
I've been implementing a method to erase my drawings, and it works pretty well. I draw a circle, set the color and the blend mode to clear, and stroke the path where I want that circle. Then UIGraphicsGetImageFromCurrentImageContext() returns an image that updates my current image, with the new drawing over it.
The drawn circle does its job and erases wherever it's drawn. The problem is that the image also begins to get erased from right to left, across its full height, and this is certainly not set anywhere in my code.
I don't know if it's a bug.
I've tried everything and I can't get a new image from the context without this extra erased strip on the right. And the more I draw, the more the strip grows.
As you can see in the gif, it looks like I'm drawing in gray over a white background, but the white is actually an image that has been filled with white, and its background color is gray, so I'm really erasing.
The code:
In a UIView (blue background) I initialize and add two white columns as subviews:
func addNewColumn() {
    let column : Column = Column.init(frame: CGRect(x: self.frame.size.width*0.7, y: 0, width: self.frame.size.width*0.125, height: self.frame.size.height))
    let column2 : Column = Column.init(frame: CGRect(x: self.frame.size.width*0.2, y: 0, width: self.frame.size.width*0.125, height: self.frame.size.height))
    self.addSubview(column)
    self.addSubview(column2)
}
The initialization of the column in its class:
import UIKit

class Column: UIImageView {

    // Touch positions
    var firstPoint : CGPoint?
    var lastPoint : CGPoint?

    override init(frame: CGRect) {
        super.init(frame: frame)
        self.isUserInteractionEnabled = true
        self.draw(frame)
        self.backgroundColor = UIColor.gray
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
    }

    // Drawing of the white color in the image, over the blue background
    override func draw(_ rect: CGRect) {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, 0)
        let context = UIGraphicsGetCurrentContext()
        context?.setFillColor(UIColor.white.cgColor)
        context?.setBlendMode(.normal)
        context?.fill(self.bounds)
        self.image = UIGraphicsGetImageFromCurrentImageContext()
        self.alpha = 1
        UIGraphicsEndImageContext()
    }

    // The method called from touchesMoved() to delete the white color
    func eraseColumn() {
        UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, 0)
        let context = UIGraphicsGetCurrentContext()
        self.image?.draw(in: self.bounds)
        context?.beginPath()
        context?.addEllipse(in: CGRect(x: (lastPoint?.x)!, y: (lastPoint?.y)!, width: 10, height: 10))
        context?.setLineCap(.round)
        context?.setLineWidth(10)
        context?.setStrokeColor(UIColor.clear.cgColor)
        context?.setBlendMode(.clear)
        context?.strokePath()
        context?.closePath()
        self.image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
    }

    // Touch handling
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first
        firstPoint = touch?.location(in: self)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        let touch = touches.first
        lastPoint = touch?.location(in: self)
        eraseColumn()
    }
}
As you can see in the draw(_ rect: CGRect) method, I fill the image with white and set its background color to gray.
And in the eraseColumn() method, I erase the white color where the user touched. No matter what I try, the image keeps getting erased along that right edge. I simply don't know why this is happening.
Any help would be much appreciated.
The project is in Swift 3, Xcode 9.2.

Finally found the solution. Changing
    self.image?.draw(in: self.bounds)
to
    self.image?.draw(at: self.bounds.origin)
fixes the problem, most likely because draw(in:) rescales the snapshot to fit the given rect on every pass, while draw(at:) draws it at its natural size. Hope it will help somebody.
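For reference, here is the eraseColumn() method from above with that one change applied (the clear stroke colour and the beginPath()/closePath() calls are also dropped, since the .clear blend mode alone does the erasing):

func eraseColumn() {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, 0)
    let context = UIGraphicsGetCurrentContext()
    // draw(at:) blits the snapshot at its natural size; nothing is rescaled per pass
    self.image?.draw(at: self.bounds.origin)
    context?.addEllipse(in: CGRect(x: (lastPoint?.x)!, y: (lastPoint?.y)!, width: 10, height: 10))
    context?.setLineCap(.round)
    context?.setLineWidth(10)
    context?.setBlendMode(.clear) // .clear erases whatever the stroke covers
    context?.strokePath()
    self.image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
}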
Cheers

Related

Swift iOS: cut out rounded rect from view allowing colour changes

I'm using this approach to cut out a rounded rect "window" from a background view:
override func draw(_ rect: CGRect) {
    guard let rectsArray = rectsArray else {
        return
    }
    for holeRect in rectsArray {
        let holeRectIntersection = rect.intersection(holeRect)
        if let context = UIGraphicsGetCurrentContext() {
            let roundedWindow = UIBezierPath(roundedRect: holeRect, cornerRadius: 15.0)
            if holeRectIntersection.intersects(rect) {
                context.addPath(roundedWindow.cgPath)
                context.clip()
                context.clear(holeRectIntersection)
                context.setFillColor(UIColor.clear.cgColor)
                context.fill(holeRectIntersection)
            }
        }
    }
}
In layoutSubviews() I update the background colour and add my "window frame" rect:
override func layoutSubviews() {
    super.layoutSubviews()
    backgroundColor = self.baseMoodColour
    isOpaque = false
    self.rectsArray?.removeAll()
    self.rectsArray = [dragAreaView.frame]
}
I'm adding the rect here because layoutSubviews() updates the size of the "window frame" (i.e., the rect changes after layoutSubviews() runs).
The basic mechanism works as expected; however, if I change the background colour, the cutout window fills with black. So I'm wondering how I can animate a background colour change with this kind of setup? That is, I want to animate the colour of the area outside the cutout window (the window remains clear).
I've tried updating backgroundColor directly, and also using didSet in the accessor of a custom colour variable in my UIView subclass, but both cause the same filling-in of the "window".
var baseMoodColour: UIColor {
    didSet {
        self.backgroundColor = baseMoodColour
        self.setNeedsDisplay()
    }
}
Try using UIView.animate; you can check it here:
UIView.animate(withDuration: 1.0, delay: 0.0, options: [.curveEaseOut], animations: {
    self.backgroundColor = someNewColour
    // Generally:
    // myView.backgroundColor = someNewColor
}, completion: nil)
The problem in the short run is that that is simply what clear does if the background color is opaque. Just give your background color some transparency — even a tiny bit of transparency, so tiny that the human eye cannot perceive it — and now clear will cut a hole in the view.
For example, your code works fine if you set the view's background color to UIColor.green.withAlphaComponent(0.99).
By the way, you should delete the lines about UIColor.clear; that's a red herring. You should also cut the lines about the backgroundColor; you should not be repainting the background color into your context. They are two different things.
The problem in the long run is that what you're doing is not how to punch a hole in a view. You should be using a mask instead. That's the only way you're going to get the animation while maintaining the hole.
Answering my own question, based on #matt's suggestion (and linked example), I did it with a CAShapeLayer. There was an extra "hitch" in my requirements, since I have a couple of views on top of the one I needed to mask out. So, I did the masking like this:
func cutOutWindow() {
    // maskedBackgroundView is an additional view, inserted ONLY for the mask
    let r = self.maskedBackgroundView.bounds
    // Adjust frame for dragAreaView's border
    var dragSize = self.dragAreaView.frame.size
    var dragPosition = self.dragAreaView.frame.origin
    dragSize.width -= 6.0
    dragSize.height -= 6.0
    dragPosition.x += 3.0
    dragPosition.y += 3.0
    let r2 = CGRect(x: dragPosition.x, y: dragPosition.y, width: dragSize.width, height: dragSize.height)
    let roundedWindow = UIBezierPath(roundedRect: r2, cornerRadius: 15.0)
    let mask = CAShapeLayer()
    let path = CGMutablePath()
    path.addPath(roundedWindow.cgPath)
    path.addRect(r)
    mask.path = path
    mask.fillRule = kCAFillRuleEvenOdd
    self.maskedBackgroundView.layer.mask = mask
}
Then I had to apply the colour change to maskedBackgroundView.layer.backgroundColor (i.e., to the layer, not the view). With that in place, I get the cutout I need, with animatable colour changes. Thanks #matt for pointing me in the right direction.
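For completeness, a minimal sketch of how such an explicit layer animation could look (the method name is mine; maskedBackgroundView is the view from the answer above):

func animateBackgroundColour(to newColour: UIColor, duration: CFTimeInterval = 0.5) {
    let layer = maskedBackgroundView.layer
    let animation = CABasicAnimation(keyPath: "backgroundColor")
    animation.fromValue = layer.backgroundColor
    animation.toValue = newColour.cgColor
    animation.duration = duration
    // update the model value too, so the colour persists after the animation ends
    layer.backgroundColor = newColour.cgColor
    layer.add(animation, forKey: "backgroundColour")
}

Because the mask stays attached to the layer, only the area outside the cutout window shows the animated colour.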

How can I change the phase of UIColor patternImage in Swift?

I'm using the very convenient UIColor(patternImage:) to create some CAShapeLayers with tiled patterns in an iOS 10 app with Xcode 8.2. Tiling always starts at the origin of the view, which can be inconvenient if you want it to start somewhere else. To illustrate, here's a screenshot from the simulator (code below):
The CAShapeLayer on the left starts at (0,0), so everything is fine. The one on the right is at (110,50), so it's split in the middle. Here's the code:
let firstBox = CAShapeLayer()
firstBox.fillColor = UIColor(patternImage: UIImage(named: "test-image")!).cgColor
view.layer.addSublayer(firstBox)
firstBox.path = UIBezierPath(rect: CGRect(x: 0, y: 0, width: 100, height: 100)).cgPath
let secondBox = CAShapeLayer()
secondBox.fillColor = UIColor(patternImage: UIImage(named: "test-image")!).cgColor
view.layer.addSublayer(secondBox)
secondBox.path = UIBezierPath(rect: CGRect(x: 110, y: 50, width: 100, height: 100)).cgPath
I want to adjust the phase of the pattern for the right CAShapeLayer so that both tiles show a full face. Apple's documentation for UIColor(patternImage:) helpfully refers to a function for this purpose:
To change the phase, make the color the current color and then use the
setPatternPhase(_:) function to change the phase.
Sounds simple! But I'm having a hard time implementing it. I'm not really sure what "make the color the current color" means. I tried getting the current context and calling setPatternPhase on it, both before and after assigning the fill color to the layer:
UIGraphicsGetCurrentContext()?.setPatternPhase(CGSize(width: 25, height: 25))
No noticeable effect. I tried subclassing the containing UIView and setting the phase in its drawRect: method, as suggested in this answer. But drawRect: doesn't exist in Swift, so I tried both draw(_ rect:) and draw(_ layer:, in:). Both functions get called, but there's no noticeable effect.
class PatternView: UIView {
    override func draw(_ rect: CGRect) {
        UIGraphicsGetCurrentContext()?.setPatternPhase(CGSize(width: 25, height: 25))
        super.draw(rect)
    }
    override func draw(_ layer: CALayer, in ctx: CGContext) {
        ctx.setPatternPhase(CGSize(width: 25, height: 25))
        super.draw(layer, in: ctx)
    }
}
At Dave Weston's suggestion, I used UIImage's .set() to set the current stroke and fill for the current context before calling setPatternPhase. Unfortunately the output is unaffected. Here's the code I tried:
let secondBoxColor = UIColor(patternImage: UIImage(named: "test-image")!)
secondBoxColor.set()
UIGraphicsGetCurrentContext()?.setPatternPhase(CGSize(width: 50, height: 50))
let secondBox = CAShapeLayer()
secondBox.fillColor = secondBoxColor.cgColor
view.layer.addSublayer(secondBox)
secondBox.path = UIBezierPath(rect: CGRect(x: 110, y: 50, width: 100, height: 100)).cgPath
How can I shift the phase of the pattern that gets drawn into a CAShapeLayer?
To make your pattern the current color, you should call the set() instance method on the UIColor instance that contains your pattern. This configures the color as the current stroke and fill color for the current context.
Then, according to Apple's docs, setPatternPhase should work.
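For what it's worth, that recipe only applies where you control the context being drawn into and perform the pattern fill there yourself (a CAShapeLayer fills elsewhere, which is presumably why the question's PatternView attempt had no effect). A minimal sketch under that assumption, reusing the question's "test-image":

class PhasedPatternView: UIView {
    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext(),
            let tile = UIImage(named: "test-image") else { return }
        // shift the pattern's origin before drawing with it
        context.setPatternPhase(CGSize(width: 25, height: 25))
        // "make the color the current color": set() installs the pattern
        // as the current fill/stroke colour of the current context
        UIColor(patternImage: tile).set()
        context.fill(rect)
    }
}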
I haven't been able to solve this problem, but I thought I'd share the workaround I'm using in case it's useful to anyone.
As far as I can tell, CAShapeLayer does its rendering in a secret, hidden place, and ignores the normal display() and draw() functions that CALayerDelegates are supposed to use. As a result, you never have access to the CGContext it's using to render, so there's no way to call setPatternPhase().
My specific use case for setPatternPhase() was to have the pattern line up with the top-left of the CAShapeLayer it's drawn in, so I found an alternate way to do that. It does not allow you to set an arbitrary phase.
What I did instead is create a new CALayer subclass called CAPatternLayer that takes a UIImage to tile and a CGPath to fill. It delegates to a CALayerDelegate class called CAPatternLayerDelegate, which provides a draw(_:in:) function. When a draw is requested, the delegate creates a temporary UIImageView, fills it with the tiled image, then renders it into the CALayer's context.
A neat side-effect of this is that you can use a UIImage with cap insets, which allows 9-slice scaling with the center slice tiled.
Here's the code for CAPatternLayer and CAPatternLayerDelegate:
class CAPatternLayer: CALayer {
    var image: UIImage?
    var path: CGPath? {
        didSet {
            if let path = self.path {
                self.frame = path.boundingBoxOfPath
                // shift the path to 0,0 since you built position into the frame
                var translation = CGAffineTransform(translationX: -path.boundingBoxOfPath.origin.x, y: -path.boundingBoxOfPath.origin.y)
                let shiftedPath = path.copy(using: &translation)
                // use the shifted version
                self.path = shiftedPath
                self.maskLayer.path = shiftedPath
            }
        }
    }
    let maskLayer: CAShapeLayer = CAShapeLayer()

    override init() {
        super.init()
        self.delegate = CAPatternLayerDelegate.sharedInstance
        self.setNeedsDisplay()
        self.mask = self.maskLayer
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    convenience init(path: CGPath, image: UIImage) {
        self.init()
        defer {
            self.image = image
            self.path = path
        }
    }
}

class CAPatternLayerDelegate: NSObject, CALayerDelegate {
    static let sharedInstance = CAPatternLayerDelegate()

    func draw(_ layer: CALayer, in ctx: CGContext) {
        // cast layer to a CAPatternLayer so you can access properties
        if let layer = layer as? CAPatternLayer, let image = layer.image, let path = layer.path {
            // create a UIImageView to display the image, then render it to the context
            let imageView = UIImageView()
            // if a path bounding box was set, use it, otherwise draw over the whole layer
            imageView.bounds = path.boundingBoxOfPath
            imageView.image = image
            imageView.layer.render(in: ctx)
        }
    }
}
And here's an example in use:
// create a path to fill
let myMaskPath = UIBezierPath(rect: CGRect(x: 50, y: 25, width: 200, height: 100))
// pull the image and set it up as a resizeableImage with cap insets
let patternImage = UIImage(named: "ground")!.resizableImage(withCapInsets: .init(top: 16, left: 16, bottom: 0, right: 16), resizingMode: .tile)
// create the CAPatternLayer and add it to the view
let myPatternLayer = CAPatternLayer(path: myMaskPath.cgPath, image: patternImage)
view.layer.addSublayer(myPatternLayer)

Add text label to drawn shape

I'm following along with this tutorial for drawing squares where-ever there is a gesture recognized touch on the screen in iOS.
https://www.weheartswift.com/bezier-paths-gesture-recognizers/
I am now wanting to extend the functionality and want to add text labels to my newly drawn shapes indicating their coordinates.
So touching the screen would draw a rectangle, which moves with the pan gesture (so far so good) but I would also like it to show numbers indicating the coordinates.
How can I go about accomplishing this?
class CircularKeyView: UIView {
    // a lot of this code came from https://www.weheartswift.com/bezier-paths-gesture-recognizers/
    // all thanks goes to we<3swift
    let lineWidth: CGFloat = 1.0
    let size: CGFloat = 44.0

    init(origin: CGPoint) {
        super.init(frame: CGRectMake(0.0, 0.0, size, size))
        self.center = origin
        self.backgroundColor = UIColor.clearColor()
        initGestureRecognizers() // start up all the gesture recognizers
    }

    func initGestureRecognizers() {
        let panGR = UIPanGestureRecognizer(target: self, action: "didPan:")
        addGestureRecognizer(panGR)
    }

    // PAN IT LIKE u FRYIN.
    func didPan(panGR: UIPanGestureRecognizer) {
        self.superview!.bringSubviewToFront(self)
        var translation = panGR.translationInView(self)
        self.center.x += translation.x
        self.center.y += translation.y
        panGR.setTranslation(CGPointZero, inView: self)
    }

    // We need to implement init(coder:) to avoid compilation errors
    required init(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func drawRect(rect: CGRect) {
        let path = UIBezierPath(roundedRect: rect, cornerRadius: 7)
        // draws awesome curvy rectangle
        UIColor.darkGrayColor().setFill()
        path.fill()
        // draws outline
        path.lineWidth = self.lineWidth
        UIColor.blackColor().setStroke()
        path.stroke()
        // probably where I should draw the text label on this thing,
        // although it needs to update when the thingy moves.
    }
}
In your drawRect implementation you can draw the coordinates of the view with something like:
("\(frame.origin.x), \(frame.origin.y)" as NSString).drawAtPoint(.zero, withAttributes: [
NSFontAttributeName: UIFont.systemFontOfSize(14),
NSForegroundColorAttributeName: UIColor.blackColor()
])
Which simply creates a string of the coordinates, casts it to an NSString and then calls the drawAtPoint method to draw it in the view's context.
You can of course change .zero to any CGPoint depending on where you want to draw the string and can edit the attributes as desired.
To make sure that this gets updated when the user pans around you will want to also add:
self.setNeedsDisplay()
to the bottom of your didPan method.
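Putting it together, the end of the question's drawRect would look something like this (Swift 2 syntax to match the question; the point and attributes are just examples):

override func drawRect(rect: CGRect) {
    let path = UIBezierPath(roundedRect: rect, cornerRadius: 7)
    UIColor.darkGrayColor().setFill()
    path.fill()
    path.lineWidth = self.lineWidth
    UIColor.blackColor().setStroke()
    path.stroke()
    // draw the view's current coordinates at the top-left corner
    ("\(frame.origin.x), \(frame.origin.y)" as NSString).drawAtPoint(.zero, withAttributes: [
        NSFontAttributeName: UIFont.systemFontOfSize(14),
        NSForegroundColorAttributeName: UIColor.blackColor()
    ])
}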
Hope this helps :)

Drawing on UIImageView within UIScrollView

About my app: The user can view a PDF file within a UIWebView. I have an option for the user to choose whether they want to scroll through the pdf or take notes on it. When they take notes, the scrolling is disabled, and vice-versa. However, when the user is drawing, the lines move up and become hazy as shown:
(The red boxes are text within the pdf)
Here is my code:
Switching between the pen and scroll:
var usingPen = false

@IBAction func usePen(sender: AnyObject) {
    usingPen = true
    webView.userInteractionEnabled = false
    UIView.animateWithDuration(0.3) { () -> Void in
        self.popUpView.alpha = 0
    }
}

@IBAction func useScroll(sender: AnyObject) {
    usingPen = false
    webView.userInteractionEnabled = true
    UIView.animateWithDuration(0.3) { () -> Void in
        self.popUpView.alpha = 0
    }
}
The imageView the user draws on (objectView):
var objectView = UIImageView()

override func viewDidAppear(animated: Bool) {
    objectView.frame.size = webView.scrollView.contentSize
    webView.scrollView.addSubview(objectView)
}
Drawing on the image view:
var start = CGPoint()
let size: CGFloat = 3
var color = UIColor.blackColor()

func draw(start: CGPoint, end: CGPoint) {
    if usingPen == true {
        UIGraphicsBeginImageContext(self.objectView.frame.size)
        let context = UIGraphicsGetCurrentContext()
        objectView.image?.drawInRect(CGRect(x: 0, y: 0, width: objectView.frame.width, height: objectView.frame.height))
        CGContextSetFillColorWithColor(context, color.CGColor)
        CGContextSetStrokeColorWithColor(context, color.CGColor)
        CGContextSetLineWidth(context, size)
        CGContextBeginPath(context)
        CGContextMoveToPoint(context, start.x, start.y)
        CGContextAddLineToPoint(context, end.x, end.y)
        CGContextStrokePath(context)
        let newImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        objectView.image = newImage
    }
}

override func touchesBegan(touches: Set<UITouch>, withEvent event: UIEvent?) {
    start = (touches.first?.locationInView(self.objectView))!
}

override func touchesMoved(touches: Set<UITouch>, withEvent event: UIEvent?) {
    draw(start, end: (touches.first?.locationInView(self.objectView))!)
    start = (touches.first?.locationInView(self.objectView))!
}
How can I prevent the haziness and movement of the drawings? Thanks for your help.
This probably occurs due to an image scaling issue. Since you're drawing into an image with a scale factor of 1 (which is the default) and your screen has a scale factor of 2 or 3, lines will continue to blur slightly every time they're copied and drawn. The solution is to specify the screen's scale when you create your image context:
UIGraphicsBeginImageContextWithOptions(self.objectView.frame.size, false, UIScreen.mainScreen().scale)
Note that the way you're drawing the line is fairly inefficient. Instead, you probably want to create a CGBitmapContext and continually draw to that; it'd be much faster and would also eliminate the "generation loss" problem you have.
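A rough sketch of that persistent-context idea, in Swift 3+ syntax rather than the question's Swift 2 (the class and method names are mine; the coordinate flip is needed because CGContext's origin is bottom-left while UIKit's is top-left):

import UIKit

final class StrokeCanvas {
    private let context: CGContext
    private let scale: CGFloat

    init?(size: CGSize) {
        scale = UIScreen.main.scale
        guard let ctx = CGContext(data: nil,
                                  width: Int(size.width * scale),
                                  height: Int(size.height * scale),
                                  bitsPerComponent: 8,
                                  bytesPerRow: 0,
                                  space: CGColorSpaceCreateDeviceRGB(),
                                  bitmapInfo: CGImageAlphaInfo.premultipliedLast.rawValue) else { return nil }
        // flip into UIKit coordinates (origin top-left, y down) at screen scale
        ctx.translateBy(x: 0, y: size.height * scale)
        ctx.scaleBy(x: scale, y: -scale)
        context = ctx
    }

    // draw one segment into the persistent context and return the accumulated image
    func addLine(from start: CGPoint, to end: CGPoint, color: UIColor, width: CGFloat) -> UIImage? {
        context.setStrokeColor(color.cgColor)
        context.setLineWidth(width)
        context.setLineCap(.round)
        context.move(to: start)
        context.addLine(to: end)
        context.strokePath()
        guard let cgImage = context.makeImage() else { return nil }
        return UIImage(cgImage: cgImage, scale: scale, orientation: .up)
    }
}

In touchesMoved you would then do something like objectView.image = canvas?.addLine(from: start, to: end, color: .black, width: 3), which avoids reopening an image context and re-rendering the whole image on every touch.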

Allowing users to draw rect on a UIImage, with the intention of cropping the image

I'm sure this has been asked a number of times from various different perspectives, but I'm unable to find an answer on here as yet.
What I want to achieve
What I would like to do is to display a UIImage, and allow the user to draw a rectangle on the image, and eventually crop their selection.
Research so far
I've found previous questions here on SO that handle the cropping; however, they often deal with static cropping areas that don't change, which leads to the following constraints of such a mechanism:
The area of the image you're interested in may be positioned anywhere. For example, if you're trying to crop a road sign, it may be centered in one image but aligned left in another, so you can't predict which area to crop.
The size and scale of the area of interest may change. For example, one image may be a close-up of that road sign, so the cropping area would be larger, while another image may have been taken from a distance, meaning the cropping area would be smaller.
With the combination of the above two variables, it's almost impossible to accurately predict where the area of interest in the image would be, so I'm relying on the user to define this by being able to "draw" a box around the area we're interested in, in this case, a road sign.
This is all peachy on Android, since you can delegate all the hard work out with a nice intent, such as :
Intent intent = new Intent("com.android.camera.action.CROP");
However, I can't find an equivalent for iOS.
I've found this bit of code from this source:
- (UIImage *)imageByDrawingCircleOnImage:(UIImage *)image
{
    // begin a graphics context of sufficient size
    UIGraphicsBeginImageContext(image.size);

    // draw original image into the context
    [image drawAtPoint:CGPointZero];

    // get the context for CoreGraphics
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    // set stroking color and draw circle
    [[UIColor redColor] setStroke];

    // make circle rect 5 px from border
    CGRect circleRect = CGRectMake(0, 0,
                                   image.size.width,
                                   image.size.height);
    circleRect = CGRectInset(circleRect, 5, 5);

    // draw circle
    CGContextStrokeEllipseInRect(ctx, circleRect);

    // make image out of bitmap context
    UIImage *retImage = UIGraphicsGetImageFromCurrentImageContext();

    // free the context
    UIGraphicsEndImageContext();

    return retImage;
}
I believe this is a good starting point (although it strokes a circle rather than cropping one); it does rely on predefining the area you want when calling CGRectMake.
This previous question also details how to do the actual cropping.
I'm assuming that to allow the user to draw the rect, I'd need to integrate with Gestures?
The Question :
How can I allow the user, to draw a rect over an image view, with the intent of cropping that area?
You could give BJImageCropper a try:
A simple UIView subclass that allows a user to crop an image. If you use it, I'd love to know! Twitter: @barrettjacobsen
This post is already 5 years old, but for future reference, this is how I managed to get it done. The following code is a combination of Rob's answer and some image cropping.
Xcode 9 and Swift 4 are being used here.
Add 2 view controllers.
Add an image view and 2 buttons to the first view controller, and another image view to the second view controller.
Link all views to the source file.
View controller
import UIKit

extension UIView {
    func snapshot(afterScreenUpdates: Bool = false) -> UIImage {
        UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0)
        drawHierarchy(in: bounds, afterScreenUpdates: afterScreenUpdates)
        let image = UIGraphicsGetImageFromCurrentImageContext()!
        UIGraphicsEndImageContext()
        return image
    }
}

extension UIImage {
    func crop(rect: CGRect) -> UIImage {
        var rect = rect
        rect.origin.x *= self.scale
        rect.origin.y *= self.scale
        rect.size.width *= self.scale
        rect.size.height *= self.scale
        let imageRef = self.cgImage!.cropping(to: rect)
        let image = UIImage(cgImage: imageRef!, scale: self.scale, orientation: self.imageOrientation)
        return image
    }
}

class ViewController: UIViewController {
    var rec: CGRect!
    var cropImage: UIImage!
    @IBOutlet weak var imageView: UIImageView!

    private let shapeLayer: CAShapeLayer = {
        let _shapeLayer = CAShapeLayer()
        _shapeLayer.fillColor = UIColor.clear.cgColor
        _shapeLayer.strokeColor = UIColor.green.cgColor
        _shapeLayer.lineWidth = 2
        return _shapeLayer
    }()

    private var startPoint: CGPoint!

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.layer.addSublayer(shapeLayer)
    }

    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
        clear()
        startPoint = touches.first?.location(in: imageView)
    }

    func clear() {
        imageView.layer.sublayers = nil
        imageView.image = UIImage(named: "aa")
        imageView.layer.addSublayer(shapeLayer)
    }

    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let startPoint = startPoint, let touch = touches.first else { return }
        let point: CGPoint
        if let predictedTouch = event?.predictedTouches(for: touch)?.last {
            point = predictedTouch.location(in: imageView)
        } else {
            point = touch.location(in: imageView)
        }
        updatePath(from: startPoint, to: point)
    }

    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent?) {
        guard let startPoint = startPoint, let touch = touches.first else { return }
        let point = touch.location(in: imageView)
        updatePath(from: startPoint, to: point)
        imageView.image = imageView.snapshot(afterScreenUpdates: true)
        shapeLayer.path = nil
    }

    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent?) {
        shapeLayer.path = nil
    }

    private func updatePath(from startPoint: CGPoint, to point: CGPoint) {
        let size = CGSize(width: point.x - startPoint.x, height: point.y - startPoint.y)
        rec = CGRect(origin: startPoint, size: size)
        shapeLayer.path = UIBezierPath(rect: rec).cgPath
    }

    @IBAction func btnTapped(_ sender: Any) {
        clear()
    }

    @IBAction func btnCropTapped(_ sender: Any) {
        let newRec = CGRect(origin: CGPoint(x: rec.origin.x, y: rec.origin.y), size: rec.size)
        cropImage = imageView.image?.crop(rect: newRec)
        print(rec)
        print(newRec)
        self.performSegue(withIdentifier: "toImage", sender: nil)
    }

    override func prepare(for segue: UIStoryboardSegue, sender: Any?) {
        if segue.identifier == "toImage" {
            if let destination = segue.destination as? ImageViewController {
                destination.croppedImage = cropImage
            }
        }
    }
}
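The ImageViewController on the other end of the segue isn't shown in the answer; a minimal hypothetical version would just receive the cropped image and display it:

import UIKit

class ImageViewController: UIViewController {
    // set by the presenting controller in prepare(for:sender:)
    var croppedImage: UIImage?
    @IBOutlet weak var imageView: UIImageView! // the image view added on the second view controller

    override func viewDidLoad() {
        super.viewDidLoad()
        imageView.image = croppedImage
    }
}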
