Drawing on a UIImageView inside a UIScrollView - ios

I have a UIImageView inside a UIScrollView which automatically zooms out to fit the image supplied. The user can zoom as usual with a pinch gesture, and the pan gesture is set to require two touches since the drawing takes precedence.
On launch, everything looks great, but when I invoke my drawing code, this happens:
As you can see, when drawLineFrom(fromPoint:toPoint:) is invoked, the UIImageView shrinks. After that, the drawing appears to work as intended (though it skips the first part of the line on every touch).
My UIPanGestureRecognizer selector:
@objc func onOneFingerDrawing(_ sender: UIPanGestureRecognizer) {
    switch sender.state {
    case .began:
        swiped = false
        lastPoint = sender.location(in: drawView.imageView)
    case .changed:
        swiped = true
        let currentPoint = sender.location(in: drawView.imageView)
        drawLineFrom(fromPoint: lastPoint, toPoint: currentPoint)
        lastPoint = currentPoint
    case .ended:
        guard drawView.scrollView.frame.contains(sender.location(in: drawView.imageView)) else {
            return
        }
        if let newImage = drawView.imageView.image {
            if history.count > historyIndex + 1 {
                history.removeLast((history.count - 1) - historyIndex)
            }
            history.append(newImage)
            historyIndex = history.count - 1
        }
    case .possible,
         .cancelled,
         .failed:
        return
    }
}
and my drawLineFrom(fromPoint:toPoint:):
@objc func drawLineFrom(fromPoint: CGPoint, toPoint: CGPoint) {
    UIGraphicsBeginImageContextWithOptions(drawView.imageView.frame.size, false, UIScreen.main.scale)
    let context = UIGraphicsGetCurrentContext()
    context?.interpolationQuality = .none
    drawView.imageView.image?.draw(in: CGRect(x: 0, y: 0, width: drawView.imageView.frame.size.width, height: drawView.imageView.frame.size.height))
    context?.move(to: fromPoint)
    context?.addLine(to: toPoint)
    context?.setLineCap(.round)
    context?.setLineWidth(lineWidth)
    context?.setStrokeColor(lineColor)
    context?.setBlendMode(blendMode)
    context?.strokePath()
    let newImage = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
    drawView.imageView.image = newImage
}

There is an issue with the image view's constraints inside the scroll view. Whenever you start rendering into the image, the image view's frame changes to follow its content. So you need to add an aspect-ratio constraint (as I did) or some other size constraint to the image view. See the gifs for reference.
Before adding an image view size constraint.
After adding an image view size constraint.
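As an illustration only (not part of the original answer), a minimal sketch of such a constraint in code might look like this; drawView.imageView is the question's property, and Auto Layout is assumed:

// Sketch: pin the image view's aspect ratio so redrawing its image doesn't resize it.
let imageView = drawView.imageView
if let image = imageView.image {
    imageView.translatesAutoresizingMaskIntoConstraints = false
    imageView.widthAnchor.constraint(
        equalTo: imageView.heightAnchor,
        multiplier: image.size.width / image.size.height
    ).isActive = true
}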

The drawing skips the first part of the line because you are using a UIPanGestureRecognizer. The system first has to determine that the gesture is a pan before it sends you the .began event. You could swap it for a custom UIGestureRecognizer subclass so it starts immediately; in that case you'll need to add your own logic to check for movement and the number of fingers.
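A bare-bones sketch of such a recognizer, as an illustration only, could look like this; the finger-count and movement checks mentioned above are left out for brevity:

import UIKit
import UIKit.UIGestureRecognizerSubclass // needed to set `state` from a subclass

// Sketch: begins on the very first touch instead of waiting for a pan to be recognized.
class ImmediateDrawGestureRecognizer: UIGestureRecognizer {
    override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent) {
        if state == .possible {
            state = .began
        }
    }
    override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent) {
        state = .changed
    }
    override func touchesEnded(_ touches: Set<UITouch>, with event: UIEvent) {
        state = .ended
    }
    override func touchesCancelled(_ touches: Set<UITouch>, with event: UIEvent) {
        state = .cancelled
    }
}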
As far as the resizing goes, it's tough to say without more information. I'd color the background of the image view as well. The first question is whether the whole image view is shrinking, or just the image inside it.

Related

UIViewContainer transformation changes after UIButton title is changed

I have the storyboard in the below screenshot.
The gray area is a container view. UIGestureRecognizers enable zooming and moving the container via pinch and pan gestures. The buttons are fixed at the bottom with constraints; the container view has a flexible size and is connected to the buttons and the safe area with constraints.
So here is the problem: first I zoom into a part of the container, then I change the title of the upper-right button. Right after that, for a reason I need help with, the container moves about 10 pixels to the right. I am sure that no lifecycle function is called after button.setTitle(). I suspect the container is doing some kind of relayout.
Here is a dummy app where I reproduced the same behavior. After you move the container somewhere with a pan gesture and tap the button, the button title changes and the view moves back to the center. I want the view to stay where it is.
How can I disable the layout after the button title is changed?
@IBAction func handlePinchGestures(pinchGestureRecognizer: UIPinchGestureRecognizer) {
    if let view = pinchGestureRecognizer.view {
        switch pinchGestureRecognizer.state {
        case .changed:
            let pinchCenter = CGPoint(x: pinchGestureRecognizer.location(in: view).x - view.bounds.midX,
                                      y: pinchGestureRecognizer.location(in: view).y - view.bounds.midY)
            let transform = view.transform.translatedBy(x: pinchCenter.x, y: pinchCenter.y)
                .scaledBy(x: pinchGestureRecognizer.scale, y: pinchGestureRecognizer.scale)
                .translatedBy(x: -pinchCenter.x, y: -pinchCenter.y)
            view.transform = transform
            pinchGestureRecognizer.scale = 1
            imageLayerContainer.subviews.forEach({ (subview: UIView) -> Void in subview.setNeedsDisplay() })
        default:
            return
        }
    }
}

@IBAction func handlePanGesturesWithTwoFingers(panGestureRecognizer: UIPanGestureRecognizer) {
    let translation = panGestureRecognizer.translation(in: self.view)
    if let view = panGestureRecognizer.view {
        view.center = CGPoint(x: view.center.x + translation.x,
                              y: view.center.y + translation.y)
    }
    panGestureRecognizer.setTranslation(CGPoint.zero, in: self.view)
}

UIBezierPaths Are Not Drawing Where They Were Drawn

Looking for help on this "weird" problem.
I am using a PanGesture to allow the user to draw a line on CAShapeLayer.
I keep track of the path until the gesture ends, then store the path in an array of paths and clear the original path from the CAShapeLayer's path.
When I redraw those paths onto a UIImage, the paths are shifted along the Y-axis towards the top of the screen.
I am attaching two images:
1) The drawing of the path. Drawing The Path
2) The UIImage that is put out when I redraw the path. As you can see, I draw along a grid line, however, when the UIImage is created, the line is above the grid line. The Path Redrawn On UIImage
Any suggestions appreciated.
The code for both the drawing and then the rendering are below.
@objc private func drawLine(_ gestureRecognizer: UIPanGestureRecognizer) {
    let point = gestureRecognizer.location(in: self)
    if point.x < 0 || point.x > self.bounds.width || point.y < 0 || point.y > self.bounds.height {
        return
    }
    switch gestureRecognizer.state {
    case .began:
        currentLine = UIBezierPath()
        currentLine.lineWidth = settings.defaultSpotRadius
        currentLine.lineCapStyle = .round
        currentLine.lineJoinStyle = .round
        currentLine.move(to: point)
        break
    case .changed:
        currentLine.addLine(to: point)
        currentLineDrawLayer.path = currentLine.cgPath
        break
    case .ended:
        self.storage.addLine(clear: fowClearMode, path: currentLine, ofWidth: currentLine.lineWidth)
        currentLineDrawLayer.path = nil
        storedImage = drawPaths()
        break
    default:
        break
    }
}
public func drawPaths() -> UIImage? {
    if pathsHidden { return nil }
    let rect = CGRect(origin: .zero, size: self.imageSize)
    UIGraphicsBeginImageContextWithOptions(rect.size, false, 0.0)
    storedImage.draw(in: rect)
    for (isClear, line, width) in thePaths {
        let blendMode = isClear ? settings.colorClear : settings.colorDark
        line.lineWidth = width
        line.stroke(with: blendMode, alpha: 1.0)
    }
    let image = UIGraphicsGetImageFromCurrentImageContext()!
    UIGraphicsEndImageContext()
    return image
}
Update (Tue 25 Sep, 2018):
I have created a mini-project that contains only the important bits and pieces. It is available here ... Mini-Project
I think you have a ways to go to get everything working as desired, but, to solve your initial "offset" issue...
In ViewController.swift you are setting the frame of imgDisplay: ImageDisplay to self.view.bounds --- which, in this case, will be 1024 x 768.
Then in ImageDisplay.swift you create aPathImage at a size of 1024 x 791 and set that image as a sublayer.
Then...
You are tracking pan-gestures based on a view with bounds of 1024 x 768, then re-drawing those lines on an image that is 1024 x 791, resulting in a "shift".
Note that, if you change your image size to 800 x 600 the shift will be very obvious, and it will be a little easier to see how it is related to the size difference.
What you probably want to do is set the size of the frame of imgDisplay to the size of the image.
Try this to see what happens - in ImageDisplay.swift:
override init(frame: CGRect) {
    super.init(frame: frame)
    // add this line
    self.frame = CGRect(origin: .zero, size: aPathImage.sizeCurrent)
    self.contentMode = .center
    ...
Now, your pan coordinates will match the coordinates of the contained image.
As I said, you've still got a ways to go, but hopefully this helps you along the way.

Scroll UIScrollView faster when scrolling it through another view

I have a UIScrollView with a large UIImageView holding an image of very high resolution (about 10,000 x 10,000). For this UIImageView both zooming and scrolling are enabled. I also have a smaller UIImageView with the same image at a much smaller resolution (about 100 x 100). I'm showing the visible portion of the larger UIImageView on the smaller UIImageView, and the user can navigate to other places in the larger UIImageView by panning on the smaller one. The following images show what I'm trying to explain. My issue is that while panning on the smaller UIImageView, the scrolling in the larger UIScrollView is really slow.
// function that handles the pan on green view
func handlePanNavigation(gestureRecognizer: UIPanGestureRecognizer) {
    if gestureRecognizer.state == .began || gestureRecognizer.state == .changed {
        let translation = gestureRecognizer.translation(in: navigationPanel)
        guard let gv = gestureRecognizer.view else { return }
        let point = CGPoint(x: gv.center.x + translation.x, y: gv.center.y + translation.y)
        gestureRecognizer.view?.center = point
        gestureRecognizer.setTranslation(.zero, in: navigationPanelView)
        let transform = CGAffineTransform(scaleX: orgSize.width * tiledScrollView.zoomScale / navSize.width,
                                          y: orgSize.height * tiledScrollView.zoomScale / navSize.height)
        let offset = navigationPanelView.frame.origin.applying(transform)
        tiledScrollView.setContentOffset(offset, animated: true)
    }
}
You should not animate the content-offset change while applying a transformation from real-time user input, since the animation can easily make the feedback feel sluggish.
Change
tiledScrollView.setContentOffset(offset, animated: true)
to
tiledScrollView.setContentOffset(offset, animated: false)
I'm not entirely certain how you want to accomplish this, but if you want to slow down or speed up a pan gesture translation, add a multiplier.
switch gesture.state {
case .began:
    gesture.setTranslation(CGPoint.zero, in: gesture.view)
case .changed:
    // read the translation before resetting it
    let translation = gesture.translation(in: gesture.view)
    gesture.setTranslation(CGPoint.zero, in: gesture.view)
    if someView.frame.origin.y < someThreshold {
        someView.center = CGPoint(x: someView.center.x, y: someView.center.y + (translation.y * 0.25))
    }
    ...
Here, any pan upward beyond someThreshold is 4x slower. In your case, obviously, add a multiplier greater than 1.
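As a rough sketch only (not taken from the original answers), here is how a multiplier greater than 1 could be folded into the question's handler, together with the non-animated offset change; all property names (navigationPanel, navigationPanelView, tiledScrollView, orgSize, navSize) come from the question and are assumed to exist:

// Sketch: scale the pan translation so the navigation panel, and therefore the
// derived content offset of the large scroll view, moves faster than the finger.
func handlePanNavigation(gestureRecognizer: UIPanGestureRecognizer) {
    guard gestureRecognizer.state == .began || gestureRecognizer.state == .changed,
          let panel = gestureRecognizer.view else { return }
    let speedMultiplier: CGFloat = 3.0   // > 1 scrolls the big view faster than the finger
    let translation = gestureRecognizer.translation(in: navigationPanel)
    panel.center = CGPoint(x: panel.center.x + translation.x * speedMultiplier,
                           y: panel.center.y + translation.y * speedMultiplier)
    gestureRecognizer.setTranslation(.zero, in: navigationPanel)
    let transform = CGAffineTransform(scaleX: orgSize.width * tiledScrollView.zoomScale / navSize.width,
                                      y: orgSize.height * tiledScrollView.zoomScale / navSize.height)
    tiledScrollView.setContentOffset(navigationPanelView.frame.origin.applying(transform), animated: false)
}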

swift - draw with finger over UIImageView

In my app user needs to take a picture with the camera and then mark some areas in the image using a finger.
So I created a UIImageView which holds the image from the camera, then added a UIPanGestureRecognizer which listens for "drawing" gestures:
panGesture = UIPanGestureRecognizer(target: self, action: #selector(AttachmentInputViewController.handlePanGesture(_:)))
imageView.addGestureRecognizer(panGesture!)
handlePanGesture:
func handlePanGesture(_ sender: UIPanGestureRecognizer) {
    let point = sender.location(in: sender.view)
    switch sender.state {
    case .began:
        self.startAtPoint(point: point)
    case .changed:
        self.continueAtPoint(point: point)
    case .ended:
        self.endAtPoint(point: point)
    case .failed:
        self.endAtPoint(point: point)
    default:
        assert(false, "State not handled")
    }
}
Then I create a UIBezierPath which holds the "drawing" and create a separate image with those markings:
private func startAtPoint(point: CGPoint) {
    path = UIBezierPath()
    path.lineWidth = 5
    path.move(to: point)
}

private func continueAtPoint(point: CGPoint) {
    path.addLine(to: point)
}

private func endAtPoint(point: CGPoint) {
    path.addLine(to: point)
    path.addLine(to: point)
    //path.close()
    let imageWidth: CGFloat = imageView.image!.size.width
    let imageHeight: CGFloat = imageView.image!.size.height
    let strokeColor: UIColor = UIColor.red
    // Make a graphics context
    UIGraphicsBeginImageContextWithOptions(CGSize(width: imageWidth, height: imageHeight), false, 0.0)
    let context = UIGraphicsGetCurrentContext()
    context!.setStrokeColor(strokeColor.cgColor)
    //for path in paths {
    path.stroke()
    //}
    let image = UIGraphicsGetImageFromCurrentImageContext()
    UIGraphicsEndImageContext()
In the end I need to save the image with the user's markings.
The problem is that the image in the UIImageView is set to scale-to-fit, and when I try to combine the camera image and the markings image they don't match because of the different resolutions and aspect ratios.
I have a feeling there is a better way of achieving this and would appreciate it if anyone can recommend one.
You were very close! The bug was simply in your handlePanGesture() where you computed the point. You obtained the point of the gesture, which is a point in the coordinate space of the view of the gesture, which is your imageView.
You instead want the point in the coordinate space of the UIImage within your imageView. So you just need to convert.
@objc func handlePanGesture(_ sender: UIPanGestureRecognizer) {
    guard let image = self.imageView.image else { return }
    let point = sender.location(in: sender.view)
    let rx = image.size.width / self.imageView.frame.size.width
    let ry = image.size.height / self.imageView.frame.size.height
    let pointInImage = CGPoint(x: point.x * rx, y: point.y * ry)
    switch sender.state {
    case .began:
        self.startAtPoint(point: pointInImage)
    case .changed:
        self.continueAtPoint(point: pointInImage)
    case .ended:
        self.endAtPoint(point: pointInImage)
    case .failed:
        self.endAtPoint(point: pointInImage)
    default:
        assert(false, "State not handled")
    }
}
Thanks for posting your question! Your overall solution worked great for allowing the user to draw a path with their finger on top of an image. With the above tweak, and with my choice to close the path during endAtPoint(), I have exactly what I need to let the user choose the desired area of an image.
Also, one other thing to fix: in your endAtPoint() you call path.addLine(to: point) twice rather than once.
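Combining those two remarks, a minimal sketch of the adjusted beginning of endAtPoint(point:) could look like this (closing the path reflects the commenter's choice, not a requirement):

private func endAtPoint(point: CGPoint) {
    path.addLine(to: point)   // add the final point only once
    path.close()              // optional: close the outline of the marked area
    // ... render the path into the image context as in the question ...
}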

Move UIImageView Independently from its Mask in Swift

I'm trying to mask a UIImageView in such a way that it would allow the user to drag the image around without moving its mask. The effect would be similar to how one can position an image within the Instagram app essentially allowing the user to define the crop region of the image.
Here's an animated gif to demonstrate what I'm after.
Here's how I'm currently masking the image and repositioning it on drag/pan events.
import UIKit

class ViewController: UIViewController {

    var dragDelta = CGPoint()

    @IBOutlet weak var imageView: UIImageView!

    override func viewDidLoad() {
        super.viewDidLoad()
        attachMask()

        // listen for pan/drag events //
        let pan = UIPanGestureRecognizer(target:self, action:#selector(onPanGesture))
        pan.maximumNumberOfTouches = 1
        pan.minimumNumberOfTouches = 1
        self.view.addGestureRecognizer(pan)
    }

    func onPanGesture(gesture:UIPanGestureRecognizer)
    {
        let point:CGPoint = gesture.locationInView(self.view)
        if (gesture.state == .Began){
            print("begin", point)
            // capture our drag start position
            dragDelta = CGPoint(x:point.x-imageView.frame.origin.x, y:point.y-imageView.frame.origin.y)
        } else if (gesture.state == .Changed){
            // update image position based on how far we've dragged from drag start
            imageView.frame.origin.y = point.y - dragDelta.y
        } else if (gesture.state == .Ended){
            print("ended", point)
        }
    }

    func attachMask()
    {
        let mask = CAShapeLayer()
        mask.path = UIBezierPath(roundedRect: CGRect(x: 0, y: 100, width: imageView.frame.size.width, height: 400), cornerRadius: 5).CGPath
        mask.anchorPoint = CGPoint(x: 0, y: 0)
        mask.fillColor = UIColor.redColor().CGColor
        view.layer.addSublayer(mask)
        imageView.layer.mask = mask;
    }
}
This results in both the image and mask moving together as you see below.
Any suggestions on how to "lock" the mask so the image can be moved independently underneath it would be very much appreciated.
Moving a mask and frame separately from each other to reach this effect isn't the best way to go about doing this. Most apps that do this sort of effect do the following:
1. Add a UIScrollView to the root view (with panning/zooming enabled)
2. Add a UIImageView to the UIScrollView
3. Size the UIImageView such that it has a 1:1 ratio with the image
4. Set the contentSize of the UIScrollView to match that of the UIImageView
The user can now pan around and zoom into the UIImageView as needed.
Next, if you're, say, cropping the image:
Get the visible rectangle (taken from Getting the visible rect of an UIScrollView's content)
CGRect visibleRect = [scrollView convertRect:scrollView.bounds toView:zoomedSubview];
Use whatever cropping method you'd like on the UIImage to get the necessary content.
This is the smoothest way to handle this kind of interaction and the code stays pretty simple!
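For reference, a rough Swift sketch of that last cropping step might look like the following; it ignores the image's scale and orientation for brevity, and assumes the image view's coordinates map 1:1 to image pixels as described in the steps above:

// Sketch: convert the scroll view's visible area into the image view's
// coordinate space, then crop the underlying CGImage to that rectangle.
let visibleRect = scrollView.convert(scrollView.bounds, to: imageView)
if let cgImage = imageView.image?.cgImage,
   let croppedCGImage = cgImage.cropping(to: visibleRect) {
    let croppedImage = UIImage(cgImage: croppedCGImage)
    // use croppedImage as needed
}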
Just figured it out. Setting the CAShapeLayer's position property to the inverse of the UIImageView's position as it's dragged will lock the CAShapeLayer in its original place. However, Core Animation will, by default, try to animate the layer whenever its position is reassigned.
This can be disabled by wrapping both position assignments in a CATransaction, as shown below.
func onPanGesture(gesture:UIPanGestureRecognizer)
{
    let point:CGPoint = gesture.locationInView(self.view)
    if (gesture.state == .Began){
        print("begin", point)
        // capture our drag start position
        dragDelta = CGPoint(x:point.x-imageView.frame.origin.x, y:point.y-imageView.frame.origin.y)
    } else if (gesture.state == .Changed){
        // update image & mask positions based on the distance dragged
        // and wrap both assignments in a CATransaction transaction to disable animations
        CATransaction.begin()
        CATransaction.setDisableActions(true)
        mask.position.y = dragDelta.y - point.y
        imageView.frame.origin.y = point.y - dragDelta.y
        CATransaction.commit()
    } else if (gesture.state == .Ended){
        print("ended", point)
    }
}
UPDATE
Here's an implementation of what I believe AlexKoren is suggesting. This approach nests a UIImageView within a UIScrollView and uses the UIScrollView to mask the image.
class ViewController: UIViewController, UIScrollViewDelegate {

    @IBOutlet weak var scrollView: UIScrollView!

    var imageView:UIImageView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()

        let image = UIImage(named: "point-bonitas")
        imageView.image = image
        imageView.frame = CGRectMake(0, 0, image!.size.width, image!.size.height);

        scrollView.delegate = self
        scrollView.contentMode = UIViewContentMode.Center
        scrollView.addSubview(imageView)
        scrollView.contentSize = imageView.frame.size

        let scale = scrollView.frame.size.width / scrollView.contentSize.width
        scrollView.minimumZoomScale = scale
        scrollView.maximumZoomScale = scale // set to 1 to allow zoom out to 100% of image size //
        scrollView.zoomScale = scale

        // center image vertically in scrollview //
        let offsetY:CGFloat = (scrollView.contentSize.height - scrollView.frame.size.height) / 2;
        scrollView.contentOffset = CGPointMake(0, offsetY);
    }

    func scrollViewDidZoom(scrollView: UIScrollView) {
        print("zoomed")
    }

    func viewForZoomingInScrollView(scrollView: UIScrollView) -> UIView? {
        return imageView
    }
}
The other, perhaps simpler way would be to put the image view in a scroll view and let the scroll view manage it for you. It handles everything.
