Properly subclassing MKOverlayRenderer - ios

I'm trying to modify the path of an MKPolyline at runtime so that it doesn't overlap another one.
I have already collected all the overlapping points. In the createPath() of my MKPolylineRenderer subclass I add an offset to those points, so in theory the same path should be drawn with that small offset and no longer overlap. Sadly, that's not what happens: the polyline is drawn exactly as if nothing had changed.
I first tried to do this after the addPolyline() call, but I read that once a polyline has been added, the only way to redraw it is to remove it and add it again. So, for testing purposes, I moved all of this to before adding the polyline, so that by the time it reaches the map it already knows about the overlapping points. That didn't work either.
Hypotheses:
1. It has something to do with the map rendering on different threads, so the changes are not reflected. This is fine; it has to work this way to optimise rendering.
2. createPath() is not the right place to accomplish this. Indeed it isn't.
3. I should apply a transform in the renderer's draw() function instead. This is it.
This is the createPath() function:
override func createPath()
{
    let poly = polyline as! TransportPolyline
    switch poly.id
    {
    case 1:
        let newPath = CGMutablePath()
        for index in 0..<poly.pointCount
        {
            let point = poly.points()[index]
            let predicate = { MKMapPointEqualToPoint($0, poly.points()[index]) }
            // This is the offset I should apply
            let offset: CGFloat = overlapsAtPoints.contains(where: predicate) ? 100000.0 : 0.0
            // I tried to use a transform as well, but the result was the same
            let transform = CGAffineTransform(translationX: offset, y: offset)
            if index == 0
            {
                // Here I add the offset and/or the transform without success
                newPath.move(to: CGPoint(x: CGFloat(point.x) + offset, y: CGFloat(point.y) + offset),
                             transform: transform)
            }
            else
            {
                // Here as well
                newPath.addLine(to: CGPoint(x: CGFloat(point.x) + offset, y: CGFloat(point.y) + offset),
                                transform: transform)
            }
        }
        // Set the new path on the renderer's path property
        self.path = newPath
    default: break
    }
}
And this is the draw() function:
override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext)
{
    let poly = polyline as! TransportPolyline
    guard poly.id == 1 else {
        super.draw(mapRect, zoomScale: zoomScale, in: context)
        return
    }
    // If I apply this, the polyline does move, but obviously it moves the
    // entire path, not only the segments I want.
    context.translateBy(x: 1000, y: 1000)
    super.draw(mapRect, zoomScale: zoomScale, in: context)
}
Any suggestions are much appreciated.
UPDATE:
I found out that the problem might be in how I'm drawing the context in the draw method.
The documentation says:
The default implementation of this method does nothing. Subclasses are
expected to override this method and use it to draw the overlay’s
contents.
so by calling super.draw() I'm not doing anything.
Any ideas on how to properly override this method? Also taking into consideration this:
To improve drawing performance, the map view may divide your overlay
into multiple tiles and render each one on a separate thread. Your
implementation of this method must therefore be capable of safely
running from multiple threads simultaneously. In addition, you should
avoid drawing the entire contents of the overlay each time this method
is called. Instead, always take the mapRect parameter into
consideration and avoid drawing content outside that rectangle.

So basically I was on the right track but using the wrong tools. The actual way to accomplish this is by overriding the draw() function in your MKPolylineRenderer subclass.
override func draw(_ mapRect: MKMapRect, zoomScale: MKZoomScale, in context: CGContext)
{
    // First validate that the rect you are asked to draw in actually
    // has some content. See the last quote above.
    let theMapRect: MKMapRect = self.overlay.boundingMapRect
    guard MKMapRectIntersectsRect(mapRect, theMapRect), let originalPath = self.path else {
        return
    }
    // Do some logic if needed.
    // Create and draw your path
    let path = CGMutablePath()
    path.move(to: originalPath.currentPoint)
    path.addLines(between: remainingPoints)
    context.addPath(path)
    // Customise it
    context.setStrokeColor(strokeColor!.cgColor)
    context.setLineWidth(lineWidth / zoomScale)
    // And apply it
    context.strokePath()
}
By doing this I was able to successfully draw the path I wanted for each overlay without any trouble.

Related

Change stroke Color to Clear Color CGContext

I want the user to draw on a custom UIView with clear color as the stroke color. The code works fine for other colors, but not for clear color.
override func draw(_ rect: CGRect) {
    guard let context = UIGraphicsGetCurrentContext() else { return }
    context.addRect(rect)
    draw(inContext: context)
}

func draw(inContext context: CGContext) {
    context.setLineWidth(5)
    context.setStrokeColor(UIColor.clear.cgColor)
    context.setLineCap(.round)
    for line in lineArray {
        guard let firstPoint = line.first else { continue }
        context.beginPath()
        context.move(to: firstPoint)
        for point in line.dropFirst() {
            context.addLine(to: point)
        }
        context.strokePath()
    }
}
So, it sounds like you have a situation like this. You have an image being displayed in an image view:
And you want the image to be hidden until the user draws on top of it, at which point the user's drawing should reveal that part of the image, like this:
If so, what you want is not to draw with clear color; you want a mask. What's actually happening in that second screenshot is that we track the user's drawing and draw black into an otherwise clear mask on the image view.

Is it possible to group images by drawing lines on them in iOS Swift?

For example, I have multiple images in views at random positions. Images are selected by drawing lines on them and grouped using gestures. Right now I am able to show the images randomly, but not to group them by drawing a line on them.
Screenshot 1 shows the result I am getting now;
screenshot 2 is exactly what I want.
For what you are trying to do, I would start by creating a custom view (a subclass) that can handle gestures and draw paths.
For the gesture recognizer I would use UIPanGestureRecognizer. Keep an array of the points where the gesture was handled, which are then used to draw the path:
private var currentPathPoints: [CGPoint] = []

@objc private func onPan(_ sender: UIGestureRecognizer) {
    switch sender.state {
    case .began: currentPathPoints = [sender.location(in: self)] // Reset the array to just the current point; the user started a new path
    case .changed: currentPathPoints.append(sender.location(in: self)) // Just append a new point
    case .cancelled, .ended: endPath() // Report that the user lifted their finger
    default: break // The extra states are here just to annoy us
    }
}
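As a side note, the .began/.changed/.ended bookkeeping can be factored into a plain value type with no UIKit dependency, which makes it easy to unit test. A sketch under that idea (PathRecorder and its method names are my own, not UIKit API):

```swift
import Foundation

/// Mirrors the .began / .changed / .ended handling of the pan handler above,
/// but with no UIKit dependency so the logic can be tested in isolation.
struct PathRecorder {
    private(set) var currentPathPoints: [CGPoint] = []
    private(set) var finishedPaths: [[CGPoint]] = []

    mutating func began(at point: CGPoint) {
        // Reset: the user just started a new path.
        currentPathPoints = [point]
    }

    mutating func changed(to point: CGPoint) {
        // Append each new drag location.
        currentPathPoints.append(point)
    }

    mutating func ended() {
        // Report the finished path, mimicking the endPath() delegate call.
        finishedPaths.append(currentPathPoints)
        currentPathPoints = []
    }
}
```

The gesture handler then just forwards `sender.location(in: self)` into the recorder for each state.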
If this method is used by the pan gesture recognizer, it tracks the points the user drags through. These are best drawn in draw(_:), which you override in your view like this:
override func draw(_ rect: CGRect) {
    super.draw(rect)
    // Generate the path
    let path: UIBezierPath = {
        let path = UIBezierPath()
        var pointsToDistribute = currentPathPoints
        if let first = pointsToDistribute.first {
            path.move(to: first)
            pointsToDistribute.remove(at: 0)
        }
        pointsToDistribute.forEach { point in
            path.addLine(to: point)
        }
        return path
    }()
    let color = UIColor.red // TODO: use your real color
    color.setStroke()
    path.lineWidth = 3.0
    path.stroke()
}
Now this method will be called whenever you invalidate the drawing by calling setNeedsDisplay. In your case that is best done in a property observer on your path points:
private var currentPathPoints: [CGPoint] = [] {
    didSet {
        setNeedsDisplay()
    }
}
Since this view acts as an overlay for your whole scene, you need some way of reporting events back. Create a delegate protocol with methods like:
func endPath() {
    delegate?.myLineView(self, finishedPath: currentPathPoints)
}
So now, if the view controller is the delegate, it can check which image views were selected by the path. For a first version it should be enough to check whether any of the points lies within any of the image views:
func myLineView(sender: MyLineView, finishedPath pathPoints: [CGPoint]) {
    let convertedPoints: [CGPoint] = pathPoints.map { sender.convert($0, to: viewThatContainsImages) }
    let imageViewsHitByPath = allImageViews.filter { imageView in
        return convertedPoints.contains(where: { imageView.frame.contains($0) })
    }
    // Use imageViewsHitByPath
}
After this basic implementation you can start improving things: draw a nicer (curved) line, and instead of checking whether a point is inside an image view, check whether the line segment between any two neighboring points intersects it.
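For that segment-based refinement, one cheap approximation (short of exact segment/rectangle intersection) is to sample interpolated points along each segment and test containment. A sketch under that assumption (the function name is mine, not an existing API):

```swift
import Foundation

/// Returns true if the polyline defined by `points` passes through `rect`,
/// approximated by testing `steps + 1` interpolated points per segment.
func polyline(_ points: [CGPoint], intersects rect: CGRect, steps: Int = 10) -> Bool {
    guard points.count > 1 else {
        return points.first.map(rect.contains) ?? false
    }
    for (a, b) in zip(points, points.dropFirst()) {
        for s in 0...steps {
            let t = CGFloat(s) / CGFloat(steps)
            // Linear interpolation between the two segment endpoints.
            let p = CGPoint(x: a.x + (b.x - a.x) * t,
                            y: a.y + (b.y - a.y) * t)
            if rect.contains(p) { return true }
        }
    }
    return false
}
```

This catches the case where a fast drag produces two points on opposite sides of an image view with no sampled point inside it; increase `steps` for finer coverage.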

CAShapeLayer tap detection in Swift 3

I have a CAShapeLayer for which I have marked the fill color as clear.
When I tap on the line, it does not always detect whether the CAShapeLayer's cgPath contains the tap point. My code is as follows:
override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    let touch = touches.first
    guard let point = touch?.location(in: self) else { return }
    for sublayer in self.layer.sublayers! {
        if let l = sublayer as? CAShapeLayer {
            if let path = l.path, path.contains(point) {
                print("Tap detected")
            }
        }
    }
}
On some occasions it detects the tap, but only if I tap exactly on the center of the line.
So I tried making the line very fat, from 6 to 45 points wide; still it did not work. Then I set the fill to gray, and now whenever I tap on the gray fill it always detects the tap. I am really confused: why does it detect taps on the fill or on the very center of the line, but not on the whole thickness of the line?
Swift 4 answer, based on the explanation and the link to CGPath Hit Testing - Ole Begemann (2012) posted by caseynolan in another comment:
From Ole Begemann blog:
contains(point: CGPoint)
This function is helpful if you want to hit test on the entire region
the path covers. As such, contains(point: CGPoint) doesn’t work with
unclosed paths because those don’t have an interior that would be
filled.
copy(strokingWithWidth lineWidth: CGFloat, lineCap: CGLineCap, lineJoin: CGLineJoin, miterLimit: CGFloat, transform: CGAffineTransform = default) -> CGPath
This function creates a mirroring tapTarget object that only covers
the stroked area of the path. When the user taps on the screen, we
iterate over the tap targets rather than the actual shapes.
My solution in code
I use a UITapGestureRecognizer linked to the function tap():
var tappedLayers = [CAShapeLayer]()

@IBAction func tap(_ sender: UITapGestureRecognizer) {
    let point = sender.location(in: imageView)
    guard let sublayers = imageView.layer.sublayers as? [CAShapeLayer] else {
        return
    }
    for layer in sublayers {
        // Create a tap target for the path
        if let target = tapTarget(for: layer), target.contains(point) {
            tappedLayers.append(layer)
        }
    }
}

fileprivate func tapTarget(for layer: CAShapeLayer) -> UIBezierPath? {
    guard let path = layer.path else {
        return nil
    }
    let targetPath = path.copy(strokingWithWidth: layer.lineWidth, lineCap: .round, lineJoin: .round, miterLimit: layer.miterLimit)
    return UIBezierPath(cgPath: targetPath)
}
I think the problem is that CGPath.contains() doesn't operate the way you expect it to.
From Apple's CGPath docs:
Discussion
A point is contained in a path if it would be inside the painted region when the path is filled.
So the method isn't actually checking if you're intersecting a drawn line, it's checking if you're intersecting the shape (even if you're not explicitly filling the path).
Some basic experiments show that the method returns true if the supplied point:
Sits exactly in the middle of a line/path (not including the line's drawn outline from the CAShapeLayer's lineWidth) or
Is in the middle of where the path would create a solid shape (as if it had been closed and filled).
You might find some workarounds here on Stack Overflow (it seems that many others have had the same problem before). E.g. Hit detection when drawing lines in iOS.
You might also find this blog post useful: CGPath Hit Testing - Ole Begemann.

How to animate a custom property in iOS

I have a custom UIView that draws its contents using Core Graphics calls. All working well, but now I want to animate a change in a value that affects the display. I have a custom property to achieve this in my custom UIView:
var _anime: CGFloat = 0
var anime: CGFloat {
    set {
        _anime = newValue
        for gauge in gauges {
            gauge.animate(newValue)
        }
        setNeedsDisplay()
    }
    get {
        return _anime
    }
}
And I have started an animation from the ViewController:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    self.emaxView.anime = 0.5
    UIView.animate(withDuration: 4) {
        DDLogDebug("in animations")
        self.emaxView.anime = 1.0
    }
}
This doesn't work: the animated value does change from 0.5 to 1.0, but it does so instantly. There are two calls to the anime setter, once with 0.5 and then immediately one with 1.0. If I change the property I'm animating to a standard UIView property, e.g. alpha, it works correctly.
I'm coming from an Android background, so this whole iOS animation framework looks suspiciously like black magic to me. Is there any way of animating a property other than predefined UIView properties?
Below is what the animated view is supposed to look like: it gets a new value about every half second, and I want the pointer to move smoothly over that time from the previous value to the next. The code that updates it is:
open func animate(_ progress: CGFloat) {
    //DDLogDebug("in animate: progress \(progress)")
    if dataValid {
        currentValue = targetValue * progress + initialValue * (1 - progress)
    }
}
Calling draw() after it's updated will make it redraw with the new pointer position, interpolating between initialValue and targetValue.
Short answer: use CADisplayLink to get called every n frames. Sample code:
override func viewDidAppear(_ animated: Bool) {
    super.viewDidAppear(animated)
    let displayLink = CADisplayLink(target: self, selector: #selector(animationDidUpdate))
    displayLink.preferredFramesPerSecond = 50
    displayLink.add(to: .main, forMode: .defaultRunLoopMode)
    updateValues()
}

let animationDuration: CFTimeInterval = 0.5 // how long one pointer sweep takes
var animationComplete = false
var lastUpdateTime = CACurrentMediaTime()

func updateValues() {
    self.emaxView.animate(0)
    lastUpdateTime = CACurrentMediaTime()
    animationComplete = false
}

@objc func animationDidUpdate(displayLink: CADisplayLink) {
    if !animationComplete {
        let interval = (CACurrentMediaTime() - lastUpdateTime) / animationDuration
        self.emaxView.animate(min(CGFloat(interval), 1))
        animationComplete = interval >= 1.0
    }
}
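The progress computation inside animationDidUpdate is just elapsed time divided by the duration, clamped to 0...1. Pulled out as a pure function (my own helper, for illustration):

```swift
import Foundation

/// Converts an elapsed time into an animation progress value in 0...1,
/// mirroring the min(CGFloat(interval), 1) clamp used with the display link.
func animationProgress(elapsed: TimeInterval, duration: TimeInterval) -> CGFloat {
    guard duration > 0 else { return 1 } // a zero-length animation is instantly complete
    return CGFloat(min(max(elapsed / duration, 0), 1))
}
```

The animation is complete exactly when this returns 1, which is what the `animationComplete = interval >= 1.0` check captures.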
The code could be refined and generalised but it's doing the job I needed.
You will need to call layoutIfNeeded() instead of setNeedsDisplay() if you modify any Auto Layout constraints in your gauge.animate(newValue) function.
https://stackoverflow.com/a/12664093/255549
If that is drawn entirely with CoreGraphics there is a pretty simple way to animate this if you want to do a little math. Fortunately you have a scale there that tells you the number of radians exactly to rotate, so the math is minimal and no trigonometry is involved. The advantage of this is you won't have to redraw the entire background, or even the pointer. It can be a bit tricky to get angles and stuff right, I can help out if the following doesn't work.
Draw the background of the view normally in draw(_:). The pointer should go into a CALayer. You can pretty much just move the drawing code for the pointer, including the centre dark gray circle, into a separate method that returns a UIImage. The layer is sized to the frame of the view (in layoutSubviews()), and the anchor point has to be set to (0.5, 0.5), which is actually the default, so you should be OK leaving that line out. Then your animate method just changes the layer's transform to rotate according to what you need. Here's how I would do it. I'm going to change the method and variable names, because anime and animate were just a bit too obscure.
Because layer properties implicitly animate with a duration of 0.25 you might be able to get away without even calling an animation method. It's been a while since I've worked with CoreAnimation, so test it out obviously.
The advantage here is that you just set the RPM of the dial to what you want, and it will rotate over to that speed. And no one will read your code and be like WTF is _anime! :) I have included the init methods to remind you to change the contents scale of the layer (or it renders in low quality), obviously you may have other things in your init.
class SpeedDial: UIView {
    var pointer: CALayer!
    var pointerView: UIView!
    var rpm: CGFloat = 0 {
        didSet {
            pointer.setAffineTransform(rpm == 0 ? .identity : CGAffineTransform(rotationAngle: rpm/25 * .pi))
        }
    }

    override init(frame: CGRect) {
        super.init(frame: frame)
        commonInit()
    }

    required init?(coder aDecoder: NSCoder) {
        super.init(coder: aDecoder)
        commonInit()
    }

    private func commonInit() {
        pointer = CALayer()
        pointer.contentsScale = UIScreen.main.scale // otherwise the layer renders in low quality
        pointerView = UIView()
        addSubview(pointerView)
        pointerView.layer.addSublayer(pointer)
    }

    override func draw(_ rect: CGRect) {
        guard let context = UIGraphicsGetCurrentContext() else { return }
        context.saveGState()
        // Draw the background with the values,
        // but not the pointer or the centre circle.
        context.restoreGState()
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        pointerView.frame = bounds
        pointer.frame = bounds
        pointer.anchorPoint = CGPoint(x: 0.5, y: 0.5)
        pointer.contents = drawPointer(in: bounds)?.cgImage
    }

    func drawPointer(in rect: CGRect) -> UIImage? {
        UIGraphicsBeginImageContextWithOptions(rect.size, false, 0)
        guard let context = UIGraphicsGetCurrentContext() else { return nil }
        context.saveGState()
        // Make sure the pointer is drawn pointing at zero, i.e. at 8 o'clock.
        // If your drawing code draws it pointing vertically (at 12 o'clock),
        // rotate the context to zero before drawing, like so:
        context.translateBy(x: rect.width/2, y: rect.height/2)
        context.rotate(by: -17.5/25 * .pi) // the angle judging by the dial - remember .pi is 180 degrees
        context.translateBy(x: -rect.width/2, y: -rect.height/2)
        // ...actually draw the pointer here...
        context.restoreGState()
        let pointerImage = UIGraphicsGetImageFromCurrentImageContext()
        UIGraphicsEndImageContext()
        return pointerImage
    }
}
The pointer's identity transform has it pointing at 0 RPM, so every time you up the RPM to what you want, it will rotate up to that value.
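For clarity, the transform in the rpm didSet maps RPM linearly to radians; 25 RPM corresponds to half a turn (.pi) on this dial. Isolated as a pure function (the name is mine, for illustration):

```swift
import Foundation

/// Maps a dial value to a rotation angle in radians,
/// matching the rpm/25 * .pi transform used in SpeedDial's didSet.
func pointerAngle(forRPM rpm: CGFloat) -> CGFloat {
    return rpm / 25 * .pi
}
```

So rpm = 12.5 rotates the pointer a quarter turn, and rpm = 25 rotates it half a turn from the zero position.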
edit: tested it, it works. Except I made a couple of errors: you don't need to change the layer's position (I updated the code accordingly), and changing the layer's transform triggers layoutSubviews in the immediate parent, which I had forgotten about. The easiest way around this is to put the pointer layer into a UIView that is a subview of SpeedDial; I've updated the code. Good luck! Maybe this is overkill, but it's a bit more reusable than animating the entire rendering of the view, background and all.

Image Warp in iOS

I am new to iOS image editing. I want to implement image-warp functionality in iOS (Objective-C or Swift, either one). I searched a lot on Google but couldn't find exactly what I want. Below are the links I found:
How can you apply distortions to a UIImage using OpenGL ES?
https://github.com/BradLarson/GPUImage
https://github.com/Ciechan/BCMeshTransformView
Here is an image of what I want: when I touch a grid point the image should warp, and if I drag the grid point back to its original place, the image should return to the original.
I've written an extension for BCMeshTransformView that allows you to apply a warp transform in response to the user's touch input.
extension BCMutableMeshTransform {
    // Note: CGPoint.distance(to:) used below is a small helper extension,
    // e.g. hypot(other.x - x, other.y - y); it is not a standard API.
    static func warpTransform(from startPoint: CGPoint,
                              to endPoint: CGPoint, in size: CGSize) -> BCMutableMeshTransform {
        let resolution: UInt = 30
        let mesh = BCMutableMeshTransform.identityMeshTransform(withNumberOfRows: resolution,
                                                                numberOfColumns: resolution)!
        // Normalize the touch points into the 0...1 mesh coordinate space
        let _startPoint = CGPoint(x: startPoint.x/size.width, y: startPoint.y/size.height)
        let _endPoint = CGPoint(x: endPoint.x/size.width, y: endPoint.y/size.height)
        let dragDistance = _startPoint.distance(to: _endPoint)
        for i in 0..<mesh.vertexCount {
            var vertex = mesh.vertex(at: i)
            let myDistance = _startPoint.distance(to: vertex.from)
            // Dampen the displacement near the edges so the image border stays put
            let hEdgeDistance = min(vertex.from.x, 1 - vertex.from.x)
            let vEdgeDistance = min(vertex.from.y, 1 - vertex.from.y)
            let hProtection = min(100, pow(hEdgeDistance * 100, 1.5))/100
            let vProtection = min(100, pow(vEdgeDistance * 100, 1.5))/100
            if myDistance < dragDistance {
                let maxDistort = CGPoint(x: (_endPoint.x - _startPoint.x) / 2,
                                         y: (_endPoint.y - _startPoint.y) / 2)
                let normalizedDistance = myDistance/dragDistance
                // Cosine falloff: full effect at the drag origin, zero at dragDistance
                let normalizedImpact = (cos(normalizedDistance * .pi) + 1) / 2
                vertex.to.x += maxDistort.x * normalizedImpact * hProtection
                vertex.to.y += maxDistort.y * normalizedImpact * vProtection
                mesh.replaceVertex(at: i, with: vertex)
            }
        }
        return mesh
    }
}
Then just set the mesh transform on your transformView from the touch handlers:
fileprivate var startPoint: CGPoint?

override func touchesBegan(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let touch = touches.first else { return }
    startPoint = touch.location(in: self)
}

override func touchesMoved(_ touches: Set<UITouch>, with event: UIEvent?) {
    guard let startPoint_ = startPoint else { return }
    guard let touch = touches.first else { return }
    let position = touch.location(in: self)
    transformView.meshTransform = BCMutableMeshTransform.warpTransform(from: startPoint_,
                                                                       to: position, in: bounds.size)
}
The warp code is not perfect, but it gets the job done in my case. You can play with it, but the general idea stays the same.
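The heart of that warp is the cosine falloff: vertices at the drag origin move the full amount, and the effect fades smoothly to zero at dragDistance. Isolated as a pure function (the name is mine, for illustration):

```swift
import Foundation

/// Cosine ease falloff: 1 at distance 0, fading to 0 at distance >= dragDistance,
/// matching (cos(normalizedDistance * .pi) + 1) / 2 in the mesh code above.
func warpImpact(distance: Double, dragDistance: Double) -> Double {
    guard dragDistance > 0, distance < dragDistance else { return 0 }
    let normalized = distance / dragDistance
    return (cos(normalized * .pi) + 1) / 2
}
```

Using a cosine instead of a linear ramp means the displacement eases in and out, so the mesh has no visible crease at the boundary of the affected region.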
Take a look at my answer to this question:
Warp \ bend effect on a UIView?
You might also look at this git library:
https://github.com/Ciechan/BCMeshTransformView
That might be a good starting point for what you want to do, but you'll need to learn about OpenGL, transformation matrices, and lots of other things.
What you are asking about is fairly straightforward OpenGL. You just need to set up a triangle strip that describes the modified grid points. You'd load your image as a texture, and then render the texture using the triangle strips.
However, "straightforward OpenGL" is sort of like straightforward rocket science. The high-level concepts may be straightforward, but there end up being lots of very fussy details you have to get right in order to make it work.
Take a look at this short video I created with my app Face Dancer
Face Dancer video with grid lines
