How to draw CLLocationCoordinate2Ds on an MKMapSnapshotter image (drawing on a map view's rendered image) - iOS

I have a map view with an array of CLLocationCoordinate2D. I use these locations to draw lines on my map view using MKPolyline. Now I want to store it as a UIImage. I found that there's the MKMapSnapshotter class, but unfortunately I can't draw overlays on it: "Snapshotter objects do not capture the visual representations of any overlays or annotations that your app creates." So I get only a blank map image. Is there any way to get an image with my overlays?
private func generateImageFromMap() {
    let mapSnapshotterOptions = MKMapSnapshotter.Options()
    guard let region = mapRegion() else { return }
    mapSnapshotterOptions.region = region
    mapSnapshotterOptions.size = CGSize(width: 200, height: 200)
    mapSnapshotterOptions.showsBuildings = false
    mapSnapshotterOptions.showsPointsOfInterest = false

    let snapShotter = MKMapSnapshotter(options: mapSnapshotterOptions)
    snapShotter.start() { snapshot, error in
        guard let snapshot = snapshot else { return }

        // do something with image ....
        let mapImage = snapshot...
    }
}
How can I put overlays on this image? Or is there maybe another way to solve this problem?

Unfortunately, you have to draw them yourself. Fortunately, the snapshot (MKMapSnapshotter.Snapshot) has a convenient point(for:) method to convert a CLLocationCoordinate2D into a CGPoint within the snapshot.
For example, assume you had an array of CLLocationCoordinate2D:
private var coordinates: [CLLocationCoordinate2D]?
private func generateImageFromMap() {
    guard let region = mapRegion() else { return }

    let options = MKMapSnapshotter.Options()
    options.region = region
    options.size = CGSize(width: 200, height: 200)
    options.showsBuildings = false
    options.showsPointsOfInterest = false

    MKMapSnapshotter(options: options).start() { snapshot, error in
        guard let snapshot = snapshot else { return }

        let mapImage = snapshot.image
        let finalImage = UIGraphicsImageRenderer(size: mapImage.size).image { _ in
            // draw the map image
            mapImage.draw(at: .zero)

            // only bother with the following if we have a path with two or more coordinates
            guard let coordinates = self.coordinates, coordinates.count > 1 else { return }

            // convert the `[CLLocationCoordinate2D]` into a `[CGPoint]`
            let points = coordinates.map { coordinate in
                snapshot.point(for: coordinate)
            }

            // build a bezier path using that `[CGPoint]`
            let path = UIBezierPath()
            path.move(to: points[0])
            for point in points.dropFirst() {
                path.addLine(to: point)
            }

            // stroke it
            path.lineWidth = 1
            UIColor.blue.setStroke()
            path.stroke()
        }

        // do something with finalImage
    }
}
Then, given a map view showing those coordinates as an MKPolyline (rendered by mapView(_:rendererFor:), as usual), the above code will create a finalImage with the same path drawn over the snapshot.
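Note that both snippets above assume a mapRegion() helper. Here is a minimal sketch of one, assuming you simply want a region that encloses all of the stored coordinates (the 1.2 padding factor is an arbitrary choice):
private func mapRegion() -> MKCoordinateRegion? {
    guard let coordinates = coordinates, !coordinates.isEmpty else { return nil }

    // let the polyline compute its own bounding map rect
    let polyline = MKPolyline(coordinates: coordinates, count: coordinates.count)
    var region = MKCoordinateRegion(polyline.boundingMapRect)

    // pad the region slightly so the path does not touch the image edges
    region.span.latitudeDelta *= 1.2
    region.span.longitudeDelta *= 1.2
    return region
}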

Related

Why is my programmatic screenshot capturing an out-of-date view?

I have a map view that allows users to draw perimeter lines, and I need to capture a screenshot when they are done to save for recording purposes. For some reason, my code captures the view state before the new overlay is added, even though I add the overlay before attempting the screenshot.
I use a separate view to capture gestures, and then convert the points to an overlay and add it to the map here, where points are the points gathered from gesture tracking:
func convertFragments() {
    var coordinates: [CLLocationCoordinate2D] = []
    for point in points {
        let coordinate = mapView.convert(point, toCoordinateFrom: drawingView)
        coordinates.append(coordinate)
    }

    let polyline = MKPolyline(coordinates: coordinates, count: coordinates.count)
    polyline.title = selectedTool.name

    removeLines(for: selectedTool)
    points = []

    mapView.addOverlay(polyline)
    incidentManager.update(for: selectedTool, value: polyline)
}
Then incidentManager.update(for:value:) makes a network call to save the information, and calls a second method, log(eventType:), to capture the screenshot and update a log:
func update(for tool: MapTool, value: Any) {
    func saveLine(_ line: MKPolyline, for key: String) {
        incidentReference.setData([
            key: IncidentManager.convertCLPoints(line.coordinates)
        ], merge: true)
    }

    switch tool {
    case .hotZone, .innerPerimeter, .outerPerimeter:
        let line = value as! MKPolyline
        saveLine(line, for: tool.rawValue)
    case .commandPost, .stagingArea:
        let point = value as! CLLocationCoordinate2D
        let geoPoint = GeoPoint(latitude: point.latitude, longitude: point.longitude)
        incidentReference.setData([
            tool.rawValue: geoPoint
        ], merge: true)
    case .poi:
        let annotation = value as! PerimeterMapAnnotation
        let point = annotation.coordinate
        let geoPoint = GeoPoint(latitude: point.latitude, longitude: point.longitude)
        incidentReference.setData([
            "pointsOfInterest": [annotation.title: geoPoint]
        ], merge: true)
    default:
        return
    }

    updateAddress(defaultValue: value)

    let eventType = tool.eventType(didSet: true)
    log(eventType: eventType)
}
I have an extension on UIView that captures the view, but for some reason, it's not the most up-to-date version of the map view.
private func log(eventType: LogEventType) {
    guard let mapImage = mapView.asImage() else { return }

    let storageRef = Storage.storage().reference()
    let imageRef = storageRef.child("\(incident.id)/\(UUID().uuidString).jpg")
    let eventTime = Date()
    let eventTimeString = eventTime.apiDateString

    incidentReference.collection("eventLog").document(eventTimeString).setData([
        "logEventType": eventType.rawValue,
        "imageReference": imageRef.fullPath,
        "date": Timestamp(date: eventTime)
    ])

    upload(image: mapImage, at: imageRef)

    if shouldUpdateCover {
        shouldUpdateCover = false
        incidentReference.setData([
            "coverPhotoRef": imageRef.fullPath
        ], merge: true)
    }
}
func asImage() -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(self.bounds.size, self.isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }

    if let context = UIGraphicsGetCurrentContext() {
        self.layer.render(in: context)
        let image = UIGraphicsGetImageFromCurrentImageContext()
        return image
    }
    return nil
}
I have tried many variations of this "screenshot" method that I have found online, with no luck. When I debug, I can inspect the map and its overlays, and see that they are updated beforehand.
Any ideas why the image captured here does not capture the changes made?
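One hedged note: layer.render(in:) captures the layer tree's current contents, and MapKit renders overlays asynchronously, so the new overlay may simply not be drawn yet at capture time. A sketch of a variant using drawHierarchy(in:afterScreenUpdates: true) (the same API used in an answer further down this page), which forces pending screen updates to be committed before capturing; whether it picks up the freshly added overlay depends on MapKit's asynchronous overlay rendering:
// Sketch, not a confirmed fix: a UIView extension method alongside asImage()
// that waits for pending screen updates before drawing the hierarchy.
func asImageAfterScreenUpdates() -> UIImage? {
    UIGraphicsBeginImageContextWithOptions(bounds.size, isOpaque, 0.0)
    defer { UIGraphicsEndImageContext() }
    drawHierarchy(in: bounds, afterScreenUpdates: true)
    return UIGraphicsGetImageFromCurrentImageContext()
}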

Unable to draw bounding box while detecting object with ARKit 2

I am developing an iOS app which will highlight objects (not a specific one, as we do with .arobject files) with an outlined box in the real world. The idea is to implement only the bounding-box drawing from this documentation / example.
I got some ideas from this Stack Overflow answer, but I am still unable to draw an outlined bounding box around the scanned object.
// Declaration
let configuration = ARObjectScanningConfiguration()
let augmentedRealitySession = ARSession()

// viewWillAppear(_ animated: Bool)
configuration.planeDetection = .horizontal
sceneView.session.run(configuration, options: .resetTracking)

// renderer
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    //print("\(self.detectionObjects.debugDescription)")
    guard let objectAnchor = anchor as? ARObjectAnchor else { return }

    // 2. Create A Bounding Box Around Our Object
    let scale = CGFloat(objectAnchor.referenceObject.scale.x)
    let boundingBoxNode = BlackMirrorzBoundingBox(points: objectAnchor.referenceObject.rawFeaturePoints.points, scale: scale)
    node.addChildNode(boundingBoxNode)
}

// BlackMirrorzBoundingBox class
init(points: [float3], scale: CGFloat, color: UIColor = .cyan) {
    super.init()

    var localMin = float3(repeating: Float.greatestFiniteMagnitude)
    var localMax = float3(repeating: -Float.greatestFiniteMagnitude)

    for point in points {
        localMin = min(localMin, point)
        localMax = max(localMax, point)
    }

    self.simdPosition += (localMax + localMin) / 2
    let extent = localMax - localMin

    let wireFrame = SCNNode()
    let box = SCNBox(width: CGFloat(extent.x), height: CGFloat(extent.y), length: CGFloat(extent.z), chamferRadius: 0)
    box.firstMaterial?.diffuse.contents = color
    box.firstMaterial?.isDoubleSided = true
    wireFrame.geometry = box
    setupShaderOnGeometry(box)
    self.addChildNode(wireFrame)
}

func setupShaderOnGeometry(_ geometry: SCNBox) {
    guard let path = Bundle.main.path(forResource: "wireframe_shader", ofType: "metal", inDirectory: "art.scnassets"),
          let shader = try? String(contentsOfFile: path, encoding: .utf8) else {
        return
    }

    geometry.firstMaterial?.shaderModifiers = [.surface: shader]
}
With the above logic I am getting a box only on the plane surface, instead of an outlined box as in this picture.

You are probably missing the wireframe_shader shader file.
Make sure you add it to the project inside art.scnassets, then try to reload the app.
You can find a similar shader in this repository; don't forget to change the name of the shader file or the resource name in the code.
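If you cannot ship a separate .metal file, one workaround is to inline the shader-modifier source as a string. The body below is only a sketch of a wireframe-style surface modifier (it discards every fragment except a thin border in each face's texture-coordinate space); it is not the exact shader from the linked repository:
// Sketch: an inlined surface shader modifier approximating a wireframe look.
// The `edge` threshold is an arbitrary choice, not taken from the original shader.
func setupInlineShaderOnGeometry(_ geometry: SCNBox) {
    let shader = """
    #pragma transparent
    #pragma body
    float u = _surface.diffuseTexcoord.x;
    float v = _surface.diffuseTexcoord.y;
    float edge = 0.02; // border thickness in texture-coordinate units
    // keep only a thin border around each face; discard the interior
    if (u > edge && u < 1.0 - edge && v > edge && v < 1.0 - edge) {
        discard_fragment();
    }
    """
    geometry.firstMaterial?.shaderModifiers = [.surface: shader]
}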

Why is my custom MKTileOverlayRenderer drawing the same tile in multiple places?

So I'm writing a MapKit-based app which draws an overlay over the map. However, a lot of the overlay drawing is dynamic, such that the tile which gets drawn is frequently changing, so I've implemented a custom MKTileOverlay and a custom MKTileOverlayRenderer: the first to handle the URL scheme for where the tile images are stored, and the second to handle the custom drawMapRect implementation.
The issue I'm running into is that I seem to be drawing the same tile image in multiple locations. Here's a screenshot to help you visualize what I mean (I know the tiles are upside-down and backwards, and I can fix that):
[iOS Simulator screenshot]
I've changed certain tile images so that they're a different color and have their tile path included. What you'll notice is that many of the tile images are repeated over different areas.
I've been trying to figure out why that might be happening, so I followed my code path. The overlay starting point is pretty standard: the ViewController makes the addOverlay() call, which calls the delegate's mapView(rendererForOverlay:), which returns my custom MKTileOverlayRenderer class, which then calls my drawMapRect(mapRect:zoomScale:context:). That method takes the given map rect, calculates which tile the map rect belongs to, calls the custom MKTileOverlay class's loadTileAtPath(), and then draws the resulting tile image data. And that's exactly what my code appears to be doing, so I'm not really sure where I'm going wrong. That said, it works perfectly fine if I don't implement custom drawing and use a default MKTileOverlayRenderer. Unfortunately, custom drawing is also the crux of the app, so that's not a viable solution.
For reference, here's the relevant code from my custom classes:
My custom MKTileOverlay class:
class ExploredTileOverlay: MKTileOverlay {
    var base_path: String
    //var tile_path: String?
    let cache: NSCache = NSCache()
    var point_buffer: ExploredSegment
    var last_tile_path: MKTileOverlayPath?
    var tile_buffer: ExploredTiles

    init(URLTemplate: String?, startingLocation location: CLLocation, city: City) {
        let paths = NSSearchPathForDirectoriesInDomains(NSSearchPathDirectory.DocumentDirectory, NSSearchPathDomainMask.UserDomainMask, true)
        let documentsDirectory: AnyObject = paths[0]
        self.base_path = documentsDirectory.stringByAppendingPathComponent("/" + city.name + "_tiles")
        if (!NSFileManager.defaultManager().fileExistsAtPath(base_path)) {
            try! NSFileManager.defaultManager().createDirectoryAtPath(base_path, withIntermediateDirectories: false, attributes: nil)
        }

        let new_point = MKMapPointForCoordinate(location.coordinate)
        self.point_buffer = ExploredSegment(fromPoint: new_point, inCity: city)
        self.tile_buffer = ExploredTiles(startingPoint: ExploredPoint(mapPoint: new_point, r: 50))
        self.last_tile_path = Array(tile_buffer.edited_tiles.values).last!.path
        super.init(URLTemplate: URLTemplate)
    }

    override func URLForTilePath(path: MKTileOverlayPath) -> NSURL {
        let filled_template = String(format: "%d_%d_%d.png", path.z, path.x, path.y)
        let tile_path = base_path + "/" + filled_template
        //print("fetching tile " + filled_template)
        if !NSFileManager.defaultManager().fileExistsAtPath(tile_path) {
            return NSURL(fileURLWithPath: "")
        }
        return NSURL(fileURLWithPath: tile_path)
    }

    override func loadTileAtPath(path: MKTileOverlayPath, result: (NSData?, NSError?) -> Void) {
        let url = URLForTilePath(path)
        let filled_template = String(format: "%d_%d_%d.png", path.z, path.x, path.y)
        let tile_path = base_path + "/" + filled_template

        if (url != NSURL(fileURLWithPath: tile_path)) {
            print("creating tile at " + String(path))
            let img_data: NSData = UIImagePNGRepresentation(UIImage(named: "small")!)!
            img_data.writeToFile(tile_path, atomically: true)
            cache.setObject(img_data, forKey: url)
            result(img_data, nil)
            return
        } else if let cachedData = cache.objectForKey(url) as? NSData {
            print("using cache for " + String(path))
            result(cachedData, nil)
            return
        } else {
            print("loading " + String(path) + " from directory")
            let img_data: NSData = UIImagePNGRepresentation(UIImage(contentsOfFile: tile_path)!)!
            cache.setObject(img_data, forKey: url)
            result(img_data, nil)
            return
        }
    }
}
My custom MKTileOverlayRenderer class:
class ExploredTileRenderer: MKTileOverlayRenderer {
    let tile_overlay: ExploredTileOverlay
    var zoom_scale: MKZoomScale?
    let cache: NSCache = NSCache()

    override init(overlay: MKOverlay) {
        self.tile_overlay = overlay as! ExploredTileOverlay
        super.init(overlay: overlay)

        NSNotificationCenter.defaultCenter().addObserver(self, selector: #selector(saveEditedTiles), name: "com.Coder.Wander.reachedMaxPoints", object: nil)
    }

    // There's some weird caching thing that requires me to recall it
    // whenever I re-draw over the tile; I don't really get it, but it works
    override func canDrawMapRect(mapRect: MKMapRect, zoomScale: MKZoomScale) -> Bool {
        self.setNeedsDisplayInMapRect(mapRect, zoomScale: zoomScale)
        return true
    }

    override func drawMapRect(mapRect: MKMapRect, zoomScale: MKZoomScale, inContext context: CGContext) {
        zoom_scale = zoomScale

        let tile_path = self.tilePathForMapRect(mapRect, andZoomScale: zoomScale)
        let tile_path_string = stringForTilePath(tile_path)
        //print("redrawing tile: " + tile_path_string)

        self.tile_overlay.loadTileAtPath(tile_path, result: { data, error in
            if error == nil && data != nil {
                if let image = UIImage(data: data!) {
                    let draw_rect = self.rectForMapRect(mapRect)
                    CGContextDrawImage(context, draw_rect, image.CGImage)

                    var path: [(CGMutablePath, CGFloat)]? = nil
                    self.tile_overlay.point_buffer.readPointsWithBlockAndWait({ points in
                        let total = self.getPathForPoints(points, zoomScale: zoomScale, offset: MKMapPointMake(0.0, 0.0))
                        path = total.0
                        //print("number of points: " + String(path!.count))
                    })

                    if ((path != nil) && (path!.count > 0)) {
                        //print("drawing path")
                        for segment in path! {
                            CGContextAddPath(context, segment.0)
                            CGContextSetBlendMode(context, .Clear)
                            CGContextSetLineJoin(context, CGLineJoin.Round)
                            CGContextSetLineCap(context, CGLineCap.Round)
                            CGContextSetLineWidth(context, segment.1)
                            CGContextStrokePath(context)
                        }
                    }
                }
            }
        })
    }
And my helper functions that handle converting between zoomScale, zoomLevel, tile path, and tile coordinates:
func tilePathForMapRect(mapRect: MKMapRect, andZoomScale zoom: MKZoomScale) -> MKTileOverlayPath {
    let zoom_level = self.zoomLevelForZoomScale(zoom)
    let mercatorPoint = self.mercatorTileOriginForMapRect(mapRect)
    //print("mercPt: " + String(mercatorPoint))
    let tilex = Int(floor(Double(mercatorPoint.x) * self.worldTileWidthForZoomLevel(zoom_level)))
    let tiley = Int(floor(Double(mercatorPoint.y) * self.worldTileWidthForZoomLevel(zoom_level)))
    return MKTileOverlayPath(x: tilex, y: tiley, z: zoom_level, contentScaleFactor: UIScreen.mainScreen().scale)
}

func stringForTilePath(path: MKTileOverlayPath) -> String {
    return String(format: "%d_%d_%d", path.z, path.x, path.y)
}

func zoomLevelForZoomScale(zoomScale: MKZoomScale) -> Int {
    let real_scale = zoomScale / UIScreen.mainScreen().scale
    var z = Int((log2(Double(real_scale)) + 20.0))
    z += (Int(UIScreen.mainScreen().scale) - 1)
    return z
}

func worldTileWidthForZoomLevel(zoomLevel: Int) -> Double {
    return pow(2, Double(zoomLevel))
}

func mercatorTileOriginForMapRect(mapRect: MKMapRect) -> CGPoint {
    let map_region: MKCoordinateRegion = MKCoordinateRegionForMapRect(mapRect)
    var x: Double = map_region.center.longitude * (M_PI / 180.0)
    var y: Double = map_region.center.latitude * (M_PI / 180.0)
    y = log10(tan(y) + 1.0 / cos(y))
    x = (1.0 + (x / M_PI)) / 2.0
    y = (1.0 - (y / M_PI)) / 2.0
    return CGPointMake(CGFloat(x), CGFloat(y))
}
This is a pretty obscure error, I think, so I haven't had a whole lot of luck finding other people facing similar issues. Anything would help!

How do you add MKPolylines to an MKMapSnapshotter in Swift 3?

Is there a way to take a screenshot of a mapView and include the polyline? I believe I need to draw CGPoints on the image that the MKMapSnapshotter returns, but I am unsure how to do so.
Current code:
func takeSnapshot(mapView: MKMapView, withCallback: (UIImage?, NSError?) -> ()) {
    let options = MKMapSnapshotOptions()
    options.region = mapView.region
    options.size = mapView.frame.size
    options.scale = UIScreen.main().scale

    let snapshotter = MKMapSnapshotter(options: options)
    snapshotter.start() { snapshot, error in
        guard snapshot != nil else {
            withCallback(nil, error)
            return
        }

        if let image = snapshot?.image {
            withCallback(image, nil)
            for coordinate in self.area {
                image.draw(at: snapshot!.point(for: coordinate))
            }
        }
    }
}
I had the same problem today. After several hours of research, here is how I solved it.
The following code is in Swift 3.
1. Initialize your polyline coordinates array:
// initialize this array with your polyline coordinates
var yourCoordinates = [CLLocationCoordinate2D]()
yourCoordinates.append( coordinate 1 )
yourCoordinates.append( coordinate 2 )
...
// you can use any data structure you like
2. Take the snapshot as usual, but set the region based on your coordinates:
func takeSnapShot() {
    let mapSnapshotOptions = MKMapSnapshotOptions()

    // Set the region of the map that is rendered. (by polyline)
    let polyLine = MKPolyline(coordinates: &yourCoordinates, count: yourCoordinates.count)
    let region = MKCoordinateRegionForMapRect(polyLine.boundingMapRect)
    mapSnapshotOptions.region = region

    // Set the scale of the image. We'll just use the scale of the current device, which is 2x scale on Retina screens.
    mapSnapshotOptions.scale = UIScreen.main.scale

    // Set the size of the image output.
    mapSnapshotOptions.size = CGSize(width: IMAGE_VIEW_WIDTH, height: IMAGE_VIEW_HEIGHT)

    // Show buildings and Points of Interest on the snapshot
    mapSnapshotOptions.showsBuildings = true
    mapSnapshotOptions.showsPointsOfInterest = true

    let snapShotter = MKMapSnapshotter(options: mapSnapshotOptions)

    snapShotter.start() { snapshot, error in
        guard let snapshot = snapshot else {
            return
        }

        // Don't just pass snapshot.image, pass snapshot itself!
        self.imageView.image = self.drawLineOnImage(snapshot: snapshot)
    }
}
3. Use snapshot.point(for:) to draw the polyline on the snapshot image:
func drawLineOnImage(snapshot: MKMapSnapshot) -> UIImage {
    let image = snapshot.image

    // for Retina screens
    UIGraphicsBeginImageContextWithOptions(self.imageView.frame.size, true, 0)

    // draw the original image into the context
    image.draw(at: CGPoint.zero)

    // get the context for CoreGraphics
    let context = UIGraphicsGetCurrentContext()

    // set stroking width and color of the context
    context!.setLineWidth(2.0)
    context!.setStrokeColor(UIColor.orange.cgColor)

    // Here is the trick:
    // We use move() and addLine() to draw the line; this should be easy to understand.
    // The difficult part is that they both take CGPoint as parameters, and it would be
    // way too complex for us to calculate those ourselves.
    // Thus we use snapshot.point() to save the pain.
    context!.move(to: snapshot.point(for: yourCoordinates[0]))
    for i in 1..<yourCoordinates.count {
        context!.addLine(to: snapshot.point(for: yourCoordinates[i]))
    }

    // apply the stroke to the context
    context!.strokePath()

    // get the image from the graphics context
    let resultImage = UIGraphicsGetImageFromCurrentImageContext()

    // end the graphics context
    UIGraphicsEndImageContext()

    return resultImage!
}
That's it, hope this helps someone.
References
How do I draw on an image in Swift?
MKTileOverlay, MKMapSnapshotter & MKDirections
Creating an MKMapSnapshotter with an MKPolylineRenderer
Render a Map as an Image using MapKit
What is wrong with:
snapshotter.start( completionHandler: { snapshot, error in
    guard snapshot != nil else {
        withCallback(nil, error)
        return
    }

    if let image = snapshot?.image {
        withCallback(image, nil)
        for coordinate in self.area {
            image.draw(at: snapshot!.point(for: coordinate))
        }
    }
})
If you just want a copy of the image the user sees in the MKMapView, remember that it's a UIView subclass, and so you could do this...
public extension UIView {
    public var snapshot: UIImage? {
        get {
            UIGraphicsBeginImageContextWithOptions(self.bounds.size, false, UIScreen.main.scale)
            self.drawHierarchy(in: self.bounds, afterScreenUpdates: true)
            let image = UIGraphicsGetImageFromCurrentImageContext()
            UIGraphicsEndImageContext()
            return image
        }
    }
}

// ...

if let img = self.mapView.snapshot {
    // Do something
}

Check if user location is inside a shape

I made this method to check whether a user location is inside a polygon on a map view (MapKit).
I pass the current user location (CLLocationCoordinate2D) to the method, and it returns a boolean indicating whether the user is in a polygon or not.
func userInsidePolygon(userlocation: CLLocationCoordinate2D) -> Bool {
    // get every overlay on the map
    let o = self.mapView.overlays

    // loop over every overlay on the map
    for overlay in o {
        // handle only polygons
        if overlay is MKPolygon {
            let polygon: MKPolygon = overlay as! MKPolygon
            let polygonPath: CGMutablePathRef = CGPathCreateMutable()

            // get points of polygon
            let arrPoints = polygon.points()

            // create cgpath
            for (var i: Int = 0; i < polygon.pointCount; i++) {
                let mp: MKMapPoint = arrPoints[i]
                if (i == 0) {
                    CGPathMoveToPoint(polygonPath, nil, CGFloat(mp.x), CGFloat(mp.y))
                }
                else {
                    CGPathAddLineToPoint(polygonPath, nil, CGFloat(mp.x), CGFloat(mp.y))
                }
            }

            let mapPointAsCGP: CGPoint = self.mapView.convertCoordinate(userlocation, toPointToView: self.mapView)
            return CGPathContainsPoint(polygonPath, nil, mapPointAsCGP, false)
        }
    }
    return false
}
I don't really understand why, but the user is never inside a polygon after this test (and I'm pretty sure he is).
I think it's possible that I have a logic problem with lat/long versus x/y.
Has anybody already worked with something like this?
Thanks in advance for all suggestions.
Cheers
The problem is that you are converting the userLocation from the map coordinates to the view coordinates, but when you build the path, you don't convert the points to the view's coordinates.
You'll need to convert each MKMapPoint to a CLLocationCoordinate2D, then to a CGPoint:
let polygonMapPoint: MKMapPoint = arrPoints[i]
let polygonCoordinate = MKCoordinateForMapPoint(polygonMapPoint)
let polygonPoint = self.mapView.convertCoordinate(polygonCoordinate, toPointToView: self.mapView)
Then use polygonPoint when building the path:
CGPathMoveToPoint(polygonPath, nil, polygonPoint.x, polygonPoint.y)
Swift 3, Xcode 8 answer:
func userInsidePolygon(userlocation: CLLocationCoordinate2D) -> Bool {
    var containsPoint: Bool = false

    // get every overlay on the map
    let o = self.mapView.overlays

    // loop over every overlay on the map
    for overlay in o {
        // handle only polygons
        if overlay is MKPolygon {
            let polygon: MKPolygon = overlay as! MKPolygon
            let polygonPath: CGMutablePath = CGMutablePath()

            // get points of polygon
            let arrPoints = polygon.points()

            // create cgpath
            for i in 0..<polygon.pointCount {
                let polygonMapPoint: MKMapPoint = arrPoints[i]
                let polygonCoordinate = MKCoordinateForMapPoint(polygonMapPoint)
                let polygonPoint = self.mapView.convert(polygonCoordinate, toPointTo: self.mapView)

                if (i == 0) {
                    polygonPath.move(to: CGPoint(x: polygonPoint.x, y: polygonPoint.y))
                }
                else {
                    polygonPath.addLine(to: CGPoint(x: polygonPoint.x, y: polygonPoint.y))
                }
            }

            let mapPointAsCGP: CGPoint = self.mapView.convert(userlocation, toPointTo: self.mapView)
            containsPoint = polygonPath.contains(mapPointAsCGP)
            if containsPoint {
                return true
            }
        }
    }
    return containsPoint
}
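One caveat with the view-based conversion above: it projects everything through the map view, so it can behave oddly for polygons far outside the visible region. As an alternative (my own sketch, not from the original answer), you can test containment directly in MKMapPoint space, keeping the map view out of the test entirely:
// Sketch (Swift 3 naming, as above): containment test in MKMapPoint space,
// independent of what is currently visible in the map view.
func userInsidePolygonMapPoints(userLocation: CLLocationCoordinate2D) -> Bool {
    let userMapPoint = MKMapPointForCoordinate(userLocation)

    for overlay in mapView.overlays {
        guard let polygon = overlay as? MKPolygon else { continue }

        // rebuild the polygon outline as a CGPath in map-point coordinates
        let path = CGMutablePath()
        let points = polygon.points()
        for i in 0..<polygon.pointCount {
            let p = CGPoint(x: points[i].x, y: points[i].y)
            if i == 0 {
                path.move(to: p)
            } else {
                path.addLine(to: p)
            }
        }

        if path.contains(CGPoint(x: userMapPoint.x, y: userMapPoint.y)) {
            return true
        }
    }
    return false
}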
