Google Maps cluster count not updating on zoom in/out (iOS)

I am trying to get Google Maps marker clustering working in iOS. I fetch a large amount of data from the database in batches of 100 per call and render each batch on the map by passing the lat/long array to the cluster manager. When I zoom in, a cluster expands into smaller groups, but the cluster counts do not update automatically.
Screenshots (zoomed out and zoomed in) omitted.
**Code:**
//MARK: - Setup Map Cluster
func setClusterMap() {
    // Set up the cluster manager with the default icon generator and renderer.
    clusterArr.removeAll()
    for i in 0..<self.isLicensedArr.count {
        if self.isLicensedArr[i] == 0 {
            clusterArr.append(self.isLicensedArr[i])
        }
    }
    let clusterCount = NSNumber(value: clusterArr.count)
    let iconGenerator = GMUDefaultClusterIconGenerator(buckets: [clusterCount], backgroundColors: [UIColor(red: 33/255, green: 174/255, blue: 108/255, alpha: 1.0)])
    let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
    let renderer = GMUDefaultClusterRenderer(mapView: mapView, clusterIconGenerator: iconGenerator)
    renderer.animatesClusters = true
    clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm, renderer: renderer)
    // Register self to listen to GMSMapViewDelegate events.
    clusterManager.setMapDelegate(self)
    clusterManager.setDelegate(self, mapDelegate: self)
    // Generate and add items to the cluster manager.
    self.generateClusterItems()
    // Call cluster() after items have been added to perform the clustering and render it on the map.
    self.clusterManager.cluster()
}
I followed this guide for marker clustering: https://developers.google.com/maps/documentation/ios-sdk/utility/marker-clustering
Can someone please explain how to update the cluster counts when zooming in and out? I've tried to implement it with the code above, but with no results yet.
Any help would be greatly appreciated.
Thanks in advance.
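For reference, a minimal sketch of the setup from the linked guide, assuming the Google-Maps-iOS-Utils API. It keeps the icon generator's default buckets, so each cluster icon renders its own item count, which is what lets the numbers change as clusters split and merge while zooming (generateClusterItems() is the same helper as in the code above):
// Sketch only: default buckets let each cluster icon show its own item count.
let iconGenerator = GMUDefaultClusterIconGenerator()
let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
let renderer = GMUDefaultClusterRenderer(mapView: mapView, clusterIconGenerator: iconGenerator)
renderer.animatesClusters = true
clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm, renderer: renderer)
// Setting the delegate lets the manager observe camera changes and re-cluster when the camera goes idle.
clusterManager.setDelegate(self, mapDelegate: self)
generateClusterItems()
clusterManager.cluster()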

Related

Assertion failure in GMUNonHierarchicalDistanceBasedAlgorithm clustersAtZoom

My app keeps crashing with this error: [Assertion failure in GMUNonHierarchicalDistanceBasedAlgorithm clustersAtZoom]. After a while of searching I found that itemToClusterDistanceMap and itemToClusterMap always contain one item less than _items.count, but I don't know the reason for this behaviour.
NSAssert(itemToClusterDistanceMap.count == _items.count,
         @"All items should be mapped to a distance");
NSAssert(itemToClusterMap.count == _items.count,
         @"All items should be mapped to a cluster");
func initMapMarkersWithClustering() {
    let iconGenerator = GMUDefaultClusterIconGenerator()
    let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
    let renderer = CustomClusterRenderer(mapView: mapView, clusterIconGenerator: iconGenerator)
    clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm, renderer: renderer)
    generateClusterItems()
    clusterManager.cluster()
    clusterManager.setDelegate(self, mapDelegate: self)
}
For anyone who may face this issue in the future: I found the cause. The position.latitude and position.longitude must be in the range [-85, 85] for latitude and [-180, 180] for longitude, and must not both be 0, before the items are added.
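A minimal sketch of that guard, assuming item objects conforming to GMUClusterItem (the items array and its POIItem element type are illustrative):
// Skip coordinates the algorithm cannot map to a cluster at every zoom level.
func isClusterable(_ coordinate: CLLocationCoordinate2D) -> Bool {
    guard CLLocationCoordinate2DIsValid(coordinate) else { return false }
    guard (-85.0...85.0).contains(coordinate.latitude) else { return false }
    return !(coordinate.latitude == 0 && coordinate.longitude == 0)
}

let validItems = items.filter { isClusterable($0.position) }   // items: [POIItem], each a GMUClusterItem
clusterManager.add(validItems)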

MapBox - detect zoomLevel changes

How can I simply detect zoom level changes? Is it possible?
I simply need to hide my annotation views when the zoom level is not high enough.
regionDidChange:animated: is not what I want to use. Any other way?
I need to hide my labels when zoomed out and show them when zoomed in (screenshots omitted).
This is what I currently do with my labels:
class CardAnnotation: MGLPointAnnotation {
    var card: Card

    init(card: Card) {
        self.card = card
        super.init()
        let coordinates = card.border.map { $0.coordinate }
        let sumLatitudes = coordinates.map { $0.latitude }.reduce(0, +)
        let sumLongitudes = coordinates.map { $0.longitude }.reduce(0, +)
        let averageLatitude = sumLatitudes / Double(coordinates.count)
        let averageLongitude = sumLongitudes / Double(coordinates.count)
        coordinate = CLLocationCoordinate2D(latitude: averageLatitude, longitude: averageLongitude)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}

var annotations = [CardAnnotation]()
mapView.addAnnotations(annotations)
Of the two main ways to add overlays to an MGLMapView, the runtime styling API is better suited for text labels and also for varying the appearance based on the zoom level. While you’re at it, you might as well create the polygons using the same API too.
Start by creating polygon features for the areas you want shaded in:
var cards: [MGLPolygonFeature] = []
var coordinates: [CLLocationCoordinate2D] = […]
let card = MGLPolygonFeature(coordinates: &coordinates, count: UInt(coordinates.count))
card.attributes = ["address": 123]
// …
cards.append(card)
Within any method that runs after the map finishes loading, such as MGLMapViewDelegate.mapView(_:didFinishLoading:), add a shape source containing these features to the current style:
let cardSource = MGLShapeSource(identifier: "cards", features: cards, options: [:])
mapView.style?.addSource(cardSource)
With the shape source in place, create a style layer that renders the polygon features as mauve fills:
let fillLayer = MGLFillStyleLayer(identifier: "card-fills", source: cardSource)
fillLayer.fillColor = NSExpression(forConstantValue: #colorLiteral(red: 0.9098039216, green: 0.8235294118, blue: 0.9647058824, alpha: 1))
mapView.style?.addLayer(fillLayer)
Then create another style layer that renders labels at each polygon feature’s centroid. (MGLSymbolStyleLayer automatically calculates the centroids, accounting for irregularly shaped polygons.)
// Same source as the fillLayer.
let labelLayer = MGLSymbolStyleLayer(identifier: "card-labels", source: cardSource)
// Each feature’s address is an integer, but text has to be a string.
labelLayer.text = NSExpression(format: "CAST(address, 'NSString')")
// Smoothly interpolate from transparent at z16 to opaque at z17.
labelLayer.textOpacity = NSExpression(format: "mgl_interpolate:withCurveType:parameters:stops:($zoomLevel, 'linear', nil, %@)",
                                      [16: 0, 17: 1])
mapView.style?.addLayer(labelLayer)
As you customize these style layers, pay particular attention to the options on MGLSymbolStyleLayer that control whether nearby symbols are automatically hidden due to collision. You may find that the automatic collision detection makes it unnecessary to specify the textOpacity property.
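A minimal sketch, assuming the textAllowsOverlap/iconAllowsOverlap property names from that SDK; by default both are false, which is what hides colliding labels automatically, and setting them to true forces every label to draw:
// With the defaults (false), the SDK hides whichever label collides;
// setting these to true draws every label regardless of overlap.
labelLayer.textAllowsOverlap = NSExpression(forConstantValue: true)
labelLayer.iconAllowsOverlap = NSExpression(forConstantValue: true)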
When you create the source, one of the options you can pass into the MGLShapeSource initializer is MGLShapeSourceOption.clustered. However, in order to use that option, you’d have to create MGLPointFeatures, not MGLPolygonFeatures. Fortunately, MGLPolygonFeature has a coordinate property that lets you find the centroid without manual calculations:
var cardCentroids: [MGLPointFeature] = []
var coordinates: [CLLocationCoordinate2D] = […]
let card = MGLPolygonFeature(coordinates: &coordinates, count: UInt(coordinates.count))
let cardCentroid = MGLPointFeature()
cardCentroid.coordinate = card.coordinate
cardCentroid.attributes = ["address": 123]
cardCentroids.append(cardCentroid)
// …
let cardCentroidSource = MGLShapeSource(identifier: "card-centroids", features: cardCentroids, options: [.clustered: true])
mapView.style?.addSource(cardCentroidSource)
This clustered source can only be used with MGLSymbolStyleLayer or MGLCircleStyleLayer, not MGLFillStyleLayer. This example shows how to work with clustered points in more detail.
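For instance, a minimal sketch of a circle layer over that clustered source (the identifier and styling values are illustrative):
let circleLayer = MGLCircleStyleLayer(identifier: "card-cluster-circles", source: cardCentroidSource)
circleLayer.circleRadius = NSExpression(forConstantValue: 12)
circleLayer.circleColor = NSExpression(forConstantValue: UIColor.purple)
mapView.style?.addLayer(circleLayer)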
One option is to add the labels as a MGLSymbolStyleLayer, then determine the textOpacity based on zoom level.
If you are using the current version of the Maps SDK for iOS, you could try something like:
symbols.textOpacity = NSExpression(format: "mgl_interpolate:withCurveType:parameters:stops:($zoomLevel, 'linear', nil, %@)", [16.9: 0, 17: 1])
The dynamically styled interactive points example shows one approach to this.
Is the problem that when you zoom out, your annotations are too close together? If so, it is better to group them together than to hide them entirely. See Decluttering a Map with MapKit Annotation Clustering.
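A minimal sketch of that MapKit approach (iOS 11+): annotations whose views share a clusteringIdentifier are grouped automatically. The view class name is illustrative:
class CardAnnotationView: MKMarkerAnnotationView {
    override var annotation: MKAnnotation? {
        didSet { clusteringIdentifier = "card" }   // views sharing this identifier cluster together
    }
}

// MKMapViewDelegate
func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
    guard !(annotation is MKUserLocation) else { return nil }
    return CardAnnotationView(annotation: annotation, reuseIdentifier: "card")
}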

ARKit hide objects behind walls

How can I use the horizontal and vertical planes tracked by ARKit to hide objects behind walls or behind real objects? Currently, added 3D objects can be seen through walls when you leave a room, and in front of real objects that they should be behind. Is it possible to use the data ARKit gives me to provide a more natural AR experience, without the objects appearing through walls?
You have two issues here.
How to create occlusion geometry for ARKit/SceneKit?
If you set a SceneKit material's colorBufferWriteMask to an empty value ([] in Swift), any objects using that material won't appear in the view, but they'll still write to the z-buffer during rendering, which affects the rendering of other objects. In effect, you'll get a "hole" shaped like your object, through which the background shows (the camera feed, in the case of ARSCNView), but which can still obscure other SceneKit objects.
You'll also need to make sure the occluder renders before any other nodes it's supposed to obscure. You can do this using the node hierarchy (I can't remember offhand whether parent nodes render before their children or the other way around, but it's easy enough to test). Nodes that are peers in the hierarchy don't have a deterministic order, but you can force an order regardless of hierarchy with the renderingOrder property. That property defaults to zero, so setting it to -1 will render before everything. (Or, for finer control, set the renderingOrder of several nodes to a sequence of values.)
How to detect walls/etc so you know where to put occlusion geometry?
In iOS 11.3 and later (aka "ARKit 1.5"), you can turn on vertical plane detection. (Note that when you get vertical plane anchors back from that, they're automatically rotated. So if you attach models to the anchor, their local "up" direction is normal to the plane.) Also new in iOS 11.3, you can get a more detailed shape estimate for each detected plane (see ARSCNPlaneGeometry), regardless of its orientation.
However, even if you have the horizontal and the vertical, the outer limits of a plane are just estimates that change over time. That is, ARKit can quickly detect where part of a wall is, but it doesn't know where the edges of the wall are without the user spending some time waving the device around to map out the space. And even then, the mapped edges might not line up precisely with those of the real wall.
So... if you use detected vertical planes to occlude virtual geometry, you might find places where virtual objects that are supposed to be hidden still show through, either because they aren't quite hidden right at the edge of the wall, or because they're visible through places where ARKit hasn't mapped the entire real wall. (The latter issue you might be able to solve by assuming a larger extent than ARKit does.)
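A minimal sketch of that approach, assuming iOS 11.3+ and an ARSCNView named sceneView: enable vertical plane detection, then give each detected plane an invisible, depth-writing material so it occludes virtual content behind it.
// Enable vertical (and horizontal) plane detection.
let configuration = ARWorldTrackingConfiguration()
configuration.planeDetection = [.horizontal, .vertical]
sceneView.session.run(configuration)

// ARSCNViewDelegate
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
    // Size a plane to the current extent estimate; the estimate keeps growing as ARKit maps more of the wall.
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x),
                         height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.colorBufferWriteMask = []   // invisible, but still writes to the z-buffer
    let occluderNode = SCNNode(geometry: plane)
    occluderNode.eulerAngles.x = -.pi / 2            // lay the SCNPlane into the anchor's XZ plane
    occluderNode.renderingOrder = -1                 // render before the content it should hide
    node.addChildNode(occluderNode)
}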
For creating an occlusion material (also known as blackhole material or blocking material) you have to use the following instance properties: .colorBufferWriteMask, .readsFromDepthBuffer, .writesToDepthBuffer and .renderingOrder.
You can use them this way:
plane.geometry?.firstMaterial?.isDoubleSided = true
plane.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
plane.geometry?.firstMaterial?.writesToDepthBuffer = true
plane.geometry?.firstMaterial?.readsFromDepthBuffer = true
plane.renderingOrder = -100
...or this way:
func occlusion() -> SCNMaterial {
    let occlusionMaterial = SCNMaterial()
    occlusionMaterial.isDoubleSided = true
    occlusionMaterial.colorBufferWriteMask = []
    occlusionMaterial.readsFromDepthBuffer = true
    occlusionMaterial.writesToDepthBuffer = true
    return occlusionMaterial
}

plane.geometry?.firstMaterial = occlusion()
plane.renderingOrder = -100
Creating an occlusion material is really simple:
let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
// Define an occlusion material
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []
boxGeometry.materials = [occlusionMaterial]
self.box = SCNNode(geometry: boxGeometry)
// Set rendering order to present this box in front of the other models
self.box.renderingOrder = -1
Great solution:
GitHub: arkit-occlusion
Worked for me.
But in my case I wanted to set up the walls in code. So if you don't want the user to place the walls, use plane detection to detect them and set up the walls programmatically.
Alternatively, within a range of about 4 meters the iPhone's depth sensor works, and you can detect obstacles with an ARHitTest.
ARKit 3.5+ and the LiDAR scanner
You can hide any object behind a virtual invisible wall that replicates the real wall's geometry. iPhones and iPad Pros equipped with a LiDAR scanner can reconstruct a 3D topological map of the surrounding environment. The LiDAR scanner also greatly improves the quality of the Z channel, which allows you to occlude or remove people from the AR scene.
LiDAR also improves features such as object occlusion, motion tracking and raycasting. With a LiDAR scanner you can reconstruct a scene even in an unlit environment, or in a room with white, featureless walls. 3D reconstruction of the surrounding environment has been possible since ARKit 3.5 thanks to the sceneReconstruction instance property. With a reconstructed mesh of your walls, it's now very easy to hide any object behind the real walls.
To activate the sceneReconstruction option, use the following code:
@IBOutlet var arView: ARView!

arView.automaticallyConfigureSession = false

guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
else { return }

let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh

arView.debugOptions.insert(.showSceneUnderstanding)
arView.environment.sceneUnderstanding.options.insert(.occlusion)
arView.session.run(config)
Also if you're using SceneKit try the following approach:
@IBOutlet var sceneView: ARSCNView!

func renderer(_ renderer: SCNSceneRenderer,
              nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return nil }

    let geometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    // `colorizer` is a helper (defined elsewhere) that returns a stable color per anchor identifier.
    geometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)

    let node = SCNNode()
    node.name = "Node_\(meshAnchor.identifier)"
    node.geometry = geometry
    return node
}

func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode,
              for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return }

    let newGeometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    newGeometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)
    node.geometry = newGeometry
}
And here are SCNGeometry and SCNGeometrySource extensions:
extension SCNGeometry {
    convenience init(arGeometry: ARMeshGeometry) {
        let verticesSource = SCNGeometrySource(arGeometry.vertices, semantic: .vertex)
        let normalsSource = SCNGeometrySource(arGeometry.normals, semantic: .normal)
        let faces = SCNGeometryElement(arGeometry.faces)
        self.init(sources: [verticesSource, normalsSource], elements: [faces])
    }
}

extension SCNGeometrySource {
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer,
                  vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}
...and SCNGeometryElement and SCNGeometryPrimitiveType extensions:
extension SCNGeometryElement {
    convenience init(_ source: ARGeometryElement) {
        let pointer = source.buffer.contents()
        let byteCount = source.count *
                        source.indexCountPerPrimitive *
                        source.bytesPerIndex
        let data = Data(bytesNoCopy: pointer,
                        count: byteCount,
                        deallocator: .none)
        self.init(data: data,
                  primitiveType: .of(source.primitiveType),
                  primitiveCount: source.count,
                  bytesPerIndex: source.bytesPerIndex)
    }
}

extension SCNGeometryPrimitiveType {
    // Argument label dropped so the call site above (.of(source.primitiveType)) compiles.
    static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
        switch type {
        case .line: return .line
        case .triangle: return .triangles
        @unknown default: return .triangles
        }
    }
}

MapView with Polyline or Annotation losing tile on zoom

I have an MKMapView in an iOS app using cached local tiles, and it works great: I can zoom, move around, etc.
However, when I add either an annotation or a polyline, it still works great until the zoom reaches a certain level; then the tiles under the annotations and polylines don't show up, although all the others render fine.
Zoomed out at the right level (screenshot omitted).
Zoomed in one too many levels (screenshot omitted).
If I remove the annotations/lines, the map zooms in correctly and works great for the area the annotations/lines would have been in.
Any ideas?
I reduced this to the smallest test case. This runs fine until you zoom in, then any tiles under the polyline disappear. Zoom out and they re-appear.
import Foundation
import UIKit
import MapKit

class MyController: UIViewController, MKMapViewDelegate {
    @IBOutlet weak var mapView: MKMapView!

    var overlay: MKTileOverlay = MKTileOverlay(URLTemplate: "https://services.arcgisonline.com/ArcGIS/rest/services/USA_Topo_Maps/MapServer/tile/{z}/{y}/{x}.jpg")

    override func viewDidLoad() {
        super.viewDidLoad()
        mapView.delegate = self
        mapView.showsUserLocation = true

        overlay.maximumZ = 15
        overlay.minimumZ = 12
        overlay.canReplaceMapContent = true
        mapView.addOverlay(overlay)

        var points: [CLLocationCoordinate2D] = [CLLocationCoordinate2D]()
        points.append(CLLocationCoordinate2D(latitude: 40.7608, longitude: -111.8910))
        points.append(CLLocationCoordinate2D(latitude: 40.8894, longitude: -111.8808))
        let polyline = MKPolyline(coordinates: &points, count: points.count)
        mapView.addOverlay(polyline)

        let region = MKCoordinateRegion(center: points[0], span: MKCoordinateSpan(latitudeDelta: 0.05, longitudeDelta: 0.05))
        mapView.setRegion(region, animated: false)
    }

    func mapView(mapView: MKMapView!, rendererForOverlay overlay: MKOverlay!) -> MKOverlayRenderer! {
        if overlay is MKPolyline {
            let polylineRenderer = MKPolylineRenderer(overlay: overlay)
            polylineRenderer.strokeColor = UIColor.blueColor()
            polylineRenderer.lineWidth = 5
            return polylineRenderer
        } else if overlay is MKTileOverlay {
            return MKTileOverlayRenderer(overlay: overlay)
        }
        return nil
    }
}
I see that it's been three months since you posed this question, but in case it's still of interest I'll share what I've found.
I've added the capability to download map tiles from Apple using an MKMapSnapshotter and store them in a set of tile directories for use when you're out of cellular service areas, which works pretty well. The directories cover 3 to 6 different zoom levels depending on a user selection, with the tile images arranged in sub-directories under directories named after their zoom levels, per the iOS standard.
When displaying these using an MKTileOverlayRenderer, it switches appropriately among the images for zoom levels between the min and max zoom levels I specified when setting up the MKTileOverlay object.
The map displays all of the tiles appropriately at all of the specified zoom levels, repainting the tiles as sizes change. When the display goes above the max or below the min zoom level, it continues to display the tiles from the max or min level for a while, since it has no tile images for the new sizes. This works OK when there are no polylines, but the system redraws the squares that lie under polylines to resize the lines, and when it does so it clears the tile images and looks for tiles for the new zoom level. Since it doesn't have those tiles, those squares wind up empty at that tile level, while squares not overlapped by polylines are not cleared and so retain the old images.
BTW, you may have already run into this but in case you haven't, you need to subclass MKTileOverlay so that you can set the location and size of the area mapped by the tiles.
Hope all of this helps explain things, even if it doesn't necessarily solve the problem of what you'll want to do about it.
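For what it's worth, a minimal sketch of such a subclass in current Swift, assuming the goal is simply to limit the area the overlay claims by overriding boundingMapRect (the class name and coverage rect are illustrative):
import MapKit

class RegionLimitedTileOverlay: MKTileOverlay {
    /// The map rect covered by the cached tiles (computed elsewhere from the downloaded region).
    let coverage: MKMapRect

    init(urlTemplate: String?, coverage: MKMapRect) {
        self.coverage = coverage
        super.init(urlTemplate: urlTemplate)
    }

    // MapKit only asks this overlay for tiles inside this rect.
    override var boundingMapRect: MKMapRect { coverage }
}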

Add shape to sphere surface in SceneKit

I'd like to be able to add shapes to the surface of a sphere using SceneKit. I started with a simple example where I'm just trying to color a portion of the sphere's surface another color. I'd like this to be an object that can be tapped, selected, etc... so my thought was to add shapes as SCNNodes using custom SCNShape objects for the geometry.
What I have now is a blue square that I'm drawing from a series of points and adding to the scene containing a red sphere. It basically ends up tangent to a point on the sphere, but the real goal is to draw it on the surface. Is there anything in SceneKit that will allow me to do this? Do I need to do some math/geometry to make it the same shape as the sphere or map to a sphere's coordinates? Is what I'm trying to do outside the scope of SceneKit?
If this question is way too broad I'd be glad if anyone could point me towards books or resources to learn what I'm missing. I'm totally new to SceneKit and 3D in general, just having fun playing around with some ideas.
Here's some playground code for what I have now:
import UIKit
import SceneKit
import XCPlayground

class SceneViewController: UIViewController {

    let sceneView = SCNView()

    private lazy var sphere: SCNSphere = {
        let sphere = SCNSphere(radius: 100.0)
        sphere.materials = [self.surfaceMaterial]
        return sphere
    }()

    private lazy var testScene: SCNScene = {
        let scene = SCNScene()
        let sphereNode: SCNNode = SCNNode(geometry: self.sphere)
        sphereNode.addChildNode(self.blueChildNode)
        scene.rootNode.addChildNode(sphereNode)
        //scene.rootNode.addChildNode(self.blueChildNode)
        return scene
    }()

    private lazy var surfaceMaterial: SCNMaterial = {
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.redColor()
        material.specular.contents = UIColor(white: 0.6, alpha: 1.0)
        material.shininess = 0.3
        return material
    }()

    private lazy var blueChildNode: SCNNode = {
        let node: SCNNode = SCNNode(geometry: self.blueGeometry)
        node.position = SCNVector3(0, 0, 100)
        return node
    }()

    private lazy var blueGeometry: SCNShape = {
        let points: [CGPoint] = [
            CGPointMake(0, 0),
            CGPointMake(50, 0),
            CGPointMake(50, 50),
            CGPointMake(0, 50),
            CGPointMake(0, 0)]
        var pathRef: CGMutablePathRef = CGPathCreateMutable()
        CGPathAddLines(pathRef, nil, points, points.count)
        let bezierPath: UIBezierPath = UIBezierPath(CGPath: pathRef)
        let shape = SCNShape(path: bezierPath, extrusionDepth: 1)
        shape.materials = [self.blueNodeMaterial]
        return shape
    }()

    private lazy var blueNodeMaterial: SCNMaterial = {
        let material = SCNMaterial()
        material.diffuse.contents = UIColor.blueColor()
        return material
    }()

    override func viewDidLoad() {
        super.viewDidLoad()
        sceneView.frame = self.view.bounds
        sceneView.backgroundColor = UIColor.blackColor()
        self.view.addSubview(sceneView)
        sceneView.autoenablesDefaultLighting = true
        sceneView.allowsCameraControl = true
        sceneView.scene = testScene
    }
}

XCPShowView("SceneKit", view: SceneViewController().view)
If you want to map 2D content into the surface of a 3D SceneKit object, and have the 2D content be dynamic/interactive, one of the easiest solutions is to use SpriteKit for the 2D content. You can set your sphere's diffuse contents to an SKScene, and create/position/decorate SpriteKit nodes in that scene to arrange them on the face of the sphere.
If you want to have this content respond to tap events... Using hitTest in your SceneKit view gets you a SCNHitTestResult, and from that you can get texture coordinates for the hit point on the sphere. From texture coordinates you can convert to SKScene coordinates and spawn nodes, run actions, or whatever.
For further details, your best bet is probably Apple's SceneKitReel sample code project. This is the demo that introduced SceneKit for iOS at WWDC14. There's a "slide" in that demo where paint globs fly from the camera at a spinning torus and leave paint splashes where they hit it — the torus has a SpriteKit scene as its material, and the trick for leaving splashes on collisions is basically the same hit test -> texture coordinate -> SpriteKit coordinate approach outlined above.
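A minimal sketch of that hit-test-to-SpriteKit path in current Swift (gesture, sphere and markerScene are illustrative names, and depending on how the texture is applied, the y coordinate may need flipping):
import SpriteKit

// Use an SKScene as the sphere's surface, then paint into it where taps land.
let markerScene = SKScene(size: CGSize(width: 1024, height: 512))
markerScene.backgroundColor = .red
sphere.firstMaterial?.diffuse.contents = markerScene

// In a tap handler on the SCNView:
let location = gesture.location(in: sceneView)
if let hit = sceneView.hitTest(location, options: nil).first {
    let uv = hit.textureCoordinates(withMappingChannel: 0)        // 0...1 in both axes
    let point = CGPoint(x: uv.x * markerScene.size.width,
                        y: (1 - uv.y) * markerScene.size.height)  // flip if needed
    let square = SKSpriteNode(color: .blue, size: CGSize(width: 80, height: 80))
    square.position = point
    markerScene.addChild(square)
}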
David Rönnqvist's SceneKit book (available as an iBook) has an example (the EarthView example, a talking globe, chapter 5) that is worth looking at. That example constructs a 3D pushpin, which is then attached to the surface of a globe at the location of a tap.
Your problem is more complicated because you're constructing a shape that covers a segment of the sphere. Your "square" is really a spherical trapezium, a segment of the sphere bounded by four great circle arcs. I can see three possible approaches, depending on what you're ultimately looking for.
The simplest way to do it is to use an image as the material for the sphere's surface. That approach is well illustrated in the Rönnqvist EarthView example, which uses several images to show the earth's surface. Instead of drawing continents, you'd draw your square. This approach isn't suitable for interactivity, though. Look at SCNMaterial.
Another approach would be to use hit test results. That's documented on SCNSceneRenderer (which SCNView complies with) and SCNHitTest. Using the hit test results, you could pull out the face that was tapped, and then its geometry elements. This won't get you all the way home, though, because SceneKit uses triangles for SCNSphere, and you're looking for quads. You will also be limited to squares that line up with SceneKit's underlying wireframe representation.
If you want full control of where the "square" is drawn, including varying its angle relative to the equator, I think you'll have to build your own geometry from scratch. That means calculating the latitude/longitude of each corner point, then generating arcs between those points, then calculating a bunch of intermediate points along the arcs. You'll have to add a fudge factor, to raise the intermediate points slightly above the sphere's surface, and build up your own quads or triangle strips. Classes here are SCNGeometry, SCNGeometryElement, and SCNGeometrySource.
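As a starting point for that last approach, here is a small sketch of the latitude/longitude-to-surface-point conversion it relies on; the radius argument, the y-up axis convention, and the small lift above the surface are assumptions:
import Foundation
import SceneKit

/// Converts latitude/longitude (in degrees) to a point just above the sphere's surface.
func surfacePoint(latitudeDegrees: Double, longitudeDegrees: Double,
                  sphereRadius: Double, lift: Double = 0.5) -> SCNVector3 {
    let lat = latitudeDegrees * .pi / 180
    let lon = longitudeDegrees * .pi / 180
    let r = sphereRadius + lift   // raise intermediate points slightly above the surface
    return SCNVector3(Float(r * cos(lat) * sin(lon)),
                      Float(r * sin(lat)),
                      Float(r * cos(lat) * cos(lon)))
}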
