Assertion failure in GMUNonHierarchicalDistanceBasedAlgorithm clustersAtZoom - iOS

My app keeps crashing with the error [Assertion failure in GMUNonHierarchicalDistanceBasedAlgorithm clustersAtZoom]. After a while of researching, I found that itemToClusterDistanceMap and itemToClusterMap always contain one item less than _items.count, but I don't know the reason for this behaviour.
NSAssert(itemToClusterDistanceMap.count == _items.count,
         @"All items should be mapped to a distance");
NSAssert(itemToClusterMap.count == _items.count,
         @"All items should be mapped to a cluster");
func initMapMarkersWithClustering() {
    let iconGenerator = GMUDefaultClusterIconGenerator()
    let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
    let renderer = CustomClusterRenderer(mapView: mapView, clusterIconGenerator: iconGenerator)
    clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm, renderer: renderer)

    generateClusterItems()
    clusterManager.cluster()
    clusterManager.setDelegate(self, mapDelegate: self)
}

For anyone who may face this issue in the future: I found the cause. position.latitude and position.longitude should be in the range [-85, 85] for latitude and [-180, 180] for longitude, and not 0 for both, before adding the items.
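A rough sketch of such a guard (observations here is a hypothetical data source; the range check mirrors the fix above and is not an official GMU requirement):
func generateClusterItems() {
    for obs in observations {
        let lat = obs.position.latitude
        let lng = obs.position.longitude
        // Skip items whose coordinates would trip the clustersAtZoom assertion
        guard (-85.0...85.0).contains(lat),
              (-180.0...180.0).contains(lng),
              !(lat == 0 && lng == 0) else { continue }
        clusterManager.add(obs)
    }
}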

Related

Google Map Cluster not working zoom in/out iOS

I am trying to work with Google Maps clustering in iOS. I am fetching a massive chunk of data from the database in lots of 100 per call and then rendering the set on Google Maps by sending the lat/long array to the marker clusterer. When I zoom in, a cluster expands into further small groups, but the cluster counts do not update automatically.
Screenshots of the zoom-out and zoom-in states are omitted here.
Code:
//MARK: - Setup Map Cluster
func setClusterMap() {
    // Set up the cluster manager with default icon generator and renderer.
    clusterArr.removeAll()
    for i in 0..<self.isLicensedArr.count {
        if self.isLicensedArr[i] == 0 {
            clusterArr.append(self.isLicensedArr[i])
        }
    }
    let clusterCount = NSNumber(value: clusterArr.count)
    let iconGenerator = GMUDefaultClusterIconGenerator(buckets: [clusterCount], backgroundColors: [UIColor(red: 33/255, green: 174/255, blue: 108/255, alpha: 1.0)])
    let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
    let renderer = GMUDefaultClusterRenderer(mapView: mapView, clusterIconGenerator: iconGenerator)
    renderer.animatesClusters = true
    clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm, renderer: renderer)
    // Register self to listen to GMSMapViewDelegate events.
    clusterManager.setMapDelegate(self)
    clusterManager.setDelegate(self, mapDelegate: self)
    // Generate and add random items to the cluster manager.
    self.generateClusterItems()
    // Call cluster() after items have been added to perform the clustering and rendering on map.
    self.clusterManager.cluster()
}
I followed this URL for map clustering: https://developers.google.com/maps/documentation/ios-sdk/utility/marker-clustering
Can someone please explain how to update the cluster count when zooming in and zooming out? I've tried to implement it with the above code but no results yet.
Any help would be greatly appreciated.
Thanks in advance.

CoreML Memory Leak in iOS 14.5

In my application, I used VNImageRequestHandler with a custom MLModel for object detection.
The app works fine with iOS versions before 14.5.
When iOS 14.5 came, it broke everything.
Whenever try handler.perform([visionRequest]) throws an error (Error Domain=com.apple.vis Code=11 "encountered unknown exception" UserInfo={NSLocalizedDescription=encountered unknown exception}), the pixelBuffer memory is held and never released. This fills up the AVCaptureOutput buffers, so no new frames come in.
I had to change the code as below: by copying the pixelBuffer to another variable, I solved the problem of new frames not arriving, but the memory leak still happens.
Because of the memory leak, the app crashes after some time.
Notice that before iOS 14.5, detection worked perfectly and try handler.perform([visionRequest]) never threw any error.
Here is my code:
private func predictWithPixelBuffer(sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    // Get additional info from the camera.
    var options: [VNImageOption: Any] = [:]
    if let cameraIntrinsicMatrix = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil) {
        options[.cameraIntrinsics] = cameraIntrinsicMatrix
    }
    autoreleasepool {
        // iOS 14.5 bug workaround: when the Vision request fails, the pixel buffer leaks and the
        // AVCaptureOutput buffers fill up, so no new frames are delivered. Copying the pixel buffer
        // to a new buffer is a temporary workaround, but it also increases memory usage a lot.
        // Need to find a better way.
        var clonePixelBuffer: CVPixelBuffer? = pixelBuffer.copy()
        let handler = VNImageRequestHandler(cvPixelBuffer: clonePixelBuffer!, orientation: orientation, options: options)
        print("[DEBUG] detecting...")
        do {
            try handler.perform([visionRequest])
        } catch {
            delegate?.detector(didOutputBoundingBox: [])
            failedCount += 1
            print("[DEBUG] detect failed \(failedCount)")
            print("Failed to perform Vision request: \(error)")
        }
        clonePixelBuffer = nil
    }
}
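For reference, copy() above is not a built-in CVPixelBuffer method; it comes from a helper extension along the lines of the sketch below (assuming a non-planar pixel buffer):
import CoreVideo

extension CVPixelBuffer {
    func copy() -> CVPixelBuffer {
        var copyOut: CVPixelBuffer?
        guard CVPixelBufferCreate(nil,
                                  CVPixelBufferGetWidth(self),
                                  CVPixelBufferGetHeight(self),
                                  CVPixelBufferGetPixelFormatType(self),
                                  nil,
                                  &copyOut) == kCVReturnSuccess,
              let copy = copyOut else {
            return self   // fall back to the original buffer if allocation fails
        }
        CVPixelBufferLockBaseAddress(self, .readOnly)
        CVPixelBufferLockBaseAddress(copy, [])
        defer {
            CVPixelBufferUnlockBaseAddress(copy, [])
            CVPixelBufferUnlockBaseAddress(self, .readOnly)
        }
        // Copy row by row because bytes-per-row can differ between the two buffers.
        let height = CVPixelBufferGetHeight(self)
        let srcBytesPerRow = CVPixelBufferGetBytesPerRow(self)
        let dstBytesPerRow = CVPixelBufferGetBytesPerRow(copy)
        if let src = CVPixelBufferGetBaseAddress(self),
           let dst = CVPixelBufferGetBaseAddress(copy) {
            for row in 0..<height {
                memcpy(dst + row * dstBytesPerRow,
                       src + row * srcBytesPerRow,
                       min(srcBytesPerRow, dstBytesPerRow))
            }
        }
        return copy
    }
}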
Has anyone experienced the same problem? If so, how did you fix it?
The iOS 14.7 beta available on the developer portal seems to have fixed this issue.
I have a partial fix for this using Matthijs Hollemans' CoreMLHelpers library.
The model I use has 300 classes and 2363 anchors. I used a lot of the code Matthijs provided here to convert the model to an MLModel.
In the last step a pipeline is built using the 3 sub-models: raw_ssd_output, decoder, and nms. For this workaround you need to remove the nms model from the pipeline and output raw_confidence and raw_coordinates.
In your app you need to add the code from CoreMLHelpers.
Then add this function to decode the output from your MLModel:
func decodeResults(results: [VNCoreMLFeatureValueObservation]) -> [BoundingBox] {
    let raw_confidence: MLMultiArray = results[0].featureValue.multiArrayValue!
    let raw_coordinates: MLMultiArray = results[1].featureValue.multiArrayValue!
    print(raw_confidence.shape, raw_coordinates.shape)
    var boxes = [BoundingBox]()
    let startDecoding = Date()
    for anchor in 0..<raw_confidence.shape[0].int32Value {
        var maxInd: Int = 0
        var maxConf: Float = 0
        for score in 0..<raw_confidence.shape[1].int32Value {
            let key = [anchor, score] as [NSNumber]
            let prob = raw_confidence[key].floatValue
            if prob > maxConf {
                maxInd = Int(score)
                maxConf = prob
            }
        }
        let y0 = raw_coordinates[[anchor, 0] as [NSNumber]].doubleValue
        let x0 = raw_coordinates[[anchor, 1] as [NSNumber]].doubleValue
        let y1 = raw_coordinates[[anchor, 2] as [NSNumber]].doubleValue
        let x1 = raw_coordinates[[anchor, 3] as [NSNumber]].doubleValue
        let width = x1 - x0
        let height = y1 - y0
        let x = x0 + width / 2
        let y = y0 + height / 2
        let rect = CGRect(x: x, y: y, width: width, height: height)
        let box = BoundingBox(classIndex: maxInd, score: maxConf, rect: rect)
        boxes.append(box)
    }
    let finishDecoding = Date()
    let keepIndices = nonMaxSuppressionMultiClass(numClasses: raw_confidence.shape[1].intValue, boundingBoxes: boxes, scoreThreshold: 0.5, iouThreshold: 0.6, maxPerClass: 5, maxTotal: 10)
    let finishNMS = Date()
    var keepBoxes = [BoundingBox]()
    for index in keepIndices {
        keepBoxes.append(boxes[index])
    }
    print("Time Decoding", finishDecoding.timeIntervalSince(startDecoding))
    print("Time Performing NMS", finishNMS.timeIntervalSince(finishDecoding))
    return keepBoxes
}
Then when you receive the results from Vision, you call the function like this:
if let rawResults = vnRequest.results as? [VNCoreMLFeatureValueObservation] {
    let boxes = self.decodeResults(results: rawResults)
    print(boxes)
}
This solution is slow because of the way I move the data around and formulate my list of BoundingBox types. It would be much more efficient to process the MLMultiArray data using underlying pointers, and maybe use Accelerate to find the maximum score and best class for each anchor box.
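For reference, a rough sketch of that idea (assuming the confidence array is a contiguous Float32 MLMultiArray shaped [anchors, classes]; vDSP_maxvi returns both the maximum value and its index in one pass):
import Accelerate
import CoreML

func bestClassPerAnchor(_ confidence: MLMultiArray) -> [(classIndex: Int, score: Float)] {
    let anchors = confidence.shape[0].intValue
    let classes = confidence.shape[1].intValue
    // Read the raw buffer directly instead of going through NSNumber subscripting.
    let ptr = confidence.dataPointer.bindMemory(to: Float.self, capacity: anchors * classes)
    var results: [(classIndex: Int, score: Float)] = []
    results.reserveCapacity(anchors)
    for anchor in 0..<anchors {
        var maxValue: Float = 0
        var maxIndex: vDSP_Length = 0
        vDSP_maxvi(ptr + anchor * classes, 1, &maxValue, &maxIndex, vDSP_Length(classes))
        results.append((classIndex: Int(maxIndex), score: maxValue))
    }
    return results
}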
In my case it helped to disable the Neural Engine by forcing Core ML to run on CPU and GPU only. This is often slower but doesn't throw the exception (at least in our case). In the end we implemented a policy to force some of our models not to run on the Neural Engine on certain iOS devices.
See MLModelConfiguration.computeUnits to constrain the hardware a Core ML model can use.
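A minimal sketch of that configuration (YourModel stands in for the Xcode-generated model class):
import CoreML
import Vision

func makeVisionRequest() throws -> VNCoreMLRequest {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU   // avoid the Neural Engine (.all is the default)
    let coreMLModel = try YourModel(configuration: config).model
    let visionModel = try VNCoreMLModel(for: coreMLModel)
    return VNCoreMLRequest(model: visionModel)
}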

Very slow scrolling/zooming experience with GMUClusterRenderer (Google Maps Clustering) on iOS

I will try to explain my issue, and what I have done so far.
Introduction:
I am using the iOS Utils Library from Google Maps in order to display around 300 markers on the map.
The algorithm used for the Clustering is the GMUNonHierarchicalDistanceBasedAlgorithm.
Basically, our users can send us the weather they observe through their window, so that we can display the real time weather around the world.
It enables us to improve and/or adjust the weather forecasts.
But my scrolling/zooming experience isn't smooth at all. By the way I am testing it with an iPhone X ...
Let's get to the heart of the matter:
Here is how I configure the ClusterManager
private func configureCluster(array: [Observation]) -> Void {
    let iconGenerator = GMUDefaultClusterIconGenerator()
    let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
    let renderer = GMUDefaultClusterRenderer(mapView: mapView,
                                             clusterIconGenerator: iconGenerator)
    renderer.delegate = self
    clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm,
                                       renderer: renderer)
    clusterManager.add(array)
    clusterManager.cluster()
    clusterManager.setDelegate(self, mapDelegate: self)
}
Here is my Observation class; I tried to keep it simple:
class Observation: NSObject, GMUClusterItem {
    static var ICON_SIZE = 30
    let timestamp: Double
    let idObs: String
    let position: CLLocationCoordinate2D
    let idPicto: [Int]
    let token: String
    let comment: String
    let altitude: Double

    init(timestamp: Double, idObs: String, coordinate: CLLocationCoordinate2D, idPicto: [Int], token: String, comment: String, altitude: Double) {
        self.timestamp = timestamp
        self.idObs = idObs
        self.position = coordinate
        self.idPicto = idPicto
        self.token = token
        self.comment = comment
        self.altitude = altitude
    }
}
And finally, the delegate method for the rendering:
func renderer(_ renderer: GMUClusterRenderer, willRenderMarker marker: GMSMarker) {
    if let cluster = marker.userData as? GMUCluster {
        if let listObs = cluster.items as? [Observation] {
            if listObs.count > 1 {
                let sortedObs = listObs.sorted(by: { $0.timestamp > $1.timestamp })
                if let mostRecentObs = sortedObs.first {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: mostRecentObs)
                    }
                }
            } else {
                if let obs = listObs.last {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: obs)
                    }
                }
            }
        }
    }
}
Users can only send one observation, but that observation can be composed of several weather phenomena (like Clouds + Rain + Wind) or only Rain if they want.
To differentiate them: if there is only one phenomenon, the marker.iconView property is set directly.
On the other hand, if the observation has multiple phenomena, I create a view containing all the images representing them.
func setIconViewForMarker(marker: GMSMarker, obs: Observation) -> Void {
    let isYourObs = Observation.isOwnObservation(id: obs.idObs) ? true : false
    if isYourObs {
        marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
    } else {
        // Observation with more than 1 phenomenon
        if obs.idPicto.count > 1 {
            marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
        // Observation with only 1 phenomenon
        } else if obs.idPicto.count == 1 {
            if let id = obs.idPicto.last {
                marker.iconView = Observation.setImageForPhenomenom(id: id)
            }
        }
    }
}
And the last piece of code, to show you how I build this custom view (I think my issue is probably here)
class func viewForPhenomenomArray(ids: [Int], isYourObs: Bool) -> UIView {
    let popupView = UIView()
    popupView.frame = CGRect(x: 0, y: 0, width: (ICON_SIZE * ids.count) + ((ids.count + 1) * 5), height: ICON_SIZE)
    if (isYourObs) {
        popupView.backgroundColor = UIColor(red: 0.25, green: 0.61, blue: 0.20, alpha: 1)
    } else {
        popupView.backgroundColor = UIColor(red: 0.00, green: 0.31, blue: 0.57, alpha: 1)
    }
    popupView.layer.cornerRadius = 12
    for (index, element) in ids.enumerated() {
        let imageView = UIImageView(image: Observation.getPictoFromID(id: element))
        imageView.frame = CGRect(x: ((index + 1) * 5) + index * ICON_SIZE, y: 0, width: ICON_SIZE, height: ICON_SIZE)
        popupView.addSubview(imageView)
    }
    return popupView
}
I also tried with very small images, to see whether the issue comes from rendering a lot of PNGs on the map, but seriously, it's an iPhone X; it should be able to render a few simple weather icons on a map.
Do you think I am doing something wrong? Or is it a known issue in the Google Maps SDK? (I have read that it is fixed at 30 fps.)
Do you think rendering a lot of images (as marker.image) on a map takes that much GPU? To a point where the experience isn't acceptable at all?
If you have any advice, I'll take them all.
I was facing the same issue. After debugging a lot and even checking Google's code, I came to the conclusion that the issue was with GMUDefaultClusterIconGenerator. This class creates images at runtime for each cluster size that you display. So when you zoom the map in or out, the cluster sizes change and the class creates a new image for each new number (it does cache images, so the same number reuses its image).
So the solution I found is to use buckets. Don't be surprised by this new term; let me explain the bucket concept with a simple example.
Suppose you keep bucket sizes of 10, 20, 50, 100, 200, 500, 1000.
Now, if your cluster is 3, then it will show 3.
If cluster size = 8, show = 8.
If cluster size = 16, show = 10+.
If cluster size = 22, show = 20+.
If cluster size = 48, show = 20+.
If cluster size = 91, show = 50+.
If cluster size = 177, show = 100+.
If cluster size = 502, show = 500+.
If cluster size = 1200004, show = 1000+.
Now, for any cluster size, the marker images that will be rendered come only from 1, 2, 3, 4, 5, 6, 7, 8, 9, 10+, 20+, 50+, 100+, 200+, 500+, 1000+. Since the generator caches images, they are reused, so the time and CPU spent creating new images drops sharply (only a few images ever need to be created).
You should have the idea about buckets now: if a cluster has a very small count, the exact size matters, but as the count grows, the bucket size is enough to convey how big the cluster is.
Now, the question is how to achieve this.
Actually, the GMUDefaultClusterIconGenerator class already has this functionality implemented; you just need to change its initialisation to this:
let iconGenerator = GMUDefaultClusterIconGenerator(buckets: [10, 20, 50, 100, 200, 500, 1000])
The GMUDefaultClusterIconGenerator class has other init methods, with which you can give different background colors to different buckets, different background images to different buckets, and more.
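For example, the buckets-plus-colors initializer (already used in the question above) takes one background color per bucket; a sketch, where the two arrays must have the same length:
let iconGenerator = GMUDefaultClusterIconGenerator(
    buckets: [10, 20, 50, 100, 200, 500, 1000],
    backgroundColors: [.systemGreen, .systemTeal, .systemBlue, .systemIndigo,
                       .systemOrange, .systemRed, .systemPurple])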
Let me know if any further help is required.

ARKit hide objects behind walls

How can I use the horizontal and vertical planes tracked by ARKit to hide objects behind walls / behind real objects? Currently the added 3D objects can be seen through walls when you leave a room, and/or in front of objects that they should be behind. So is it possible to use the data ARKit gives me to provide a more natural AR experience without the objects appearing through walls?
You have two issues here.
(And you didn't even use regular expressions!)
How to create occlusion geometry for ARKit/SceneKit?
If you set a SceneKit material's colorBufferWriteMask to an empty value ([] in Swift), any objects using that material won't appear in the view, but they'll still write to the z-buffer during rendering, which affects the rendering of other objects. In effect, you'll get a "hole" shaped like your object, through which the background shows (the camera feed, in the case of ARSCNView), but which can still obscure other SceneKit objects.
You'll also need to make sure that the occlusion geometry renders before any other nodes it's supposed to obscure. You can do this using the node hierarchy (I can't remember offhand whether parent nodes render before their children or the other way around, but it's easy enough to test). Nodes that are peers in the hierarchy don't have a deterministic order, but you can force an order regardless of hierarchy with the renderingOrder property. That property defaults to zero, so setting it to -1 will render before everything. (Or for finer control, set the renderingOrders for several nodes to a sequence of values.)
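A minimal sketch of that setup (the node and geometry here are only illustrative):
let occluder = SCNNode(geometry: SCNPlane(width: 2, height: 3))
occluder.geometry?.firstMaterial?.colorBufferWriteMask = []   // draw no color, but still write depth
occluder.renderingOrder = -1                                  // render before the default-order (0) nodes it should hide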
How to detect walls/etc so you know where to put occlusion geometry?
In iOS 11.3 and later (aka "ARKit 1.5"), you can turn on vertical plane detection. (Note that when you get vertical plane anchors back from that, they're automatically rotated. So if you attach models to the anchor, their local "up" direction is normal to the plane.) Also new in iOS 11.3, you can get a more detailed shape estimate for each detected plane (see ARSCNPlaneGeometry), regardless of its orientation.
However, even if you have the horizontal and the vertical, the outer limits of a plane are just estimates that change over time. That is, ARKit can quickly detect where part of a wall is, but it doesn't know where the edges of the wall are without the user spending some time waving the device around to map out the space. And even then, the mapped edges might not line up precisely with those of the real wall.
So... if you use detected vertical planes to occlude virtual geometry, you might find places where virtual objects that are supposed to be hidden show through, either by not quite hiding right at the edge of the wall, or by being visible through places where ARKit hasn't mapped the entire real wall. (The latter issue you might be able to solve by assuming a larger extent than ARKit does.)
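A rough sketch of attaching occlusion geometry to detected vertical planes (this assumes planeDetection includes .vertical in the session configuration, and it does not pad the extent as suggested above):
func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
    guard let planeAnchor = anchor as? ARPlaneAnchor, planeAnchor.alignment == .vertical else { return }
    let plane = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
    plane.firstMaterial?.colorBufferWriteMask = []   // invisible, but still occludes
    let planeNode = SCNNode(geometry: plane)
    planeNode.simdPosition = planeAnchor.center
    planeNode.eulerAngles.x = -.pi / 2               // lay the SCNPlane into the anchor's x/z plane
    planeNode.renderingOrder = -1
    node.addChildNode(planeNode)
}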
For creating an occlusion material (also known as blackhole material or blocking material) you have to use the following instance properties: .colorBufferWriteMask, .readsFromDepthBuffer, .writesToDepthBuffer and .renderingOrder.
You can use them this way:
plane.geometry?.firstMaterial?.isDoubleSided = true
plane.geometry?.firstMaterial?.colorBufferWriteMask = .alpha
plane.geometry?.firstMaterial?.writesToDepthBuffer = true
plane.geometry?.firstMaterial?.readsFromDepthBuffer = true
plane.renderingOrder = -100
...or this way:
func occlusion() -> SCNMaterial {
    let occlusionMaterial = SCNMaterial()
    occlusionMaterial.isDoubleSided = true
    occlusionMaterial.colorBufferWriteMask = []
    occlusionMaterial.readsFromDepthBuffer = true
    occlusionMaterial.writesToDepthBuffer = true
    return occlusionMaterial
}

plane.geometry?.firstMaterial = occlusion()
plane.renderingOrder = -100
Creating an occlusion material is really simple:
let boxGeometry = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
// Define an occlusion material
let occlusionMaterial = SCNMaterial()
occlusionMaterial.colorBufferWriteMask = []
boxGeometry.materials = [occlusionMaterial]
self.box = SCNNode(geometry: boxGeometry)
// Set rendering order to present this box in front of the other models
self.box.renderingOrder = -1
Great solution:
GitHub: arkit-occlusion
It worked for me.
But in my case I wanted to set the walls by code. So if you don't want the user to set the walls, use plane detection to detect the walls and set them by code.
Alternatively, within a range of about 4 meters the iPhone depth sensing works and you can detect obstacles with an AR hit test.
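A rough sketch of that hit-test idea (screenCenter is just an example point; ARHitTestResult.distance is in meters from the camera, and ARKit's newer raycast API supersedes this call):
let screenCenter = CGPoint(x: sceneView.bounds.midX, y: sceneView.bounds.midY)
if let hit = sceneView.hitTest(screenCenter, types: [.existingPlaneUsingExtent, .featurePoint]).first,
   hit.distance < 4.0 {
    // Something real (a wall, furniture, ...) is closer than ~4 m at that screen point.
}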
ARKit 6.0 and LiDAR scanner
You can hide any object behind a virtual invisible wall that replicates the real wall geometry. iPhones and iPad Pros equipped with a LiDAR scanner help us reconstruct a 3D topological map of the surrounding environment. The LiDAR scanner greatly improves the quality of the Z channel, which allows you to occlude or remove humans from the AR scene.
LiDAR also improves features such as Object Occlusion, Motion Tracking and Raycasting. With a LiDAR scanner you can reconstruct a scene even in an unlit environment or in a room with white walls and no features at all. 3D reconstruction of the surrounding environment has become possible in ARKit 6.0 thanks to the sceneReconstruction instance property. Having a reconstructed mesh of your walls, it's now super easy to hide any object behind the real walls.
To activate the sceneReconstruction instance property in ARKit 6.0, use the following code:
@IBOutlet var arView: ARView!
arView.automaticallyConfigureSession = false
guard ARWorldTrackingConfiguration.supportsSceneReconstruction(.mesh)
else { return }
let config = ARWorldTrackingConfiguration()
config.sceneReconstruction = .mesh
arView.debugOptions.insert([.showSceneUnderstanding])
arView.environment.sceneUnderstanding.options.insert([.occlusion])
arView.session.run(config)
Also if you're using SceneKit try the following approach:
@IBOutlet var sceneView: ARSCNView!
func renderer(_ renderer: SCNSceneRenderer,
              nodeFor anchor: ARAnchor) -> SCNNode? {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return nil }

    let geometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    geometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)

    let node = SCNNode()
    node.name = "Node_\(meshAnchor.identifier)"
    node.geometry = geometry
    return node
}

func renderer(_ renderer: SCNSceneRenderer,
              didUpdate node: SCNNode,
              for anchor: ARAnchor) {
    guard let meshAnchor = anchor as? ARMeshAnchor
    else { return }

    let newGeometry = SCNGeometry(arGeometry: meshAnchor.geometry)
    newGeometry.firstMaterial?.diffuse.contents =
        colorizer.assignColor(to: meshAnchor.identifier)
    node.geometry = newGeometry
}
And here are SCNGeometry and SCNGeometrySource extensions:
extension SCNGeometry {
    convenience init(arGeometry: ARMeshGeometry) {
        let verticesSource = SCNGeometrySource(arGeometry.vertices, semantic: .vertex)
        let normalsSource = SCNGeometrySource(arGeometry.normals, semantic: .normal)
        let faces = SCNGeometryElement(arGeometry.faces)
        self.init(sources: [verticesSource, normalsSource], elements: [faces])
    }
}

extension SCNGeometrySource {
    convenience init(_ source: ARGeometrySource, semantic: Semantic) {
        self.init(buffer: source.buffer,
                  vertexFormat: source.format,
                  semantic: semantic,
                  vertexCount: source.count,
                  dataOffset: source.offset,
                  dataStride: source.stride)
    }
}
...and SCNGeometryElement and SCNGeometryPrimitiveType extensions:
extension SCNGeometryElement {
    convenience init(_ source: ARGeometryElement) {
        let pointer = source.buffer.contents()
        let byteCount = source.count *
                        source.indexCountPerPrimitive *
                        source.bytesPerIndex
        let data = Data(bytesNoCopy: pointer,
                        count: byteCount,
                        deallocator: .none)
        self.init(data: data,
                  primitiveType: .of(source.primitiveType),
                  primitiveCount: source.count,
                  bytesPerIndex: source.bytesPerIndex)
    }
}

extension SCNGeometryPrimitiveType {
    // The call above passes the value without an argument label, so the parameter is unnamed here.
    static func of(_ type: ARGeometryPrimitiveType) -> SCNGeometryPrimitiveType {
        switch type {
        case .line: return .line
        case .triangle: return .triangles
        @unknown default: return .triangles
        }
    }
}

Get nearby places from Google API sorted by distance

I am currently stuck on a feature where the user needs nearby place results sorted by distance.
For example:
If I search "Kochi" and I am in India,
then there is a Kochi in India as well as in Japan,
so the results should be:
1. Kochi, India
2. Kochi, Japan
This is just an example; the user can also search landmarks, cities, streets, etc. But I want all results sorted by distance: closer results should display first and then farther places. I'm not able to get results according to this requirement.
Similar functionality is done on Android, where they pass a radius (like 500 km from the current location).
What I have tried so far:
Using GMSPlacesClient.autocompleteQuery and passing bounds for the current location:
GMSPlacesClient().autocompleteQuery(txtLocation.text!, bounds: bounds, filter: filter, callback: { (results, error) -> Void in
    if let error = error {
        print("Autocomplete error \(error)")
        return
    }
    if let results = results {
    }
})
Using GooglePlaces.placeAutocomplete
GooglePlaces.placeAutocomplete(forInput: self.txtLocation.text!, offset: 0, locationCoordinate: nil, radius: nil, language: nil, types: [GooglePlaces.PlaceType.Regions], components: nil, completion: { (response, error) in
    // To do
})
I also used this URL (https://maps.googleapis.com/maps/api/place/autocomplete/json?input=pan&location=30.704316,76.712106&radius=50000&components=country:IN) for the Google API, but for these kinds of alternatives I have to do custom parsing.
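As a side note, once coordinates are available for each result (for example via a Place Details lookup), a client-side sort by distance from the user is straightforward; a sketch, not part of the Places SDK:
import CoreLocation

func sortedByDistance(places: [(name: String, coordinate: CLLocationCoordinate2D)],
                      from userLocation: CLLocation) -> [(name: String, coordinate: CLLocationCoordinate2D)] {
    return places.sorted {
        let d0 = CLLocation(latitude: $0.coordinate.latitude, longitude: $0.coordinate.longitude).distance(from: userLocation)
        let d1 = CLLocation(latitude: $1.coordinate.latitude, longitude: $1.coordinate.longitude).distance(from: userLocation)
        return d0 < d1
    }
}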
I am playing around with the same Google Maps API. I do the same as you requested, but in a different way; the code is attached. I do this on a button press for 'Search'. You can either set the bounds to coordinates or to the map view's frame. In the code below it currently uses the user's current location for the bounds and then works its way out from that; I think this works near enough perfectly:
let autoCompleteController = GMSAutocompleteViewController()
autoCompleteController.delegate = self as! GMSAutocompleteViewControllerDelegate
// Set bounds to map viewable region
let visibleRegion = googleMaps.projection.visibleRegion()
let bounds = GMSCoordinateBounds(coordinate: visibleRegion.farLeft, coordinate: visibleRegion.nearRight)
// New Bounds now set to User Location so choose closest destination to user.
let predictBounds = GMSCoordinateBounds(coordinate: userLocation.coordinate, coordinate: userLocation.coordinate)
autoCompleteController.autocompleteBounds = predictBounds
// Set autocomplete filter to no filter to include all types of destinations.
let addressFilter = GMSAutocompleteFilter()
addressFilter.type = .noFilter
autoCompleteController.autocompleteFilter = addressFilter
// Change text color
UISearchBar.appearance().setTextColor(color: UIColor.black)
self.present(autoCompleteController, animated: true, completion: nil)
I am having a problem with Google Directions and getting the calculated distance in miles; if you have any code for this, it would be of great help to me!
