Very slow scrolling/zooming experience with GMUClusterRenderer (Google Maps Clustering) on iOS

I will try to explain my issue, and what I have done so far.
Introduction:
I am using the Google Maps iOS Utils library to display around 300 markers on the map.
The algorithm used for the clustering is GMUNonHierarchicalDistanceBasedAlgorithm.
Basically, our users can send us the weather they observe through their window, so that we can display the real-time weather around the world.
It enables us to improve and/or adjust the weather forecasts.
But my scrolling/zooming experience isn't smooth at all, and by the way I am testing it on an iPhone X...
Let's get to the heart of the matter:
Here is how I configure the ClusterManager
private func configureCluster(array: [Observation]) {
    let iconGenerator = GMUDefaultClusterIconGenerator()
    let algorithm = GMUNonHierarchicalDistanceBasedAlgorithm()
    let renderer = GMUDefaultClusterRenderer(mapView: mapView,
                                             clusterIconGenerator: iconGenerator)
    renderer.delegate = self
    clusterManager = GMUClusterManager(map: mapView, algorithm: algorithm,
                                       renderer: renderer)
    clusterManager.add(array)
    clusterManager.cluster()
    clusterManager.setDelegate(self, mapDelegate: self)
}
Here is my Observation class; I tried to keep it simple:
class Observation: NSObject, GMUClusterItem {
    static var ICON_SIZE = 30
    let timestamp: Double
    let idObs: String
    let position: CLLocationCoordinate2D
    let idPicto: [Int]
    let token: String
    let comment: String
    let altitude: Double

    init(timestamp: Double, idObs: String, coordinate: CLLocationCoordinate2D, idPicto: [Int], token: String, comment: String, altitude: Double) {
        self.timestamp = timestamp
        self.idObs = idObs
        self.position = coordinate
        self.idPicto = idPicto
        self.token = token
        self.comment = comment
        self.altitude = altitude
    }
}
And finally, the delegate method for the rendering:
func renderer(_ renderer: GMUClusterRenderer, willRenderMarker marker: GMSMarker) {
    if let cluster = marker.userData as? GMUCluster {
        if let listObs = cluster.items as? [Observation] {
            if listObs.count > 1 {
                let sortedObs = listObs.sorted(by: { $0.timestamp > $1.timestamp })
                if let mostRecentObs = sortedObs.first {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: mostRecentObs)
                    }
                }
            } else {
                if let obs = listObs.last {
                    DispatchQueue.main.async {
                        self.setIconViewForMarker(marker: marker, obs: obs)
                    }
                }
            }
        }
    }
}
Users can only send one observation, but that observation can be composed of several weather phenomena (like Clouds + Rain + Wind) or only Rain if they want.
To differentiate them: if there is only 1 phenomenon, the marker.iconView property is set directly.
On the other hand, if it's an observation with multiple phenomena, I create a view containing all the images representing the phenomena.
func setIconViewForMarker(marker: GMSMarker, obs: Observation) {
    let isYourObs = Observation.isOwnObservation(id: obs.idObs)
    if isYourObs {
        marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
    } else {
        // Observation with more than 1 phenomenon
        if obs.idPicto.count > 1 {
            marker.iconView = Observation.viewForPhenomenomArray(ids: obs.idPicto, isYourObs: isYourObs)
        // Observation with only 1 phenomenon
        } else if obs.idPicto.count == 1 {
            if let id = obs.idPicto.last {
                marker.iconView = Observation.setImageForPhenomenom(id: id)
            }
        }
    }
}
And the last piece of code, to show you how I build this custom view (I think my issue is probably here)
class func viewForPhenomenomArray(ids: [Int], isYourObs: Bool) -> UIView {
    let popupView = UIView()
    popupView.frame = CGRect(x: 0, y: 0, width: (ICON_SIZE * ids.count) + ((ids.count + 1) * 5), height: ICON_SIZE)
    if isYourObs {
        popupView.backgroundColor = UIColor(red: 0.25, green: 0.61, blue: 0.20, alpha: 1)
    } else {
        popupView.backgroundColor = UIColor(red: 0.00, green: 0.31, blue: 0.57, alpha: 1)
    }
    popupView.layer.cornerRadius = 12
    for (index, element) in ids.enumerated() {
        let imageView = UIImageView(image: Observation.getPictoFromID(id: element))
        imageView.frame = CGRect(x: ((index + 1) * 5) + index * ICON_SIZE, y: 0, width: ICON_SIZE, height: ICON_SIZE)
        popupView.addSubview(imageView)
    }
    return popupView
}
I also tried with very small images, to check whether the issue comes from rendering a lot of PNGs on the map, but seriously, it's an iPhone X; it should be able to render a few simple weather icons on a map.
Do you think I am doing something wrong? Or is it a known issue in the Google Maps SDK? (I have read that it is capped at 30 fps.)
Do you think rendering a lot of images (as marker.iconView) on a map takes that much GPU, to the point where the experience isn't acceptable at all?
If you have any advice, I'll take it all.

I was facing the same issue. After a lot of debugging, and even checking Google's code, I came to the conclusion that the issue was in GMUDefaultClusterIconGenerator. This class creates images at runtime for each cluster size that you display. So when you zoom the map in or out, the cluster sizes change, and the class creates a new image for each new number (it does cache images, so a repeated number reuses its image).
So the solution I found is to use buckets. You may be surprised by this new term, so let me explain the bucket concept with a simple example.
Suppose you keep bucket sizes of 10, 20, 50, 100, 200, 500, 1000.
Now, if your cluster is 3, then it will show 3.
If cluster size = 8, show = 8.
If cluster size = 16, show = 10+.
If cluster size = 22, show = 20+.
If cluster size = 48, show = 20+.
If cluster size = 91, show = 50+.
If cluster size = 177, show = 100+.
If cluster size = 502, show = 500+.
If cluster size = 1200004, show = 1000+.
Now, for any cluster size, the only marker images that will ever be rendered are 1, 2, 3, 4, 5, 6, 7, 8, 9, 10+, 20+, 50+, 100+, 200+, 500+ and 1000+. Since the generator caches images, these images are reused, so the time and CPU that were spent creating new images drop considerably (only a few images need to be created at all).
You should have the idea behind buckets now: when a cluster is very small the exact size matters, but as it grows, the bucket size is enough to convey the cluster size.
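For intuition, here is a minimal sketch of the bucket lookup described above (my own illustration of the idea, not the library's actual implementation):

func bucketLabel(for clusterSize: Int, buckets: [Int]) -> String {
    // Below the smallest bucket, show the exact count.
    guard let bucket = buckets.last(where: { clusterSize >= $0 }) else {
        return "\(clusterSize)"
    }
    // Otherwise show the largest bucket not exceeding the count, e.g. "50+".
    return "\(bucket)+"
}

// bucketLabel(for: 48, buckets: [10, 20, 50, 100, 200, 500, 1000]) == "20+"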
Now, the question is how to achieve this.
Actually, the GMUDefaultClusterIconGenerator class already has this functionality implemented; you just need to change its initialisation to this:
let iconGenerator = GMUDefaultClusterIconGenerator(buckets: [ 10, 20, 50, 100, 200, 500, 1000])
The GMUDefaultClusterIconGenerator class has other init methods, with which you can give different background colors to different buckets, different background images to different buckets, and more.
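For example, a minimal sketch of the color variant (assuming the init(buckets:backgroundColors:) initialiser from Google-Maps-iOS-Utils; both arrays must have the same length, and the colors below are just placeholders):

let iconGenerator = GMUDefaultClusterIconGenerator(
    buckets: [10, 20, 50, 100, 200, 500, 1000],
    backgroundColors: [.systemGreen, .systemTeal, .systemBlue, .systemIndigo,
                       .systemOrange, .systemRed, .systemPurple])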
Let me know if any further help is required.

Related

CoreML Memory Leak in iOS 14.5

In my application, I used VNImageRequestHandler with a custom MLModel for object detection.
The app works fine with iOS versions before 14.5.
When iOS 14.5 came out, it broke everything.
Whenever try handler.perform([visionRequest]) throws an error (Error Domain=com.apple.vis Code=11 "encountered unknown exception" UserInfo={NSLocalizedDescription=encountered unknown exception}), the pixelBuffer memory is held and never released, which fills up the buffers of AVCaptureOutput so no new frames arrive.
I had to change the code as below: by copying the pixelBuffer into another variable, I solved the problem of new frames not arriving, but the memory leak still happens.
Because of the memory leak, the app crashes after some time.
Note that before iOS 14.5, detection worked perfectly and try handler.perform([visionRequest]) never threw any error.
Here is my code:
private func predictWithPixelBuffer(sampleBuffer: CMSampleBuffer) {
    guard let pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer) else {
        return
    }
    // Get additional info from the camera.
    var options: [VNImageOption: Any] = [:]
    if let cameraIntrinsicMatrix = CMGetAttachment(sampleBuffer, kCMSampleBufferAttachmentKey_CameraIntrinsicMatrix, nil) {
        options[.cameraIntrinsics] = cameraIntrinsicMatrix
    }
    autoreleasepool {
        // Workaround for an iOS 14.5 bug: when the Vision request fails, the pixel buffer
        // leaks and the AVCaptureOutput buffer pool fills up, so no new frames are delivered.
        // Copying the pixel buffer into a new buffer avoids the stall, but it also makes
        // memory usage increase a lot. A better way is still needed.
        // (`copy()` here is a custom CVPixelBuffer deep-copy helper, not a CoreVideo API.)
        var clonePixelBuffer: CVPixelBuffer? = pixelBuffer.copy()
        let handler = VNImageRequestHandler(cvPixelBuffer: clonePixelBuffer!, orientation: orientation, options: options)
        print("[DEBUG] detecting...")
        do {
            try handler.perform([visionRequest])
        } catch {
            delegate?.detector(didOutputBoundingBox: [])
            failedCount += 1
            print("[DEBUG] detect failed \(failedCount)")
            print("Failed to perform Vision request: \(error)")
        }
        clonePixelBuffer = nil
    }
}
Has anyone experienced the same problem? If so, how did you fix it?
iOS 14.7 Beta available on the developer portal seems to have fixed this issue.
I have a partial fix for this using @Matthijs Hollemans' CoreMLHelpers library.
The model I use has 300 classes and 2363 anchors. I used a lot of the code Matthijs provided here to convert the model to an MLModel.
In the last step a pipeline is built using the 3 sub-models: raw_ssd_output, decoder, and nms. For this workaround you need to remove the nms model from the pipeline, and output raw_confidence and raw_coordinates instead.
In your app you need to add the code from CoreMLHelpers.
Then add this function to decode the output from your MLModel:
func decodeResults(results: [VNCoreMLFeatureValueObservation]) -> [BoundingBox] {
    let raw_confidence: MLMultiArray = results[0].featureValue.multiArrayValue!
    let raw_coordinates: MLMultiArray = results[1].featureValue.multiArrayValue!
    print(raw_confidence.shape, raw_coordinates.shape)
    var boxes = [BoundingBox]()
    let startDecoding = Date()
    for anchor in 0..<raw_confidence.shape[0].int32Value {
        var maxInd: Int = 0
        var maxConf: Float = 0
        for score in 0..<raw_confidence.shape[1].int32Value {
            let key = [anchor, score] as [NSNumber]
            let prob = raw_confidence[key].floatValue
            if prob > maxConf {
                maxInd = Int(score)
                maxConf = prob
            }
        }
        let y0 = raw_coordinates[[anchor, 0] as [NSNumber]].doubleValue
        let x0 = raw_coordinates[[anchor, 1] as [NSNumber]].doubleValue
        let y1 = raw_coordinates[[anchor, 2] as [NSNumber]].doubleValue
        let x1 = raw_coordinates[[anchor, 3] as [NSNumber]].doubleValue
        let width = x1 - x0
        let height = y1 - y0
        let x = x0 + width / 2
        let y = y0 + height / 2
        let rect = CGRect(x: x, y: y, width: width, height: height)
        let box = BoundingBox(classIndex: maxInd, score: maxConf, rect: rect)
        boxes.append(box)
    }
    let finishDecoding = Date()
    let keepIndices = nonMaxSuppressionMultiClass(numClasses: raw_confidence.shape[1].intValue, boundingBoxes: boxes, scoreThreshold: 0.5, iouThreshold: 0.6, maxPerClass: 5, maxTotal: 10)
    let finishNMS = Date()
    var keepBoxes = [BoundingBox]()
    for index in keepIndices {
        keepBoxes.append(boxes[index])
    }
    print("Time Decoding", finishDecoding.timeIntervalSince(startDecoding))
    print("Time Performing NMS", finishNMS.timeIntervalSince(finishDecoding))
    return keepBoxes
}
Then when you receive the results from Vision, you call the function like this:
if let rawResults = vnRequest.results as? [VNCoreMLFeatureValueObservation] {
    let boxes = self.decodeResults(results: rawResults)
    print(boxes)
}
This solution is slow because of the way I move the data around and formulate my list of BoundingBox types. It would be much more efficient to process the MLMultiArray data using underlying pointers, and maybe use Accelerate to find the maximum score and best class for each anchor box.
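A rough sketch of that pointer-based approach (my own illustration, not tested against this exact model; it assumes raw_confidence holds contiguous Float32 values laid out as [numAnchors, numClasses], which is worth verifying against dataType and strides before relying on it):

import Accelerate
import CoreML

func bestClassPerAnchor(_ confidence: MLMultiArray) -> [(classIndex: Int, score: Float)] {
    let numAnchors = confidence.shape[0].intValue
    let numClasses = confidence.shape[1].intValue
    // Treat the multi-array as a flat Float buffer, one row per anchor.
    let ptr = confidence.dataPointer.bindMemory(to: Float.self,
                                                capacity: numAnchors * numClasses)
    var results: [(classIndex: Int, score: Float)] = []
    results.reserveCapacity(numAnchors)
    for anchor in 0..<numAnchors {
        var maxValue: Float = 0
        var maxIndex: vDSP_Length = 0
        // vDSP_maxvi scans one row and returns the maximum value and its index.
        vDSP_maxvi(ptr + anchor * numClasses, 1, &maxValue, &maxIndex, vDSP_Length(numClasses))
        results.append((classIndex: Int(maxIndex), score: maxValue))
    }
    return results
}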
In my case it helped to disable the Neural Engine by forcing Core ML to run on CPU and GPU only. This is often slower, but it doesn't throw the exception (at least in our case). In the end we implemented a policy to force some of our models not to run on the Neural Engine for certain iOS devices.
See MLModelConfiguration.computeUnits to constrain the hardware a Core ML model can use.
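A minimal sketch of that configuration (MyDetectionModel is a placeholder for your Xcode-generated model class):

import CoreML
import Vision

func makeVisionRequest() throws -> VNCoreMLRequest {
    let config = MLModelConfiguration()
    config.computeUnits = .cpuAndGPU   // skip the Neural Engine

    // MyDetectionModel stands in for the generated class of your .mlmodel.
    let coreMLModel = try MyDetectionModel(configuration: config)
    let visionModel = try VNCoreMLModel(for: coreMLModel.model)
    return VNCoreMLRequest(model: visionModel)
}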

SCNView not refreshing but after tap on screen

I am stuck on a problem. I need to apply a transformation (scale, rotation, position) right after I add a model to my rootNode. When I apply the transformation to the child model added to rootNode it shows fine on screen, but when I apply the transformation to rootNode itself, the view doesn't refresh. I noticed that as soon as I touch the screen the UI updates. I also tried putting in a delay of 2-3 seconds.
Expected:
The view should update as soon as I apply a transformation to rootNode.
let res = SCNAction.repeatForever(SCNAction.rotateBy(x: 0, y: 0.5, z: 0, duration: 1))
// let res = SCNAction.sequence([SCNAction.wait(duration: 2000), SCNAction.rotateTo(x: CGFloat(180), y: CGFloat(90), z: CGFloat(0), duration: 1.0)])
self.rootNode.runAction(res)
I tried putting the code in
RunLoop.main.perform {}
I tried using
scnView.preferredFramesPerSecond = 30
scnView.rendersContinuously = true
But none of it works. I am using the iOS 13.2 SDK. Any help, please?
Edit:
var rootNode = SCNNode()

override func viewDidLoad() {
    super.viewDidLoad()
    scnScene.rootNode.addChildNode(rootNode)
    ....
}
func initSceneWithModel(modelURL: URL) {
    do {
        try personModel = addModel(url: modelURL)
        menuButton.setImage(UIImage.fontAwesomeIcon(name: .bars, style: .solid, textColor: .white, size: XConstants.FONT_AWSOME_SIZE), for: .normal)
        selectedModel = personModel
        centerPivot(for: personModel!)
        moveNodeToCenter(node: personModel!)
        setupEyeBlocker()
        // selectedModel = eyeBlocker
        updateFieldUI()
        DispatchQueue.main.asyncAfter(deadline: .now() + 3) {
            self.applyInitTransformations()
        }
    } catch let error {
        Utilities.xalert(inView: self.view, desc: error.localizedDescription)
    }
}
func applyInitTransformations() {
    if let info = vm.physicialFile.extraInfo {
        // personModel?.position = info.person.position
        // personModel?.scale = info.person.scale
        // personModel?.eulerAngles = info.person.rotation
        var valueRotPos = SCNMatrix4Mult(SCNMatrix4MakeRotation(0, 0, 0, 0), SCNMatrix4MakeTranslation(0, 0, 0))
        var valueScale = SCNMatrix4MakeScale(7.0, 7.0, 7.0) // scales to 7x the original size
        rootNode.transform = SCNMatrix4Mult(valueRotPos, valueScale)
        // rootNode.position = info.root.position
        // rootNode.scale = info.root.scale
        // rootNode.eulerAngles = info.root.rotation
    } else {
        applyEyeBlockerDefaultPosition()
    }
}
Apple clearly says:
...
You should not modify the transform property of the root node.
...
(https://developer.apple.com/documentation/scenekit/scnscene/1524029-rootnode)
This might be causing the issues you have with your scene. Avoid running SCNActions on the rootNode; they are designed to run on the content of the rootNode (any SCNNode added to the rootNode).
You could instead create a regular SCNNode, call it something like myRootNode, add it to the real rootNode, and add all your other content to myRootNode. Transformations should then apply correctly to all your sub-content, if that is your goal. Something like the sketch below.
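A minimal sketch of that intermediate-node setup (names are placeholders; scnScene is your SCNScene and personModel is the node from the question):

let myRootNode = SCNNode()
scnScene.rootNode.addChildNode(myRootNode)

// Add all content under myRootNode instead of the scene's root node...
myRootNode.addChildNode(personModel)

// ...and run actions or set transforms on myRootNode.
let spin = SCNAction.repeatForever(SCNAction.rotateBy(x: 0, y: 0.5, z: 0, duration: 1))
myRootNode.runAction(spin)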
BTW: scnView.preferredFramesPerSecond = 30 never gave me more performance or any benefit. Leave it at the default; SceneKit switches automatically to lower frame rates if required.
EDIT:
Apply the transformation like so:
// Precalculate the Rotation the Position and the Scale
var valueRotPos = SCNMatrix4Mult(SCNMatrix4MakeRotation(0,0,0,0), SCNMatrix4MakeTranslation(0,0,0))
var valueScale = SCNMatrix4MakeScale(0.1,0.1,0.1) // scales to 0.1 of original size
then you do:
myRootNode.transform = SCNMatrix4Mult(valueRotPos, valueScale)
(You could also try using the node's worldTransform, or the other transform properties of the node's presentation node object.)

ARKit How to draw measurement scale

I wonder how this can be done. Please do not close this question; I need suggestions or any hint on how to implement it.
Source: https://www.youtube.com/watch?v=4uHyHRKmxZk
I have created a node which is an SCNCylinder, drawn from the target to the centre of the screen from the updateAtTime method.
At first I thought it was drawn with a text node, so I tried the following.
Inside that class I have a method that draws a node at each unit of the UnitLength I pass in:
func drawOtherBaseUnit(height: Float, intoThe unitType: UnitLength, toTheNode zAxisNode: SCNNode) {
    print("---------------------------------------------------------------------")
    let distanceInTarget = Converter.convert(value: Double(height), to: unitType).output + 1
    print("DISTANCE", distanceInTarget)
    print("---------------------------------------------------------------------")
    var text = ""
    switch unitType {
    case .inches:
        text = "INCH"
    case .centimeters:
        text = "CM"
    default:
        return
    }
    if !distanceInTarget.isNaN {
        var distance = Int(distanceInTarget)
        var i = 0
        while distance > 0 {
            print("distance ", distance)
            let node = UnitNode(withPosition: SCNVector3Make(0, 0, 0), radius: CGFloat(Converter.convert(value: 0.1, from: unitType, to: UnitLength.meters).output), text: "\(text) \(i)", forType: unitType)
            let valueIncreaseOnAsPerTarget = Int(distanceInTarget) - distance
            let valueToIncreaseMeter = Converter.convert(value: Double(valueIncreaseOnAsPerTarget), from: unitType, to: UnitLength.meters).output
            node.position.y = -Float(valueToIncreaseMeter)
            node.position.x = 0
            zAxisNode.addChildNode(node)
            distance -= 1
            i += 1
        }
    }
}
The UnitNode class draws SCNText and some other nodes.
This is working fine; I can see a node at each provided unit.
But if I use drawOtherBaseUnit for inches as well as centimeters, the UI lags and is not smooth.
Is this the correct way to implement the desired output?

MapBox - detect zoomLevel changes

How can I simply detect zoom level changes? Is it possible?
I simply need to hide my annotation views when the zoom level is not high enough.
regionDidChange:animated: is not intended for this kind of use. Is there any other way?
I need to hide my labels here:
and show them here:
This is what I currently do with my labels:
class CardAnnotation: MGLPointAnnotation {
    var card: Card

    init(card: Card) {
        self.card = card
        super.init()
        let coordinates = card.border.map { $0.coordinate }
        let sumLatitudes = coordinates.map { $0.latitude }.reduce(0, +)
        let sumLongitudes = coordinates.map { $0.longitude }.reduce(0, +)
        let averageLatitude = sumLatitudes / Double(coordinates.count)
        let averageLongitude = sumLongitudes / Double(coordinates.count)
        coordinate = CLLocationCoordinate2D(latitude: averageLatitude, longitude: averageLongitude)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }
}
var annotations = [CardAnnotation]()
mapView.addAnnotations(annotations)
Of the two main ways to add overlays to an MGLMapView, the runtime styling API is better suited for text labels and also for varying the appearance based on the zoom level. While you’re at it, you might as well create the polygons using the same API too.
Start by creating polygon features for the areas you want shaded in:
var cards: [MGLPolygonFeature] = []
var coordinates: [CLLocationCoordinate2D] = […]
let card = MGLPolygonFeature(coordinates: &coordinates, count: UInt(coordinates.count))
card.attributes = ["address": 123]
// …
cards.append(card)
Within any method that runs after the map finishes loading, such as MGLMapViewDelegate.mapView(_:didFinishLoading:), add a shape source containing these features to the current style:
let cardSource = MGLShapeSource(identifier: "cards", features: cards, options: [:])
mapView.style?.addSource(cardSource)
With the shape source in place, create a style layer that renders the polygon features as mauve fills:
let fillLayer = MGLFillStyleLayer(identifier: "card-fills", source: cardSource)
fillLayer.fillColor = NSExpression(forConstantValue: #colorLiteral(red: 0.9098039216, green: 0.8235294118, blue: 0.9647058824, alpha: 1))
mapView.style?.addLayer(fillLayer)
Then create another style layer that renders labels at each polygon feature’s centroid. (MGLSymbolStyleLayer automatically calculates the centroids, accounting for irregularly shaped polygons.)
// Same source as the fillLayer.
let labelLayer = MGLSymbolStyleLayer(identifier: "card-labels", source: cardSource)
// Each feature’s address is an integer, but text has to be a string.
labelLayer.text = NSExpression(format: "CAST(address, 'NSString')")
// Smoothly interpolate from transparent at z16 to opaque at z17.
labelLayer.textOpacity = NSExpression(format: "mgl_interpolate:withCurveType:parameters:stops:($zoomLevel, 'linear', nil, %@)",
                                      [16: 0, 17: 1])
mapView.style?.addLayer(labelLayer)
As you customize these style layers, pay particular attention to the options on MGLSymbolStyleLayer that control whether nearby symbols are automatically hidden due to collision. You may find that the automatic collision detection makes it unnecessary to specify the textOpacity property.
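For reference, a small sketch of the collision-related properties I mean (treat the exact values as an assumption to tune for your map):

// Hide labels that would collide instead of drawing them on top of each other.
labelLayer.textAllowsOverlap = NSExpression(forConstantValue: false)
// When space is tight, drop the text rather than the whole symbol.
labelLayer.textOptional = NSExpression(forConstantValue: true)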
When you create the source, one of the options you can pass into the MGLShapeSource initializer is MGLShapeSourceOption.clustered. However, in order to use that option, you’d have to create MGLPointFeatures, not MGLPolygonFeatures. Fortunately, MGLPolygonFeature has a coordinate property that lets you find the centroid without manual calculations:
var cardCentroids: [MGLPointFeature] = []
var coordinates: [CLLocationCoordinate2D] = […]
let card = MGLPolygonFeature(coordinates: &coordinates, count: UInt(coordinates.count))
let cardCentroid = MGLPointFeature()
cardCentroid.coordinate = card.coordinate
cardCentroid.attributes = ["address": 123]
cardCentroids.append(cardCentroid)
// …
let cardCentroidSource = MGLShapeSource(identifier: "card-centroids", features: cardCentroids, options: [.clustered: true])
mapView.style?.addSource(cardCentroidSource)
This clustered source can only be used with MGLSymbolStyleLayer or MGLCircleStyleLayer, not MGLFillStyleLayer. This example shows how to work with clustered points in more detail.
One option is to add the labels as a MGLSymbolStyleLayer, then determine the textOpacity based on zoom level.
If you are using the current version of the Maps SDK for iOS, you could try something like:
symbols.textOpacity = NSExpression(format: "mgl_interpolate:withCurveType:parameters:stops:($zoomLevel, 'linear', nil, %@)", [16.9: 0, 17: 1])
The dynamically styled interactive points example shows one approach to this.
Is the problem that when you zoom out, your annotations are too close together? If so, it is better to group them together than to hide them entirely. See Decluttering a Map with MapKit Annotation Clustering.
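A minimal sketch of that clustering approach, assuming an MKMapView as the linked article does (rather than Mapbox): setting a clusteringIdentifier makes MapKit merge nearby annotations automatically as you zoom out.

import MapKit

func mapView(_ mapView: MKMapView, viewFor annotation: MKAnnotation) -> MKAnnotationView? {
    // Let MapKit supply its default view for merged clusters.
    guard !(annotation is MKClusterAnnotation) else { return nil }
    let view = mapView.dequeueReusableAnnotationView(withIdentifier: "card") as? MKMarkerAnnotationView
        ?? MKMarkerAnnotationView(annotation: annotation, reuseIdentifier: "card")
    view.annotation = annotation
    view.clusteringIdentifier = "cards"   // annotations sharing this id get grouped
    return view
}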

How to load sprite sheets from web API in SpriteKit

I'm new to SpriteKit, and my question is how to load sprite sheets from a web API.
Currently, I have an API that returns a big PNG image, which contains all the sprite frames, and a JSON file with the individual frame information. (The file and JSON are generated by TexturePacker.) The API looks like this:
The format is just like a .atlasc folder, which contains a big image and a plist (XML) file.
I was thinking about downloading the image and plist file and saving them to disk to load from. However, SKTextureAtlas.init(named:) can only load from the app bundle.
In one word, I want to load a sprite animation from the web at runtime.
I have control of the API, so I can update the API to accomplish my goal.
The way I've figured out is to download the image and create a source texture, like: let sourceTexture = SKTexture(image: image)
Then use the frame information in the JSON to create the individual textures with the method init(rect:inTexture:).
Sample code is:
var textures: [SKTexture] = []
let sourceTexture = SKTexture(image: image)
for frame in spriteSheet.frames {
    let rect = CGRect(
        x: frame.frame.origin.x / spriteSheet.size.width,
        y: 1.0 - (frame.frame.size.height / spriteSheet.size.height) - (frame.frame.origin.y / spriteSheet.size.height),
        width: frame.frame.size.width / spriteSheet.size.width,
        height: frame.frame.size.height / spriteSheet.size.height
    )
    let texture = SKTexture(rect: rect, inTexture: sourceTexture)
    textures.append(texture)
}
Basically the same way as @Honghao Zhang's answer, but I was a little bit confused about the whole structure at first glance,
so I share my code snippet for later readers.
Happy coding :)
func getSpriteTextures() -> [SKTexture]? {
    guard let spriteSheet = loadSpriteJson(name: "sprite_json_file", codable: SpriteJson.self) else { return nil }
    let sourceImage = UIImage(named: "sprite_img_file.png")!
    let sourceTexture = SKTexture(image: sourceImage)
    var textures: [SKTexture] = []

    let sourceWidth = spriteSheet.meta.size.w
    let sourceHeight = spriteSheet.meta.size.h
    let orderedFrameImgNames = spriteSheet.frames.keys.sorted()

    for frameImgName in orderedFrameImgNames {
        let frameMeta = spriteSheet.frames[frameImgName]!
        let rect = CGRect(x: frameMeta.frame.x / sourceWidth,
                          y: 1.0
                              - (frameMeta.sourceSize.h / sourceHeight)
                              - (frameMeta.frame.y / sourceHeight),
                          width: frameMeta.frame.w / sourceWidth,
                          height: frameMeta.frame.h / sourceHeight)
        let texture = SKTexture(rect: rect, in: sourceTexture)
        textures.append(texture)
    }
    return textures
}
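Since the original question is about loading at runtime, here is a minimal sketch of fetching the sheet image over the network instead of from the bundle (the URL is a placeholder and error handling is omitted); the downloaded image can then be sliced exactly as in the snippets above:

import SpriteKit
import UIKit

let sheetURL = URL(string: "https://example.com/spritesheet.png")!  // placeholder endpoint

URLSession.shared.dataTask(with: sheetURL) { data, _, error in
    guard error == nil, let data = data, let image = UIImage(data: data) else { return }
    let sourceTexture = SKTexture(image: image)
    // Slice sourceTexture into frames with SKTexture(rect:in:) using the JSON metadata,
    // as shown in getSpriteTextures() above.
}.resume()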
