Is it better, in terms of memory management, battery life, and processor usage, to remove currently invisible MKAnnotations from an MKMapView, or not?
I'm talking about a BIG number of annotations (say ~1000), with only about 20 visible in the current region at any given time. Should I let MapKit do its job hiding the pins, or should I handle adding/removing annotations depending on the visible region myself?
The annotations that are offscreen will not cause large memory usage. MKAnnotation objects are designed to be very lightweight, and you should try to keep them small. The memory hog is the associated view (the MKAnnotationView). If you have more than a few hundred visible, it will cause a slowdown and heavy memory use, with a possible crash on older devices. 1000 will slow down any device but probably won't crash it.
To sum up, the offscreen annotations don't matter. The problems arise when you have too many onscreen. For that you have to remove annotations to get better performance. How you decide to remove and replace annotations is a much more difficult question.
I'd make sure to show no more than 100 annotations at a time. If you let MKMapView handle it, how do you prevent the user from zooming out to the whole world, causing the map view to display all 1000 of your annotations?
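For illustration, here is a minimal sketch of the do-it-yourself culling in the map view's delegate. The self.allAnnotations property is an assumption, standing in for wherever you keep the full set of ~1000 annotations:

    - (void)mapView:(MKMapView *)mapView regionDidChangeAnimated:(BOOL)animated {
        MKMapRect visibleRect = mapView.visibleMapRect;
        NSMutableArray *toAdd = [NSMutableArray array];
        NSMutableArray *toRemove = [NSMutableArray array];

        for (id<MKAnnotation> annotation in self.allAnnotations) {
            MKMapPoint point = MKMapPointForCoordinate(annotation.coordinate);
            BOOL onScreen = MKMapRectContainsPoint(visibleRect, point);
            BOOL onMap = [mapView.annotations containsObject:annotation];

            if (onScreen && !onMap) {
                [toAdd addObject:annotation];      // entered the visible region
            } else if (!onScreen && onMap) {
                [toRemove addObject:annotation];   // left the visible region
            }
        }
        [mapView addAnnotations:toAdd];
        [mapView removeAnnotations:toRemove];
    }

Note that this alone doesn't solve the zoomed-out case; you'd still want to cap or cluster when visibleRect covers a huge area.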
I have an iOS 8 app that uses MapKit. I recently discovered a performance problem when running video decompression in addition to displaying a map. The app was unable to keep up with the flow of incoming data when using the satellite tile set, but the problem vanished the moment I swapped to the default MapKit tile set. The app is not CPU-bottlenecked when the problem occurs. It makes sense to me that the default (vector) map tiles are easier to display, but I am confused about why the issue happens in the first place.
The problem seems strange to me because there is no movement or manipulation of the map while it is occurring. I would understand the issue better if it only happened when manipulating the map in addition to rendering video, but it exists even with no user input. My ability to analyze the system is constrained because we use a hardware accessory, so some Instruments are unavailable when profiling wirelessly. I am not using a high number of annotations, overlays, or other objects; we have only a few custom annotations and overlays in use. There are existing apps that do this exact combination of decoding and maps without the performance problem, so I suspect it's a configuration issue.
Are there certain attributes on the MKMapView that I can set to improve performance? I am at a loss as to what to investigate further since I cannot make the problem happen with the GPU Instrument active and the CPU doesn't appear to be the constraint.
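For what it's worth, here are a few MKMapView properties (iOS 7/8-era) that reduce how much the map has to render. Whether they help alongside video decoding is an open question to profile, not a known fix:

    mapView.showsBuildings = NO;        // skip 3D building extrusions
    mapView.showsPointsOfInterest = NO; // fewer labels to draw
    mapView.pitchEnabled = NO;          // no perspective re-rendering
    mapView.rotateEnabled = NO;         // no rotation re-rendering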
I'm developing an enterprise map-based application that needs to gather info from a large workforce and display it all on each worker's iPad. So the number of markers on the map can quickly grow very large (several thousand). In addition, each marker is backed by an NSManagedObject subclass that is held in memory while the marker exists.
I'm using the Google Maps iOS SDK, and the problem is that even without any markers, just panning and zooming around causes really large increases in memory usage. The application's dirty memory size is 100 MB upon launch (measured with the Allocations instrument). A little panning and zooming quickly pushes it up to 300 MB, and when I stop, the memory never goes back down. Similarly, if I add a lot of markers and then remove them, there is again no drop in memory (when I remove markers, I make sure not to hold any references to the objects either). The only time memory goes down is when I change map types: if I pan and zoom a lot in street view and then switch to satellite view, there is a sudden 50 MB+ drop in dirty memory.
So I was wondering if anyone has any tips on handling memory when using Google Maps, or any info on how Google Maps manages/releases memory?
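For reference, a sketch of the cleanup I'm already doing (self.markers is an assumed array tracking the GMSMarkers I've added). Detaching a marker via its map property and GMSMapView's clear are the SDK's standard removal calls; neither seems to bring the dirty memory back down:

    // Detach each marker from the map, then drop our references to them.
    for (GMSMarker *marker in self.markers) {
        marker.map = nil;
    }
    [self.markers removeAllObjects];

    // Alternatively, remove every marker and overlay in one call:
    [self.mapView clear];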
More specifically... does iOS's MapKit framework generalize (simplify) MKPolyline and MKPolygon overlays before rendering?
Displaying several polylines composed of hundreds of points each looks quite rough in the simulator. Am I hitting the limit of the iPhone's drawing performance, or does MapKit not automatically generalize the data, thereby maxing out the device's draw performance?
I know I could make a test case to compare, but creating/integrating such an algorithm just for a test case is quite labor-intensive. I am hoping someone has some insight on this before I need to resort to that.
thanks!
I don't know about the generalization part, but you might be able to keep performance up by using another approach:
Multiple MKPolygons and the like cause heavy memory usage when drawn on the map. This is because MapKit does not reuse overlays: annotation views can be reused via their reuseIdentifier, but overlay views cannot. Each overlay creates a new layer, an MKOverlayView, on the map with the overlay in it. That way memory usage rises quite fast and scrolling becomes... let's say sluggish to almost impossible.
Therefore there is a work-around: instead of plotting each overlay individually, you can add all of your MKOverlays (in your case MKPolygons and MKPolylines) to one MKOverlayView. This way you are in fact creating only one MKOverlayView, so there is no need to reuse anything.
The answer in this question contains a link to the Apple Developer Forums thread with the work-around.
I've used this approach with multiple MKPolygons and it works great. I'm also planning to use MKPolylines in my app in the future; I believe it is possible to draw them all in one MKOverlayView as well.
Using this approach you might not need to generalize the drawing of the MKPolygon overlays at all. It is also a lot easier to implement and test, IMO ;-)
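A rough sketch of such a container overlay (the class name MultiShapeOverlay is made up for illustration). It satisfies the MKOverlay protocol by reporting a bounding rect that encloses every shape it holds:

    @interface MultiShapeOverlay : NSObject <MKOverlay>
    @property (nonatomic, strong) NSArray *shapes; // MKPolygons and MKPolylines
    @end

    @implementation MultiShapeOverlay

    - (MKMapRect)boundingMapRect {
        // Union of all the contained shapes' bounding rects.
        MKMapRect rect = MKMapRectNull;
        for (id<MKOverlay> shape in self.shapes) {
            rect = MKMapRectUnion(rect, shape.boundingMapRect);
        }
        return rect;
    }

    - (CLLocationCoordinate2D)coordinate {
        // Center of the combined bounding rect.
        MKMapRect rect = self.boundingMapRect;
        return MKCoordinateForMapPoint(MKMapPointMake(MKMapRectGetMidX(rect),
                                                      MKMapRectGetMidY(rect)));
    }

    @end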
After updating to iOS 6 I have noticed severe performance decreases when panning or zooming an MKMapView with multiple overlays. An app I created has approximately 600 polygon overlays of various colours and ran lag-free on iOS 5 (even on older iOS devices), but on iOS 6 it lags badly when zooming and panning, even on the latest devices.
My hunch is that this is because the device now has to create the map dynamically (since it's vector-based) rather than just display pre-rendered tiles onscreen.
Has anyone got any ideas to reduce the lag experienced when panning or zooming the map?
Some extra info: this low frame rate also occurs while zooming or panning areas where the overlays are not displayed onscreen at all, so it is not caused by the creation of the overlays as they come onscreen.
You can try combining all of your overlays into a single one. This can dramatically boost performance.
The idea is to create one overlay with a bounding box that encompasses all of your polygons, so your mapView:viewForOverlay: delegate method will always be called. Give the overlay a property that holds all of your polygons. Then, in your overlay view's drawMapRect:zoomScale:inContext: method, test each polygon for intersection with mapRect and draw it only if needed. This matters because you don't want to be drawing polygons that are offscreen.
This strategy is based on Apple's own MapKit example projects. Check out HazardMap for an example of drawing several objects in a single MKOverlayView, and BreadCrumb for an example of how to efficiently test polygons for intersection with the current mapRect in the drawMapRect: method.
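A sketch of the overlay view side, along the same lines as the container overlay sketched earlier (MultiShapeOverlay and its shapes property are illustrative names, and the fill color is arbitrary):

    @interface MultiShapeOverlayView : MKOverlayView
    @end

    @implementation MultiShapeOverlayView

    - (void)drawMapRect:(MKMapRect)mapRect
              zoomScale:(MKZoomScale)zoomScale
              inContext:(CGContextRef)context
    {
        MultiShapeOverlay *multi = (MultiShapeOverlay *)self.overlay;
        for (MKPolygon *polygon in multi.shapes) {
            // Skip polygons that don't intersect the tile being drawn.
            if (!MKMapRectIntersectsRect(mapRect, polygon.boundingMapRect)) {
                continue;
            }
            // Convert the polygon's map points into a CGPath and fill it.
            CGMutablePathRef path = CGPathCreateMutable();
            MKMapPoint *points = polygon.points;
            for (NSUInteger i = 0; i < polygon.pointCount; i++) {
                CGPoint p = [self pointForMapPoint:points[i]];
                if (i == 0) {
                    CGPathMoveToPoint(path, NULL, p.x, p.y);
                } else {
                    CGPathAddLineToPoint(path, NULL, p.x, p.y);
                }
            }
            CGPathCloseSubpath(path);
            CGContextAddPath(context, path);
            CGContextSetRGBFillColor(context, 0.0, 0.4, 1.0, 0.3);
            CGContextFillPath(context);
            CGPathRelease(path);
        }
    }

    @end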
I have a minimalistic MapKit tech demo and it lags noticeably on an iPad 3 running iOS 6. Profiling reveals that it's CPU-bound, but only 0.2% of the time is spent in my own code. The big culprits in my case are rendering roads, followed by rendering labels, both done by MapKit. I am showing downtown San Francisco at a 5 km scale, so there are a lot of roads and labels to render.
So the moral of the story is: iOS 6 maps are SLOW. I can't tell you how this compares to iOS 5 or to an iPad 2, though. But it's lagging, and I am barely doing any work of my own at all.
P.S.: Open Instruments and use the Time Profiler. Make a recording and drill down to find your culprits, then check 'Hide System Libraries' to find out how much of the lag is your responsibility versus MapKit's. Optimize only as needed.
I'm currently trying to create an endlessly scrolling background with a character who jumps up and down and collects items that come along the way.
My problem lies with the items that need to be created and then moved.
I've looked at CCSpriteBatchNode and NSMutableArray but I'm not sure which to use.
I reviewed Steffen Itterheim's example from his book, in which all bullets are created at initialization and then reused when needed.
I thought that this would be inefficient and taxing on the iPhone. Also, aren't all the bullets continuously updated even if they are not visible, using up even more of the iPhone's limited memory and CPU?
On the other hand, if I had an NSMutableArray, added items as I needed them, and updated only the select few that currently exist, would that be more efficient?
Thus, my main problem is choosing between NSMutableArray and CCSpriteBatchNode, and finding out which is more efficient for creating numerous, continuously updating objects.
Thank you!
If you are using cocos2d, CCSpriteBatchNode is much better if you plan to have many of your objects on the screen at the same time. A batch node renders all of its children in a single draw call, since they share one texture, instead of issuing a separate draw call per sprite. This saves a lot of precious CPU resources, which is why CCSpriteBatchNode is commonly used for bullets: there are usually many on screen at once.
Also, if your objects appear frequently, even when only one or two are on screen at a time, the batch node keeps its texture cached rather than re-creating it, still saving CPU resources.
I recommend sticking with the cocos2d classes when you can, as they are designed for exactly this kind of performance, over general-purpose Foundation classes like NSMutableArray.
However, if you insist on using an array, consider cocos2d's CCArray instead of NSMutableArray. But CCSpriteBatchNode is probably going to be your best bet.
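A minimal cocos2d sketch, assuming a texture file named items.png. Every child of the batch node shares that texture, so all of them are rendered in a single draw call:

    CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"items.png"];
    [self addChild:batch];

    for (int i = 0; i < 50; i++) {
        // Children of a batch node must use the batch node's texture.
        CCSprite *item = [CCSprite spriteWithTexture:batch.texture];
        item.position = ccp(arc4random_uniform(480), arc4random_uniform(320));
        [batch addChild:item];
    }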
Nodes entirely outside the screen are culled quickly, and if you set a node's visible property to NO it is skipped right away, which means there's basically no performance penalty for invisible nodes.
Caching many objects is actually faster than creating and releasing them at runtime, even if it means you'll always have 400 or so of them in memory. That requires maybe 200 KB of memory at most, and it avoids frequent alloc/dealloc cycles, which are particularly costly on 1st/2nd generation devices.
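A rough sketch of that pre-allocate-and-hide pattern (the file name and pool size are illustrative):

    // Create the sprites once, hidden, instead of alloc/dealloc at runtime.
    CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"item.png"];
    [self addChild:batch];

    CCArray *pool = [CCArray arrayWithCapacity:400];
    for (int i = 0; i < 400; i++) {
        CCSprite *item = [CCSprite spriteWithTexture:batch.texture];
        item.visible = NO;             // hidden nodes cost essentially nothing
        [batch addChild:item];
        [pool addObject:item];
    }

    // To spawn an item, reuse a hidden sprite instead of allocating a new one.
    CCSprite *sprite;
    CCARRAY_FOREACH(pool, sprite) {
        if (!sprite.visible) {
            sprite.position = ccp(480, 160);
            sprite.visible = YES;
            break;
        }
    }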