The application we are developing needs to show historic movement data consisting of hundreds of thousands of points. Performance degrades as the number of points per polyline increases, or the number of polylines increases. What is the best way of improving performance in this instance?
Perhaps somehow baking the polylines into map tiles and including these as additional layers? We are currently targeting iOS 9+, using ios-v3.2.0.
I am working with the Uber H3 library. Using the polyfill function, I have populated an area with H3 indexes at a specific resolution. But I don't need all of the indexes. I want to identify and remove the indexes that fall on isolated areas like jungles, lakes, ponds, etc.
Any thoughts on how that can be achieved?
I thought that if I can map all the buildings in a city in their respective indexes, I can easily identify those indexes in which no buildings are mapped.
I’d maintain a Hashmap of h3 index as the key and a list of coordinates which lie in that index as the value.
In order to address this, you'll need some other dataset(s). Where to find this data depends largely on the city you're looking at, but a simple Google search for footprint data should provide some options.
Once you have footprint data, there are several options depending on the resolution of the grid that you're using and your performance requirements.
You could polyfill each footprint and keep the resulting hexagons
For coarser data, just using geoToH3 to get the hexagon for each vertex in each building polygon would be faster
If the footprints are significantly smaller than your hex size, you could probably just take a single coordinate from each building.
Once you have the hexagons for each building, you can simply do a set intersection between your polyfill hexes and your building hexes to get the "good" set. In many cases, though, it may be easier to remove bad hexagons than to include good ones; in that case you'd need a dataset of non-building features, e.g. water and park features, and do the reverse: polyfill the undesired features and subtract those hexagons from your set.
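For the simplest variant above (one coordinate per building), a minimal sketch in Objective-C might look like this. It assumes the H3 v3 C library is linked into the project; hexesWithBuildings, polyfilledHexes, and buildingCentroids are hypothetical names for your own data:

#import <MapKit/MapKit.h>
#include "h3api.h"   // header name/path depends on how H3 was built into the project

// Keep only the polyfilled hexes that contain at least one building centroid.
NSSet<NSNumber *> *hexesWithBuildings(NSSet<NSNumber *> *polyfilledHexes,
                                      NSArray<NSValue *> *buildingCentroids,
                                      int res)
{
    NSMutableSet<NSNumber *> *buildingHexes = [NSMutableSet set];
    for (NSValue *v in buildingCentroids) {
        CLLocationCoordinate2D c = v.MKCoordinateValue;
        GeoCoord g = { degsToRads(c.latitude), degsToRads(c.longitude) };
        H3Index h = geoToH3(&g, res);              // hex containing this building
        [buildingHexes addObject:@(h)];
    }
    NSMutableSet<NSNumber *> *keep = [polyfilledHexes mutableCopy];
    [keep intersectSet:buildingHexes];             // the set intersection
    return keep;
}

The same idea works in any of the H3 bindings; if your footprints are larger than your hexagons, replace the single geoToH3 call per building with a polyfill of the footprint.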
I wonder if there are any downsides to using satellite mode in MKMapView?
Does it perform as well as the standard map type? Maybe it devours more RAM or downloads more data?
I'm asking because using only the satellite view would be a much better solution in my app, but I'd like to know about any consequences in advance.
As far as I can tell right now, I cannot see any performance decrease compared to the standard map type. However, my use case is pretty basic at the moment, so there are probably issues I cannot detect this way.
So my question is about known performance issues when using the satellite view.
EDIT
I played with both the satellite and standard maps (zoomed, jumped all over the world, etc.) and it turns out that satellite consumes less memory than the standard one. How come?
Based on doing map tile (256 × 256) captures for offline use, satellite and hybrid map tiles average around 90 KB each in rural areas, while standard map tiles average about 10 KB each in those same areas, so there is a major impact on the volume of data downloaded and therefore on the time required. Note that there is fairly wide variance in size from tile to tile depending on content, though the ratio stays pretty close.
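As a rough back-of-the-envelope illustration using those averages (the region size below is arbitrary, and real tile sizes vary with content and zoom level):

NSUInteger tileCount = 32 * 32;                                  // e.g. a 32 x 32 block of tiles
double satelliteMB = tileCount * 90.0 / 1024.0;                  // ~90 KB per satellite/hybrid tile
double standardMB  = tileCount * 10.0 / 1024.0;                  // ~10 KB per standard tile
NSLog(@"satellite ~%.0f MB vs standard ~%.0f MB", satelliteMB, standardMB);
// -> roughly 90 MB vs 10 MB for the same 1024 tiles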
I'm looking to incorporate 4 real-time scatter plots into a graph, and it has been requested that they be separated (at least in pairs) to make it easier to pick out signals. Would it be less resource intensive to have multiple plot spaces on my graph, or to shift a new set of axes and plots onto the same plot space? Is this still the case if I add 2-4 more scatter plots (for 6-8 total)?
FYI, I'm currently using CorePlot 1.6 (haven't had time to make the jump to 2.0).
If all of the plots are in the same graph, use multiple plot spaces. A plot space just defines a coordinate mapping between the data and the screen, so it doesn't use any video memory or other system resources (just a small amount of memory for the plot space object itself). Each plot and each axis is a CALayer object, so those will be the primary drivers of resource usage.
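For example, a minimal sketch against the Core Plot 1.x API; the ranges, identifier, and data source here are placeholders for your own setup:

#import "CorePlot-CocoaTouch.h"

// Adds a second plot space to an existing graph and attaches a scatter plot to it.
// The plot space is only a coordinate mapping, so extra ones are cheap; the plots
// and axes are the CALayers that actually consume resources.
static void addSecondPlotSpace(CPTXYGraph *graph, id<CPTPlotDataSource> dataSource)
{
    CPTXYPlotSpace *space2 = [[CPTXYPlotSpace alloc] init];
    space2.xRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromDouble(0.0)
                                                 length:CPTDecimalFromDouble(100.0)];
    space2.yRange = [CPTPlotRange plotRangeWithLocation:CPTDecimalFromDouble(-1.0)
                                                 length:CPTDecimalFromDouble(2.0)];
    [graph addPlotSpace:space2];

    CPTScatterPlot *plot = [[CPTScatterPlot alloc] init];
    plot.identifier = @"signal-3";                // placeholder identifier
    plot.dataSource = dataSource;
    [graph addPlot:plot toPlotSpace:space2];      // this plot now uses the second mapping
}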
I am building an app where I visualise a rather large dataset (~5 million polygons) evenly distributed over a geographic area.
Roughly 2000 polygons are displayed at once at the appropriate zoom level. When zoomed out, the data is simply hidden.
To speed up drawing of the polygons I've implemented an R*-tree that returns the polygons that overlap the area in question.
-(void)drawMapRect:(MKMapRect)mapRect zoomScale:(MKZoomScale)zoomScale inContext:(CGContextRef)context
{
    MKCoordinateRegion region = MKCoordinateRegionForMapRect(mapRect);
    NSArray *polygons = [[Polygons sharedPolygons] polygonsInRegion:region];
    for (Polygon *p in polygons) {
        // Draw polygon
    }
}
Sorting the polygons once they are loaded into memory seems solvable: I would fetch and store into the R-tree only the polygons that the user sees, since the user is only interested in features close by or in specific regions.
I have tried SQLite, but it does not seem like the right choice in this case, considering that the dataset quickly becomes fairly large (>1 GB), and maybe SQLite isn't optimal for querying features within specific regions?
What are some clever ways I can store this dataset in the bundle?
Are there any specific technologies you suggest I try out for this?
You will not be able to load the entire 1 GB dataset into memory.
You should store the data in an R-tree in the database so that you can make region queries directly when you load the data.
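One way to do this, sketched below, is to keep SQLite but use its R*Tree module (it has to be enabled in the SQLite build you link against). polygon_index is a hypothetical virtual table holding one bounding box per polygon, with the geometry itself stored in an ordinary table keyed by id:

#import <MapKit/MapKit.h>
#import <sqlite3.h>

// Schema (created once):
//   CREATE VIRTUAL TABLE polygon_index USING rtree(id, minLat, maxLat, minLon, maxLon);
static NSArray<NSNumber *> *polygonIDsInRegion(sqlite3 *db, MKCoordinateRegion region)
{
    double minLat = region.center.latitude  - region.span.latitudeDelta  / 2.0;
    double maxLat = region.center.latitude  + region.span.latitudeDelta  / 2.0;
    double minLon = region.center.longitude - region.span.longitudeDelta / 2.0;
    double maxLon = region.center.longitude + region.span.longitudeDelta / 2.0;

    // Select every bounding box that overlaps the visible region.
    const char *sql = "SELECT id FROM polygon_index "
                      "WHERE maxLat >= ? AND minLat <= ? AND maxLon >= ? AND minLon <= ?";
    sqlite3_stmt *stmt = NULL;
    NSMutableArray<NSNumber *> *ids = [NSMutableArray array];
    if (sqlite3_prepare_v2(db, sql, -1, &stmt, NULL) == SQLITE_OK) {
        sqlite3_bind_double(stmt, 1, minLat);
        sqlite3_bind_double(stmt, 2, maxLat);
        sqlite3_bind_double(stmt, 3, minLon);
        sqlite3_bind_double(stmt, 4, maxLon);
        while (sqlite3_step(stmt) == SQLITE_ROW) {
            [ids addObject:@(sqlite3_column_int64(stmt, 0))];
        }
    }
    sqlite3_finalize(stmt);
    return ids;
}

With the index on disk, only the pages the query touches are read, so the full >1 GB dataset never has to be resident in memory at once.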
I have a pool of CCSprites numbering 1200 in each of two arrays, displayGrid1 and displayGrid2. I turn them visible or invisible when showing walls or floors. Floors have a number of different textures and are not z-order dependent. Walls also have several textures and are z-order dependent.
I am getting about 6-7 frames per second when moving, which is okay because it's a turn-based isometric rogue-like. However, I am also getting a small amount of flicker, which I think is performance related, because there is no flicker on the simulator.
I would like to improve performance. I am considering using an array of CCSpriteBatchNodes for the floor, which is not z-order dependent, but I am concerned about the cost of frequently adding and removing sprites between the elements of this array, which I think would be necessary.
Can anyone please advise as to how I can improve performance?
As mentioned in the comments, you're loading many small sprite files individually, which can cause performance issues: each one becomes its own OpenGL texture, and texture dimensions get padded up to a power of 2, so memory is wasted on excess pixel data around each individual sprite. Although I believe OpenGL ES under iOS handles this padding automatically, it can come with a big performance hit. Grouping sprites together into a single texture that is correctly sized can be a tremendous boon to rendering performance.
You can use an app like Zwoptex to group all of these smaller sprite files into larger, more manageable sprite sheets / texture atlases and use one CCSpriteBatchNode per sprite sheet.
Cocos2D has pretty good support for sprite sheets and texture atlases, and converting your code to use them instead of individual files takes little effort. Creating individual sprites from a texture atlas is easy: you create the sprite by frame name instead of by file name.
CCSpriteBatchNodes group the OpenGL calls for their sprites together, a process known as batching, which means the operating system and OpenGL make fewer round trips to the GPU, and that greatly improves performance. Unfortunately, a CCSpriteBatchNode can only draw sprites that use the texture that backs it (enter sprite sheets / texture atlases).
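A minimal sketch of that conversion against the Cocos2D 1.x/2.x API, inside a layer or node subclass; "dungeon.plist"/"dungeon.png" and the frame name are placeholders for your own Zwoptex output:

// Register all frames from the atlas once (e.g. in the scene's init).
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"dungeon.plist"];

// One batch node per texture atlas; every child sprite is drawn in a single
// draw call as long as its frame comes from that atlas.
CCSpriteBatchNode *batch = [CCSpriteBatchNode batchNodeWithFile:@"dungeon.png"];
[self addChild:batch];

CCSprite *floorTile = [CCSprite spriteWithSpriteFrameName:@"floor_01.png"];
floorTile.position = ccp(32, 32);
[batch addChild:floorTile];   // keep toggling floorTile.visible as you do now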