Improving the performance of MKOverlayViews - ios

I asked a similar question here a while ago about boosting the speed at which MKOverlays are added to an MKMapView by using threading during their creation, but I soon realized that the part of the process that was really dragging me down was not the creation of the overlays, but their addition to the map. Creating many overlays (even 3000+) takes an acceptable amount of time, but adding them all to the map takes far too long (15 seconds).
I know 'what are your favorite' questions usually aren't considered 'right' for Stack Overflow, but I think this question is okay because although it is subjective in a way, there is still a 'right' answer: the one that provides a significant improvement in the performance of an MKMapView with many MKOverlayViews.
Basically, I'd love to know if anyone has any tips or tricks (any at all) for speeding up the addition of many different MKOverlays to a map view. Right now my alternative is combining them all into one big line, which is much faster, but then I lose the ability to treat each segment as an individual line (i.e. being able to show a callout for each segment), which is one of the cooler features in the app, so I'd really like to find a way to make this work. Right now, all of the lines do load, given enough time, but even after they have loaded, scrolling is a nightmare.
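For concreteness, here's a rough sketch of the compromise I'm weighing (the `segments` data and `mapView` are placeholders for my own): merge everything into one polyline for rendering speed, but remember each segment's point range so a tap could, in principle, still be mapped back to an individual segment.

```swift
import MapKit

// Placeholder for my per-segment data: one coordinate array per line segment.
let segments: [[CLLocationCoordinate2D]] = []

var allCoords: [CLLocationCoordinate2D] = []
var segmentRanges: [Range<Int>] = []   // segment i owns allCoords[segmentRanges[i]]

for segment in segments {
    let start = allCoords.count
    allCoords.append(contentsOf: segment)
    segmentRanges.append(start..<allCoords.count)
}

// One big overlay renders far faster than 3000+ small ones...
let merged = MKPolyline(coordinates: allCoords, count: allCoords.count)
mapView.addOverlay(merged)   // mapView: the MKMapView in question
// ...but per-segment callouts now require mapping a tapped point
// back to a segment through segmentRanges.
```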
I'd really like to hear your thoughts! Thanks!

Related

Which is a better option for displaying irregular shapes in Swift?

Let me start off by showing the UIImageView I have set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is, when the user taps (but doesn't release) the button, have the appropriate body part show like this:
I can achieve this using 2 options:
Use the UIBezierPath class to draw the shapes. This would take a lot of trial and error and many overlapping shapes per body part to get them fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
Crop the highlighted body parts out of the original image and position them over the UIImageView depending on which UIButton is selected. This would require one image per body part, but it's still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be the BETTER option in terms of CPU processing and memory allocation.
In other words, I'm concerned about my app lagging, and about inflating the app's storage size. I'm not concerned about how long it takes to implement; I just want to make sure my app doesn't stutter when it tries to draw all the shapes.
Thanks.
It is very, very unlikely that either of those approaches would have any significant impact on CPU or memory, particularly if, in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither approach would drop you below the maximum screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM is a drop in the bucket compared to what you have available on any iOS device released in the last 5 years, unless it's the Apple Watch.
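To illustrate that option-2 approach, a minimal sketch (the image name and `bodyImageView` are placeholders): the cutout's alpha channel does the masking, and template rendering mode plus a translucent tint color gives the semitransparent highlight.

```swift
import UIKit

// "liver_cutout" is a hypothetical PNG whose opaque pixels cover just one body part.
let cutout = UIImage(named: "liver_cutout")!.withRenderingMode(.alwaysTemplate)

let highlight = UIImageView(image: cutout)
highlight.frame = bodyImageView.bounds   // align with the base body image
highlight.contentMode = .scaleAspectFit
highlight.tintColor = UIColor.red.withAlphaComponent(0.4)  // tint applied through the alpha channel

// Add on touch-down, remove on touch-up (wired to the UIButton's control events).
bodyImageView.addSubview(highlight)
```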
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has testing functionality built in (including performance tests), so the best way is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not enough to be noticed by the user, and at the same time a lot easier to implement.
For a quick start on tests, see here.
Performance tests here.
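As a sketch of what such a comparison might look like using XCTest's measure block (all names and shapes here are illustrative, not from the question):

```swift
import XCTest
import UIKit

final class BodyPartRenderingTests: XCTestCase {

    // Hypothetical stand-in for however the bezier shape would be built.
    func makeBodyPartPath() -> UIBezierPath {
        UIBezierPath(ovalIn: CGRect(x: 50, y: 50, width: 200, height: 400))
    }

    // Option 1: time drawing the bezier path for a single body part.
    func testBezierPathDrawingPerformance() {
        measure {
            let renderer = UIGraphicsImageRenderer(size: CGSize(width: 300, height: 600))
            _ = renderer.image { _ in
                UIColor.red.setFill()
                makeBodyPartPath().fill()
            }
        }
    }

    // Option 2: time loading and tinting the cutout image.
    func testCutoutImagePerformance() {
        measure {
            _ = UIImage(named: "liver_cutout")?.withRenderingMode(.alwaysTemplate)
        }
    }
}
```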

CATextLayer changing text fast

I am working on an app that needs to change the strings of a lot of CATextLayers, but only one or two characters of each (in general, the strings are about 2-5 characters long).
At first I went with UILabels, which were extremely slow, so I tried out CATextLayer, which was a lot faster but still not fast enough. I am updating about 150 CATextLayers quite often, all at once, and it just doesn't cut it; I can feel a lag.
I then tried to go even more low-level with Core Text, drawing with a CTLine, which had about the same performance as CATextLayer, so I went back to the CATextLayers because my positioning code for Core Text wasn't perfect.
I started thinking about caching the first two characters of each string (which are always constant) and only changing the other three characters in a layer with smaller bounds, which I assume would be a bit faster. But will it? After all, it still has to composite with the other text layer, and all 150 text layers still have to be updated.
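Roughly, the split I have in mind would look like this (positions and sizes are made up):

```swift
import UIKit

// One layer for the constant prefix, rendered once at setup...
let prefixLayer = CATextLayer()
prefixLayer.string = "AB"
prefixLayer.contentsScale = UIScreen.main.scale
prefixLayer.frame = CGRect(x: 0, y: 0, width: 30, height: 20)

// ...and a smaller layer for the 3 characters that actually change.
let suffixLayer = CATextLayer()
suffixLayer.contentsScale = UIScreen.main.scale
suffixLayer.frame = CGRect(x: 30, y: 0, width: 45, height: 20)

// Both would be added as sublayers of the host view's layer.
func update(to suffix: String) {
    // Only this smaller backing store gets re-rendered on each change.
    suffixLayer.string = suffix
}
```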
Does anybody have any advice? How would you approach it?
Attached is a screenshot from instruments showing that the problem lies in the performance of CATextLayer:
Bitmap fonts are probably the best way to solve this problem, as they're far and away the most performant option for font drawing of this nature. But you need to pre-render them at the scale you want to get the best out of them, both visually and in terms of performance.
And you might be best off using Sprite Kit, as it has native handling of bitmap fonts. Here's a GitHub repo with a helper that makes it easier to use bitmaps rendered by a common tool for creating them: https://github.com/tapouillo/BMGlyphLabel
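If you'd rather stay in Core Animation than adopt Sprite Kit, you can get the same effect by hand: pre-render each character once, then treat every "text" update as a cheap swap of layer contents. A minimal sketch of that idea (this is not BMGlyphLabel's API, just the underlying principle):

```swift
import UIKit

var glyphCache: [Character: CGImage] = [:]

// Render each character exactly once, at the font you need.
func cachedGlyph(_ ch: Character, font: UIFont) -> CGImage? {
    if let cached = glyphCache[ch] { return cached }
    let text = NSAttributedString(string: String(ch), attributes: [.font: font])
    let renderer = UIGraphicsImageRenderer(size: text.size())
    let image = renderer.image { _ in text.draw(at: .zero) }
    glyphCache[ch] = image.cgImage
    return image.cgImage
}

// Each character position is a plain CALayer; "changing text" is then just
// swapping bitmaps, with no text layout or rasterization per update:
// characterLayer.contents = cachedGlyph("7", font: someFont)
```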

Drag , Pinch and zoom images in UIView

I am adding multiple UIImageViews to a UIView to perform operations such as dragging, pinching, and zooming images. I have added gesture recognizers to all of the UIImageViews. Since I'm adding multiple images (UIImageViews), it has brought down the performance of my app. Does anyone have a better solution for this? Thanks
The adding of many images should not, generally, cause enough of a problem that your app would slow down. To illustrate the point with an absurd example, I added 250 (!) image views, each with three gestures, and it works fine on an iPad 3, including the animating of the images into their final resting place/size/rotation.
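For reference, the per-image-view setup I tested looked roughly like this (a Swift sketch; the class and handler names are mine, not from the question):

```swift
import UIKit

class DraggableImagesViewController: UIViewController {

    func addGestures(to imageView: UIImageView) {
        imageView.isUserInteractionEnabled = true   // off by default on UIImageView
        imageView.addGestureRecognizer(UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
        imageView.addGestureRecognizer(UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))
        imageView.addGestureRecognizer(UIRotationGestureRecognizer(target: self, action: #selector(handleRotate(_:))))
    }

    @objc func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let view = gesture.view else { return }
        let t = gesture.translation(in: view.superview)
        view.center = CGPoint(x: view.center.x + t.x, y: view.center.y + t.y)
        gesture.setTranslation(.zero, in: view.superview)  // keep deltas incremental
    }

    @objc func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let view = gesture.view else { return }
        view.transform = view.transform.scaledBy(x: gesture.scale, y: gesture.scale)
        gesture.scale = 1
    }

    @objc func handleRotate(_ gesture: UIRotationGestureRecognizer) {
        guard let view = gesture.view else { return }
        view.transform = view.transform.rotated(by: gesture.rotation)
        gesture.rotation = 0
    }
}
```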
Two observations:
Are you doing anything computationally intensive with your image views? For example:
Simply adding shadows with Quartz 2D has a huge performance impact because it's actually quite computationally expensive. In the unlikely event that you're using layer shadows, you can try using shouldRasterize, which can mitigate the problem but not solve it (see the sketch after these examples). There are other (kludgy) techniques for doing computationally efficient shadows if that's the problem.
Another surprisingly computationally expensive case is if your images are (for example) PNGs with transparency, or if you have reduced the alpha/opacity of your views.
What is the resolution/size of the images being loaded? If the images are very large, the image view will render them according to the contentMode, but it can be very slow if you're taking large images and scaling them down. You should use screen resolution images if possible.
These are just a few examples of things that seem so innocuous, but are really quite computationally expensive. If you're doing any Quartz embellishments on your image views, I'd suggest temporarily paring them back and see if you see any changes.
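For example, if layer shadows turn out to be the culprit, a couple of lines are worth trying (a sketch, not a guaranteed fix; `imageView` stands in for whichever view is affected):

```swift
import UIKit

// Cache the rendered layer (shadow included) as a bitmap between frames.
imageView.layer.shouldRasterize = true
imageView.layer.rasterizationScale = UIScreen.main.scale  // otherwise it rasterizes at 1x

// Giving Core Animation an explicit shadow path also avoids expensive
// offscreen passes to compute the shadow's shape from the layer contents.
imageView.layer.shadowPath = UIBezierPath(rect: imageView.bounds).cgPath
```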
In terms of diagnosing the performance problems yourself, I'd suggest watching the following two WWDC videos:
WWDC 2012 - #211 - Building Concurrent User Interfaces on iOS includes a fairly pragmatic demonstration of Instruments to identify the source of performance problems. This video is clearly focused on one particular solution (the moving of computationally expensive processes into the background and implementing a concurrent UI), which may or may not apply in this case, but I like the Instruments demonstration.
WWDC 2012 - #235 - iOS App Performance: Responsiveness is a more focused discussion on how one measures responsiveness in apps and techniques to address problems. I don't find the instruments tutorial to be quite as good as the prior video, but it does go into more detail.
Hopefully this can get you going. If you are still stumped, you should share some relevant code regarding how the views are being added/configured and what the gestures are doing. Perhaps you can also clarify the nature of the performance problem (e.g. is it in the initial rendition, is it a low frame rate while the gestures take place, etc.).

Procedurally animating the growing of a 2D plant

I'm trying to figure out the best way to procedurally animate the growing of a 2D plant in iOS.
I want the plant to animate to give an encroaching feeling to the user.
Basically, to animate the growing of a branch, with little buds that will eventually animate into full grown leaves.
To breathe a little life into it, I'd also like the plant to sway a bit as it grows, rather than feeling hand painted on the screen.
One way I've thought of is to use CGPaths and Bezier curves to create the shape of the stalk and the leaves, but I'm not entirely sure how to animate the drawing of the paths. Once I have the "drawing" of the stalk, I'd like to "plant" little buds at certain points on the stalk as the line is growing/animating, and these buds will also start to grow outwards from the plant.
Any suggestions on what route to take to accomplish this task? I'd prefer to procedurally animate as opposed to hand drawing each frame and animating that way. My reasoning is that I imagine procedurally animating will be less time consuming, give me more control over different aspects of the animation, and be reusable in other projects (not to mention, it will be fun to program!)
I've come across this blog posting for the drawing of animated lines:
http://oleb.net/blog/2010/12/animating-drawing-of-cgpath-with-cashapelayer/
Perhaps this would be a starting approach for achieving the results I want; I need to sit down and go through the code he posted.
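From a first skim, the core of that technique seems to be a CAShapeLayer whose strokeEnd is animated from 0 to 1, so the path appears to draw itself. Something like this sketch (the path and the host `view` are placeholders):

```swift
import UIKit

// A placeholder curve standing in for the stalk's shape.
let stalkPath = UIBezierPath()
stalkPath.move(to: CGPoint(x: 160, y: 480))
stalkPath.addQuadCurve(to: CGPoint(x: 180, y: 120), controlPoint: CGPoint(x: 60, y: 300))

let stalk = CAShapeLayer()
stalk.path = stalkPath.cgPath
stalk.strokeColor = UIColor.green.cgColor
stalk.fillColor = nil
stalk.lineWidth = 4
view.layer.addSublayer(stalk)

// Animate the visible portion of the stroke from nothing to the full path.
let grow = CABasicAnimation(keyPath: "strokeEnd")
grow.fromValue = 0
grow.toValue = 1
grow.duration = 5
stalk.strokeEnd = 1   // set the final model value so the result sticks
stalk.add(grow, forKey: "grow")
```

Buds could presumably be attached at fractions along the same path and given their own animations as the stroke passes them.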
Also, maybe this is something that would be easier to do using cocos2d or something similar? Or perhaps QuartzCore and Core Animation will work fine.
Thanks for any suggestions you might have, any information is helpful at this point.
(Great question! Posting this as a "community wiki" since it is not an answer but just some references and I didn't want the links to get screwed up in comments. Perhaps people want to add to this?)
I did a simple search on "procedural tree branching code" and there were lots of interesting hits - really rich area.
A post on gamedev.stackexchange pointed to this great resource: Algorithmic Botany
Also Snappy Tree is pretty amazing and the source code is available.
These two also sound interesting:
TReal is a program capable of generating realistic 3D tree models.
Arbaro is an implementation of the tree generating algorithm described in Jason Weber & Joseph Penn: "Creation and Rendering of Realistic Trees" written in Java.
Perhaps more accessible to the OP, and with a less complex result, are these ActionScript tutorials on fractal trees. ActionScript drawing code is pretty easily translated to Core Graphics.

OpenCV tracking people from overhead view

I have a broad but interesting OpenCV question and I'm wondering where to start.
I am looking for any strategies or white papers that might help.
I need to get the position of people sitting at a conference table from a fixed overhead view. Ideally, I will assign a persistent ID to each person, and maintain a list of people with ID and coordinates. This problem could be easy in a specific case - for example, if designed for a single conference room table - but it gets harder in the general case, especially with people entering and leaving the scene.
My first question: is it a detection or a motion tracking problem? Or some combination of the two?
Well, it seems like both to me. I would think you would need to take a long-running average of the visible area, which becomes the background. Then, based on that background information, you can track the movement of other objects.
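As a sketch of that running-average idea, here it is in plain Swift over raw grayscale buffers, just to show the shape of the algorithm (in practice OpenCV's own background-subtraction facilities would do this work; the structure and parameter values here are illustrative):

```swift
// Maintain a per-pixel exponential moving average as the "background";
// pixels that differ from it by more than a threshold are foreground.
struct BackgroundModel {
    private var background: [Double]
    private let learningRate: Double

    init(firstFrame: [UInt8], learningRate: Double = 0.01) {
        background = firstFrame.map(Double.init)
        self.learningRate = learningRate
    }

    mutating func foregroundMask(for frame: [UInt8], threshold: Double = 30) -> [Bool] {
        var mask = [Bool](repeating: false, count: frame.count)
        for i in frame.indices {
            let pixel = Double(frame[i])
            mask[i] = abs(pixel - background[i]) > threshold
            // Slowly absorb scene changes so lighting drift becomes background.
            background[i] += learningRate * (pixel - background[i])
        }
        return mask
    }
}
```

Grouping the foreground pixels into blobs then gives candidate people to track from frame to frame.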
Assigning an ID may become difficult if objects merge together (at least as far as the camera is concerned) and then separate again, say someone removing a hat, placing it down, and then putting it back on.
But with all that in mind, it is possible, even if it presents a challenge. I once saw a similar project tracking people in a train station using a similar approach (it was in a lecture, so I can't provide a link, sorry).
