Is deeply nested view hierarchy in iOS ok? - ios

In Android it is recommended to keep the view hierarchy depth under 10, and strictly under 20, or else your app is very likely to perform poorly or crash (the UI thread has only a tiny 8-16 kB of stack space).
Does this hold true for iOS, with or without Auto Layout? Why or why not?

Views in iOS are composited on the GPU by Core Animation and have excellent performance. I've personally gone as deep as 50 views with no problems. I don't recommend making the hierarchy deeper than necessary, but when the situation calls for it, it is viable.

Update:
It is actually quite bad if you are using Auto Layout, because constraint solving becomes expensive as the hierarchy grows (Cassowary is "linear" only in the sense that it solves systems of linear equations; its solving time grows superlinearly with the number of constraints).
http://floriankugler.com/2013/04/22/auto-layout-performance-on-ios/
http://pilky.me/36/
UIView itself is very fast, as the posts here suggest. I ended up replacing Auto Layout with an alternative layout system.
TLDR: UIView/CALayer is fast. Autolayout is not.
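For reference, a manual-layout alternative can be as simple as overriding layoutSubviews and computing frames directly; there is no constraint solving at all. This is only a minimal sketch (the view names are illustrative, not from any answer above):

```swift
import UIKit

// A minimal manual-layout container: frames are computed once per
// layout pass, with no constraint system involved.
final class SplitRowView: UIView {
    let iconView = UIImageView()
    let titleLabel = UILabel()

    override init(frame: CGRect) {
        super.init(frame: frame)
        addSubview(iconView)
        addSubview(titleLabel)
    }

    required init?(coder aDecoder: NSCoder) {
        fatalError("init(coder:) has not been implemented")
    }

    override func layoutSubviews() {
        super.layoutSubviews()
        let inset: CGFloat = 8
        let iconSide = bounds.height - 2 * inset
        // Square icon on the left, label filling the remaining width.
        iconView.frame = CGRect(x: inset, y: inset, width: iconSide, height: iconSide)
        titleLabel.frame = CGRect(x: iconView.frame.maxX + inset,
                                  y: inset,
                                  width: bounds.width - iconView.frame.maxX - 2 * inset,
                                  height: iconSide)
    }
}
```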

I've never seen any such recommendation for iOS. As always, you should implement your code and view hierarchy as straightforwardly as possible. After that, measure your performance and tweak only if needed. Avoid premature optimization as much as possible.

Related

SKView in UICollectionView terrible performance

Adding an empty SKView to a UICollectionView cell makes scrolling almost impossible on an iPhone 6 (iOS 9.x). Let's say the collection view contains 6 items, of which the first 3 are visible; scrolling horizontally to the next 3 items takes about 3 seconds and is very jerky.
Here's the relevant part of the code:
func collectionView(collectionView: UICollectionView, cellForItemAtIndexPath indexPath: NSIndexPath) -> UICollectionViewCell {
    let cell = collectionView.dequeueReusableCellWithReuseIdentifier(ACDFilterCollectionViewCellConstants.CellIdentifier, forIndexPath: indexPath) as! ACDFilterCollectionViewCell
    let filterType = filters[indexPath.row]
    cell.titleLabel.text = filterType.rawValue
    let skview = SKView(frame: cell.frame)
    cell.filterView.addSubview(skview)
    return cell
}
What should I do to have a smooth scrolling with SKViews inside my UICollectionViewCells?
On iOS 8.x I'm getting an exception when rendering this view.
I don't know what the specific answer is w.r.t. the cause of the poor performance, but I can give you some pointers as to how to proceed.
The first thing to do is to profile the code. Use Instruments to figure out where the app is spending the most time when you start scrolling. Is setting up the SKViews as they come onto the screen the expensive part? Or is it drawing all those sprite views over and over? Is it something in the sprite views that's taking a long time to draw? I hear that SKShapeNode can be a drag on performance. What happens if you remove all the subnodes from the SKViews?
Once you know where the bottleneck is, you can decide how to proceed. From your comment it sounds like you're only using SKView to render some paths, and maybe you don't need those to animate once they're drawn? If so, perhaps you can use a single offscreen SKView to render your path into an image, and then insert that image into the collection cell. People use collections full of images all the time, and they scroll beautifully.
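The offscreen-rendering idea might be sketched like this, in the Swift 2 style of the question. The function name and styling are hypothetical; the key calls are SKView's textureFromNode(_:) and SKTexture's CGImage() (the latter requires iOS 9):

```swift
import SpriteKit
import UIKit

// Render an SKShapeNode path once, offscreen, and return a UIImage
// that a plain UIImageView in the collection cell can display.
func renderPathToImage(path: UIBezierPath, size: CGSize) -> UIImage? {
    let scene = SKScene(size: size)
    let shapeNode = SKShapeNode(path: path.CGPath)
    shapeNode.strokeColor = SKColor.whiteColor()
    scene.addChild(shapeNode)

    // A throwaway SKView used purely for offscreen rendering.
    let renderView = SKView(frame: CGRect(origin: CGPoint(x: 0, y: 0), size: size))
    renderView.presentScene(scene)

    guard let texture = renderView.textureFromNode(scene) else { return nil }
    return UIImage(CGImage: texture.CGImage())
}
```

With that in place, cellForItemAtIndexPath only assigns a cached image, which collections handle very well.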
My initial suggestion would be simply this: Scale back your expectations of what's possible in terms of framework interactions and relationship building between them.
Look into CAShapeLayer and see if it can do enough of what you're attempting with SKShapeNode and shaders. There is a shouldRasterize feature on CAShapeLayer, and you can then apply things like Core Image filters, which might get you close to the desired result without using SKViews, SKShapeNode and SKShaders.
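A sketch of that CAShapeLayer route (the function and styling values are illustrative, not from the question):

```swift
import UIKit

// A CAShapeLayer drawing the same kind of path an SKShapeNode would,
// rasterized once so scrolling only recomposites a cached bitmap.
func makeShapeLayer(path: UIBezierPath, frame: CGRect) -> CAShapeLayer {
    let layer = CAShapeLayer()
    layer.frame = frame
    layer.path = path.CGPath
    layer.strokeColor = UIColor.whiteColor().CGColor
    layer.fillColor = nil
    layer.lineWidth = 2

    // Cache the rendered path as a bitmap at screen scale.
    layer.shouldRasterize = true
    layer.rasterizationScale = UIScreen.mainScreen().scale
    return layer
}
```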
The problem you're up against is artificially inflated expectations of quality and performance within Apple's newer APIs and Frameworks, and their ability to interact and contain one another.
This is not your fault.
Apple has mastered the art of Showmanship and Salesmanship, long ago.
For a very long time these arts were reserved for physical product releases - the pitching and promoting of the devices people buy.
Sometime around the illness and death of the late, great Mr Jobs (may he rest in peace), a transition was underway to promote far more actively the rapid development and early release of software to the App Store.
Much of this was part of the war on Android and Google. There must have been some consensus (probably among the supplier-focused channels of the company that now lead it) that getting as many apps into the store as possible was a way to nullify and/or beat Android.
To this end iOS 7 was created, UICollectionViews, AutoLayout and all manner of other "wonderful" goodies designed to remove the need (and concern) for design and designers for most app creation.
These facilities give the impression that anything and everything can be done with APIs, and that there's little to no need to consider design, nor even technical design.
So developers plod into the frameworks, bright-eyed, bushy-tailed, and optimistic about their chances of realising anything they conceive... if they simply use a blend of the frameworks and APIs available to them, via the relationships they perceive to exist between frameworks from their understanding of the WWDC videos and other Apple promotional materials.
Unfortunately the aforementioned salesmanship and showmanship means that what you perceive to be possible is actually limited to the literal demonstrations you're shown. The inferred and suggested possible potential of the frameworks for interwoven relationships and success are just that... POSSIBLE POTENTIAL future functionality.
Most everything other than what is literally demonstrated is broken, or in some other way massively constrained and ill-performing. In some cases that which is demonstrated is also subsequently broken or doesn't make it to the release.
So if you're taking the time to imagine something like combining SKViews within UICollectionViews, and that this relationship should work, that assumption is your problem.
This gets far worse when you start thinking about SKShapeNodes and shaders, both of which have known issues within SpriteKit. Obviously you've found that SKViews inside UICollectionViews aren't performant. That is the first problem. You're going to have far more problems gleaning performance from SKShapeNodes and your shaders later, and then the issue of the immutability of SKShapeNode is going to crop up for your animations, too.
It's not an incorrect assumption; you've been led to believe this modularity of frameworks is a thing, and a huge feature of the massive frameworks of iOS.
However it's all early days, and that's not mentioned. Sprite Kit is fundamentally broken in iOS 9, you can read more about many of its problems here:
https://forums.developer.apple.com/thread/14487
here:
https://forums.developer.apple.com/thread/20758
here:
https://forums.developer.apple.com/thread/17463
There has been no formal communication that I'm aware of about the cause and/or nature of the issues blighting those using SpriteKit in iOS 9, despite the noise about its many problems.
One enormous issue that has plagued many is that any use of UIKit and SKViews together (in either direction they can interrelate) causes huge performance problems in iOS 9. At this point there is no apparent way around this problem other than keeping the two frameworks separate.
One big change on iOS 9 vs. iOS 8 is the switch from OpenGL to Metal for SpriteKit and SceneKit. I suggest you try overriding that default, and switch back to OpenGL, by adding the key/value pair PrefersOpenGL/YES to your Info.plist. See Technical Q&A QA1904, Specifying the renderer for SpriteKit and SceneKit.
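Per QA1904, that opt-out is a single entry in the app's Info.plist:

```xml
<key>PrefersOpenGL</key>
<true/>
```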

Increase drawing performance on an iPhone 4

My first question on StackOverflow, so I'm feeling kind of shy...
I've been working on and tweaking a custom control for some weeks now. It uses ±6 subclassed CALayers for some fancy animations to give the best possible user feedback. Additionally there are 2 animated UIViews, adding up to some heavy animation and redrawing during user interaction.
I managed to get the responsiveness and performance on an iPhone 5S up to 50+ fps. But on an iPhone 4 it really makes me cry: 8-15 fps. I tried to figure out what causes this awful performance, but so far I have found nothing other than the fact that I might be asking too much from Core Animation.
Using layer.drawsAsynchronously = YES; on all CALayers increased the responsiveness by A LOT. And I also took out all unnecessary animations (including implicit animations). But it still isn't enough. The performance on an iPhone 4 is still not the way I want it.
I notice a lot of improvement when I switch to layer.opaque = YES; but due to the design of my interface, this really isn't an option.
Is there anything you guys can suggest I look into? Are there any other "magic" properties, like .drawsAsynchronously, I might want to try?
Are there any resources you can suggest on how to debug/analyse the performance?
Any help is appreciated. Thanks in advance!
The quickest answer is: don't use drawRect:, because it's very expensive.
Plain CALayers are the best way.
If you need to draw something complex, consider Core Graphics, but note that it draws on the CPU, not the GPU. The win is that you draw the complex content once into an image and add that image to the view; from then on the GPU merely composites the cached bitmap instead of redrawing every frame.
Have a look at this link.
There is a good explanation of how UIView works and how to write the most efficient code.
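The draw-once-then-composite pattern can be sketched like this (a minimal illustration; the helper name and closure shape are my own, not from the answer):

```swift
import UIKit

// Render expensive Core Graphics drawing once into a bitmap, then let
// Core Animation cheaply composite the cached image every frame.
func prerenderedContents(size: CGSize, drawing: (CGContext) -> Void) -> CGImage? {
    UIGraphicsBeginImageContextWithOptions(size, false, UIScreen.mainScreen().scale)
    defer { UIGraphicsEndImageContext() }
    guard let context = UIGraphicsGetCurrentContext() else { return nil }
    drawing(context)
    return UIGraphicsGetImageFromCurrentImageContext()?.CGImage
}

// Usage: draw once, assign to a layer, never redraw during animation.
// layer.contents = prerenderedContents(layer.bounds.size) { ctx in /* draw */ }
```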

CATextLayer changing text fast

I am working on an app that needs to change a lot of CATextLayer strings, but only one or two characters of each (in general, the strings are about 2-5 characters long).
At first I went with UILabels, which were extremely slow, and because of that I tried out CATextLayer, which was a lot faster, but not fast enough. I am updating about 150 CATextLayers quite often, all at once, and it just doesn't cut it; I feel a lag.
I then tried to go even more low-level with Core Text, drawing with a CTLine, which had about the same performance as CATextLayer, so I went back to the CATextLayers because my positioning code for Core Text wasn't perfect.
I started thinking about caching, for each string, the first two characters (which are always constant), and only changing the other 3 characters with smaller bounds, which I assume will be a bit faster. But will it be? After all, the result will have to be composited with the other text layer, and all 150 text layers will still have to be updated.
Does anybody have any advice? How would you approach it?
Attached is a screenshot from Instruments showing that the problem lies in the performance of CATextLayer.
Bitmap fonts are probably the best way to solve this problem, as they're far and away more performant than anything else in terms of font drawing for something of this nature. But you need to pre-render them at the scale you desire to get the best out of them, both visually and in terms of performance.
And you might be best off using Sprite Kit, as it has native handling of them. Here's a github repo with a useful thing to make it easier to use rendered bitmaps from a common tool for creating them: https://github.com/tapouillo/BMGlyphLabel
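As an aside, if you stay with CATextLayer, make sure implicit animations aren't adding to the perceived lag: each string change cross-fades by default. Batching the updates with actions disabled, combined with the static-prefix/dynamic-suffix split from the question, might look like this sketch (names are illustrative):

```swift
import UIKit

// Static prefix rendered once in its own layer; only the small
// suffix layers ever change.
let prefixLayer = CATextLayer()
let suffixLayers = (0..<150).map { _ in CATextLayer() }

func updateSuffixes(layers: [CATextLayer], strings: [String]) {
    CATransaction.begin()
    CATransaction.setDisableActions(true)   // skip the implicit cross-fade
    for (layer, string) in zip(layers, strings) {
        layer.string = string
    }
    CATransaction.commit()
}
```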

Drag , Pinch and zoom images in UIView

I am adding multiple UIImageViews to a UIView to perform operations such as drag, pinch and zoom on images. I have added gesture recognisers to all the UIImageViews. Since I'm adding multiple images (UIImageViews), the performance of my app has dropped. Does anyone have a better solution for this? Thanks
Adding many images should not, generally, cause enough of a problem that your app would slow down. For example, to illustrate the point with an absurd case, I added 250 (!) image views, each with three gestures, and it works fine on an iPad 3, including animating the images into their final resting place/size/rotation.
Two observations:
Are you doing anything computationally intensive with your image views? For example:
Simply adding shadows with Quartz 2D has a huge performance impact because it's actually quite computationally expensive. In the unlikely event that you're using layer shadows, you can try using shouldRasterize, which can mitigate the problem but not solve it. There are other (kludgy) techniques for computationally efficient shadows if that's the problem.
Another surprisingly computation-intensive case is when your images are (for example) PNGs with transparency, or when you have reduced the alpha/opacity of your views.
What is the resolution/size of the images being loaded? If the images are very large, the image view will render them according to its contentMode, but it can be very slow if you're taking large images and scaling them down. You should use images sized for the screen resolution if possible.
These are just a few examples of things that seem so innocuous, but are really quite computationally expensive. If you're doing any Quartz embellishments on your image views, I'd suggest temporarily paring them back and see if you see any changes.
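To make the shadow point concrete: giving the layer an explicit shadowPath lets Core Animation cache the shadow instead of deriving it from the view's (possibly transparent) contents every frame. A sketch, not specific to the asker's code:

```swift
import UIKit

// Configure a layer shadow that avoids expensive offscreen alpha
// analysis by supplying the shadow outline up front.
func addEfficientShadow(view: UIView) {
    let layer = view.layer
    layer.shadowColor = UIColor.blackColor().CGColor
    layer.shadowOpacity = 0.5
    layer.shadowOffset = CGSize(width: 0, height: 2)
    layer.shadowRadius = 4

    // The explicit path is the key optimization.
    layer.shadowPath = UIBezierPath(rect: view.bounds).CGPath
}
```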
In terms of diagnosing the performance problems yourself, I'd suggest watching the following two WWDC videos:
WWDC 2012 - #211 - Building Concurrent User Interfaces on iOS includes a fairly pragmatic demonstration of Instruments to identify the source of performance problems. This video is clearly focused on one particular solution (the moving of computationally expensive processes into the background and implementing a concurrent UI), which may or may not apply in this case, but I like the Instruments demonstration.
WWDC 2012 - #235 - iOS App Performance: Responsiveness is a more focused discussion on how one measures responsiveness in apps and techniques to address problems. I don't find the instruments tutorial to be quite as good as the prior video, but it does go into more detail.
Hopefully this can get you going. If you are still stumped, you should share some relevant code regarding how the views are being added/configured and what the gestures are doing. Perhaps you can also clarify the nature of the performance problem (e.g. is it in the initial rendition, is it a low frame rate while the gestures take place, etc.).

iOS: Performance of UIView animations vs CABasic/CAKeyframe animations

Is there a difference in performance between UIView animation blocks and CAAnimations? I understand they are all interfaces to Core Animation, but I am looking to squeeze the most performance out of the fewest resources per animation. Thanks.
You will have to benchmark them yourself to be sure, but my guess is there will be no difference. They both use the same code under the covers, and UIViews and CALayers perform almost identically to each other. You'll get more performance by making sure you're handling alpha properly: make everything opaque that can be. The less compositing that has to be calculated, the more responsive your animations will be.
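The opacity advice amounts to something like this minimal illustration (view and values are hypothetical):

```swift
import UIKit

let animatedView = UIView(frame: CGRect(x: 0, y: 0, width: 100, height: 100))

// An opaque view with a solid background can be composited without
// blending against whatever is behind it.
animatedView.opaque = true
animatedView.backgroundColor = UIColor.whiteColor()

UIView.animateWithDuration(0.3) {
    animatedView.center = CGPoint(x: 200, y: 200)
}
```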
Best regards.
