I have a collection view that displays 3 images, 2 labels, and 1 attributed string (the strings use different colors and font sizes, and the values are not unique for every cell). One of the images comes from the web, and I used AFNetworking to handle the downloading and caching. The collection view displays 15 cells simultaneously.
When I scroll I can only achieve 25 frames/sec.
Below are the things I did:
- Data processing was done ahead of time and cached in objects
- Images and views are opaque
- Cells and views are reused
I have applied all the optimizations I know of, but I still can't reach at least 55 frames/sec.
I would appreciate it if you could share other techniques to speed up the reuse of cells.
I was even thinking of pre-rendering the subviews off-screen and caching them somewhere, but I am not sure how that is done.
When I run the app on the iPhone it is fast, since it only shows about four cells at a time.
The first thing you need to do is fire up Instruments and find out whether you're CPU-bound (computation or regular I/O is the bottleneck) or GPU-bound (the graphics hardware is struggling). Depending on which of these is the issue, the solution varies.
Here's a video tutorial that shows how to do this (among other things). This one is from Sean Woodhouse at Itty Bitty Apps (they make the fine Reveal tool).
NB: In the context of performance tuning we usually talk about being I/O-bound or CPU-bound as separate concerns; however, I've grouped them together here to mean "data is not getting to the graphics hardware fast enough, due to either slow computation or slow I/O". If this is indeed the problem, then the next step is to find out whether it is caused by waiting on I/O or by the CPU being maxed out.
Instruments can be really confusing at first, but the above videos helped me to harness its power.
And here's another great tutorial from Anthony Egerton.
What is the size of the image that you use?
One optimization technique that would work is to resize the image so that it matches the size of the view displaying it.
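For illustration, here is a minimal sketch of that idea (the function name and the suggestion to run it off the main thread are my own assumptions, not something from the original code):

import UIKit

// Minimal sketch: downscale a freshly downloaded image to the size of the
// view that will display it, so the image view never has to scale a large
// bitmap on the fly while scrolling. Best done on a background queue,
// assigning the result back on the main queue.
func downscale(_ image: UIImage, to targetSize: CGSize) -> UIImage? {
    // Pass `true` for opaque only if the image has no transparency;
    // a scale of 0 means "use the screen scale".
    UIGraphicsBeginImageContextWithOptions(targetSize, true, 0)
    defer { UIGraphicsEndImageContext() }
    image.draw(in: CGRect(origin: .zero, size: targetSize))
    return UIGraphicsGetImageFromCurrentImageContext()
}

// Hypothetical usage, right after the download completes:
// let thumbnail = downscale(downloadedImage, to: cellImageView.bounds.size)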
Let me start off by showing that I have this UIImageView set up in my ViewController:
Each one of the lines contains a UIButton for a body part. If I select a particular button, it will segue me appropriately.
What I'd like to do is, when the user taps (but doesn't release) the button, have the appropriate body part show like this:
I can achieve this using 2 options:
Use the UIBezierPath class to draw the shapes, but this would take a lot of trial and error and many overlapping shapes per body part to get them fitting nicely, similar to a previous question: Create clickable body diagram with Swift (iOS)
Crop out the highlighted body parts from the original image and position the appropriate one over the UIImageView depending on which UIButton is selected. This would need one image per body part, but it is still less cumbersome than option 1.
Now, my question is not HOW to do it, but which would be a BETTER option for achieving this in terms of CPU processing and memory allocation?
In other words, I'm concerned about my app lagging, as well as about its storage footprint. I'm not concerned about how much development time it takes; I just want to make sure my app doesn't stutter when it tries to draw all the shapes.
Thanks.
It is very very very unlikely that either of those approaches would have any significant impact on CPU or memory. Particularly if in option 2, you just use the alpha channels of the cutout images and make them semitransparent tinted overlays. CPU/GPU-wise, neither of the approaches would drop you below the max screen refresh rate of 60fps (which is how users would notice a performance problem). Memory-wise, loading a dozen bezier paths or single-channel images into RAM should be a drop in the bucket compared to what you have available, particularly on any iOS device released in the last 5 years unless it's the Apple Watch.
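For what it's worth, here is a rough sketch of what option 2 could look like (all names, the asset, and the tint/alpha values are illustrative assumptions, not taken from your project):

import UIKit

// Sketch of option 2: one black-on-transparent cutout image per body part,
// shown as a semitransparent tinted overlay while the button is held down.
class BodyDiagramViewController: UIViewController {

    @IBOutlet weak var bodyImageView: UIImageView!   // the full diagram
    private let highlightView = UIImageView()

    override func viewDidLoad() {
        super.viewDidLoad()
        highlightView.frame = bodyImageView.bounds
        highlightView.tintColor = .red
        highlightView.alpha = 0.4
        highlightView.isHidden = true
        bodyImageView.addSubview(highlightView)
    }

    // Wire to the button's "Touch Down" event.
    @IBAction func bodyPartTouchedDown(_ sender: UIButton) {
        // Template rendering uses only the alpha channel, so the cutout
        // asset can be a simple single-channel PNG.
        let cutout = UIImage(named: "shoulder_cutout")?   // hypothetical asset name
            .withRenderingMode(.alwaysTemplate)
        highlightView.image = cutout
        highlightView.isHidden = false
    }

    // Wire to "Touch Up Inside" / "Touch Up Outside".
    @IBAction func bodyPartReleased(_ sender: UIButton) {
        highlightView.isHidden = true
    }
}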
Keep in mind that "premature optimization is the root of all evil". Unless you have seen performance issues or have good reason to believe they would exist, your time is probably better spent on other concerns like making the code more readable, concise, reusable, etc. See this brief section in Wikipedia on "When to Optimize": https://en.wikipedia.org/wiki/Program_optimization#When_to_optimize
Xcode has testing functionality built in (including performance tests), so the best way is to try both methods for one body part and compare the results.
You may find the second method to be a bit slower, but not by enough to be noticed by the user, while at the same time being a lot easier to implement.
For a quick start on tests, see here.
For performance tests, see here.
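As a concrete illustration, a performance test along these lines could look roughly like this sketch (the canvas size, asset name, and drawing code are placeholders; each measure block runs its body several times and reports the average and standard deviation):

import UIKit
import XCTest

// Rough comparison of the two options via Xcode performance tests.
class HighlightPerformanceTests: XCTestCase {

    func testBezierPathHighlight() {
        measure {
            // Option 1: fill a bezier-path shape into a bitmap context.
            UIGraphicsBeginImageContextWithOptions(CGSize(width: 300, height: 600), false, 0)
            let path = UIBezierPath(ovalIn: CGRect(x: 50, y: 50, width: 100, height: 200))
            UIColor.red.withAlphaComponent(0.4).setFill()
            path.fill()
            UIGraphicsEndImageContext()
        }
    }

    func testCutoutImageHighlight() {
        measure {
            // Option 2: load and draw a cutout image.
            // "shoulder_cutout" is a hypothetical asset in the test bundle.
            let cutout = UIImage(named: "shoulder_cutout")
            UIGraphicsBeginImageContextWithOptions(CGSize(width: 300, height: 600), false, 0)
            cutout?.draw(at: .zero)
            UIGraphicsEndImageContext()
        }
    }
}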
As you can imagine, these keywords (download image uitableviewcell) return results related to "lazy loading", which is not what I want to know about.
What I want to know is if there is a way to download the images before the cell is displayed.
It would be great if we could download the images for the visible cells plus 5 or 6 more in advance, and keep downloading 5 or 6 images ahead as the table scrolls.
Of course, all the links are stored in one array, so it's easy to get them in advance.
I'm using AFNetworking for all the network operations
Thanks
Naive solution
If your image responses have a caching policy, then you can prefetch as many as you like, and on the next actual call where you load the image in the cell it will load faster (from the cache). This is a bit naive, since you base everything on the premise that the server sends cacheable responses that the client can store to reduce the number of network operations (which is not always the case).
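As an illustration of that naive approach, here is a small sketch using plain URLSession/URLCache rather than AFNetworking's own machinery (the type name and cache sizes are arbitrary assumptions); it only pays off when the server actually sends cacheable responses and the later image request consults the same shared cache:

import Foundation

// Cache-warming sketch: fire requests ahead of time so that later loads
// of the same URLs can be served from URLCache instead of the network.
enum CacheWarmer {

    /// Call once at app startup; the capacities here are arbitrary.
    static func enlargeSharedCache() {
        URLCache.shared = URLCache(memoryCapacity: 20 * 1024 * 1024,
                                   diskCapacity: 100 * 1024 * 1024,
                                   diskPath: "image-cache")
    }

    /// Fires requests for the given URLs and discards the data;
    /// cacheable responses land in URLCache.shared.
    static func warm(_ urls: [URL]) {
        for url in urls {
            URLSession.shared.dataTask(with: url) { _, _, _ in }.resume()
        }
    }
}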
Better Solution
Alternatively, you can implement your own data structure (for example, by wrapping an NSMutableDictionary) to store and manage the images, and download them whenever you want. For example, when you load the table view you can start downloading the images that are about to be displayed, plus a few more for the next cells. However, as @picciano said, the scrolling of a table view can be pretty quick, so prefetching too many images ahead can just be wasted effort.
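Here is a rough sketch of such a prefetcher, using NSCache instead of NSMutableDictionary so the system can evict images under memory pressure (all names are illustrative, and the download goes through URLSession rather than AFNetworking just to keep the example self-contained):

import UIKit

// A small prefetcher that keeps decoded images in an NSCache keyed by URL,
// so the cell-configuration code can pick them up synchronously when they
// have already arrived.
final class ImagePrefetcher {

    static let shared = ImagePrefetcher()
    private let cache = NSCache<NSURL, UIImage>()

    /// Kick off downloads for the visible rows plus a few rows ahead.
    func prefetch(_ urls: [URL]) {
        for url in urls where cache.object(forKey: url as NSURL) == nil {
            URLSession.shared.dataTask(with: url) { [weak self] data, _, _ in
                guard let data = data, let image = UIImage(data: data) else { return }
                self?.cache.setObject(image, forKey: url as NSURL)
            }.resume()
        }
    }

    /// Returns the cached image, or nil if it has not arrived yet
    /// (in which case fall back to the normal asynchronous load).
    func image(for url: URL) -> UIImage? {
        return cache.object(forKey: url as NSURL)
    }
}

In cellForRowAtIndexPath you would first ask image(for:) and fall back to the normal asynchronous load when it returns nil, while also calling prefetch(_:) with the URLs of the next handful of rows.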
Comment
This kind of prefetching makes sense only when the user is connected to a high-speed network. If the user is on a slow network, you will queue up a large batch of network operations that take time to complete (and are not very battery efficient).
I'm creating an app with up to 40 UIViews, where every view holds a drawing of a stick that can appear in several positions (rotated to a 30-degree angle, a 45-degree angle, etc.). The background of each view is transparent. These views can intersect with each other, so I need the UIViews to be transparent so that the user can see the drawings of both the overlapped and the overlapping view. I wonder whether this (the transparency of all 40 UIViews) seriously affects the performance of the application. And how can I track how much memory or CPU my app currently uses?
I recommend watching WWDC 2012 Session 238 - iOS App Performance: Graphics and Animations, which covers these questions.
As a broad answer:
The iPhone will probably handle your 40-view requirement fine, but it's impossible to know for sure without trying it out and without more context (are they being animated? are they scrolling?).
More views create more performance problems, because all of the views need to be packaged up and shipped off to be rendered (by backboardd, I think).
Transparency will hurt application performance. I believe the core reason is that transparent views have to be blended with the content behind them rather than simply painted over existing content (something like that).
Use Instruments for Profiling
Profile your GPU usage using the OpenGL ES Driver instrument (look at 'Device Utilization')
Measure CPU usage using Time Profiler
Measure FPS and check for common performance problems using the Core Animation instrument
I wouldn't bother thinking about this until you actually see performance issues. If you do, I can't recommend that WWDC session enough—it covers things like what strategy you should take to optimize performance (e.g. moving work to the GPU as long as it can handle more; the basics of profiling, etc.) as well as tips and tricks based on the implementation details of iOS.
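If you want a quick in-app spot check of the frame rate (as a complement to the Core Animation instrument, not a replacement), a CADisplayLink-based counter like this rough sketch works; the class name and the logging are my own:

import UIKit

// Counts rendered frames and prints the frames-per-second roughly once a second.
final class FPSCounter: NSObject {

    private var link: CADisplayLink?
    private var frameCount = 0
    private var windowStart: CFTimeInterval = 0

    func start() {
        link = CADisplayLink(target: self, selector: #selector(tick(_:)))
        link?.add(to: .main, forMode: .common)
    }

    func stop() {
        link?.invalidate()
        link = nil
    }

    @objc private func tick(_ link: CADisplayLink) {
        if windowStart == 0 { windowStart = link.timestamp }
        frameCount += 1
        let elapsed = link.timestamp - windowStart
        if elapsed >= 1 {
            print(String(format: "%.1f fps", Double(frameCount) / elapsed))
            frameCount = 0
            windowStart = link.timestamp
        }
    }
}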
I am adding multiple UIImageViews to a UIView to perform operations such as dragging, pinching and zooming images. I have added gesture recognizers to all the UIImageViews. Since I'm adding multiple images (UIImageViews), it has brought down the performance of my app. Does anyone have a better solution for this? Thanks.
Adding many image views should not, generally, cause enough of a problem that your app would slow down. For example, to illustrate the point with an absurd case, I added 250 (!) image views, each with three gestures, and it works fine on an iPad 3, including animating the images into their final resting place/size/rotation.
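For reference, the kind of setup being described looks roughly like this sketch (names are illustrative; a rotation recognizer would be added the same way as the other two):

import UIKit

// Many image views, each with pan and pinch recognizers for drag and zoom.
// This setup on its own scales to hundreds of views without trouble.
final class DraggableImageController: NSObject {

    func makeImageView(with image: UIImage, frame: CGRect, in container: UIView) -> UIImageView {
        let imageView = UIImageView(image: image)
        imageView.frame = frame
        imageView.isUserInteractionEnabled = true   // off by default on UIImageView

        imageView.addGestureRecognizer(
            UIPanGestureRecognizer(target: self, action: #selector(handlePan(_:))))
        imageView.addGestureRecognizer(
            UIPinchGestureRecognizer(target: self, action: #selector(handlePinch(_:))))

        container.addSubview(imageView)
        return imageView
    }

    @objc private func handlePan(_ gesture: UIPanGestureRecognizer) {
        guard let view = gesture.view else { return }
        let translation = gesture.translation(in: view.superview)
        view.center = CGPoint(x: view.center.x + translation.x,
                              y: view.center.y + translation.y)
        gesture.setTranslation(.zero, in: view.superview)
    }

    @objc private func handlePinch(_ gesture: UIPinchGestureRecognizer) {
        guard let view = gesture.view else { return }
        view.transform = view.transform.scaledBy(x: gesture.scale, y: gesture.scale)
        gesture.scale = 1
    }
}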
Two observations:
Are you doing anything computationally intensive with your image views? For example:
Simply adding shadows with Quartz 2D has a huge performance impact because it's actually quite computationally expensive. In the unlikely event that you're using layer shadows, you can try using shouldRasterize, which can mitigate the problem but not solve it (see the sketch after this list). There are other (kludgy) techniques for doing computationally efficient shadows if that's the problem.
Another surprisingly computationally intensive case is if your images are (for example) PNGs with transparency or if you have reduced the alpha/opacity of your views.
What is the resolution/size of the images being loaded? If the images are very large, the image view will render them according to the contentMode, but it can be very slow if you're taking large images and scaling them down. You should use screen resolution images if possible.
These are just a few examples of things that seem so innocuous, but are really quite computationally expensive. If you're doing any Quartz embellishments on your image views, I'd suggest temporarily paring them back and see if you see any changes.
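As promised above, here is a sketch of the shadow mitigation. The explicit shadowPath is a common extra saving on top of shouldRasterize (it is my own assumption that a rectangular path fits your views; the path should match the view's outline):

import UIKit

// Give the layer an explicit shadowPath so Core Animation doesn't have to
// derive the shadow from the (possibly transparent) contents every time,
// and optionally rasterize the composited result at screen scale.
func addEfficientShadow(to imageView: UIImageView) {
    let layer = imageView.layer
    layer.shadowColor = UIColor.black.cgColor
    layer.shadowOpacity = 0.5
    layer.shadowOffset = CGSize(width: 0, height: 3)
    layer.shadowRadius = 4

    // The explicit path is the big win.
    layer.shadowPath = UIBezierPath(rect: imageView.bounds).cgPath

    // Optional: cache the composited view + shadow as a bitmap.
    layer.shouldRasterize = true
    layer.rasterizationScale = UIScreen.main.scale
}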
In terms of diagnosing the performance problems yourself, I'd suggest watching the following two WWDC videos:
WWDC 2012 - #211 - Building Concurrent User Interfaces on iOS includes a fairly pragmatic demonstration of Instruments to identify the source of performance problems. This video is clearly focused on one particular solution (the moving of computationally expensive processes into the background and implementing a concurrent UI), which may or may not apply in this case, but I like the Instruments demonstration.
WWDC 2012 - #235 - iOS App Performance: Responsiveness is a more focused discussion on how one measures responsiveness in apps and techniques to address problems. I don't find the instruments tutorial to be quite as good as the prior video, but it does go into more detail.
Hopefully this can get you going. If you are still stumped, you should share some relevant code regarding how the views are being added/configured and what the gestures are doing. Perhaps you can also clarify the nature of the performance problem (e.g. is it in the initial rendition, is it a low frame rate while the gestures take place, etc.).
I can already hear the wrenching guts of a thousand iOS developers.
No, I am not noob.
Why is -drawRect faster for UITableView performance than having multiple views?
I understand that compositing operations take place on the GPU. But compositing is a one-time operation; once the layers are committed to memory, it is no different from a cached buffer that, from the point of view of the GPU, gets translated in and out of view. Compare this to using Core Graphics in drawRect, which employs an unknown number of operations on the CPU to produce pixels that end up getting cached in CALayers anyway. What's the difference if it all ends up cached and flattened anyway?
Also, if you're handling cell reuse properly, you shouldn't need to regenerate views on each call to -cellForRowAtIndexPath. In fact, there may be a performance benefit to having the state data (font, font size, text color, attributes, etc.) cached by UIView/CALayer objects rather than having it constantly recreated during -drawRect.
Why the craze for drawRect? Can someone give me pointers?
When you talk about optimization, you need to provide the specific situation, conditions, and limitations, because optimization is all about micro-management. Otherwise, it's meaningless.
What's the basis of your "faster"? How did you measure it? What are the numbers?
For example, a no-op or very simple -drawRect: can be faster, but that doesn't mean it always is.
I don't know the internal design of Core Animation either, so here are my guesses.
In case of static content
It would be weird for your drawing code to be called constantly, because CALayer caches the drawing result and won't draw again until you send a setNeedsDisplay message. If you don't update a cell's content, it's just the same as a single bitmap layer, which should be faster than multiple composited layers because it avoids the compositing cost. And if you're using only a small number of cells, enough for all of them to exist in the reuse pool at the same time, they never need to be redrawn at all. As RAM grows in recent models, this becomes more and more likely to be the case.
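To make that concrete, here is a small hypothetical view drawn with -drawRect:; the draw(_:) method only runs after setNeedsDisplay(), so for a static cell the bitmap is produced once and then merely composited on later frames:

import UIKit

// A tiny custom-drawn view: the layer keeps the rendered bitmap, and
// drawing only reruns when the content actually changes.
final class BadgeView: UIView {

    var title: String = "" {
        didSet {
            // Only invalidate when the content actually changes.
            if title != oldValue { setNeedsDisplay() }
        }
    }

    override func draw(_ rect: CGRect) {
        // Runs once after setNeedsDisplay(), not on every frame.
        UIColor.darkGray.setFill()
        UIBezierPath(roundedRect: bounds, cornerRadius: 6).fill()

        let attributes: [NSAttributedString.Key: Any] = [
            .font: UIFont.boldSystemFont(ofSize: 14),
            .foregroundColor: UIColor.white
        ]
        (title as NSString).draw(at: CGPoint(x: 8, y: 6), withAttributes: attributes)
    }
}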
In case of dynamic content
If the content is being updated constantly, that means you're actually updating it yourself, so your layer-composited version would presumably be updated constantly as well, which means it gets composited again for every frame. The cost then depends on how complex and large the layer tree is: if it's complex, large, and has a lot of overlapping areas, it could be slower. I guess Core Animation will draw everything strictly if it can't determine which areas are safe to ignore, whereas in your own drawing code you can choose exactly what to draw and what to skip.
In case of actual drawing is done in CPU
Even if you configure your view as a pure composition of many layers, each sublayer still has to be drawn eventually, and the drawing of their content is not guaranteed to happen on the GPU. For example, I believe CATextLayer draws itself on the CPU (because rendering text as polygons on current mobile GPUs doesn't make sense from a performance perspective), and some filtering effects do too. In that case, the overall cost would be similar, plus it requires the compositing cost on top.
In case of well balanced load of CPU and GPU
If your GPU is very busy under a heavy load because there are too many layers or direct OpenGL drawings, your CPU may be idle. If your Core Graphics drawing can be done within that idle CPU time, it could be faster than putting even more load on the GPU.
None of them is your case?
If your case is none of the situations I listed above, I would really like to see and verify that the Core Graphics code draws faster than Core Animation compositing. Please attach some source code.
Well, your program could easily end up moving and converting a lot of pixel data if it goes back and forth between GPU- and CPU-based renderers.
As well, many layers can consume a lot of memory.
I'm only seeing half the conversation here, so I might have misunderstood. Based on my recent experiences optimizing CALayer rendering, and investigating the ways Apple does(n't) optimize stuff you'd expect to be optimized...
What's the difference if it all ends up cached and flattened anyway?
Apple ends up creating a separate GPU element per layer. If you have lots of layers, you have lots of GPU elements. If you have one drawRect, you only have one element. Apple often does NOT flatten those, even where they could (and possibly "should").
In many cases, "lots of elements" is no issue. But if they get to be large ... or there are enough of them ... or they're bad sizes for OpenGL ... AND (see below) they get stored in CPU memory instead of on the GPU, then things start to get nasty. NB: in my experience:
"enough": 40+ in memory
"large": 100x100 points (200x200 retina pixels)
Apple's code for GPU elements / buffers is well optimized in MOST places, but in a few places it's very POORLY optimized. The performance drop is like going off a cliff.
Also, if you're handling cell reuse properly, you shouldn't need to
regenerate views on each call to -cellForRowAtIndexPath
You say "properly", except ... IIRC Apple's docs tell people not to do it that way, they go for a simpler approach (IMHO: weak docs), and instead re-populate all the subviews on every call. At which point ... how much are you saving?
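For comparison, "handling reuse properly" in the sense the question means might look roughly like this sketch (names and layout are illustrative): the subviews and their static attributes are created once per cell object, and the data source only assigns the row-specific values.

import UIKit

// Subviews and their static attributes (fonts, colors) are created exactly
// once per cell instance; reuse only swaps the row-specific data.
final class ArticleCell: UITableViewCell {

    let titleLabel = UILabel()
    let subtitleLabel = UILabel()

    override init(style: UITableViewCell.CellStyle, reuseIdentifier: String?) {
        super.init(style: style, reuseIdentifier: reuseIdentifier)
        titleLabel.font = .boldSystemFont(ofSize: 16)
        subtitleLabel.font = .systemFont(ofSize: 12)
        subtitleLabel.textColor = .gray
        contentView.addSubview(titleLabel)
        contentView.addSubview(subtitleLabel)
    }

    required init?(coder: NSCoder) { fatalError("init(coder:) has not been implemented") }

    override func layoutSubviews() {
        super.layoutSubviews()
        titleLabel.frame = CGRect(x: 15, y: 6, width: contentView.bounds.width - 30, height: 20)
        subtitleLabel.frame = CGRect(x: 15, y: 28, width: contentView.bounds.width - 30, height: 16)
    }
}

// In the data source, only the text changes per row:
// let cell = tableView.dequeueReusableCell(withIdentifier: "Article", for: indexPath) as! ArticleCell
// cell.titleLabel.text = items[indexPath.row].title
// cell.subtitleLabel.text = items[indexPath.row].subtitle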
FINALLY:
...doesn't all this change with iOS 6, where the cost of creating a UIView is greatly reduced? (I haven't profiled it yet, just been hearing about it from other devs)