Blending vs. offscreen rendering: which is worse for Core Animation performance? - ios

Blending and offscreen-rendering are both expensive in Core Animation.
One can see both in the Core Animation instrument in Instruments, using its Debug Options.
Here is my case:
Display 50x50 PNG images in UIImageViews. I want to round the images with a 6-point corner radius. The first method is to set the UIImageView.layer's cornerRadius and masksToBounds, which causes offscreen rendering. The second method is to make PNG image copies with transparent corners, which causes blending (because of the alpha channel).
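For reference, roughly what the two methods look like in code. This is only a minimal sketch; the "avatar" image name, the 50x50 size and the 6-point radius are just the values from the question:

// Method 1: let the layer clip at display time (triggers offscreen rendering).
UIImageView *clippedView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"avatar"]];
clippedView.layer.cornerRadius = 6.0;
clippedView.layer.masksToBounds = YES;

// Method 2: bake a rounded copy of the image once; the transparent corners then cost blending instead.
CGRect bounds = CGRectMake(0, 0, 50, 50);
UIGraphicsBeginImageContextWithOptions(bounds.size, NO, [UIScreen mainScreen].scale);
[[UIBezierPath bezierPathWithRoundedRect:bounds cornerRadius:6.0] addClip];
[[UIImage imageNamed:@"avatar"] drawInRect:bounds];
UIImage *roundedCopy = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
UIImageView *blendedView = [[UIImageView alloc] initWithImage:roundedCopy];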
I've tried both, but I can't see significant performance difference. However, I still want to know which is worse in theory and best practices if any.
Thanks a lot!

Well, short answer, the blending has to occur either way to correctly display the transparent corner pixels. However, this should typically only be an issue if you want the resulting view to also animate in some way (and remember, scrolling is the most common type of animation). Also, I'm able to recreate situations where "cornerRadius" will cause rendering errors on older devices (iPhone 3G in my case) when my views become complex. For situations where you do need performant animations, here are the recommendations I follow.
First, if you only need the resources with a single curve for the rounded corners (different scales are fine, as long as the desired curvature is the same), save them that way to avoid the extra calculation of "cornerRadius" at runtime.
Second, don't use transparency anywhere you don't need it (e.g. when the background is actually a solid color), and always specify the correct value for the "opaque" property to help the system more efficiently calculate the drawing.
Third, find ways to minimize the size of transparent views. For example, for a large border view with transparent elements (e.g. rounded corners), consider splitting the view into 3 (top, middle, bottom) or 7 (4 corners, top middle, middle, bottom middle) parts, keeping the transparent portions as small as possible and marking the rectangular portions as opaque, with solid backgrounds.
Fourth, in situations where you're drawing lots of text in scrollViews (e.g. a highly customized UITableViewCell), consider using the "drawRect:" method to render these portions more efficiently. Continue using subviews for image elements, in order to split the render time for the overall view between pre-drawing (subviews) and "just-in-time" drawing (drawRect:); a rough sketch of this follows below. Obviously, experimentation (frames per second while scrolling) could show that violating this "rule-of-thumb" may be optimal for your particular views.
Finally, making sure you have plenty of time to experiment using the profiling tools (especially CoreAnimation) is key. I find that it's easiest to see improvements using the slowest device you want to target, and the results look great on newer devices.
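As a rough illustration of the fourth (and second) recommendation, here is a hedged sketch of a cell content view that draws its text in drawRect: while the image stays in an ordinary UIImageView subview. The class name, property names and layout numbers are all placeholders, not anything from the answer above:

@interface TextContentView : UIView
@property (nonatomic, copy) NSString *title;
@property (nonatomic, copy) NSString *subtitle;
@end

@implementation TextContentView

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        // Second recommendation: a solid, opaque background lets the system composite cheaply.
        self.opaque = YES;
        self.backgroundColor = [UIColor whiteColor];
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // All of the cell's text rendered in one pass, instead of several UILabel subviews.
    [[UIColor blackColor] set];
    [self.title drawInRect:CGRectMake(70, 8, self.bounds.size.width - 80, 22)
                  withFont:[UIFont boldSystemFontOfSize:17]];
    [[UIColor darkGrayColor] set];
    [self.subtitle drawInRect:CGRectMake(70, 32, self.bounds.size.width - 80, 18)
                     withFont:[UIFont systemFontOfSize:13]];
}

// Remember to call setNeedsDisplay whenever title/subtitle change so the cached bitmap is refreshed.

@end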

After watching the WWDC videos and running some experiments with Xcode and Instruments, I can say that blending is better than offscreen rendering. Blending means the system requires some additional time to calculate the color of pixels beneath transparent layers. The more transparent layers you have (and the bigger those layers are), the more time blending takes.
Offscreen rendering means the system will make more than one rendering pass. In the first pass the system renders without visualization, just to calculate the bounds and shape of the area that should be rendered. In subsequent passes the system does the regular rendering (depending on the calculated shape), including blending if required.
Also, for offscreen rendering the system creates a separate graphics context and destroys it after rendering.
So you should avoid offscreen rendering; it's better to replace it with blending.

Related

Rounded Avatar Images on iOS and performance issues

Is there any way to draw rounded UIImages without doing any of the following?
Blending (Red in Core Animation Instruments)
Offscreen Rendering (Yellow in Core Animation Instruments)
drawRect
I've tried
drawRect with clipping path. This is just too slow. (see: http://developer.apple.com/library/ios/#qa/qa1708/_index.html)
CALayer contents with a maskLayer. This introduces offscreen rendering.
UIImageView and setting cornerRadius and masksToBounds. This has the same effect as #2.
My last resort would be to modify the images directly. Any suggestions?
I use a variation of blog.sallarp.com's rounded corners algorithm. I modified mine to only round certain corners, and it's a category for seamless integration, but that link gives you the basic idea. I assume this might qualify as "offscreen rendering", one of your rejected techniques, but I'm not entirely sure. I've never seen any observable performance issue, but maybe you have some extraordinary situation.
Could you overlay an imageview of just corners on top of your imageview to give the appearance of rounded corners?
Or, if you're concerned about the performance hit from rounding corners, why not have your app save a copy of the image with its corners already rounded? That way you only take the performance hit the first time. It's equivalent to your notion of "modify the images directly", but done just-in-time.
Update:
Bottom line, I don't know of any other solutions (short of my kludgy idea, point #2 below). But I did some experimentation (because I'm dealing with a similar problem) and came to the following conclusions:
If you have a rounding performance problem on a 3GS or later device, the problem may not be the rounding of corners at all. I found that while it had a material impact on the UI on a 3G, on the 3GS it was barely noticeable, and on later devices it was imperceptible. If you're seeing a performance problem from rounded corners, make sure it's the rounding and not something else. I found that if I did just-in-time rounding of corners on previously cached images, the rounding had a negligible impact. (And when I did the rounding prior to caching, the UI was smooth as silk.)
An alternative to masking would be to create an image which is an inversion of the rounded corners (i.e. it matches the background in the corners, transparent where you want the image to show through), and then put this image in front of your tableviewcell's image. I found that, for this to work, I had to use a custom cell (e.g. create my own main image view control, my own labels, plus this corner mask image view control), but it definitely was better than just-in-time invocation of the corner rounding algorithm. This might not work well if you're trying to round the corners in a grouped table, but it's a cute little hack if you simply want the appearance of rounded corners without actually rounding them.
If you're rounding corners because you're using a grouped tableview and you don't like the image spilling over the rounded corner, you can simply round the upper left corner of the first row's image and the lower left corner of the last image. This will reduce the invocations of the corner rounding logic. It also looks nice.
I know you didn't ask about this, but other possible performance issues regarding images in tableviews include:
If you're using something like imageWithContentsOfFile (which doesn't cache) in cellForRowAtIndexPath, you'll definitely see a performance problem. If you can't take advantage of the imageNamed caching (and some people complain about it anyway), you could cache your images yourself, either just-in-time or by preloading them in advance on a secondary thread. If your tableview references previously loaded images, you'll see a huge performance improvement, at which point rounding of corners may or may not still be an issue.
Other sources of performance problems include using an image that isn't optimally sized for your tableview's imageview. A large image had a dramatic impact on performance.
Bottom line,
Before you kill yourself here, make sure rounding is the only performance problem. This is especially true if you're seeing the problem on contemporary hardware, because in my experience the impact was negligible. Plus, it would be a shame to lose too much sleep over this when a broader solution would be better. Using images that are not size-optimized thumbnails, failure to cache images, etc., will all have a dramatic impact on performance.
If at all possible, round the corners of the images in advance and the UI will be best, and the view controller code will be nice and simple.
If you can't do it in advance (because you're downloading the images realtime, for example) you can try the corner masking image trick, but this will only work in certain situations (though a non-grouped tableview is one of them).
If you have to round corners as the UI is going, do the corner rounding in a separate queue and don't block the UI.
It's what I do and I'm generally happy with its performance:
UIImage *image = [UIImage imageNamed:@"769-male"];
UIImageView *iv = [[UIImageView alloc] initWithImage:image];
CGFloat radius = 6.0; // whatever corner radius you want
UIGraphicsBeginImageContextWithOptions(iv.bounds.size, NO, [UIScreen mainScreen].scale);
[[UIBezierPath bezierPathWithRoundedRect:iv.bounds cornerRadius:radius] addClip];
[image drawInRect:iv.bounds];
iv.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
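And if the rounding has to happen while the UI is live (point 3 of the "bottom line" above), here is a hedged variation that moves the drawing onto a background queue; imageView stands in for whatever view ultimately shows the result:

CGFloat scale = [UIScreen mainScreen].scale;   // grab this on the main thread
UIImage *original = [UIImage imageNamed:@"769-male"];
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    CGRect bounds = CGRectMake(0, 0, 50, 50);
    UIGraphicsBeginImageContextWithOptions(bounds.size, NO, scale);
    [[UIBezierPath bezierPathWithRoundedRect:bounds cornerRadius:6.0] addClip];
    [original drawInRect:bounds];
    UIImage *rounded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_async(dispatch_get_main_queue(), ^{
        imageView.image = rounded;   // touch UIKit views only on the main queue
    });
});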

Painting app with huge canvas

I'm working on yet another drawing app with canvas that is many times bigger than screen.
I need some advice/direction on how to do that.
Basically what I want is to scroll around this big canvas, drawing only in the visible region.
I was thinking of two approaches:
Have 64x64 (or whatever) "tiles" to draw on, and then on scroll just load new tiles.
Record all user strokes (points) and, on scroll, calculate which fall in the specified region and draw them, using only a screen-size canvas.
If this matters, I'm using cocos2d for the prototype.
Forget the 2048x2048 limitation; I have an open source project that draws 18000 x 18000 NASA images.
I suggest you break this task into two parts. First, scrolling. As was suggested by CodaFi, when you scroll you will provide CATiledLayers. Each of those will be a CGImageRef that you create - a sub image of your really huge canvas. You can then easily support zooming in and out.
The second part is interacting with the user to draw or otherwise affect the canvas. When the user stops scrolling, you then create an opaque UIView subclass, which you add as a subview to your main view, overlaying the view hosting the CATiledLayers. At the moment you need to show this view, you populate it with the proper information so it can draw that portion of your larger canvas properly (say, a circle at this point of such and such a color, etc).
You would do your drawing using the drawRect: method of this overlay view. So as the user takes actions that change the view, you call "setNeedsDisplayInRect:" as needed to force iOS to call your drawRect:.
When the user decides to scroll, you need to update your large canvas model with whatever changes the user has made, then remove the opaque overlay, and let the CATiledLayers draw the proper portions of the large image. This transition is probably the most tricky part of the process to avoid visual glitches.
Supposing you have a large array of object definitions used for your canvas. When you need to create a CGImageRef for a tile, you scan through it looking for overlap between the object's frame and the tile's frame, and only then draw those items that are required for that tile.
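A hedged sketch of that tile-generation step follows; CanvasObject and its drawInContext: method are hypothetical stand-ins for whatever your canvas model actually uses:

// Render one tile of the large canvas into a CGImageRef. Caller releases with CGImageRelease.
- (CGImageRef)createTileImageForRect:(CGRect)tileRect objects:(NSArray *)objects {
    UIGraphicsBeginImageContextWithOptions(tileRect.size, YES, 0.0);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Shift the context so canvas coordinates land inside this tile.
    CGContextTranslateCTM(ctx, -tileRect.origin.x, -tileRect.origin.y);
    for (CanvasObject *object in objects) {
        if (CGRectIntersectsRect(object.frame, tileRect)) {
            [object drawInContext:ctx];   // only draw what overlaps the tile
        }
    }
    CGImageRef tileImage = CGImageRetain(UIGraphicsGetImageFromCurrentImageContext().CGImage);
    UIGraphicsEndImageContext();
    return tileImage;
}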
Many mobile devices don't support textures over 2048x2048. So I would recommend:
make your big surface out of large 2048x2048 tiles
draw only the visible part of the currently visible tile to the screen
you will need to draw up to 4 tiles per frame, in case the user has scrolled to a corner of four tiles, but make sure you don't draw anything extra if there is only one visible tile.
This is probably the most efficient way. 64x64 tiles are really too small, and will be inefficient since there will be a large repeated overhead for the "draw tile" calls.
There is a tiling example in Apple's ScrollViewSuite. It doesn't have anything to do with the drawing part, but it might give you some ideas about how to manage the tile part of things.
You can use CATiledLayer.
See WWDC2010 session 104
But for cocos2d, it might not work.
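For the plain UIKit route, a minimal sketch of a view backed by CATiledLayer; the tile size and level-of-detail values here are only illustrative defaults, not anything from the answers above:

#import <QuartzCore/QuartzCore.h>

@interface CanvasTiledView : UIView
@end

@implementation CanvasTiledView

+ (Class)layerClass {
    return [CATiledLayer class];   // the view's backing layer becomes a CATiledLayer
}

- (instancetype)initWithFrame:(CGRect)frame {
    if ((self = [super initWithFrame:frame])) {
        CATiledLayer *tiledLayer = (CATiledLayer *)self.layer;
        tiledLayer.tileSize = CGSizeMake(256, 256);
        tiledLayer.levelsOfDetail = 4;       // zoom-out levels to cache
        tiledLayer.levelsOfDetailBias = 2;   // extra detail when zooming in
    }
    return self;
}

- (void)drawRect:(CGRect)rect {
    // Called by CATiledLayer once per visible tile (possibly on background threads).
    // Draw only the part of the canvas that intersects `rect`.
}

@end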

Does shouldRasterize on a CALayer cause rasterization before or after the layer's transform?

I'm attempting to optimize my app. It's quite visually rich, so has quite a lot of layered UIViews with large images and blending etc.
I've been experimenting with the shouldRasterize property on CALayers. In one case in particular, I have a UIView that consists of lots of sub views including a table. As part of a transition where the entire screen scrolls, this UIView also scales and rotates (using transforms).
The content of the UIView remains static, so I thought it would make sense to set view.layer.shouldRasterize = YES. However, I didn't see an increase in performance. Could it be that it's re-rasterizing every frame at the new scale and rotation? I was hoping that it would rasterize at the beginning when it has an identity transform matrix, and then cache that as it scales and rotates during the transition?
If not, is there a way I could force it to happen? Short of adding a redundant extra super-view/layer that does nothing but scale and rotate its rasterized contents...
You can answer your own question by profiling your application using the Core Animation instrument. Note that this instrument is only available when profiling on a device.
You can enable "Color Hits Green and Misses Red". If your layer remains red, it means it is indeed being re-rasterized every frame.
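For reference, a minimal sketch of turning rasterization on; the easy mistake is forgetting rasterizationScale, which makes retina content rasterize at 1x:

view.layer.shouldRasterize = YES;
view.layer.rasterizationScale = [UIScreen mainScreen].scale;
// Then verify with the Core Animation instrument's "Color Hits Green and Misses Red":
// if the layer stays red during the animation, the cache is being regenerated every frame.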

When does a view (or layer) require offscreen rendering?

Hello, this weekend I started to watch the 2011 WWDC videos. I've found really interesting topics about iOS. My favorites were about performance and graphics, but I've found two of them apparently in contradiction. Of course there is something that I didn't get.
The sessions that I'm talking about are Understanding UIKit Rendering -121 and Polishing your app -105.
Unfortunately the sample code from 2011 is still not downloadable, so it's pretty hard to get an overall view.
In one session they explain that most of the time offscreen rendering should be avoided during scrolling in scroll views etc. They fix the performance issues in the sample code by drawing almost everything inside the -drawRect method.
In the other session the performance issue (on a table view) seems to be due to too much code in the -drawRect method of the table's cells.
First, it's not clear to me when offscreen rendering is required by the system. I've seen in the videos that some Quartz properties such as cornerRadius, shadowOffset and shadowColor require it, but does a general rule exist?
Second, I don't know if I understood correctly, but it seems that when there is no offscreen rendering, adding layers or views is the way to go.
I hope someone can shed some light on that.
Thanks,
Andrea
I don't think there is a rule written down anywhere, but hopefully this will help:
First, let's clear up some definitions. I think offscreen vs onscreen rendering is not the overriding concern most of the time, because offscreen rendering can be as fast as onscreen. The main issue is whether the rendering is done in hardware or software.
There is also very little practical difference between using layers and views. Views are just a thin wrapper around CALayer and they don't introduce a significant performance penalty most of the time. You can override the type of layer used by a view using the +layerClass method if you want to have a view backed by a CAShapeLayer or CATiledLayer, etc.
Generally, on iOS, pixel effects and Quartz / Core Graphics drawing are not hardware accelerated, and most other things are.
The following things are not hardware accelerated, which means that they need to be done in software (offscreen):
Anything done in a drawRect. If your view has a drawRect, even an empty one, the drawing is not done in hardware, and there is a performance penalty.
Any layer with the shouldRasterize property set to YES.
Any layer with a mask or drop shadow.
Text (any kind, including UILabels, CATextLayers, Core Text, etc).
Any drawing you do yourself (either onscreen or offscreen) using a CGContext.
Most other things are hardware accelerated, so they are much faster. However, this may not mean what you think it does.
Any of the above types of drawing are slow compared to hardware accelerated drawing, however they don't necessarily slow down your app because they don't need to happen every frame. For example, drawing a drop shadow on a view is slow the first time, but after it is drawn it is cached, and is only redrawn if the view changes size or shape.
The same goes for rasterised views or views with a custom drawRect: the view typically isn't redrawn every frame, it is drawn once and then cached, so the performance after the view is first set up is no worse, unless the bounds change or you call setNeedsDisplay on it.
For good performance, the trick is to avoid using software drawing for views that change every frame. For example, if you need an animated vector shape you'll get better performance using CAShapeLayer or OpenGL than drawRect and Core Graphics. But if you draw a shape once and then don't need to change it, it won't make much difference.
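As a hedged illustration of that point, here is a minimal CAShapeLayer sketch; `view` is a placeholder for whatever view hosts the shape. Core Animation interpolates the stroke itself, so your drawRect: is never involved on a per-frame basis:

#import <QuartzCore/QuartzCore.h>

CAShapeLayer *shapeLayer = [CAShapeLayer layer];
shapeLayer.frame = CGRectMake(0, 0, 100, 100);
shapeLayer.path = [UIBezierPath bezierPathWithOvalInRect:shapeLayer.bounds].CGPath;
shapeLayer.fillColor = nil;
shapeLayer.strokeColor = [UIColor redColor].CGColor;
shapeLayer.lineWidth = 3.0;
[view.layer addSublayer:shapeLayer];

// Animate the stroke; no custom drawing code runs during the animation.
CABasicAnimation *draw = [CABasicAnimation animationWithKeyPath:@"strokeEnd"];
draw.fromValue = @0.0;
draw.toValue = @1.0;
draw.duration = 1.0;
[shapeLayer addAnimation:draw forKey:@"draw"];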
Similarly, don't put a drop shadow on an animated view because it will slow down your frame rate. But a shadow on a view that doesn't change from frame to frame won't have much negative impact.
Another thing to watch out for is slowing down the view setup time. For example, suppose you have a page of text with drop shadows on all the text; this will take a very long time to draw initially since both the text and shadows all need to be rendered in software, but once drawn it will be fast. You will therefore want to set up this view in advance when your application loads, and keep a copy of it in memory so that the user doesn't have to wait ages for the view to display when it first appears on screen.
This is probably the reason for the apparent contradiction in the WWDC videos. For large, complex views that don't change every frame, drawing them once in software (after which they are cached and don't need to be redrawn) will yield better performance than having the hardware re-composite them every frame, even though it will be slower to draw the first time.
But for views that must be redrawn constantly, like table cells (the cells are recycled so they must be redrawn each time one cell scrolls offscreen and is re-used as it scrolls back onto the other side as a different row), software drawing may slow things down a lot.
Offscreen-rendering is one of the worst-defined topics in iOS rendering today. When Apple's UIKit engineers refer to offscreen-rendering, it has a very specific meaning, and a ton of third-party iOS dev blogs are getting it wrong.
When you override "drawRect:", you're drawing via the CPU and spitting out a bitmap. The bitmap is packaged up and sent to a separate process that lives in iOS, the render server. Ideally, the render server just displays the data on screen.
If you fiddle with properties on CALayer, like turning on drop shadows, the GPU will perform additional drawing. This additional work is what UIKit engineers mean when they say "off-screen rendering." This is always performed with hardware.
The issue with off-screen drawing isn't necessarily the drawing. The off-screen pass requires a context switch, as the GPU switches its drawing destination. During this switch, the GPU is idle.
While I don't know a full list of properties that trigger an off-screen pass, you can diagnose this with the Core Animation Instrument's "Color Offscreen-rendered layer" toggle. I assume any property other than alpha is performed via an offscreen pass.
With early iOS hardware, it was reasonable to say "do everything in drawRect." Nowadays GPUs are better, and UIKit has features like shouldRasterize. Today, it's a balancing act between the time spent in drawRect, the number of off-screen passes, and the amount of blending. For the full details, watch the 2014 WWDC session 419, "Advanced Graphics and Animation for iOS Apps."
That all said, it's good to understand what's going on behind-the-scenes, and keep it in the back of your head so you don't do anything insane, but you should start from the simplest solution. Then test it on the slowest hardware you support. If you aren't hitting 60FPS, use Instruments to measure things and figure it out. There are a few possible bottlenecks, and if you aren't using data to diagnose things, you're just guessing.
Offscreen rendering / Rendering on the CPU
The biggest bottlenecks to graphics performance are offscreen rendering and blending – they can happen for every frame of the animation and can cause choppy scrolling.
Offscreen rendering (software rendering) happens when it is necessary to do the drawing in software (offscreen) before it can be handed over to the GPU. Hardware does not handle text rendering and advanced compositions with masks and shadows.
The following will trigger offscreen rendering:
Any layer with a mask (layer.mask)
Any layer with layer.masksToBounds / view.clipsToBounds being true
Any layer with layer.allowsGroupOpacity set to YES and layer.opacity less than 1.0 (see: When does a view (or layer) require offscreen rendering?)
Any layer with a drop shadow (layer.shadow*).
Tips on how to fix: https://markpospesel.wordpress.com/tag/performance/
Any layer with layer.shouldRasterize being true
Any layer with layer.cornerRadius, layer.edgeAntialiasingMask, layer.allowsEdgeAntialiasing
Any layer with layer.borderWidth and layer.borderColor? (missing reference/proof)
Text (any kind, including UILabel, CATextLayer, Core Text, etc).
Most of the drawings you do with CGContext in drawRect:. Even an empty implementation will be rendered offscreen.
This post covers blending and other things affecting performance: What triggers offscreen rendering, blending and layoutSubviews in iOS?
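One widely used mitigation for the drop-shadow case, offered here as a hedged aside rather than something from the list above: giving the layer an explicit shadowPath spares the render server from deriving the shadow's shape from the layer's contents.

view.layer.shadowColor = [UIColor blackColor].CGColor;
view.layer.shadowOpacity = 0.4;
view.layer.shadowOffset = CGSizeMake(0, 2);
view.layer.shadowRadius = 3.0;
// Supplying the path up front means the shadow shape no longer has to be computed from the alpha channel.
// Remember to update shadowPath whenever the view's bounds change.
view.layer.shadowPath = [UIBezierPath bezierPathWithRoundedRect:view.bounds cornerRadius:6.0].CGPath;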

iOS: Smooth button Glow effect by blending between images

I am creating a custom button that needs to be able to glow to a varying degree.
How would I use these pictures to make a button that "glows" the diamond when it is pressed, and have this glow gradually fade back to its inert state?
I want to churn out several different colours of diamond as well... I am hoping to generate all different coloured diamonds from the same stock images presented here.
I would like to get my head around the basic methods available, in enough detail that I can see each one through and make a decision which path to take...
My tangled efforts so far... (I will delete all of this, or move it into possibly several answers, as a solution unfolds...)
I can see 3 potential solution paths:
GL
it looks as though GL has everything it takes to get complete fine-grained control over the process, although the functions exposed by Core Graphics come tantalisingly close, and using those would save several hundred lines of code spread over a bunch of source files, which seems a bit ridiculous for such a basic task.
Core Graphics, and Core Animation to accomplish the blending
documentation goes on to say
Anything underneath the unpainted samples, such as the current fill color or other drawing, shows through.
so I can chroma-key mask the left image, setting {0,0,0}, i.e. black, as the key.
this at least secures a transparent background; now I have to work on making it yellow instead of grey.
so maybe I could have started instead with setting a yellow back colour for my image context, then use some CGContextSetBlendMode(...) to imprint the diamond on the yellow, THEN use chroma-key masking to get a transparent background
ok, this covers at least getting the basic unlit image on-screen
now I could overlay the sparkly image, using some blend mode, maybe I could keep it in its current greyscale state, and that would just boost the colours of the original
only problem with this is that it is a lot of heavy real-time blending
so maybe I could pre-calculate every image in the animation... this is looking increasingly mucky...
Cocos2D
if this allows me to set the blend mode to additive blending then I could just composite the glowing image over the original image with an appropriate Alpha setting.
After digging through a lot of documentation, the optimal solution seems to be to use core graphics functions to get the source images into a single 2-component GL texture, and then use GL to blend between them.
I will need to pass a uniform value glow_factor into the shader
The obvious solution might seem to simply use
(r, g, b) = (in_r, in_g, in_b) * ((1 - glow_factor) * inertPixel + glow_factor * shinyPixel)
(where inertPixel is the appropriate pixel of the inert diamond etc)...
it looks like I would also do well to manufacture my own sparkles and add them over the top; a gem should sparkle white irrespective of its characteristic colour.
After having looked at this problem a little more, I can see several solutions
Solution A -- store the transition from glow=0 to glow=1 as 60 frames in memory, then load the appropriate frame into a GL texture every time it is required.
this has an obvious benefit that a graphic designer could construct the entire sequence and I could load it in as a bunch of PNG files.
another advantage is that these frames wouldn't need to be played in sequence... the appropriate frame can be chosen on-the-fly
however, it has the potential drawback of sending a lot of data from RAM to VRAM
this can be optimised by using glTexSubImage2D; several frames can be sent simultaneously and then unpacked from within GL... in fact maybe the entire sequence. If this is so, then it would make sense to use PVRTC texture compression.
(see: iOS: playing a frame-by-frame greyscale animation in a custom colour)
Solution B -- load glow=0 and glow=1 images as GL textures, and manually write shader code that takes in the glow factor as a uniform and performs the blend
this has an advantage that it is close to the wire and can be tweaked in all sorts of ways. Also it is going to be very efficient. The disadvantage is that it is a big extra slice of code to maintain.
Solution C -- set the GL blend function (glBlendFunc) to perform additive blending.
then draw the glow=0 image, setting e.g. alpha=0.2 on each vertex.
then draw the glow=1 image, setting e.g. alpha=0.8 on each vertex.
this has an advantage that it can be achieved with a more generic code structure -- ie a very general ' draw textured quad / sprite ' class.
disadvantage is that without some sort of wrapper it is a bit messy... in my game I have a couple of dozen diamonds -- at any one time maybe 2 or 3 are likely to be glowing. So on the first pass I would render EVERYTHING (just setting the alpha appropriately for anything that is glowing), and then on the second pass I could draw the glowing sprite again with the appropriate alpha for everything that IS glowing.
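For contrast, if GL turns out to be overkill, a hedged UIKit-only version of the same cross-fade idea is to stack the two images and animate the top one's alpha. This is not the route the answer above takes, and the view/image names are placeholders; it is exactly the "real-time blending" the question worries about, just handed to Core Animation:

// inertView shows the glow=0 image, glowView the glow=1 image, stacked in the same frame.
UIImageView *inertView = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"diamond-inert"]];
UIImageView *glowView  = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"diamond-glow"]];
glowView.frame = inertView.frame;
glowView.alpha = 0.0;
[button addSubview:inertView];
[button addSubview:glowView];

// On touch: snap to fully glowing, then fade back to the inert state.
glowView.alpha = 1.0;
[UIView animateWithDuration:0.6
                      delay:0.0
                    options:UIViewAnimationOptionCurveEaseOut
                 animations:^{ glowView.alpha = 0.0; }
                 completion:nil];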
it is worth noting that if I pursue solution A, this would involve creating some sort of real-time movie player object, which could be a very useful reusable code component.

Resources