Is there any way to draw rounded UIImages without doing any of the following?
Blending (Red in Core Animation Instruments)
Offscreen Rendering (Yellow in Core Animation Instruments)
drawRect
I've tried:
1. drawRect with a clipping path. This is just too slow. (See: http://developer.apple.com/library/ios/#qa/qa1708/_index.html)
2. CALayer contents with a mask layer. This introduces offscreen rendering.
3. UIImageView with cornerRadius and masksToBounds set. This has the same effect as #2.
My last resort would be to modify the images directly. Any suggestions?
I use a variation of blog.sallarp.com's rounded corners algorithm. I modified mine to only round certain corners, and it's a category for seamless integration, but that link gives you the basic idea. I assume this might qualify as "offscreen rendering", one of your rejected techniques, but I'm not entirely sure. I've never seen any observable performance issue, but maybe you have some extraordinary situation.
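For a sense of what such a category looks like, here is a hedged sketch (the method name is illustrative, not the blog's exact code). It uses bezierPathWithRoundedRect:byRoundingCorners:cornerRadii: so only the corners you ask for get rounded:

```objc
// UIImage+RoundedCorners -- illustrative category, not the blog's exact code.
@implementation UIImage (RoundedCorners)

// Returns a copy of the receiver with only the given corners rounded.
- (UIImage *)imageByRoundingCorners:(UIRectCorner)corners radius:(CGFloat)radius {
    CGRect rect = (CGRect){CGPointZero, self.size};
    UIGraphicsBeginImageContextWithOptions(self.size, NO, self.scale);
    // Clip to a path that rounds only the requested corners.
    [[UIBezierPath bezierPathWithRoundedRect:rect
                           byRoundingCorners:corners
                                 cornerRadii:CGSizeMake(radius, radius)] addClip];
    [self drawInRect:rect];
    UIImage *rounded = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return rounded;
}

@end
```

Usage would be along the lines of `[image imageByRoundingCorners:(UIRectCornerTopLeft | UIRectCornerTopRight) radius:6.0]` for just the top corners.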
Could you overlay an imageview of just corners on top of your imageview to give the appearance of rounded corners?
Or, if you're concerned about the performance hit from rounding corners, why don't you have your app save a copy of the image with its corners already rounded? That way you only pay the performance hit the first time. It's equivalent to your notion of "modify the image directly", but done just-in-time.
Update:
Bottom line, I don't know of any other solutions (short of my kludgy idea, point #2 below). But I did some experimentation (because I'm dealing with a similar problem) and came to the following conclusions:
If you have a rounding performance problem on a 3GS or later device, the problem may not be in the rounding of corners. I found that while it had a material impact on the UI on the 3G, on the 3GS it was barely noticeable, and on later devices it was imperceptible. If you're seeing a performance problem from rounded corners, make sure it's the rounding and not something else. I found that if I did just-in-time rounding of corners on previously cached images, the rounding had a negligible impact. (And when I did the rounding prior to caching, the UI was smooth as silk.)
An alternative to masking would be to create an image which is an inversion of the rounded corners (i.e. it matches the background in the corners, transparent where you want the image to show through), and then put this image in front of your tableviewcell's image. I found that, for this to work, I had to use a custom cell (e.g. create my own main image view control, my own labels, plus this corner mask image view control), but it definitely was better than just-in-time invocation of the corner rounding algorithm. This might not work well if you're trying to round the corners in a grouped table, but it's a cute little hack if you simply want the appearance of rounded corners without actually rounding them.
If you're rounding corners because you're using a grouped tableview and you don't like the image spilling over the rounded corner, you can simply round the upper left corner of the first row's image and the lower left corner of the last image. This will reduce the invocations of the corner rounding logic. It also looks nice.
I know you didn't ask about this, but other possible performance issues regarding images in tableviews include:
If you're using something like imageWithContentsOfFile (which doesn't cache) in cellForRowAtIndexPath, you'll definitely see performance problems. If you can't take advantage of the imageNamed caching (and some people complain about it anyway), you could cache your images yourself, either just-in-time or by preloading them in advance on a secondary thread. If your tableview references previously loaded images, you'll see a huge performance improvement, at which point rounding of corners may or may not still be an issue.
Other sources of performance problems include using an image that isn't optimally sized for your tableview's imageview. A large image had a dramatic impact on performance.
Bottom line,
Before you kill yourself here, make sure rounding is the only performance problem. This is especially true if you're seeing the problem on contemporary hardware, because in my experience the impact was negligible. Plus, it would be a shame to lose too much sleep over this when a broader solution would be better. Using images that aren't size-optimized thumbnails, failing to cache images, etc., will all have a dramatic impact on performance.
If at all possible, round the corners of the images in advance and the UI will be best, and the view controller code will be nice and simple.
If you can't do it in advance (because you're downloading the images realtime, for example) you can try the corner masking image trick, but this will only work in certain situations (though a non-grouped tableview is one of them).
If you have to round corners as the UI is going, do the corner rounding in a separate queue and don't block the UI.
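That last point might look something like the following sketch, where roundedCopyWithRadius: stands in for whatever rounding routine you use (an assumed helper, not a UIKit API):

```objc
// In tableView:cellForRowAtIndexPath: -- do the rounding off the main
// thread, then update the cell on the main queue once the work is done.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    UIImage *rounded = [image roundedCopyWithRadius:6.0]; // assumed helper
    dispatch_async(dispatch_get_main_queue(), ^{
        // Re-fetch the cell in case it was reused for another row meanwhile.
        UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
        cell.imageView.image = rounded;
    });
});
```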
It's what I do and I'm generally happy with its performance:
UIImageView *iv = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"769-male"]];
CGFloat radius = 6.0; // the corner radius you want
UIGraphicsBeginImageContextWithOptions(iv.bounds.size, NO, [UIScreen mainScreen].scale);
[[UIBezierPath bezierPathWithRoundedRect:iv.bounds cornerRadius:radius] addClip];
[iv.image drawInRect:iv.bounds];
iv.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Related
I have a UIView subclass, let's say MyView, which overrides draw(_:) to display an image:
override func draw(_ rect: CGRect) {
    super.draw(rect)
    image?.draw(in: rect)
    /* some code here */
}
This view is scaled in both directions at some point. After enlarging, the displayed image loses its quality. That is, if I create MyView at size 20x20, the bounds property remains the same after scaling, but the transformed image I'm trying to display is low quality, because I'm still painting it at the same 20x20 size. Is there any way to draw the image with respect to my view's scaling?
First, strongly consider using a UIImageView as a subview rather than trying to implement even modestly complicated image drawing yourself. UIImageView is optimized for many of the problems you can face. It's not perfect, and sometimes it's possible to do better by hand if you know what you're doing, but it does its job well.
To your specific problem, your contentMode is almost certainly scaleToFill (that's the default). The simplest solution is to change it to redraw, which forces a full redraw when the bounds change. That's not highly efficient, but it may be fine for your problem (it often is). Making it both high quality and highly efficient is a bit more complicated, and often requires a lot of special knowledge about how your app behaves. For the easy cases, making this high quality and efficient is exactly what UIImageView does for you.
If you're reliably scaling from 20x20 to some other specific size, and you don't have too many views, another trick is to draw at your full size and then scale down to 20x20. This is very inefficient if you have a lot of views and the size of the final images is large, but can be very fast and high quality, so is something to at least consider. I've done this to deal with small zooms (for example to 110% size). It probably wouldn't be a good way to turn thumbnails into full-screen images, though.
In 2011's WWDC video session 121, to improve performance of the UI, the presenter chose to draw the rounded corners using UIBezierPath in drawRect:, rather than setting corner radius directly on a layer.
Why is drawing using UIBezierPath necessarily faster? drawRect: happens in software which can be slow too.
Short answer: probably just stick with CALayer’s cornerRadius until you see a performance problem.
Long answer:
We first need to distinguish between “drawing” and “compositing”.
Drawing on iOS is the simple act of filling a texture with pixels (a CPU-limited task). Compositing is the act of flattening all of those textures into a single frame to send to the screen (a GPU-limited task). Generally speaking, when scrolling or animating you're mostly taxing the GPU, which is good, because things like shifting all the pixels down by one are something the GPU eats for breakfast.
-drawRect: is pure drawing, and uses the CPU to fill a texture. CALayer’s cornerRadius is done at the compositing step, and stresses the GPU.
Using -drawRect: has a very high initial cost (it can easily take longer than one frame) and non-trivial memory usage, but it scrolls very smoothly after that (the view is just a texture now, like any other texture). CALayer's cornerRadius makes it ridiculously fast to create a bunch of views with rounded corners, but once you get more than a dozen of them you can say goodbye to scrolling speed (because the GPU not only has to do its normal scrolling duties but also needs to keep re-applying the corner radius to your views).
But don't take my word for it, have some math. I adapted Florian Kugler's benchmark and ran it on an iPhone 4S running iOS 6.1.3. I measured how many views can be initially created in 1/60th of a second, then measured how many views can be animated before the frame rate drops below 60fps. In other words: upfront cost vs. framerate cost.
                                       | -drawRect: | CALayer's cornerRadius
max number of views rendered in 16.6ms | 5 views    | 110 views
max number of views animating at 60fps | ~400 views | 12 views
(note that the app is killed for using too much memory at 500 -drawRect: views)
At the end of the day, in my own projects I tend to stick to CALayer’s cornerRadius as much as possible. I’ve rarely needed more than a couple of views with round corners and -drawRect: just has too much of an initial performance hit. And subclassing a view just to round the corners is just, ugh.
But no matter what method you end up choosing, make sure you measure and pay attention to how smooth and responsive your app is, and respond accordingly.
I have a UITableViewCell which contains 5 UILabels, a UIButton, and a UIImageView that fills the cell as a background. Scrolling performance seems a bit slow, so I was thinking of using CoreGraphics to improve it. Is it true that drawing with CoreGraphics instead of using UILabel subviews will make things much faster? If yes, why?
I have the following code to draw shadow on the cells:
[self.layer setBorderColor:[UIColor blackColor].CGColor];
[self.layer setShadowColor:[UIColor blackColor].CGColor];
[self.layer setShadowRadius:10.0];
[self.layer setCornerRadius:5.0];
// The shadow path is in the layer's own coordinate space, so use bounds, not frame.
[self.layer setShadowPath:[[UIBezierPath bezierPathWithRect:self.bounds] CGPath]];
In general, (as Gavin indicated) I would say that you have to first confirm that the subviews are indeed causing a jitter in your scrolling.
When I'm testing UITableViewCell scrolling performance, I often use the Time Profiler in Instruments. Switch to Objective-C Only in the left-hand panel, and look at what is taking the most time on your Main Thread. If you see lots of time spent on rearranging (layout) or drawing of subviews, you may need to use CoreGraphics. If the time instead is spent on allocation/deallocation, then you may want to examine how your subviews are reused if at all. If they are not being reused, then of course this can cause performance problems.
Then of course, you should look at compositing. If your subviews are not opaque (identify this through the CoreAnimation instrument), then the performance may be seriously impacted.
Also of note -- realize that shadows are expensive to draw, and depending on your implementation, they may be redrawing on every frame! Your best option is to make sure that any CALayer shadows are fully rasterized, and have a path defined so live computations from the pixel mask don't have to be made.
If finally, you identify that the layout and redrawing of each subview individually is causing the slowdown, then I have a couple of points/explanations:
Your implementation of the drawing routine for your table view cell will probably be slower than the highly optimized drawing that Apple has written for its views. So you won't win any battles re-implementing drawing of UIImageView itself. Performance gains instead come from two places when drawing with CoreGraphics: a.) Pre-rendering of previously non-opaque views, and reduction of time spent in the layout phase of the view drawing cycle - this reduces the workload on the GPU/CPU when scrolling. b.) Reduction in time switching CG contexts for individual view drawing. Each element now draws into the same graphics context at the same time, reducing switching costs.
Drawing in drawRect using CoreGraphics on the main thread draws using the CPU, and depending on how complex your cells are, this may cause jitters of its own. Instead, consider drawing in a background thread to a separate CGContext, then dispatching a worker to insert the contents of the drawing as a CGImageRef into a CALayer, or as a UIImage in a UIImageView. There is a naive implementation on GitHub: https://github.com/mindsnacks/MSCachedAsyncViewDrawing
If you do decide to go with background CoreGraphics drawing, be warned that at present (December 2012), I believe there is a bug in the NSString drawing categories when drawing on background threads that results in a fallback to webkit which is absolutely not thread safe. This will cause a crash, so for the present time, make sure that the asynchronous drawing is done in a serial GCD/NSOperation queue.
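A minimal version of that background-drawing pattern might look like this sketch, using the era's UIGraphics APIs. The serial queue follows the warning above, and drawCellContentInRect: is a placeholder for your own drawing code:

```objc
// Serial queue, per the NSString-drawing thread-safety warning above.
static dispatch_queue_t drawingQueue;
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    drawingQueue = dispatch_queue_create("com.example.cell-drawing", DISPATCH_QUEUE_SERIAL);
});

CGSize size = cell.bounds.size;
dispatch_async(drawingQueue, ^{
    // Draw into an offscreen context on the background queue.
    UIGraphicsBeginImageContextWithOptions(size, YES /* opaque */, 0.0);
    [self drawCellContentInRect:(CGRect){CGPointZero, size}]; // placeholder
    UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    dispatch_async(dispatch_get_main_queue(), ^{
        // Hand the finished bitmap to the layer on the main thread.
        cell.contentView.layer.contents = (id)rendered.CGImage;
    });
});
```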
On the simulator, Debug→Color Blended Layers. Red is bad, green is good.
More accurately, red means that the GPU needed to do an alpha blend. The main cost is in the memory bandwidth required to draw the pixels twice and possibly re-fetch the extra texture. Completely transparent pixels are really bad.
Three fixes (all of which reduce the amount of red), which you should consider before diving into Core Graphics:
Make views opaque when possible. Labels with the background set to [UIColor clearColor] can often be set to a flat colour instead.
Make views with transparency as small as possible. For labels, this involves using -sizeToFit/-sizeThatFits: and adjusting layout appropriately.
Remove the alpha channel from opaque images (e.g. if your cell background is an image) — some image editors don't do this, and it means the GPU needs to perform an alpha test and might need to render whatever's behind the image.
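For that third fix, if you can't re-export the asset from the image editor, one option is to redraw the image into an opaque context once at load time (a sketch; this assumes the image genuinely has no meaningful transparency):

```objc
// Redraw an image into an opaque context so the GPU can skip alpha blending.
UIGraphicsBeginImageContextWithOptions(image.size, YES /* opaque */, image.scale);
[image drawInRect:(CGRect){CGPointZero, image.size}];
UIImage *opaqueImage = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
```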
Additionally, turn on Color Offscreen-Rendered (possibly after turning off Color Blended Layers so it's easier to see). Offscreen-rendered content appears in yellow, and usually means that you've applied a layer mask. Masks can be very bad for performance; I don't know if CoreAnimation caches the masked result.
Finally, you can make CoreAnimation rasterize the cell by setting cell.layer.shouldRasterize = YES (you might also need cell.layer.rasterizationScale = [UIScreen mainScreen].scale on retina devices; I forget if this is done automatically). The main benefit is that it's easy, and it's also more efficient than rendering the image view yourself in Core Graphics. (The benefit for labels is reduced, since the text still needs to be rendered on the CPU.)
Also note that view animations will be affected. I forget what setting CALayer.shouldRasterize does (it might re-rasterize it every frame of the animation, which is a bit wasteful when it'll only be drawn to screen once), but using Core Graphics will (by default) stretch the rendered content during the animation. See CALayer.contentsGravity.
What evidence do you have to suggest that what you have in the views is causing the performance issues? This is a deep black hole that can suck you in, so be sure you know the problem is where you think it is.
Are you preloading all your data? Have you pre-downloaded the images? What you're describing shouldn't be causing a slow down in UITableViewCell. Apple developers are much smarter than you and I so make sure you've got the data to back up your decision!
I've also seen a lagging UITableViewCell within the simulator with no noticeable difference on real hardware.
It is true that using CoreGraphics can speed up your draw performance but it can also slow it down if you do it wrong! Have a look at the Apple Tutorial on Advanced Table View Cells for how to perform the technique.
I'm attempting to optimize my app. It's quite visually rich, so has quite a lot of layered UIViews with large images and blending etc.
I've been experimenting with the shouldRasterize property on CALayers. In one case in particular, I have a UIView that consists of lots of sub views including a table. As part of a transition where the entire screen scrolls, this UIView also scales and rotates (using transforms).
The content of the UIView remains static, so I thought it would make sense to set view.layer.shouldRasterize = YES. However, I didn't see an increase in performance. Could it be that it's re-rasterizing every frame at the new scale and rotation? I was hoping that it would rasterize at the beginning when it has an identity transform matrix, and then cache that as it scales and rotates during the transition?
If not, is there a way I could force it to happen? Short of adding a redundant extra super-view/layer that does nothing but scale and rotate its rasterized contents...
You can answer your own question by profiling your application using the CoreAnimation instrument. Note that this one is only available in a device.
You can enable "Color hits in Green and Misses Red". If your layer remains red then it means that it is indeed rasterizing it every frame.
Blending and offscreen-rendering are both expensive in Core Animation.
One can see them in Core Animation instrument in Instruments, with Debug Options:
Here is my case:
Display 50x50 PNG images in UIImageViews. I want to round the images with a 6-point corner radius. The first method is to set the UIImageView.layer's cornerRadius and masksToBounds, which causes offscreen rendering. The second method is to make PNG image copies with transparent corners, which causes blending (because of the alpha channel).
I've tried both, but I can't see significant performance difference. However, I still want to know which is worse in theory and best practices if any.
Thanks a lot!
Well, short answer, the blending has to occur either way to correctly display the transparent corner pixels. However, this should typically only be an issue if you want the resulting view to also animate in some way (and remember, scrolling is the most common type of animation). Also, I'm able to recreate situations where "cornerRadius" will cause rendering errors on older devices (iPhone 3G in my case) when my views become complex. For situations where you do need performant animations, here are the recommendations I follow.
First, if you only need the resources with a single curve for the rounded corners (different scales are fine, as long as the desired curvature is the same), save them that way to avoid the extra calculation of "cornerRadius" at runtime.
Second, don't use transparency anywhere you don't need it (e.g. when the background is actually a solid color), and always specify the correct value for the "opaque" property to help the system more efficiently calculate the drawing.
Third, find ways to minimize the size of transparent views. For example, for a large border view with transparent elements (e.g. rounded corners), consider splitting the view into 3 (top, middle, bottom) or 7 (4 corners, top middle, middle, bottom middle) parts, keeping the transparent portions as small as possible and marking the rectangular portions as opaque, with solid backgrounds.
Fourth, in situations where you're drawing lots of text in scrollViews (e.g. highly customized UITableViewCell), consider using the "drawRect:" method to render these portions more efficiently. Continue using subviews for image elements, in order to split the render time between the overall view between pre-drawing (subviews) and "just-in-time" drawing (drawRect:). Obviously, experimentation (frames per second while scrolling) could show that violating this "rule-of-thumb" may be optimal for your particular views.
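As an illustrative sketch of that fourth point (titleText is an assumed property, and the era-appropriate UIStringDrawing API is used):

```objc
// A custom cell content view that draws its text directly in -drawRect:
// instead of hosting UILabel subviews. `titleText` is an assumed property.
- (void)drawRect:(CGRect)rect {
    // A solid background fill lets the view be marked opaque (no blending).
    [[UIColor whiteColor] setFill];
    UIRectFill(rect);

    [[UIColor darkTextColor] set];
    [self.titleText drawInRect:CGRectInset(self.bounds, 10.0, 10.0)
                      withFont:[UIFont boldSystemFontOfSize:16.0]];
}
```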
Finally, making sure you have plenty of time to experiment using the profiling tools (especially CoreAnimation) is key. I find that it's easiest to see improvements using the slowest device you want to target, and the results look great on newer devices.
After watching WWDC videos and doing some experiments with Xcode and Instruments, I can say that blending is better than offscreen rendering. Blending means that the system requires some additional time to calculate the color of pixels on transparent layers. The more transparent layers you have (and the bigger these layers are), the more time blending takes.
Offscreen rendering means that the system will make more than one rendering iteration. In the first iteration the system renders without visualization, just to calculate the bounds and shape of the area which should be rendered. In the next iterations the system does regular rendering (depending on the calculated shape), including blending if required.
Also, for offscreen rendering the system creates a separate graphics context and destroys it after rendering.
So you should avoid offscreen rendering; it's better to replace it with blending.