I have a UIView subclass, let's say MyView, which overrides the draw(_:) method to display an image:
override func draw(_ rect: CGRect) {
    super.draw(rect)
    image?.draw(in: rect)
    /* some code here */
}
This view gets scaled in both directions at some point. After enlarging, the displayed image loses its quality. Say I create MyView at 20x20: the bounds property stays the same after scaling, but the transformed image I'm trying to display is low quality, because I'm still painting it at the same 20x20 size. Is there any way to draw the image in accordance with my view's scaling?
First, strongly consider using a UIImageView as a subview rather than trying to implement even modestly complicated image drawing yourself. UIImageView is optimized for many of the problems you can face. It's not perfect, and sometimes it's possible to do better by hand if you know what you're doing, but it does its job well.
To your specific problem: your contentMode is almost certainly scaleToFill (that's the default). The simplest solution is to change it to redraw, which forces a full redraw when the bounds change. That's not highly efficient, but may be OK for your problem (it often is fine). Making it both high quality and highly efficient is more complicated, and often requires a lot of special knowledge of how your app behaves. For the easy cases, making this high quality and efficient is what UIImageView does for you.
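As a hedged sketch of that simple fix (MyView and its image property are from the question; everything else is assumed):

```swift
import UIKit

// A sketch of the view from the question, assuming an `image` property.
// contentMode = .redraw makes UIKit call draw(_:) again whenever the
// bounds change, instead of stretching the stale 20x20 bitmap.
class MyView: UIView {
    var image: UIImage?

    override init(frame: CGRect) {
        super.init(frame: frame)
        contentMode = .redraw   // force a fresh draw(_:) on every resize
    }

    required init?(coder: NSCoder) {
        super.init(coder: coder)
        contentMode = .redraw
    }

    override func draw(_ rect: CGRect) {
        super.draw(rect)
        image?.draw(in: rect)
    }
}
```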
If you're reliably scaling from 20x20 to some other specific size, and you don't have too many views, another trick is to draw at your full size and then scale down to 20x20. This is very inefficient if you have a lot of views and the size of the final images is large, but can be very fast and high quality, so is something to at least consider. I've done this to deal with small zooms (for example to 110% size). It probably wouldn't be a good way to turn thumbnails into full-screen images, though.
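A sketch of that trick, assuming the final display size is known up front (the function name and sizes are made up for illustration):

```swift
import UIKit

// Render the full-resolution image into the smaller rect once.
// Downscaling resamples with high quality, unlike stretching a tiny
// 20x20 bitmap back up.
func makeDownscaledImage(from image: UIImage, displaySize: CGSize) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: displaySize)
    return renderer.image { _ in
        image.draw(in: CGRect(origin: .zero, size: displaySize))
    }
}
```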
I have to create a UIView with a custom shape, such as a triangle, a half-rect, etc.
I used to crop a special image into that shape and set it as the background of my view.
Although it is a popular solution, I am not sure whether it is the most efficient one in terms of maintainability.
On the other hand, I found a useful way of solving this problem with CAShapeLayer.
Could you please provide pros and cons of both approaches?
CAShapeLayer all the way!
With little effort you can achieve the same result with less memory (RAM) and less maintenance time: if you want to make the triangle thicker, for example, you'd need a whole new image, whereas in code it's a small change and you have it! Moreover, your app size will be smaller, and you don't have to worry about resolution the way you do with images.
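For comparison, here's a minimal CAShapeLayer triangle sketch (the fill color is a placeholder, and the helper name is made up):

```swift
import UIKit

// Build a triangular shape with CAShapeLayer instead of a cropped image.
// Changing the shape later is a path edit, not a new asset.
func makeTriangleLayer(in bounds: CGRect) -> CAShapeLayer {
    let path = UIBezierPath()
    path.move(to: CGPoint(x: bounds.midX, y: bounds.minY))      // apex
    path.addLine(to: CGPoint(x: bounds.maxX, y: bounds.maxY))   // bottom right
    path.addLine(to: CGPoint(x: bounds.minX, y: bounds.maxY))   // bottom left
    path.close()

    let layer = CAShapeLayer()
    layer.path = path.cgPath
    layer.fillColor = UIColor.systemBlue.cgColor  // placeholder color
    return layer
}

// Usage: myView.layer.addSublayer(makeTriangleLayer(in: myView.bounds)),
// or set it as a mask: myView.layer.mask = makeTriangleLayer(in: myView.bounds)
```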
Hope this helps!
I'm working on a custom view, that has some specific Core Graphics drawings. I want to handle the view's autoresizing as efficiently as possible.
If I have a vertical line drawn in UIView, and the view's width stretches, the line's width will stretch with it. I want to keep the original width, therefore I redraw each time in -layoutSubviews:
- (void)drawRect:(CGRect)rect
{
    [super drawRect:rect];
    // ONLY drawing code ...
}

- (void)layoutSubviews
{
    [super layoutSubviews];
    [self setNeedsDisplay];
}
This works fine, however I don't think this is an efficient approach - unless CGContext drawing is blazing fast.
So is it really fast? Or is there better way to handle view's autoresizing? (CALayer does not support autoresizing on iOS).
UPDATE:
This is going to be a reusable view, and its task is to draw a visual representation of data supplied by the dataSource. So in practice there could really be a lot of drawing. If it is impossible to optimize this any further, then there's nothing I can do... but I seriously doubt I'm taking the right approach.
It really depends on what you mean by "fast" but in your case the answer is probably "No, CoreGraphics drawing isn't going to give you fantastic performance."
Whenever you draw in drawRect (even if you use CoreGraphics to do it) you're essentially drawing into a bitmap, which backs your view. The bitmap is eventually sent over to the lower level graphics system, but it's a fundamentally slower process than (say) drawing into an OpenGL context.
When you have a view drawing with drawRect it's usually a good idea to imagine that every call to drawRect "creates" a bitmap, so you should minimize the number of times you need to call drawRect.
If all you want is a single vertical line, I would suggest making a simple view with a width of one point, configured to layout in the center of your root view and to stretch vertically. You can color that view by giving it a background color, and it does not need to implement drawRect.
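That suggestion, sketched with autoresizing masks (the helper name is made up; constraints would work equally well):

```swift
import UIKit

// A 1-point vertical hairline that stays 1 point wide and centered while
// the parent resizes -- no drawRect needed, just a background color.
func addVerticalLine(to parent: UIView) {
    let line = UIView(frame: CGRect(x: parent.bounds.midX - 0.5, y: 0,
                                    width: 1, height: parent.bounds.height))
    line.backgroundColor = .black
    // Flexible margins keep it horizontally centered; flexible height
    // makes it stretch vertically with the parent.
    line.autoresizingMask = [.flexibleLeftMargin, .flexibleRightMargin,
                             .flexibleHeight]
    parent.addSubview(line)
}
```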
Using views is usually not recommended, and drawing directly is actually preferred, especially when the scene is complex.
If you see your drawing code is taking a considerable toll, the next step is to minimize drawing: either invalidate only portions of the view rather than the whole thing (setNeedsDisplayInRect:), or use tiling to draw only the portions needed.
For instance, when a view is resized and you only need to draw in the areas where the view has changed, you can compare the current and previous layout and invalidate only the regions that differ. Edit: It seems iOS does not allow partial view drawing, so you may need to move your drawing to a CALayer and use that as the view's backing layer.
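The bookkeeping for that is plain geometry. A hedged sketch (the helper name is made up), assuming the view is anchored at its top-left and only grows:

```swift
import Foundation

// Given the previous and current bounds of a view that grew, return the
// newly exposed strips -- the only regions that need redrawing.
func newlyExposedRects(old: CGRect, new: CGRect) -> [CGRect] {
    var dirty: [CGRect] = []
    if new.width > old.width {
        // Vertical strip revealed along the right edge.
        dirty.append(CGRect(x: old.maxX, y: new.minY,
                            width: new.width - old.width, height: new.height))
    }
    if new.height > old.height {
        // Horizontal strip revealed along the bottom (using the old width,
        // so the two strips don't overlap in the corner).
        dirty.append(CGRect(x: new.minX, y: old.maxY,
                            width: old.width, height: new.height - old.height))
    }
    return dirty
}
```

On resize you would then call setNeedsDisplay(in:) (Objective-C setNeedsDisplayInRect:) for each returned rect instead of invalidating everything.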
CATiledLayer is another possible solution: you can cache and preload tiles, and draw the required tiles asynchronously and concurrently.
But before you take drastic measures, test your code in difficult conditions and see if it is performant enough. Invalidating only updated regions can help, but it is not always straightforward to limit drawing to a provided rectangle. Tiling adds even more difficulty, as the tiling mechanism requires learning, and elements are drawn on background threads, so concurrency issues also come into play.
Here is an interesting video on the subject of optimizing 2D drawing from Apple WWDC 2012:
https://developer.apple.com/videos/wwdc/2012/?include=506#506
Is there any way to draw rounded UIImages without doing any of the following?
Blending (Red in Core Animation Instruments)
Offscreen Rendering (Yellow in Core Animation Instruments)
drawRect
I've tried
drawRect with clipping path. This is just too slow. (see: http://developer.apple.com/library/ios/#qa/qa1708/_index.html)
CALayer contents with a maskLayer. This introduces offscreen rendering.
UIImageView and setting cornerRadius and masksToBounds. This has the same effect as #2.
My last resort would be to modify the images directly. Any suggestions?
I use a variation of blog.sallarp.com's rounded corners algorithm. I modified mine to only round certain corners, and it's a category for seamless integration, but that link gives you the basic idea. I assume this might qualify as "offscreen rendering", one of your rejected techniques, but I'm not entirely sure. I've never seen any observable performance issue, but maybe you have some extraordinary situation.
Could you overlay an imageview of just corners on top of your imageview to give the appearance of rounded corners?
Or, if you're concerned about the performance hit of rounding corners, why not have your app save a copy of the image with its corners already rounded, so that you only take the performance hit the first time? It's equivalent to your notion of "modify the image directly", but done just-in-time.
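A hedged sketch of that round-once-and-reuse idea (the function name and radius are placeholders; cache the result wherever suits your app):

```swift
import UIKit

// Round an image's corners once, up front, and reuse the result -- the
// per-frame cost then disappears entirely.
func roundedCopy(of image: UIImage, radius: CGFloat) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: image.size)
    return renderer.image { _ in
        let rect = CGRect(origin: .zero, size: image.size)
        UIBezierPath(roundedRect: rect, cornerRadius: radius).addClip()
        image.draw(in: rect)
    }
}
```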
Update:
Bottom line, I don't know of any other solutions (short of my kludgy idea, point #2 below). But I did some experimentation (because I'm dealing with a similar problem) and came to the following conclusions:
If you have a rounding performance problem on a 3GS or later device, the problem may not be in the rounding of corners. I found that while it had a material impact on the UI on the 3G, on the 3GS it was barely noticeable, and on later devices it was imperceptible. If you're seeing a performance problem from rounded corners, make sure it's the rounding and not something else. I found that if I did just-in-time rounding of corners on previously cached images, the rounding had a negligible impact. (And when I did the rounding prior to caching, the UI was smooth as silk.)
An alternative to masking would be to create an image which is an inversion of the rounded corners (i.e. it matches the background in the corners, transparent where you want the image to show through), and then put this image in front of your tableviewcell's image. I found that, for this to work, I had to use a custom cell (e.g. create my own main image view control, my own labels, plus this corner mask image view control), but it definitely was better than just-in-time invocation of the corner rounding algorithm. This might not work well if you're trying to round the corners in a grouped table, but it's a cute little hack if you simply want the appearance of rounded corners without actually rounding them.
If you're rounding corners because you're using a grouped tableview and you don't like the image spilling over the rounded corner, you can simply round the upper left corner of the first row's image and the lower left corner of the last image. This will reduce the invocations of the corner rounding logic. It also looks nice.
I know you didn't ask about this, but other possible performance issues regarding images in tableviews include:
If you're using something like imageWithContentsOfFile (which doesn't cache) in cellForRowAtIndexPath, you'll definitely see a performance problem. If you can't take advantage of the imageNamed caching (and some guys complain about it, anyway), you could cache your images yourself, either just-in-time or by preloading them in advance on a secondary thread. If your tableview references previously loaded images, you'll see a huge performance improvement, at which point rounding of corners may or may not still be an issue.
Other sources of performance problems include using images that aren't optimally sized for your tableview's imageview. A large image has a dramatic impact on performance.
Bottom line,
Before you kill yourself here, make sure rounding is the only performance problem. This is especially true if you're seeing the problem on contemporary hardware, because in my experience the impact was negligible. Plus, it would be a shame to lose too much sleep over this when a broader solution would be better. Using images that are not size-optimized thumbnails, failing to cache images, etc., will all have a dramatic impact on performance.
If at all possible, round the corners of the images in advance and the UI will be best, and the view controller code will be nice and simple.
If you can't do it in advance (because you're downloading the images realtime, for example) you can try the corner masking image trick, but this will only work in certain situations (though a non-grouped tableview is one of them).
If you have to round corners as the UI is going, do the corner rounding in a separate queue and don't block the UI.
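A hedged sketch of that last point (the queue label is a placeholder, and the inline rounding stands in for whatever routine you use):

```swift
import UIKit

// Round corners off the main thread, then hand the result back to the UI.
let roundingQueue = DispatchQueue(label: "com.example.rounding",
                                  qos: .userInitiated)

func setRoundedImage(_ image: UIImage, on imageView: UIImageView,
                     radius: CGFloat) {
    roundingQueue.async {
        // UIGraphicsImageRenderer is safe to use off the main thread.
        let renderer = UIGraphicsImageRenderer(size: image.size)
        let rounded = renderer.image { _ in
            let rect = CGRect(origin: .zero, size: image.size)
            UIBezierPath(roundedRect: rect, cornerRadius: radius).addClip()
            image.draw(in: rect)
        }
        DispatchQueue.main.async {
            imageView.image = rounded   // touch UIKit only on the main queue
        }
    }
}
```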
It's what I do and I'm generally happy with its performance:
CGFloat radius = 6; // your corner radius
UIImageView *iv = [[UIImageView alloc] initWithImage:[UIImage imageNamed:@"769-male"]];
UIGraphicsBeginImageContextWithOptions(iv.bounds.size, NO, [UIScreen mainScreen].scale);
[[UIBezierPath bezierPathWithRoundedRect:iv.bounds cornerRadius:radius] addClip];
[iv.image drawInRect:iv.bounds];
iv.image = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I'm attempting to optimize my app. It's quite visually rich, so has quite a lot of layered UIViews with large images and blending etc.
I've been experimenting with the shouldRasterize property on CALayers. In one case in particular, I have a UIView that consists of lots of sub views including a table. As part of a transition where the entire screen scrolls, this UIView also scales and rotates (using transforms).
The content of the UIView remains static, so I thought it would make sense to set view.layer.shouldRasterize = YES. However, I didn't see an increase in performance. Could it be that it's re-rasterizing every frame at the new scale and rotation? I was hoping that it would rasterize at the beginning when it has an identity transform matrix, and then cache that as it scales and rotates during the transition?
If not, is there a way I could force it to happen? Short of adding a redundant extra super-view/layer that does nothing but scale and rotate its rasterized contents...
You can answer your own question by profiling your application with the Core Animation instrument. Note that this instrument is only available when profiling on a real device.
You can enable "Color Hits Green and Misses Red". If your layer stays red, it means it is indeed being rasterized every frame.
Blending and offscreen-rendering are both expensive in Core Animation.
One can see them in Core Animation instrument in Instruments, with Debug Options:
Here is my case:
Display 50x50 PNG images in UIImageViews. I want to round the images with a 6-point corner radius. The first method is to set the UIImageView.layer's cornerRadius and masksToBounds, which causes offscreen rendering. The second method is to make PNG image copies with transparent corners, which causes blending (because of the alpha channel).
I've tried both, but I can't see significant performance difference. However, I still want to know which is worse in theory and best practices if any.
Thanks a lot!
Well, short answer, the blending has to occur either way to correctly display the transparent corner pixels. However, this should typically only be an issue if you want the resulting view to also animate in some way (and remember, scrolling is the most common type of animation). Also, I'm able to recreate situations where "cornerRadius" will cause rendering errors on older devices (iPhone 3G in my case) when my views become complex. For situations where you do need performant animations, here are the recommendations I follow.
First, if you only need the resources with a single curve for the rounded corners (different scales are fine, as long as the desired curvature is the same), save them that way to avoid the extra calculation of "cornerRadius" at runtime.
Second, don't use transparency anywhere you don't need it (e.g. when the background is actually a solid color), and always specify the correct value for the "opaque" property to help the system more efficiently calculate the drawing.
Third, find ways to minimize the size of transparent views. For example, for a large border view with transparent elements (e.g. rounded corners), consider splitting the view into 3 (top, middle, bottom) or 7 (4 corners, top middle, middle, bottom middle) parts, keeping the transparent portions as small as possible and marking the rectangular portions as opaque, with solid backgrounds.
Fourth, in situations where you're drawing lots of text in scrollViews (e.g. highly customized UITableViewCell), consider using the "drawRect:" method to render these portions more efficiently. Continue using subviews for image elements, in order to split the render time between the overall view between pre-drawing (subviews) and "just-in-time" drawing (drawRect:). Obviously, experimentation (frames per second while scrolling) could show that violating this "rule-of-thumb" may be optimal for your particular views.
Finally, making sure you have plenty of time to experiment using the profiling tools (especially CoreAnimation) is key. I find that it's easiest to see improvements using the slowest device you want to target, and the results look great on newer devices.
After watching WWDC videos and doing some experiments with Xcode and Instruments, I can say that blending is better than offscreen rendering. Blending means that the system needs some additional time to calculate the color of pixels on transparent layers. The more transparent layers you have (and the bigger these layers are), the more time blending takes.
Offscreen rendering means that the system will make more than one rendering pass. In the first pass, the system renders without visualization, just to calculate the bounds and shape of the area that should be rendered. In the next passes, the system does regular rendering (depending on the calculated shape), including blending if required.
Also, for offscreen rendering the system creates a separate graphics context and destroys it after rendering.
So you should avoid offscreen rendering; it's better to replace it with blending.