I have a UIView and draw its content in drawRect:, but I sometimes need to resize the view depending on calculations that are done during the drawing:
When I change the size, the drawing seems to scale instead of using the additional space gained from the resize.
This is how I'm resizing from inside drawRect:
CGRect limits = self.frame;
limits.size.width = 400;
self.frame = limits;
It seems like the context is not aware of the resize and is drawing using the old size. I already tried calling [self setNeedsDisplay] after resizing, but the draw method is not called again.
Calculations should be done ahead of time and not during drawing, for performance reasons alone (though early calculations would also avoid this problem).
For example, if your calculations depend on factors X, Y and Z, arrange to redo those calculations whenever there is a change to X, Y or Z. Cache the results of the calculation if necessary, and resize your view frame at that time. The drawRect: implementation should only draw into the area it is given.
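For instance, here is a minimal sketch of that approach; the setContentValue: setter and the preferredWidthForValue: helper are hypothetical names standing in for whatever drives your calculations:
- (void)setContentValue:(NSInteger)contentValue {
    _contentValue = contentValue;
    CGFloat newWidth = [self preferredWidthForValue:contentValue]; // redo and cache the calculation here
    CGRect frame = self.frame;
    frame.size.width = newWidth;
    self.frame = frame;        // resizing outside of drawRect: is safe
    [self setNeedsDisplay];    // drawRect: will then run once, with the new size
}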
For anybody struggling with this.
Apparently [self setNeedsDisplay] won't have any effect if called from drawRect: (or any method called from drawRect:). For some reason it just skips the call.
To make things happen, use the following instead:
[self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:0];
This will be executed as soon as the main queue is free again; just make sure to guard the call with some condition inside drawRect: - or else you'll end up in an endless redraw loop.
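For example, a minimal sketch of that pattern, using a hypothetical needsSecondPass flag as the guard condition:
- (void)drawRect:(CGRect)rect {
    // ... normal drawing ...
    if (self.needsSecondPass) {   // hypothetical flag; without a guard this loops forever
        self.needsSecondPass = NO;
        [self performSelector:@selector(setNeedsDisplay) withObject:nil afterDelay:0];
    }
}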
Related
The problem in short:
It looks like drawRect: itself (even when empty) leads to a significant performance bottleneck depending on the device's resolution - the bigger the screen, the worse things get.
Is there a way to speed up redrawing of view's content?
In more detail:
I'm creating a small drawing app on iOS - the user moves a finger across the display to draw a line.
The idea behind this is quite simple - while the user moves the finger, touchesMoved accumulates the changes into an offscreen buffer image and invalidates the view to merge the offscreen buffer with the view's content.
The simple code snippet may look like this:
@interface CanvasView : UIView
...
@end

@implementation CanvasView {
UIImage *canvasContentImage;
UIImage *bufferImage;
CGContextRef drawingContext;
}
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event{
// prepare drawing to start
UIGraphicsBeginImageContext(canvasSize);
drawingContext = UIGraphicsGetCurrentContext();
...
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event{
// draw to bufferImage
CGContextMoveToPoint(drawingContext, prevPoint.x, prevPoint.y);
CGContextAddLineToPoint(drawingContext, point.x, point.y);
CGContextStrokePath(drawingContext);
...
[self setNeedsDisplay];
}
-(void) touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event{
//finish drawing
UIGraphicsEndImageContext();
//merge canvasContentImage with bufferImage
...
}
-(void)drawRect:(CGRect)rect{
// draw bufferImage - merge it with current view's content
CGContextRef context = UIGraphicsGetCurrentContext();
CGContextDrawImage(context, self.bounds, canvasContentImage.CGImage);
CGContextDrawImage(context, imageRect, bufferImage.CGImage);
...
}
I've also implemented a small helper class to calculate fps rate.
So the approach above works rather well on non-Retina screens, producing nearly 60 fps. However, the fps rate drops dramatically on Retina screens. For example, on a Retina iPad it is about 15-20 fps, which is too slow.
The first obvious suspect was that setNeedsDisplay causes the full screen to be redrawn, which is a big waste of resources. So I moved to setNeedsDisplayInRect to update only the dirty region. Surprisingly, it didn't change anything performance-wise (at least nothing noticeable, either in the measurements or visually).
So I started trying different approaches to find the bottleneck. When I commented out all the drawing logic, the fps rate still stayed at 15-20 - it looks like the problem lies outside the drawing logic.
Finally, when I fully commented out the drawRect method, the fps rose to 60. Note that I removed not just the implementation but the declaration as well. Not sure about my terminology, so here is the result:
// -(void)drawRect:(CGRect)rect{
// // draw bufferImage - merge it with current view's content
// ...
// }
What is more interesting: when I moved all the drawing code from the drawRect method into the touchesMoved method, it didn't impact performance, even though the same amount of drawing/processing logic remained compared to the version with the drawRect method - even updating the entire view every time still gives me 60 fps.
One problem is that without drawRect I'm not able to visualize the changes.
So I've come to what the pregenerated drawRect method warns about:
"Only override drawRect: if you perform custom drawing. An empty
implementation adversely affects performance during animation."
My guess is that the system creates and destroys a graphics context every time a custom drawRect is triggered, which leads to the "adversely affects performance" part.
So the questions are:
Is there any way to speed up drawRect calls, like making the system reuse resources from call to call of drawRect, or something similar?
If this is a dead end, what other approaches available to update view's content? Moving to OpenGL is not an option at the moment as there are a lot of code/logic already implemented and it will take a lot of effort to port it.
I'll be glad to provide any additional information needed.
Thanks in advance for any suggestions!
EDIT:
After some more investigation and experiments I came to using UIImageView and its image property to update the view's content dynamically. It gave a small performance improvement (the drawing is more stable at 19-22 fps). However, it is still far from the target 50-60 fps.
One thing I've noticed is that updating only the dirty part of the offscreen buffer image really makes sense - without forcing the view's content to update, the pure logic of redrawing the offscreen buffer gives around 60 fps.
But once I try to assign the updated image to the UIImageView.image property to update its content, the fps drops to the mentioned 19-22. This is reasonable, as assigning the property forces the whole image to be redrawn on the view's side.
So the question still remains - is there any way to update only a specified portion of the view's (UIImageView's) displayed content?
After spending several days on this I've come to unexpected (at least for me) results.
I was able to achieve 30 fps on a Retina iPad, which is an acceptable result for now.
The trick that worked for me was:
Subclass UIImageView
Use the UIImageView.image property to update content - it gives better results compared to an ordinary UIView with the setNeedsDisplay/setNeedsDisplayInRect methods.
Use [self performSelector:@selector(setImage:) withObject:img afterDelay:0]; instead of just UIImageView.image = img to set the updated image on the UIImageView.
The last point is still something of a magic spell to me; however, it gives the minimum necessary frame rate (even though the original image is redrawn fully each frame, not only the dirty regions).
My guess as to why performSelector helped to gain fps for updating the view in my case is that scheduling setImage: via the run loop smooths out possible internal stalls which may occur during touch event processing.
This is only my guess, and if anyone can provide a proper explanation I'll be glad to post it here or accept that answer.
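For reference, a rough sketch of the trick described above; CanvasImageView and the imageByDrawingLatestStroke helper are made-up names standing in for the offscreen Core Graphics work:
@interface CanvasImageView : UIImageView
- (UIImage *)imageByDrawingLatestStroke; // hypothetical: renders the new stroke into the offscreen buffer
@end

@implementation CanvasImageView

- (void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
    UIImage *updated = [self imageByDrawingLatestStroke];
    // Deferring the assignment, rather than setting self.image directly,
    // is the part that improved the frame rate here.
    [self performSelector:@selector(setImage:) withObject:updated afterDelay:0];
}

- (UIImage *)imageByDrawingLatestStroke {
    // ... offscreen Core Graphics drawing, omitted ...
    return self.image;
}

@end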
I'm working on a custom view, that has some specific Core Graphics drawings. I want to handle the view's autoresizing as efficiently as possible.
If I have a vertical line drawn in UIView, and the view's width stretches, the line's width will stretch with it. I want to keep the original width, therefore I redraw each time in -layoutSubviews:
- (void)drawRect:(CGRect)rect
{
[super drawRect:rect];
// ONLY drawing code ...
}
- (void)layoutSubviews
{
[super layoutSubviews];
[self setNeedsDisplay];
}
This works fine; however, I don't think this is an efficient approach - unless CGContext drawing is blazing fast.
So is it really fast? Or is there better way to handle view's autoresizing? (CALayer does not support autoresizing on iOS).
UPDATE:
This is going to be a reusable view. Its task is to draw a visual representation of the data supplied by the dataSource, so in practice there could really be a lot of drawing. If it is impossible to optimize this any further, then there's nothing I can do... but I seriously doubt I'm taking the right approach.
It really depends on what you mean by "fast" but in your case the answer is probably "No, CoreGraphics drawing isn't going to give you fantastic performance."
Whenever you draw in drawRect (even if you use CoreGraphics to do it) you're essentially drawing into a bitmap, which backs your view. The bitmap is eventually sent over to the lower level graphics system, but it's a fundamentally slower process than (say) drawing into an OpenGL context.
When you have a view drawing with drawRect it's usually a good idea to imagine that every call to drawRect "creates" a bitmap, so you should minimize the number of times you need to call drawRect.
If all you want is a single vertical line, I would suggest making a simple view with a width of one point, configured to layout in the center of your root view and to stretch vertically. You can color that view by giving it a background color, and it does not need to implement drawRect.
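Sketched out, assuming rootView is the containing view, that could look like this:
UIView *line = [[UIView alloc] initWithFrame:CGRectMake(CGRectGetMidX(rootView.bounds), 0.0,
                                                        1.0, rootView.bounds.size.height)];
line.backgroundColor = [UIColor blackColor];
// Keep the 1-point width; let the margins and the height flex as the root view resizes
line.autoresizingMask = UIViewAutoresizingFlexibleLeftMargin |
                        UIViewAutoresizingFlexibleRightMargin |
                        UIViewAutoresizingFlexibleHeight;
[rootView addSubview:line];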
Using views is usually not recommended, and drawing directly is actually preferred, especially when the scene is complex.
If you see your drawing code taking a considerable toll, the next step is to minimize drawing, either by invalidating only portions of the view rather than the whole thing (setNeedsDisplayInRect:) or by using tiling to draw only portions.
For instance, when a view is resized, if you only need to draw in the areas where the view has changed, you can monitor and calculate the difference in size between current and previous layout, and only invalidate regions which have changed. Edit: It seems iOS does not allow partial view drawing, so you may need to move your drawing to a CALayer, and use that as the view's layer.
CATiledLayer is another possible solution: you can cache and preload tiles and draw the required tiles asynchronously and concurrently.
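A minimal sketch of the CATiledLayer route (just the wiring; TiledCanvasView is a made-up name and the fill is only a placeholder for your drawing):
#import <QuartzCore/QuartzCore.h>

@interface TiledCanvasView : UIView
@end

@implementation TiledCanvasView

// Backing the view with a CATiledLayer makes drawRect: run per tile,
// concurrently on background threads, instead of over the whole bounds at once.
+ (Class)layerClass {
    return [CATiledLayer class];
}

- (void)drawRect:(CGRect)rect {
    // rect is a single tile here; draw only what intersects it
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextSetFillColorWithColor(context, [UIColor whiteColor].CGColor);
    CGContextFillRect(context, rect);
}

@end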
But before you take drastic measures, test your code in difficult conditions and see if your code is performant enough. Invalidating only updated regions can assist, but it is not always straightforward to limit drawing to a provided rectangle. Tiling adds even more difficulty, as the tiling mechanism requires learning, and elements are drawn on background threads, so concurrency issues also come in play.
Here is an interesting video on the subject of optimizing 2D drawing from Apple WWDC 2012:
https://developer.apple.com/videos/wwdc/2012/?include=506#506
I was overriding the drawRect:(CGRect)rect method while making a simple iOS application. In the book I was reading, the bounds were defined using self.bounds as shown here:
- (void)drawRect:(CGRect)rect
{
CGRect bounds = self.bounds;
//rest of drawing code here
}
I noticed that in the book, the rest of the method did not even use the rect argument and worked fine. I assumed that rect would set the bounds in the view, so I tried the following:
- (void)drawRect:(CGRect)rect
{
CGRect bounds = rect;
//rest of drawing code here
}
(Obviously, I would not even need to make bounds equal to rect since I can refer directly to rect within the method.) I tried both ways and they yielded the same result. So are self.bounds and rect equal? If they are, I am assuming rect is used to set the bounds of the current view somewhere behind the scenes. But if they are not, what is the use of having rect as an argument to a method that does not even use it? Am I overlooking something obvious?
rect tells you which area you need to draw. It will always be less than or equal to self.bounds. As per the documentation (emphasis added):
The portion of the view’s bounds that needs to be updated. The first time your view is drawn, this rectangle is typically the entire visible bounds of your view. However, during subsequent drawing operations, the rectangle may specify only part of your view.
If it's less efficient for you to draw subdivisions of your view then you might as well draw the whole thing.
In practice just drawing the whole thing is pretty much never a bottleneck so most people just do that as per the rule that the simplest code is preferable unless or until performance requires a different approach.
drawRect is written to pass in a rectangle that the method is supposed to draw. It's possible that the system may decide that only a portion of the view needs to be drawn (perhaps because most of the view is covered by another view).
If only a portion of the view needs to be drawn, it can be faster to only draw that part.
As Tommy said while I was typing my answer, it is sometimes easier to just draw the entire view.
That is, if
[self setNeedsDisplayInRect:rect];
is called, and rect is very carefully calculated to cover the region that needs to be redrawn, but our drawRect: code doesn't care about rect and draws everything anyway, can the iOS system still somehow improve the drawing speed (or possibly improve it a little)? This question probably requires somebody who is very familiar with UIKit/CoreGraphics.
There are a few ways the answer could be yes:
You can clip to the rectangle, in which case anything outside it won't be painted, even if you draw in it (see the sketch after this list). Drawing outside the rectangle won't be free, but it will be cheaper. iOS can't do this for you because you might deliberately ignore the rect, or use the rect but also draw something else elsewhere in your bounds unconditionally. (Though that other thing should probably be another view.)
Even if your current drawRect: doesn't use the rectangle, you might go back to that code later to optimize it. As you're probably aware, one very good way to do that—if it's at all possible—is to use the rectangle to decide what you draw. Even if you're not doing that now, you may do it in the future, and specifying changed rects now means that many fewer things to change then.
A corollary to the previous point is that even if what you're drawing now can't be so optimized, you may decide in a future major version to completely change what the view draws to something that can. Again, specifying changed rects now means that many fewer things to do in the future.
Subviews. If your view doesn't actually draw some of the things that the user sees in it, but rather delegates (not in the Cocoa/Cocoa Touch sense) those things to subviews, then you might override setNeedsDisplayInRect: to send setNeedsDisplay: messages to subviews—and only the subviews whose frames intersect the rect—before calling super. (And UIView's implementation might already do this. You should test it.)
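To illustrate the clipping idea from the first point, a minimal sketch (your existing drawing code would go where the ellipsis is):
- (void)drawRect:(CGRect)rect {
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextClipToRect(context, rect); // anything drawn outside rect is now discarded cheaply
    // ... existing drawing code, unchanged ...
}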
If your drawRect: implementation ignores the rect passed in and draws the entire view, any optimization of the rect passed to setNeedsDisplayInRect: is for nought.
The "needs display" rect is passed straight through; the only thing it's there for is for the drawRect: implementation to use for ignoring unnecessary drawing. Once your drawRect: implementation is entered, the system can't tell whether your drawing outside the passed rect is intentional, so all the drawing really happens (performance implications included).
Depending on what you're drawing and how, it's not too difficult to restrict your drawRect: implementation to at least make some use of the rect passed in. Everything you want to draw has a bounding rect, whether it's a chunk of text, a bezier path, an image, or just a rect you're filling with some color. Surround each bit of drawing with a CGRectIntersectsRect() test -- you won't completely restrict your drawing to the passed-in rect that way, but you'll at least eliminate anything that doesn't touch that rect from needing to be drawn.
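For example, a sketch of that pattern; backgroundImage/imageRect and title/titleRect/titleAttributes are hypothetical properties standing in for whatever your view draws:
- (void)drawRect:(CGRect)rect {
    // Only draw the pieces whose bounding rects touch the dirty rect
    if (CGRectIntersectsRect(self.imageRect, rect)) {
        [self.backgroundImage drawInRect:self.imageRect];
    }
    if (CGRectIntersectsRect(self.titleRect, rect)) {
        [self.title drawInRect:self.titleRect withAttributes:self.titleAttributes];
    }
}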
I'm wondering whether I need to check if something is within the bounds of the CGRect passed to drawRect:, or if drawRect: automatically handles that for me.
For example, assume that I have 10 UIBezierPaths on the screen. Each curve is in an NSMutableArray named curves. Every time drawRect: is called, it loops through this array and draws the curves it finds there. If the user moves one curve, I find its containing CGRect and call [self setNeedsDisplayInRect:containingRect]. In my drawRect: implementation, do I need to personally check whether each of the UIBezierPaths falls within the CGRect passed to drawRect: (using CGRectIntersectsRect), or is that handled automatically?
This falls into a class of optimizations you'll have to make yourself if you think it's necessary after profiling.
UIKit isn't that smart unfortunately. Though it would probably be too slow if it was!
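If you do decide it's worth it, the check is cheap to add by hand, since UIBezierPath already exposes a bounding box. A sketch, assuming the curves array from the question is exposed as a property:
- (void)drawRect:(CGRect)rect {
    for (UIBezierPath *curve in self.curves) {
        // Skip curves whose bounding box doesn't touch the dirty rect
        if (CGRectIntersectsRect(curve.bounds, rect)) {
            [curve stroke];
        }
    }
}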