I have a custom UICollectionViewCell subclass where I draw with clipping, stroking and transparency. It works pretty well in the Simulator and on an iPhone 5, but on older devices there are noticeable performance problems.
So I want to move the time-consuming drawing to a background thread. Since the -drawRect: method is always called on the main thread, I ended up saving the drawn context to a CGImage (the original question contained code using CGLayer, but that is effectively obsolete, as Matt Long pointed out).
Here is my implementation of the drawRect: method inside this class:
-(void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    if (self.renderedSymbol != nil) {
        CGContextDrawImage(ctx, self.bounds, self.renderedSymbol);
    }
}
The rendering method that populates the renderedSymbol property:
- (void) renderCurrentSymbol {
    [self.queue addOperationWithBlock:^{
        // creating custom context to draw there (contexts are not thread safe)
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(nil, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width * (CGColorSpaceGetNumberOfComponents(space) + 1), space, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);

        // custom drawing goes here using 'ctx' context

        // then saving context as CGImageRef to property that will be used in drawRect
        self.renderedSymbol = CGBitmapContextCreateImage(ctx);

        // asking main thread to update UI
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            [self setNeedsDisplayInRect:self.bounds];
        }];

        CGContextRelease(ctx);
    }];
}
This setup works perfectly on the main thread, but when I wrap it in an NSOperationQueue or GCD, I get lots of different "invalid context 0x0" errors. The app doesn't crash, but the drawing never happens. I suppose there is a problem with releasing the custom-created CGContextRef, but I don't know what to do about it.
Here are my property declarations (I tried the atomic versions, but that didn't help):
@property (nonatomic) CGImageRef renderedSymbol;
@property (nonatomic, strong) NSOperationQueue *queue;
@property (nonatomic, strong) NSString *symbol; // used in custom drawing
Custom setters / getters for properties:
-(NSOperationQueue *)queue {
    if (!_queue) {
        _queue = [[NSOperationQueue alloc] init];
        _queue.name = @"Background Rendering";
    }
    return _queue;
}
-(void)setSymbol:(NSString *)symbol {
    _symbol = symbol;
    self.renderedSymbol = nil;
    [self setNeedsDisplayInRect:self.bounds];
}

-(CGImageRef)renderedSymbol {
    if (_renderedSymbol == nil) {
        [self renderCurrentSymbol];
    }
    return _renderedSymbol;
}
What can I do?
Did you notice that the document on CGLayer you're referencing hasn't been updated since 2006? The assumption you've made that CGLayer is the right solution is incorrect. Apple has all but abandoned this technology, and you probably should too: http://iosptl.com/posts/cglayer-no-longer-recommended/ Use Core Animation.
Issue solved by using the excellent third-party library from Mind Snacks: MSCachedAsyncViewDrawing.
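For anyone rolling this by hand instead, here is a minimal sketch of the pattern such a library implements: render into a bitmap context on a background queue, then hand the finished CGImage to the main thread. It assumes ARC plus a manual retain/release setter for the CGImageRef property, since ARC does not manage CF objects:

- (void)renderCurrentSymbol {
    [self.queue addOperationWithBlock:^{
        // Bitmap contexts are safe off the main thread as long as each
        // thread uses its own context.
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(NULL,
                                                 self.bounds.size.width,
                                                 self.bounds.size.height,
                                                 8,
                                                 0, // let CG choose bytes per row
                                                 space,
                                                 kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);

        // ... custom drawing into ctx goes here ...

        CGImageRef image = CGBitmapContextCreateImage(ctx);
        CGContextRelease(ctx);

        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            // Touch the property on the main thread only, so drawRect:
            // never races with the setter.
            self.renderedSymbol = image; // setter should CGImageRetain()
            CGImageRelease(image);
            [self setNeedsDisplay];
        }];
    }];
}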
Trying to draw a graph in a UIView from values pulled down from a server.
I have a block that is successfully pulling the start/end points. (I did have to add the delay to make sure the array had the values before commencing.) I've tried moving the CGContextRef both inside and outside the dispatch, but I still get 'Invalid Context'.
I have tried adding [self setNeedsDisplay]; at various places without luck.
Here's the code:
- (void)drawRect:(CGRect)rect {
    // Drawing code
    // Array - accepts values from method
    float *values;
    UIColor *greenColor = [UIColor colorWithRed:0.0 green:1.0 blue:0.0 alpha:1.0];
    UIColor *redColor = [UIColor colorWithRed:1.0 green:0.0 blue:0.0 alpha:1.0];

    // Call to method to run server query, get data, parse (TBXML), assign values to array
    // this is working - NSLog output shows proper values are downloaded and parsed...
    values = [self downloadData];

    // Get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    NSLog(@"Context: %@", context);

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        NSLog(@"Waiting for array to populate from URL/Parsing....");
        NSLog(@"length 1: %f", values[0]);
        NSLog(@"length 2: %f", values[1]);

        float starty = 100.0;
        float startleft = 25.0;

        CGContextSetLineWidth(context, 24.0);
        CGContextSetStrokeColorWithColor(context, greenColor.CGColor);
        CGContextMoveToPoint(context, startleft, starty);
        CGContextAddLineToPoint(context, values[0], starty);
        NSLog(@"Start/Stop Win values: %f", values[0]);
        CGContextStrokePath(context);

        starty = starty + 24.0;

        CGContextSetLineWidth(context, 24.0);
        CGContextSetStrokeColorWithColor(context, redColor.CGColor);
        CGContextMoveToPoint(context, startleft, starty);
        CGContextAddLineToPoint(context, values[1], starty);
        NSLog(@"Start/Stop Loss values: %f", values[1]);
        CGContextStrokePath(context);
    });
}
A couple of observations:
The invalid context is a result of your initiating an asynchronous process: by the time the dispatch_after block is called, the context supplied to drawRect no longer exists, so your asynchronously called block has no context in which to stroke these lines.
But the view shouldn't be initiating this network request and parsing anyway. Usually the view controller (or, better, some other network controller or the like) should initiate the network request and the parsing.
drawRect is for rendering the view at a given moment in time. If there's nothing to render yet, it should just return immediately. When the data is available, you supply the view the data necessary to do the rendering and call setNeedsDisplay.
So, a common pattern would be to have a property in your view subclass, and have the setter for that property call setNeedsDisplay for you.
Rather than initiating the asynchronous request and trying to use the data two seconds later (or after any other arbitrary delay), give your downloadData a completion handler block parameter, which it calls when the download is done, and trigger the update of the view as soon as the download and parse finish. This avoids unnecessary delays: if you wait two seconds but get data in 0.5 seconds, why wait longer than necessary; if you wait two seconds but get data in 2.1 seconds, you risk having no data to show.
This float * reference is a local variable and will never get populated. Your downloadData should probably return the necessary data in the aforementioned completion handler. Frankly, this notion of a pointer to a C array is not a pattern you should be using in Objective-C anyway. If your network response really returns just two floats, that's what you should pass to this view, not a float *.
Note, I've replaced the Core Graphics code with UIKit drawing. Personally, I'd be inclined to go further, move to CAShapeLayer, and not have a drawRect at all; but I didn't want to throw too much at you. The general idea is to use the highest level of abstraction you can; there's no need to get into the weeds of Core Graphics for something as simple as this.
This isn’t going to be quite right as I don’t really understand what your model data is, but let’s assume for a second it’s just returning a series of float values. So you might have something like:
// BarView.h

#import <UIKit/UIKit.h>

NS_ASSUME_NONNULL_BEGIN

@interface BarView : UIView
@property (nonatomic, copy, nullable) NSArray<NSNumber *> *values;
@end

NS_ASSUME_NONNULL_END
And
// BarView.m

#import "BarView.h"

@implementation BarView

- (void)drawRect:(CGRect)rect {
    if (!self.values) { return; }

    NSArray *colors = @[UIColor.greenColor, UIColor.redColor]; // we're just alternating between red and green, but do whatever you want

    float y = 100.0;
    float x = 25.0;
    for (NSInteger i = 0; i < self.values.count; i++) {
        float value = [self.values[i] floatValue];
        UIBezierPath *path = [UIBezierPath bezierPath];
        path.lineWidth = 24;
        [colors[i % colors.count] setStroke];
        [path moveToPoint:CGPointMake(x, y)];
        [path addLineToPoint:CGPointMake(x + value, y)];
        [path stroke];
        y += 24;
    }
}

- (void)setValues:(NSArray<NSNumber *> *)values {
    _values = [values copy];
    [self setNeedsDisplay];
}

@end
Note, this isn’t doing any network requests. It’s just rendering whatever values have been supplied to it. And the setter for values will trigger setNeedsDisplay for us.
Then
// ViewController.h

#import <UIKit/UIKit.h>

NS_ASSUME_NONNULL_BEGIN

@interface ViewController : UIViewController
- (void)download:(void (^)(NSArray<NSNumber *> * _Nullable, NSError * _Nullable))completion;
@end

NS_ASSUME_NONNULL_END
And
// ViewController.m

#import "ViewController.h"
#import "BarView.h"

@interface ViewController ()
@property (nonatomic, weak) IBOutlet BarView *barView;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    [self download:^(NSArray<NSNumber *> *values, NSError *error) {
        if (error) {
            NSLog(@"%@", error);
            return;
        }
        self.barView.values = values;
    }];
}

- (void)download:(void (^)(NSArray<NSNumber *> *, NSError *))completion {
    NSURL *url = [NSURL URLWithString:@"..."];
    [[[NSURLSession sharedSession] dataTaskWithURL:url completionHandler:^(NSData * _Nullable data, NSURLResponse * _Nullable response, NSError * _Nullable error) {
        // parse the data here
        if (error) {
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(nil, error);
            });
            return;
        }
        NSArray *values = ...
        // when done, call the completion handler
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(values, nil);
        });
    }] resume];
}

@end
Now, I’ll leave it up to you to build the NSArray of NSNumber values, as that’s a completely separate question. And while moving this network/parsing code out of the view and into the view controller is a little better, it probably doesn’t even belong there. You might have another object dedicated to performing network requests and/or parsing results. But, again, that’s probably beyond the scope of this question.
But hopefully this illustrates the idea: Get the view out of the business of performing network requests or parsing data. Have it just render whatever data was supplied.
That yields the green and red bars rendered in the view.
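As an aside, the CAShapeLayer route mentioned above might look roughly like this in the same BarView (a sketch, not drop-in code; rebuildBarLayers is a hypothetical method you would call from the values setter instead of setNeedsDisplay):

// Requires QuartzCore (CAShapeLayer).
- (void)rebuildBarLayers {
    self.layer.sublayers = nil; // drop any previously added bar layers

    NSArray<UIColor *> *colors = @[UIColor.greenColor, UIColor.redColor];
    CGFloat y = 100.0;
    for (NSInteger i = 0; i < self.values.count; i++) {
        UIBezierPath *path = [UIBezierPath bezierPath];
        [path moveToPoint:CGPointMake(25.0, y)];
        [path addLineToPoint:CGPointMake(25.0 + [self.values[i] floatValue], y)];

        CAShapeLayer *bar = [CAShapeLayer layer];
        bar.path = path.CGPath;
        bar.lineWidth = 24;
        bar.fillColor = nil; // stroke only
        bar.strokeColor = ((UIColor *)colors[i % colors.count]).CGColor;
        [self.layer addSublayer:bar];

        y += 24;
    }
}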
I'm trying to optimize some image filtering by using a singleton object that holds the image filter for use anywhere. I am running into an issue where I believe I am calling the filter several times at once (several cells are displayed; after downloading each image and before display, I process it with a filter, which may occur at pretty much the same time for several cells).
I run into this issue:
NSAssert(framebufferReferenceCount > 0, @"Tried to overrelease a framebuffer, did you forget to call -useNextFrameForImageCapture before using -imageFromCurrentFramebuffer?");
https://github.com/BradLarson/GPUImage/blob/master/framework/Source/GPUImageFramebuffer.m#L269
Here's my code:
Interface:
#import <GPUImage/GPUImage.h>

@interface GPUImageFilterManager : NSObject

+ (GPUImageFilterManager *)sharedInstance;

@property (nonatomic, strong) GPUImageiOSBlurFilter *blurFilter;
@property (nonatomic, strong) GPUImageLuminanceThresholdFilter *luminanceFilter;

@end
Implementation:
#import "GPUImageFilterManager.h"
#implementation GPUImageFilterManager
+ (GPUImageFilterManager*)sharedInstance {
static GPUImageFilterManager *_sharedInstance = nil;
static dispatch_once_t oncePredicate;
dispatch_once(&oncePredicate, ^{
_sharedInstance = [GPUImageFilterManager new];
});
return _sharedInstance;
}
- (instancetype)init {
self = [super init];
if (self) {
self.luminanceFilter = [[GPUImageLuminanceThresholdFilter alloc] init];
self.luminanceFilter.threshold = 0.5;
self.blurFilter = [[GPUImageiOSBlurFilter alloc] init];
[self.blurFilter setBlurRadiusInPixels:BLUR_RADIUS];
[self.blurFilter setSaturation:SATURATION];
[self.blurFilter setDownsampling:DOWNSAMPLING];
[self.blurFilter setRangeReductionFactor:RANGEREDUCTION];
}
return self;
}
-(GPUImageiOSBlurFilter*)blurFilter {
[_blurFilter useNextFrameForImageCapture];
return _blurFilter;
}
-(GPUImageLuminanceThresholdFilter*)luminanceFilter {
[_luminanceFilter useNextFrameForImageCapture];
return _luminanceFilter;
}
And in my code I call:
[[[GPUImageFilterManager sharedInstance] blurFilter] imageByFilteringImage:image];
Previously, I had a GPUImageLuminanceThresholdFilter property in each cell, and I wanted to optimize this to use just a single shared instance, but it now seems I won't be able to process multiple images at once. Any ideas or advice here?
My guess about what's going wrong here is that -imageByFilteringImage: isn't really an atomic operation, and if you're triggering it on multiple threads at the same time, you're going to get bizarre behavior. Take a look at what happens inside that method (via the -newCGImageByFilteringCGImage: method):
- (CGImageRef)newCGImageByFilteringCGImage:(CGImageRef)imageToFilter;
{
    GPUImagePicture *stillImageSource = [[GPUImagePicture alloc] initWithCGImage:imageToFilter];

    [self useNextFrameForImageCapture];
    [stillImageSource addTarget:(id<GPUImageInput>)self];
    [stillImageSource processImage];

    CGImageRef processedImage = [self newCGImageFromCurrentlyProcessedOutput];

    [stillImageSource removeTarget:(id<GPUImageInput>)self];
    return processedImage;
}
A GPUImagePicture is created, attached to the filter, processed, and the output extracted. If you're using the same filter across multiple threads, and you call this method, odds are that you're going to interrupt this at various points and create bizarre filtering pathways. Thus the error above (as well as other potential crashes and image corruption).
If you want to use this filter in a singleton, my recommendation would be to set up a serial dispatch queue and wrap accesses to that filter in dispatches to that queue. In particular, I'd wrap a synchronous dispatch to this queue around your -imageByFilteringImage: above to guarantee that it always fully executes and returns an image before being triggered again by a different image and thread.
The GPUImage OpenGL ES context can only perform a single rendering operation at a time, so you don't lose much by limiting access to the filter to only one thread at a time. It's also pretty simple to implement, and I use this pattern of limiting access to shared resources via GCD queues throughout the framework.
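A minimal sketch of that gating, assuming an iOS 6+ target where GCD objects are ARC-managed (the filterQueue property and the wrapper method are my own names, not part of GPUImage):

// In -init, create the queue once:
//   _filterQueue = dispatch_queue_create("com.example.filterQueue", DISPATCH_QUEUE_SERIAL);

- (UIImage *)blurredImageFromImage:(UIImage *)image {
    __block UIImage *result = nil;
    dispatch_sync(self.filterQueue, ^{
        // The whole filter pass runs to completion before another
        // thread can enter this block.
        result = [self.blurFilter imageByFilteringImage:image];
    });
    return result;
}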
I have a big JPEG image that I want to load in tiles asynchronously in my OpenGL engine.
Everything works well if it's done on the main thread, but that is slow.
When I try to put the tile loading on an NSBlockOperation, it always crashes when trying to access the shared image data pointer that I previously loaded on the main thread.
There must be something I don't get about background operations, because I assumed I could access memory I created on the main thread.
Here is what I'm trying to do:
@interface MyViewer
{
}
@property (atomic, assign) CGImageRef imageRef;
@property (atomic, assign) CGDataProviderRef dataProvider;
@property (atomic, assign) int loadedTextures;
@end
...
- (void) loadAllTiles:(NSData *)imgData
{
    queue = [[NSOperationQueue alloc] init];

    //Loop for Total Number of Textures
    self.dataProvider = CGDataProviderCreateWithData(NULL, [imgData bytes], [imgData length], 0);
    self.imageRef = CGImageCreateWithJPEGDataProvider(self.dataProvider, NULL, NO, kCGRenderingIntentDefault);

    for (int i = 0; i < tileCount; i++)
    {
        // I also tried this but without luck
        //CGImageRetain(self.imageRef);
        //CGDataProviderRetain(self.dataProvider);

        NSBlockOperation *partsLoading = [[NSBlockOperation alloc] init];
        __weak NSBlockOperation *weakpartsLoadingOp = partsLoading;
        [partsLoading addExecutionBlock:^{
            TamTexture2D& pTex2D = viewer->getTile(i);
            CGImageRef subImgRef = CGImageCreateWithImageInRect(self.imageRef, CGRectMake(pTex2D.left, pTex2D.top, pTex2D.width, pTex2D.height));
            //!!!It's crashing here!!!
            CFDataRef cgSubImgDataRef = CGDataProviderCopyData(CGImageGetDataProvider(subImgRef));
            CGImageRelease(subImgRef);
            ...
        }];

        //Adding Parts loading on low priority thread. Is it all right ????
        [partsLoading setThreadPriority:0.0];
        [queue addOperation:partsLoading];
    }
}
I finally found my problem.
I read the Quartz 2D documentation, and it seems we should not use CGDataProviderCreateWithData and CGImageCreateWithJPEGDataProvider anymore; I guess their usage is not thread safe.
As the documentation suggests, I now use the CGImageSource API like this:
self.imageSrcRef = CGImageSourceCreateWithData((__bridge CFDataRef)imgData, NULL);

// get the image properties dictionary
CFDictionaryRef imagePropertiesDictionary = CGImageSourceCopyPropertiesAtIndex(self.imageSrcRef, 0, NULL);

self.imageRef = CGImageSourceCreateImageAtIndex(self.imageSrcRef, 0, imagePropertiesDictionary);
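With the image created this way, the per-tile block can stay essentially as before; a trimmed sketch (the tile rectangle is hardcoded here purely for illustration):

[partsLoading addExecutionBlock:^{
    // The crop now runs against the CGImageSource-backed image.
    CGImageRef subImgRef = CGImageCreateWithImageInRect(self.imageRef, CGRectMake(0, 0, 256, 256));
    CFDataRef subImgDataRef = CGDataProviderCopyData(CGImageGetDataProvider(subImgRef));
    // ... upload the pixel data to the OpenGL texture here ...
    if (subImgDataRef) CFRelease(subImgDataRef);
    CGImageRelease(subImgRef);
}];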
I am new to iPhone programming and Objective-C.
I am building a View-based application.
The problem is that none of the UIViewController's dealloc functions are ever called.
I have decided to unload all my retained objects programmatically, right before presenting the next UIViewController.
I have resolved all the leaks detected by Xcode and tested the application with the Leaks and Allocations tools, and everything seems OK, but the memory used builds up to around 180 MB and the app crashes.
The self.retainCount before presenting the next UIViewController is 1, the owned objects are released and the pointers are nil, but still the memory builds up.
Can you give me a clue? If needed I will post some code samples. Or can you send me some references to something like Objective-C memory management 101?
This is my interface:
@interface GameScreen : UIViewController
{
    IBOutlet UILabel *timeLabel;
    IBOutlet UIView *gameView;
    IBOutlet UIImageView *gameViewFrame;
    IBOutlet UIButton *showOutlineButton;
    UIImage *puzzleImage;
    UIView *optionsView;
    UIView *blockView;
    NSTimer *timer;
    UIImageView *viewOriginalPicture;
    SHKActivityIndicator *activityIndicator;
    BOOL originalPictureShown;
    BOOL outLineShown;
}

@property (nonatomic, retain) IBOutlet UILabel *timeLabel;
@property (nonatomic, retain) IBOutlet UIView *gameView;
@property (nonatomic, retain) IBOutlet UIImageView *gameViewFrame;
@property (nonatomic, retain) IBOutlet UIButton *showOutlineButton;
@property (nonatomic, retain) UIImage *puzzleImage;
And here is the implementation:
- (id) initWithPuzzleImage: (UIImage *) img
{
    if((self = [super init]))
    {
        puzzleImage = [[UIImage alloc] init];
        puzzleImage = img;
    }
    return self;
}
This is the function called when the user taps the exit button:
- (void) onExit
{
    [timer invalidate];
    CurrentMinuts = 0;
    CurrentSeconds = 0;

    //remove piece configurations
    [pieceConfigMatrix removeAllObjects];
    [pieceFramesMatrix removeAllObjects];

    PuzzleViewController *modalView = [[PuzzleViewController alloc] init];
    [self unloadObjects];
    [self presentModalViewController:modalView animated:YES];
    [modalView release];
}
And the unloadObjects function:
- (void) unloadObjects
{
    [self resignFirstResponder];
    [viewOriginalPicture release];
    viewOriginalPicture = nil;
    [timeLabel release];
    timeLabel = nil;
    [gameView release];
    gameView = nil;
    [originalImage release];
    originalImage = nil;
    [gameViewFrame release];
    gameViewFrame = nil;
    [timer release];
    [showOutlineButton release];
    showOutlineButton = nil;
}
I have a lead on what I might be doing wrong, but I am not sure. Let me explain. I am adding the puzzle pieces to the 'gameView' property. For this, I have a 'SplitImage' object. The following function is called in -(void)viewDidLoad:
- (void) generatePuzzleWithImage:(UIImage *) image
{
    SplitImage *splitSystem = [[SplitImage alloc] initWithImage:image andPuzzleSize:gPuzzleSize];
    [splitSystem splitImageAndAddToView:self.gameView];
    [splitSystem release];
}
Next, the initialization function for the SplitImage class and the splitImageAndAddToView function:
- (id) initWithImage: (UIImage *) image andPuzzleSize: (int) pSize
{
    if((self = [super init]))
    {
        UIImage *aux = [[[UIImage alloc] init] autorelease];
        pieceCenterSize = [SplitImage puzzlePieceSizeForNumberOfPieces:pSize];

        UIImage *outSideBallSample = [UIImage imageNamed:@"sampleHorizontal.jpg"]; //convexity size for puzzle size
        outSideBallSample = [outSideBallSample resizedImageWithContentMode:UIViewContentModeScaleAspectFit bounds:CGSizeMake(pieceCenterSize, pieceCenterSize) interpolationQuality:kCGInterpolationHigh];
        outSideBallSize = roundf(outSideBallSample.size.height);

        puzzleSize = pieceCenterSize * pSize;
        pieceNumber = pSize;

        if(image.size.height < puzzleSize || image.size.height > puzzleSize || image.size.width < puzzleSize || image.size.width > puzzleSize)
        {
            aux = [SplitImage resizeImageForPuzzle:image withSize:puzzleSize];
            aux = [SplitImage cropImageForPuzzle:aux withSize:puzzleSize];
        }
        aux = [aux imageWithAlpha];

        originalImage = [[UIImage imageWithCGImage:aux.CGImage] retain];
        mainImage = aux.CGImage;
        imageSize = CGSizeMake(aux.size.width, aux.size.height);
        NSLog(@"%@", NSStringFromCGSize(imageSize));
        splitImageSize = CGSizeMake(pieceCenterSize + 2*outSideBallSize, pieceCenterSize + 2*outSideBallSize);
    }
    return self;
}
- (void) splitImageAndAddToView: (UIView *) view
{
    for (int i = 0; i < pieceNumber; i++)
        for(int j = 0; j < pieceNumber; j++)
        {
            //some code

            UIImage *mask;
            mask = [self randomlyRetriveMaskWithPrefix:1 forPieceAtI:i andJ:j];

            CGImageRef split = CGImageCreateWithImageInRect(mainImage, cuttingRect);
            PuzzlePiece *splitView = [[PuzzlePiece alloc] initWithImage:[UIImage imageWithCGImage:split] andMask:mask centerSize:pieceCenterSize objectMatrixPosition:i*pieceNumber+j outSideBallSize:outSideBallSize pieceType:pieceType pieceFrame:cuttingRect];
            [pieceFramesMatrix addObject:[NSValue valueWithCGRect:cuttingRect]];
            [splitView setTag:(i+1)*100+j];
            [view addSubview:splitView];

            CGImageRelease(split);
            [splitView release];
        }
}
Thank you,
Andrei
Objective-C memory management 101 is here:
https://developer.apple.com/library/ios/#documentation/Cocoa/Conceptual/MemoryMgmt/MemoryMgmt.html
Ignore the stuff on garbage collection, it isn't available for iOS.
It's not normal to release objects before presenting the next UIViewController.
Assuming you have a NIB, its contents will be loaded the first time that you (or the system) accesses the view member. Your view controller will get a call to loadView, then subsequently to viewDidLoad. If you have any views you want to add programmatically, you can do that in loadView or viewDidLoad, if you want to interrogate objects loaded from the NIB then you can do that in viewDidLoad.
Once loaded, the view will remain in memory unless and until a low memory warning occurs. At that point it'll be released. You'll get viewDidUnload. You should release anything view related that you've still got an owning reference to in there, and set to nil any weak references you may have.
Any attempt to access the view property subsequently will cause the NIB to be reloaded, etc.
So, when presenting a new UIViewController, just present it. If you create objects that are used only for display of that controller then do so on viewDidLoad, and release them on viewDidUnload.
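Concretely, with the outlets from this question, that release-on-unload looks something like the sketch below (in the same pre-ARC idiom the question uses):

- (void)viewDidUnload {
    [super viewDidUnload];
    // Assigning nil through the retain-setters releases the old values.
    self.timeLabel = nil;
    self.gameView = nil;
    self.gameViewFrame = nil;
    self.showOutlineButton = nil;
}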
That all being said, the Leaks tool in Instruments should be able to tell you which types of object are leaking and where you first allocated them, so it makes finding leaks really quite easy. The only thing to watch out for is that if one object handles its properties/members entirely correctly but is itself leaked then anything it creates will generally also leak. So when something leaks, check the thing that created it isn't also leaking before tearing your hair out over why you can't find a problem.
First, retainCount is useless.
Activity Monitor is pretty close to just as useless. Also, behavior in the simulator can be quite different on the memory-use front; it is better to focus debugging of a problem like this on the device.
Next, this:
The problem is that none of the UIViewController's dealloc functions are ever called. I have decided to unload all my retained objects programmatically, right before presenting the next UIViewController class.
Any time you find yourself programmatically working around incorrect behavior, you are just creating more bugs.
Step back from your custom hack and figure out why the dealloc isn't being called. Something somewhere is over-retaining the object. The allocations instrument with retain tracking turned on will show you exactly where all retains and releases are sent to the errant objects.
Leaks likely won't show anything if whatever is retaining the objects is still reachable from a global or the stack (i.e. leaks are objects that can never be used by your program again -- but there are many more ways to explode memory use without it being truly a leak).
You should also "Build and Analyze", then fix any problems it identifies.
Next, if you are seeing memory accretion as the user repeats operations, then Heapshot analysis is extremely effective at figuring out exactly what has gone wrong.
Some specific comments:
puzzleImage = [[UIImage alloc] init];
puzzleImage = img;
Two bugs: you are leaking a UIImage and not retaining img. That your app doesn't crash in light of the above code indicates that there is likely an over-retain elsewhere.
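A corrected version under manual reference counting would be:

- (id) initWithPuzzleImage: (UIImage *) img
{
    if((self = [super init]))
    {
        // Take ownership of the passed-in image; balance with a release in dealloc.
        puzzleImage = [img retain];
    }
    return self;
}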
retainCount is not a reliable debugging tool.
You should never pay attention to or rely on retainCount. Just because you released it does not mean that some part of the program does not still have a reference to it. retainCount has no value.
Generally, as a rule of thumb: if you 'alloc', you must 'release' at some point, unless of course you have put it into an autorelease pool.
Leaks should be able to point you to the objects that are leaking; using that, narrow down where those objects are added, stored, etc.
Post examples of your code showing how you instantiate objects and where you release them; you may be doing something wrong early on.
edit: apologies, I put 'init' not 'alloc' previously, thank you dreamlax, early morning mistake.
If you are an advanced user of drawRect, you will know that of course drawRect will not actually run until "all processing is finished."
setNeedsDisplay flags a view as invalidated, and the OS basically waits until all processing is done before redrawing. This can be infuriating in the common situation where you have:
1. a view controller
2. that starts some function
3. which incrementally
4. creates a more and more complicated artwork, and
5. at each step, you call setNeedsDisplay (wrong!)
6. until all the work is done
Of course, when you do the above 1-6, all that happens is that drawRect is run once, only after step 6.
Your goal is for the view to be refreshed at point 5. What to do?
If I understand your question correctly, there is a simple solution to this. During your long-running routine you need to tell the current run loop to process for a single iteration (or more) at certain points in your own processing, e.g. when you want to update the display. Any views with dirty update regions will have their drawRect: methods called when you run the run loop.
To tell the current runloop to process for one iteration (and then return to you...):
[[NSRunLoop currentRunLoop] runMode: NSDefaultRunLoopMode beforeDate: [NSDate date]];
Here's an example of an (inefficient) long running routine with a corresponding drawRect - each in the context of a custom UIView:
- (void) longRunningRoutine:(id)sender
{
    srand(time(NULL));

    CGFloat x = 0;
    CGFloat y = 0;

    [_path moveToPoint:CGPointMake(0, 0)];

    for (int j = 0; j < 1000; j++)
    {
        x = 0;
        y = (CGFloat)(rand() % (int)self.bounds.size.height);
        [_path addLineToPoint:CGPointMake(x, y)];

        y = 0;
        x = (CGFloat)(rand() % (int)self.bounds.size.width);
        [_path addLineToPoint:CGPointMake(x, y)];

        x = self.bounds.size.width;
        y = (CGFloat)(rand() % (int)self.bounds.size.height);
        [_path addLineToPoint:CGPointMake(x, y)];

        y = self.bounds.size.height;
        x = (CGFloat)(rand() % (int)self.bounds.size.width);
        [_path addLineToPoint:CGPointMake(x, y)];

        [self setNeedsDisplay];
        [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate date]];
    }
    [_path removeAllPoints];
}

- (void) drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextFillRect(ctx, rect);

    CGContextSetStrokeColorWithColor(ctx, [UIColor whiteColor].CGColor);
    [_path stroke];
}
- (void) drawRect:(CGRect)rect
{
CGContextRef ctx = UIGraphicsGetCurrentContext();
CGContextSetFillColorWithColor( ctx, [UIColor blueColor].CGColor );
CGContextFillRect( ctx, rect);
CGContextSetStrokeColorWithColor( ctx, [UIColor whiteColor].CGColor );
[_path stroke];
}
And here is a fully working sample demonstrating this technique.
With some tweaking you can probably adjust this to make the rest of the UI (i.e. user-input) responsive as well.
Update (caveat for using this technique)
I just want to say that I agree with much of the feedback from others here saying this solution (calling runMode: to force a call to drawRect:) isn't necessarily a great idea. I've answered this question with what I feel is a factual "here's how" answer to the stated question, and I am not intending to promote this as "correct" architecture. Also, I'm not saying there might not be other (better?) ways to achieve the same effect - certainly there may be other approaches that I wasn't aware of.
Update (response to Joe's sample code and performance question)
The performance slowdown you're seeing is the overhead of running the runloop on each iteration of your drawing code, which includes rendering the layer to the screen as well as all of the other processing the runloop does such as input gathering and processing.
One option might be to invoke the runloop less frequently.
Another option might be to optimize your drawing code. As it stands (and I don't know if this is your actual app, or just your sample...) there are a handful of things you could do to make it faster. The first thing I would do is move all the UIGraphicsGet/Save/Restore code outside the loop.
From an architectural standpoint, however, I would highly recommend considering some of the other approaches mentioned here. I see no reason why you can't structure your drawing to happen on a background thread (algorithm unchanged) and use a timer or other mechanism to signal the main thread to update its UI at some frequency until the drawing is complete. I think most of the folks who've participated in the discussion would agree that this would be the "correct" approach.
Updates to the user interface happen at the end of the current pass through the run loop. These updates are performed on the main thread, so anything that runs for a long time in the main thread (lengthy calculations, etc.) will prevent the interface updates from being started. Additionally, anything that runs for a while on the main thread will also cause your touch handling to be unresponsive.
This means that there is no way to "force" a UI refresh to occur from some other point in a process running on the main thread. The previous statement is not entirely correct, as Tom's answer shows. You can allow the run loop to come to completion in the middle of operations performed on the main thread. However, this still may reduce the responsiveness of your application.
In general, it is recommended that you move anything that takes a while to perform to a background thread so that the user interface can remain responsive. However, any updates you wish to perform to the UI need to be done back on the main thread.
Perhaps the easiest way to do this under Snow Leopard and iOS 4.0+ is to use blocks, like in the following rudimentary sample:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0); // background work queue
dispatch_queue_t main_queue = dispatch_get_main_queue();

dispatch_async(queue, ^{
    // Do some work
    dispatch_async(main_queue, ^{
        // Update the UI
    });
});
The Do some work part of the above could be a lengthy calculation, or an operation that loops over multiple values. In this example, the UI is only updated at the end of the operation, but if you wanted continuous progress tracking in your UI, you could place the dispatch to the main queue wherever you needed a UI update to be performed.
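For instance, a worker loop that reports progress as it goes might look like this (itemCount and progressView are hypothetical placeholders):

dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    for (NSUInteger i = 0; i < itemCount; i++) {
        // ... process item i ...
        dispatch_async(dispatch_get_main_queue(), ^{
            // Each UI update hops back to the main queue.
            progressView.progress = (float)(i + 1) / itemCount;
        });
    }
});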
For older OS versions, you can break off a background thread manually or through an NSOperation. For manual background threading, you can use
[NSThread detachNewThreadSelector:@selector(doWork) toTarget:self withObject:nil];
or
[self performSelectorInBackground:@selector(doWork) withObject:nil];
and then to update the UI you can use
[self performSelectorOnMainThread:@selector(updateProgress) withObject:nil waitUntilDone:NO];
Note that I've found the NO argument in the previous method to be needed to get constant UI updates while dealing with a continuous progress bar.
This sample application I created for my class illustrates how to use both NSOperations and queues for performing background work and then updating the UI when done. Also, my Molecules application uses background threads for processing new structures, with a status bar that is updated as this progresses. You can download the source code to see how I achieved this.
You can do this repeatedly in a loop and it'll work fine: no threads, no messing with the run loop, etc.
[CATransaction begin];
// modify view or views
[view setNeedsDisplay];
[CATransaction commit];
If there is an implicit transaction already in place prior to the loop you need to commit that with [CATransaction commit] before this will work.
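Put together, a sketch of such a loop (totalSteps and the per-step mutation are yours to fill in):

[CATransaction commit]; // close any implicit transaction already in place

for (int step = 0; step < totalSteps; step++) {
    [CATransaction begin];
    // ... modify view or views for this step ...
    [view setNeedsDisplay];
    [CATransaction commit];
}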
In order to get drawRect called the soonest (which is not necessarily immediately, as the OS may still wait until, for instance, the next hardware display refresh), an app should idle its UI run loop as soon as possible, by exiting any and all methods on the UI thread for a non-zero amount of time.
You can either do this on the main thread, by chopping any processing that takes more than an animation frame time into shorter chunks and scheduling the continuation after a short delay (so drawRect might run in the gaps), or by doing the processing in a background thread, with a periodic call to performSelectorOnMainThread to do a setNeedsDisplay at some reasonable animation frame rate.
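A sketch of the chunking variant (drawNextChunk, renderOneChunk and finishedRendering are hypothetical names):

- (void)drawNextChunk {
    [self renderOneChunk]; // a slice of work shorter than one frame time
    [self setNeedsDisplay];
    if (!self.finishedRendering) {
        // Even a zero delay returns control to the run loop first,
        // so drawRect gets a gap to run in.
        [self performSelector:@selector(drawNextChunk) withObject:nil afterDelay:0.0];
    }
}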
A non-OpenGL method to update the display near-immediately (meaning at the very next hardware display refresh or three) is to swap the visible CALayer's contents with an image or CGBitmap that you have drawn into. An app can do Quartz drawing into a Core Graphics bitmap at pretty much any time.
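A sketch of that contents swap, assuming ARC (the drawing itself may run on a background queue; only the layer assignment must happen on the main thread):

// Render into a bitmap context...
UIGraphicsBeginImageContextWithOptions(view.bounds.size, NO, 0);
// ... Quartz drawing here ...
UIImage *snapshot = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();

// ...then swap the finished bitmap into the visible layer.
dispatch_async(dispatch_get_main_queue(), ^{
    view.layer.contents = (__bridge id)snapshot.CGImage;
});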
New added answer:
Please see Brad Larson's comments below and Christopher Lloyd's comment on another answer here, as the hints leading toward this solution.
[ CATransaction flush ];
will cause drawRect to be called on views on which a setNeedsDisplay request has been done, even if the flush is done from inside a method that is blocking the UI run loop.
Note that, when blocking the UI thread, a Core Animation flush is required to update changing CALayer contents as well. So, for animating graphic content to show progress, these may both end up being forms of the same thing.
New added note to new added answer above:
Do not flush faster than your drawRect or animation drawing can complete, as this might queue up flushes, causing weird animation effects.
Without questioning the wisdom of this (which you ought to do), you can do:
[myView setNeedsDisplay];
[[myView layer] displayIfNeeded];
-setNeedsDisplay will mark the view as needing to be redrawn.
-displayIfNeeded will force the view's backing layer to redraw, but only if it has been marked as needing to be displayed.
I will emphasize, however, that your question is indicative of an architecture that could use some reworking. In all but exceptionally rare cases, you should never need or want to force a view to redraw immediately. UIKit was not built with that use case in mind, and if it works, consider yourself lucky.
Have you tried doing the heavy processing on a secondary thread and calling back to the main thread to schedule view updates? NSOperationQueue makes this sort of thing pretty easy.
Sample code that takes an array of NSURLs as input and asynchronously downloads them all, notifying the main thread as each of them is finished and saved.
- (void)fetchImageWithURLs:(NSArray *)urlArray {
    [self.retriveAvatarQueue cancelAllOperations];
    self.retriveAvatarQueue = nil;

    NSOperationQueue *opQueue = [[NSOperationQueue alloc] init];

    for (NSUInteger i = 0; i < [urlArray count]; i++) {
        NSURL *url = [urlArray objectAtIndex:i];
        NSInvocation *inv = [NSInvocation invocationWithMethodSignature:[self methodSignatureForSelector:@selector(cacheImageWithIndex:andURL:)]];
        [inv setTarget:self];
        [inv setSelector:@selector(cacheImageWithIndex:andURL:)];
        [inv setArgument:&i atIndex:2];
        [inv setArgument:&url atIndex:3];

        NSInvocationOperation *invOp = [[NSInvocationOperation alloc] initWithInvocation:inv];
        [opQueue addOperation:invOp];
        [invOp release];
    }
    self.retriveAvatarQueue = opQueue;
    [opQueue release];
}

- (void)cacheImageWithIndex:(NSUInteger)index andURL:(NSURL *)url {
    NSData *imageData = [NSData dataWithContentsOfURL:url];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSString *filePath = PATH_FOR_IMG_AT_INDEX(index);
    NSError *error = nil;

    // Save the file
    if (![fileManager createFileAtPath:filePath contents:imageData attributes:nil]) {
        DLog(@"Error saving file at %@", filePath);
    }

    // Notify the main thread that our file is saved.
    [self performSelectorOnMainThread:@selector(imageLoadedAtPath:) withObject:filePath waitUntilDone:NO];
}
I think the most complete answer comes from Jeffrey Sambell's blog post 'Asynchronous Operations in iOS with Grand Central Dispatch', and it worked for me!
It is basically the same solution as proposed by Brad above, but fully explained in terms of the OS X/iOS concurrency model.
The dispatch_get_current_queue function will return the current queue from which the block is dispatched, and the dispatch_get_main_queue function will return the main queue where your UI is running.
The dispatch_get_main_queue function is very useful for updating the iOS app's UI, as UIKit methods are not thread safe (with a few exceptions), so any calls you make to update UI elements must always be done from the main queue.
A typical GCD call would look something like this:
// Doing something on the main thread

dispatch_queue_t myQueue = dispatch_queue_create("My Queue", NULL);
dispatch_async(myQueue, ^{
    // Perform long running process
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update the UI
    });
});

// Continue doing other stuff on the
// main thread while process is running.
And here goes my working example (iOS 6+). It displays frames of a stored video using the AVAssetReader class:
//...prepare the AVAssetReader* asset_reader earlier and start reading frames now:
[asset_reader startReading];

dispatch_queue_t readerQueue = dispatch_queue_create("Reader Queue", NULL);
dispatch_async(readerQueue, ^{
    CMSampleBufferRef buffer;
    while ([asset_reader status] == AVAssetReaderStatusReading)
    {
        buffer = [asset_reader_output copyNextSampleBuffer];
        if (buffer != nil)
        {
            // The point is here: use the main queue for actual UI operations
            dispatch_async(dispatch_get_main_queue(), ^{
                // Update the UI using the AVCaptureVideoDataOutputSampleBufferDelegate-style function
                [self captureOutput:nil didOutputSampleBuffer:buffer fromConnection:nil];
                CFRelease(buffer);
            });
        }
    }
});
The first part of this sample may be found here in Damian's answer.
I'd like to offer a clean solution to the given problem.
I agree with other posters that in an ideal situation all the heavy lifting should be done in a background thread; however, there are times when this simply isn't possible, because the time-consuming part requires lots of access to non-thread-safe methods, such as those offered by UIKit. In my case, initialising my UI is time consuming and there's nothing I can run in the background, so my best option is to update a progress bar during the init.
However, once we think in terms of the ideal GCD approach, the solution is actually simple. We do all the work in a background thread, dividing it into chunks that are called synchronously on the main thread. The run loop will run for each chunk, updating the UI and any progress bars, etc.
- (void)myInit
{
    // Start the work in a background thread.
    dispatch_async(dispatch_get_global_queue(0, 0), ^{

        // Back to the main thread for a chunk of code
        dispatch_sync(dispatch_get_main_queue(), ^{
            ...
            // Update progress bar
            self.progressIndicator.progress = ...;
        });

        // Next chunk
        dispatch_sync(dispatch_get_main_queue(), ^{
            ...
            // Update progress bar
            self.progressIndicator.progress = ...;
        });

        ...
    });
}
Of course, this is essentially the same as Brad's technique, but his answer doesn't quite address the issue at hand: that of running a lot of non-thread-safe code while updating the UI periodically.
Joe -- if you are willing to set it up so that your lengthy processing all happens inside of drawRect, you can make it work. I just wrote a test project. It works. See code below.
LengthyComputationTestAppDelegate.h:
#import <UIKit/UIKit.h>

@interface LengthyComputationTestAppDelegate : NSObject <UIApplicationDelegate> {
    UIWindow *window;
}

@property (nonatomic, retain) IBOutlet UIWindow *window;

@end
LengthyComputationTestAppDelegate.m:
#import "LengthyComputationTestAppDelegate.h"
#import "Incrementer.h"
#import "IncrementerProgressView.h"
#implementation LengthyComputationTestAppDelegate
#synthesize window;
#pragma mark -
#pragma mark Application lifecycle
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
// Override point for customization after application launch.
IncrementerProgressView *ipv = [[IncrementerProgressView alloc]initWithFrame:self.window.bounds];
[self.window addSubview:ipv];
[ipv release];
[self.window makeKeyAndVisible];
return YES;
}
Incrementer.h:
#import <Foundation/Foundation.h>

//singleton object
@interface Incrementer : NSObject {
    NSUInteger theInteger_;
}

@property (nonatomic) NSUInteger theInteger;

+ (Incrementer *)sharedIncrementer;
- (NSUInteger)incrementForTimeInterval:(NSTimeInterval)timeInterval;
- (BOOL)finishedIncrementing;

@end
Incrementer.m:
#import "Incrementer.h"
#implementation Incrementer
#synthesize theInteger = theInteger_;
static Incrementer *inc = nil;
-(void) increment {
theInteger_++;
}
-(BOOL) finishedIncrementing {
return (theInteger_>=100000000);
}
-(NSUInteger) incrementForTimeInterval: (NSTimeInterval) timeInterval {
NSTimeInterval negativeTimeInterval = -1*timeInterval;
NSDate *startDate = [NSDate date];
while (!([self finishedIncrementing]) && [startDate timeIntervalSinceNow] > negativeTimeInterval)
[self increment];
return self.theInteger;
}
-(id) init {
if (self = [super init]) {
self.theInteger = 0;
}
return self;
}
#pragma mark --
#pragma mark singleton object methods
+ (Incrementer *) sharedIncrementer {
#synchronized(self) {
if (inc == nil) {
inc = [[Incrementer alloc]init];
}
}
return inc;
}
+ (id)allocWithZone:(NSZone *)zone {
#synchronized(self) {
if (inc == nil) {
inc = [super allocWithZone:zone];
return inc; // assignment and return on first allocation
}
}
return nil; // on subsequent allocation attempts return nil
}
- (id)copyWithZone:(NSZone *)zone
{
return self;
}
- (id)retain {
return self;
}
- (unsigned)retainCount {
return UINT_MAX; // denotes an object that cannot be released
}
- (void)release {
//do nothing
}
- (id)autorelease {
return self;
}
#end
IncrementerProgressView.m:
#import "IncrementerProgressView.h"
#implementation IncrementerProgressView
#synthesize progressLabel = progressLabel_;
#synthesize nextUpdateTimer = nextUpdateTimer_;
-(id) initWithFrame:(CGRect)frame {
if (self = [super initWithFrame: frame]) {
progressLabel_ = [[UILabel alloc]initWithFrame:CGRectMake(20, 40, 300, 30)];
progressLabel_.font = [UIFont systemFontOfSize:26];
progressLabel_.adjustsFontSizeToFitWidth = YES;
progressLabel_.textColor = [UIColor blackColor];
[self addSubview:progressLabel_];
}
return self;
}
-(void) drawRect:(CGRect)rect {
[self.nextUpdateTimer invalidate];
Incrementer *shared = [Incrementer sharedIncrementer];
NSUInteger progress = [shared incrementForTimeInterval: 0.1];
self.progressLabel.text = [NSString stringWithFormat:#"Increments performed: %d", progress];
if (![shared finishedIncrementing])
self.nextUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:0. target:self selector:(#selector(setNeedsDisplay)) userInfo:nil repeats:NO];
}
- (void)dealloc {
[super dealloc];
}
#end
Regarding the original issue:
In short, you can (A) do the large painting in the background and call to the foreground for UI updates, or (B), arguably controversially, use one of the four suggested 'immediate' methods that do not use a background process. To see what works, run the demo program; it has #defines for all five methods.
Alternately, per Tom Swift:
Tom Swift has explained the amazing idea of quite simply manipulating the run loop. Here's how you trigger the run loop:
[[NSRunLoop currentRunLoop] runMode: NSDefaultRunLoopMode beforeDate: [NSDate date]];
This is a truly amazing piece of engineering. Of course, one should be extremely careful when manipulating the run loop, and as many have pointed out, this approach is strictly for experts.
However, a bizarre problem arises ...
Even though a number of the methods work, they don't actually "work", because there is a bizarre progressive-slow-down artifact that you will see clearly in the demo.
Scroll to the 'answer' I pasted in below, showing the console output; you can see how it progressively slows.
Here's the new SO question:
Mysterious "progressive slowing" problem in run loop / drawRect
Here is V2 of the demo app...
http://www.fileswap.com/dl/p8lU3gAi/stepwiseDrawingV2.zip.html
You will see it tests all five methods,
#ifdef TOMSWIFTMETHOD
    [self setNeedsDisplay];
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate date]];
#endif

#ifdef HOTPAW
    [self setNeedsDisplay];
    [CATransaction flush];
#endif

#ifdef LLOYDMETHOD
    [CATransaction begin];
    [self setNeedsDisplay];
    [CATransaction commit];
#endif

#ifdef DDLONG
    [self setNeedsDisplay];
    [[self layer] displayIfNeeded];
#endif

#ifdef BACKGROUNDMETHOD
    // here, the painting is being done in the bg, we have been
    // called here in the foreground to inval
    [self setNeedsDisplay];
#endif
You can see for yourself which methods work and which do not. You can see the bizarre "progressive-slow-down"; why does it happen? And you can see that with the controversial TOMSWIFT method there is actually no problem at all with responsiveness: tap for a response at any time (but still the bizarre "progressive-slow-down" problem).
So the overwhelming thing is this weird "progressive-slow-down": on each iteration, for unknown reasons, the time taken for each loop pass increases. Note that this applies both to doing it "properly" (background loop) and to using one of the 'immediate' methods.
Practical solutions?
For anyone reading in the future: if you are actually unable to get this to work in production code because of the "mystery progressive slowdown", Felz and Void have each presented astounding solutions in that other, specific question.