drawRect Invalid Context - iOS

I'm trying to draw a graph in a UIView from values pulled down from a server.
I have a block that successfully pulls the start/end points (I did have to add a delay to make sure the array had the values before commencing). I've tried moving the CGContextRef both inside and outside the dispatch, but I still get 'Invalid Context'.
I have tried adding [self setNeedsDisplay]; at various places without luck.
Here's the code:
- (void)drawRect:(CGRect)rect {
    // Drawing code
    // Array - accepts values from method
    float *values;
    UIColor *greenColor = [UIColor colorWithRed:0.0 green:1.0 blue:0.0 alpha:1.0];
    UIColor *redColor = [UIColor colorWithRed:1.0 green:0.0 blue:0.0 alpha:1.0];

    // Call to method to run server query, get data, parse (TBXML), assign values to array.
    // This is working - NSLog output shows proper values are downloaded and parsed...
    values = [self downloadData];

    // Get context
    CGContextRef context = UIGraphicsGetCurrentContext();
    NSLog(@"Context: %@", context);

    dispatch_after(dispatch_time(DISPATCH_TIME_NOW, (int64_t)(2.0 * NSEC_PER_SEC)), dispatch_get_main_queue(), ^{
        NSLog(@"Waiting for array to populate from URL/Parsing....");
        NSLog(@"length 1: %f", values[0]);
        NSLog(@"length 2: %f", values[1]);

        float starty = 100.0;
        float startleft = 25.0;

        CGContextSetLineWidth(context, 24.0);
        CGContextSetStrokeColorWithColor(context, greenColor.CGColor);
        CGContextMoveToPoint(context, startleft, starty);
        CGContextAddLineToPoint(context, values[0], starty);
        NSLog(@"Start/Stop Win values: %f", values[0]);
        CGContextStrokePath(context);

        starty = starty + 24.0;

        CGContextSetLineWidth(context, 24.0);
        CGContextSetStrokeColorWithColor(context, redColor.CGColor);
        CGContextMoveToPoint(context, startleft, starty);
        CGContextAddLineToPoint(context, values[1], starty);
        NSLog(@"Start/Stop Loss values: %f", values[1]);
        CGContextStrokePath(context);
    });
}

A couple of observations:
The invalid context is a result of initiating an asynchronous process: by the time the dispatch_after block is called, the context supplied to drawRect no longer exists, so your asynchronously called block has no context in which to stroke these lines.
But the view shouldn't be initiating this network request and parsing. Usually the view controller (or, better, some dedicated network controller or the like) should initiate the network request and the parsing.
The drawRect is for rendering the view at a given moment in time. If there's nothing to render yet, it should just return immediately. When the data is available, you supply the view the data necessary to do the rendering and call setNeedsDisplay.
So, a common pattern would be to have a property in your view subclass, and have the setter for that property call setNeedsDisplay for you.
Rather than initiating the asynchronous request and trying to use the data two seconds later (or after any other arbitrary amount of time), you instead give your downloadData a completion handler block parameter, which it calls when the download is done, and trigger the update as soon as the download and parse finish. This avoids unnecessary delays (if you wait two seconds but get the data in 0.5 seconds, why wait longer than necessary; if you wait two seconds but get the data in 2.1 seconds, you risk having no data to show). Initiate the update of the view exactly when the download and parse are done.
This float * reference is a local variable and will never get populated. Your downloadData probably should return the necessary data in the aforementioned completion handler. Frankly, this notion of a pointer to a C array is not a pattern you should be using in Objective-C anyway. If your network response really returns just two floats, that's what you should be passing to this view, not a float *.
Note, I've replaced the Core Graphics code with UIKit drawing. Personally, I'd be inclined to go further, move to CAShapeLayer, and not have a drawRect at all (a rough sketch of that idea follows the BarView code below), but I didn't want to throw too much at you. The general idea is to use the highest level of abstraction you can; there's no need to get into the weeds of Core Graphics for something as simple as this.
This isn't going to be quite right, as I don't really understand what your model data is, but let's assume for a second that it's just returning a series of float values. So you might have something like:
// BarView.h

#import <UIKit/UIKit.h>

NS_ASSUME_NONNULL_BEGIN

@interface BarView : UIView

@property (nonatomic, copy, nullable) NSArray<NSNumber *> *values;

@end

NS_ASSUME_NONNULL_END
And
// BarView.m

#import "BarView.h"

@implementation BarView

- (void)drawRect:(CGRect)rect {
    if (!self.values) { return; }

    NSArray *colors = @[UIColor.greenColor, UIColor.redColor]; // we're just alternating between green and red, but do whatever you want

    float y = 100.0;
    float x = 25.0;

    for (NSInteger i = 0; i < self.values.count; i++) {
        float value = [self.values[i] floatValue];
        UIBezierPath *path = [UIBezierPath bezierPath];
        path.lineWidth = 24;
        [colors[i % colors.count] setStroke];
        [path moveToPoint:CGPointMake(x, y)];
        [path addLineToPoint:CGPointMake(x + value, y)];
        [path stroke];
        y += 24;
    }
}

- (void)setValues:(NSArray<NSNumber *> *)values {
    _values = [values copy];
    [self setNeedsDisplay];
}

@end
Note, this isn’t doing any network requests. It’s just rendering whatever values have been supplied to it. And the setter for values will trigger setNeedsDisplay for us.
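As an aside, here is a rough sketch of the CAShapeLayer idea mentioned above, in case you want to skip drawRect entirely. This is not part of the answer's main code; it assumes the same values property, that this view's sublayers host nothing but the bars, and that BarView.m imports <QuartzCore/QuartzCore.h>.
// Sketch only: a CAShapeLayer variant of the values setter, so no drawRect is needed.
// Assumes this view's sublayers contain nothing but the bar layers.
- (void)setValues:(NSArray<NSNumber *> *)values {
    _values = [values copy];
    self.layer.sublayers = nil;                       // drop any previously added bars

    NSArray<UIColor *> *colors = @[UIColor.greenColor, UIColor.redColor];
    CGFloat y = 100.0;
    CGFloat x = 25.0;

    for (NSInteger i = 0; i < _values.count; i++) {
        UIBezierPath *path = [UIBezierPath bezierPath];
        [path moveToPoint:CGPointMake(x, y)];
        [path addLineToPoint:CGPointMake(x + [_values[i] floatValue], y)];

        CAShapeLayer *bar = [CAShapeLayer layer];
        bar.path = path.CGPath;
        bar.lineWidth = 24;
        bar.strokeColor = colors[i % colors.count].CGColor;
        bar.fillColor = nil;                          // we only stroke the line
        [self.layer addSublayer:bar];

        y += 24;
    }
}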
Then
// ViewController.h

#import <UIKit/UIKit.h>

NS_ASSUME_NONNULL_BEGIN

@interface ViewController : UIViewController

- (void)download:(void (^)(NSArray<NSNumber *> * _Nullable, NSError * _Nullable))completion;

@end

NS_ASSUME_NONNULL_END
And
// ViewController.m

#import "ViewController.h"
#import "BarView.h"

@interface ViewController ()
@property (nonatomic, weak) IBOutlet BarView *barView;
@end

@implementation ViewController

- (void)viewDidLoad {
    [super viewDidLoad];

    [self download:^(NSArray<NSNumber *> *values, NSError *error) {
        if (error) {
            NSLog(@"%@", error);
            return;
        }
        self.barView.values = values;
    }];
}

- (void)download:(void (^)(NSArray<NSNumber *> *, NSError *))completion {
    NSURL *url = [NSURL URLWithString:@"..."];
    [[[NSURLSession sharedSession] dataTaskWithURL:url completionHandler:^(NSData * _Nullable data, NSURLResponse * _Nullable response, NSError * _Nullable error) {
        if (error) {
            dispatch_async(dispatch_get_main_queue(), ^{
                completion(nil, error);
            });
            return;
        }

        // parse the data here
        NSArray *values = ...

        // when done, call the completion handler on the main queue
        dispatch_async(dispatch_get_main_queue(), ^{
            completion(values, nil);
        });
    }] resume];
}

@end
Now, I’ll leave it up to you to build the NSArray of NSNumber values, as that’s a completely separate question. And while moving this network/parsing code out of the view and into the view controller is a little better, it probably doesn’t even belong there. You might have another object dedicated to performing network requests and/or parsing results. But, again, that’s probably beyond the scope of this question.
But hopefully this illustrates the idea: Get the view out of the business of performing network requests or parsing data. Have it just render whatever data was supplied.
That yields the two bars rendered in the view.

Related

How to wait till a function is completely executed when multiple objects call a function in iOS?

I have an architecture where I call a local function to display an image and then, in the background, upload the image to the server, so that once the upload is finished I can remove the local path used for displaying the image.
Functions:
DidCompletePickingImage(), DisplayImageUsingLocalPath(), UploadImageToServer() and RemoveImageFromLocal().
These are the activities. I also have the option to upload multiple images.
This is my current approach. I pick an array of images and call the function to show them using the local path.
for (NSInteger i = 0; i < photos.count; i++) {
    UIImage *img = photos[i];
    img = [self imageWithImage:img scaledToWidth:400];
    NSData *imageData = UIImageJPEGRepresentation(img, 0.40);
    [self showLocally:imageData img:img];
}
After they are shown, I start uploading them to the server on a background thread:
- (void)showLocally:(NSData *)imageData img:(UIImage *)img {
    // Code for showing it using temp path.
    dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0ul);
    dispatch_async(queue, ^{
        [self fileUpload:img];
    });
}
Then, on the background thread, the file is uploaded to the server using AFNetworking, and after I get a response I remove the local file path.
But when I do this in the background, all the images call the fileUpload method simultaneously and run concurrently, which increases the load on the server. How can I block a call of the function until the previous object that called the function has finished completely?
You may want to have a look at using NSOperationQueue in conjunction with maxConcurrentOperationCount -- limiting the number of operations that can occur all at once.
Here's a small example of what I mean:
#import "ViewController.h"
#interface ViewController ()
#property (strong, nullable) NSOperationQueue *imageUploaderQ;
- (void)_randomOperationWithDelayOutputtingInteger:(NSInteger)intToOutput;
#end
#implementation ViewController
- (void)viewDidLoad {
[super viewDidLoad];
self.imageUploaderQ = [[NSOperationQueue alloc] init];
// you can experiment with how many concurrent operations you want here
self.imageUploaderQ.maxConcurrentOperationCount = 3;
}
- (void)viewDidAppear:(BOOL)animated {
[super viewDidAppear:animated];
for (NSInteger index = 0; index < 100; index++) {
[self.imageUploaderQ addOperationWithBlock:^{
[self _randomOperationWithDelayOutputtingInteger:index];
}];
}
}
- (void)_randomOperationWithDelayOutputtingInteger:(NSInteger)intToOutput {
// simulating taking some time to upload
// don't ever explicitly call sleep in your actual code
sleep(2);
NSLog(#"Integer output = %li", intToOutput);
}
#end
Here are links to Apple's documentation on NSOperationQueue and maxConcurrentOperationCount.
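If you need the uploads to run strictly one at a time, as the question asks, the same approach works with a count of 1. A minimal sketch, reusing the imageUploaderQ property from the example above, and assuming fileUpload: blocks until the upload completes (a fire-and-forget asynchronous upload would instead need a semaphore or a custom concurrent NSOperation):
// Sketch: at most one upload operation runs at a time, so each upload
// starts only after the previous one has finished.
self.imageUploaderQ.maxConcurrentOperationCount = 1;

for (UIImage *img in photos) {
    [self.imageUploaderQ addOperationWithBlock:^{
        [self fileUpload:img];   // assumed to block until the upload completes
    }];
}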

Is this example of Polymorphism wrong?

I'm trying to get my head around polymorphism. My understanding is that it means you can have the same method across multiple classes, and at runtime the correct version will be called based on the type of the object it's being invoked on.
The example below states:
http://www.tutorialspoint.com/objective_c/objective_c_polymorphism.htm
"Objective-C polymorphism means that a call to a member function will cause a different function to be executed depending on the type of object that invokes the function."
In the example, both Square and Rectangle are subclasses of Shape, and both implement their own calculateArea method; I'm assuming it's this method that's being used to demonstrate the polymorphism concept. They call calculateArea on a Square object and Square's calculateArea method is called; then they call calculateArea on a Rectangle object and Rectangle's calculateArea method is called. It can't be that simple, surely? Square doesn't even know about Rectangle's calculateArea, which is in a completely different class, so it couldn't ever possibly be confused about which version of the method to use.
What am I missing?
You are correct, that example doesn't illustrate polymorphism. This is how they should've written the example.
#import <Foundation/Foundation.h>

// PARENT CLASS FOR ALL THE SHAPES
@interface Shape : NSObject
{
    CGFloat area;
}
- (void)printArea;
- (void)calculateArea;
@end

@implementation Shape

- (void)printArea {
    NSLog(@"The area is %f", area);
}

- (void)calculateArea {
    NSLog(@"Subclass should implement this %s", __PRETTY_FUNCTION__);
}

@end

@interface Square : Shape
{
    CGFloat length;
}
- (id)initWithSide:(CGFloat)side;
@end

@implementation Square

- (id)initWithSide:(CGFloat)side {
    if (self = [super init]) {
        length = side;
    }
    return self;
}

- (void)calculateArea {
    area = length * length;
}

- (void)printArea {
    NSLog(@"The area of square is %f", area);
}

@end

@interface Rectangle : Shape
{
    CGFloat length;
    CGFloat breadth;
}
- (id)initWithLength:(CGFloat)rLength andBreadth:(CGFloat)rBreadth;
@end

@implementation Rectangle

- (id)initWithLength:(CGFloat)rLength andBreadth:(CGFloat)rBreadth {
    if (self = [super init]) {
        length = rLength;
        breadth = rBreadth;
    }
    return self;
}

- (void)calculateArea {
    area = length * breadth;
}

@end

int main(int argc, const char *argv[])
{
    NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init];

    Shape *shape_s = [[Square alloc] initWithSide:10.0];
    [shape_s calculateArea];    // shape_s is typed as Shape, but calling calculateArea will call the
                                // method defined inside Square
    [shape_s printArea];        // printArea implemented inside the Square class will be called

    Shape *shape_rect = [[Rectangle alloc] initWithLength:10.0 andBreadth:5.0];
    [shape_rect calculateArea]; // Even though shape_rect is typed as Shape, Rectangle's
                                // calculateArea will be called.
    [shape_rect printArea];     // Rectangle doesn't override printArea, so the inherited
                                // Shape implementation will be called.

    [pool drain];
    return 0;
}
As mentioned in the Tutorials Point example, it is printArea (the part that actually demonstrates polymorphism) that is resolved based on whether the method is available in the base or derived class. calculateArea is an independent method specific to Rectangle and Square, and it doesn't demonstrate polymorphism; that is the misunderstanding. Also, in that example you cannot call calculateArea on an object declared as type Shape, since Shape doesn't declare a calculateArea method.
Checkout the correct answer in this post that explains about polymorphism.
What is the main difference between Inheritance and Polymorphism?

Reading CGImageRef in a background thread crashes the app

I have a big JPEG image that I want to load in tiles asynchronously in my OpenGL engine.
Everything works well if it's done on the main thread, but it's slow.
When I try to put the tile loading on an NSBlockOperation, it always crashes when trying to access the shared image data pointer that I previously loaded on the main thread.
There must be something I don't get about background operations, because I assumed I could access memory sections that I created on the main thread.
What I'm trying to do is the following:
@interface MyViewer
{
}
@property (atomic, assign) CGImageRef imageRef;
@property (atomic, assign) CGDataProviderRef dataProvider;
@property (atomic, assign) int loadedTextures;
@end

...

- (void)loadAllTiles:(NSData *)imgData
{
    queue = [[NSOperationQueue alloc] init];

    // Loop for Total Number of Textures
    self.dataProvider = CGDataProviderCreateWithData(NULL, [imgData bytes], [imgData length], 0);
    self.imageRef = CGImageCreateWithJPEGDataProvider(self.dataProvider, NULL, NO, kCGRenderingIntentDefault);

    for (int i = 0; i < tileCount; i++)
    {
        // I also tried this but without luck
        //CGImageRetain(self.imageRef);
        //CGDataProviderRetain(self.dataProvider);

        NSBlockOperation *partsLoading = [[NSBlockOperation alloc] init];
        __weak NSBlockOperation *weakpartsLoadingOp = partsLoading;
        [partsLoading addExecutionBlock:^{
            TamTexture2D& pTex2D = viewer->getTile(i);
            CGImageRef subImgRef = CGImageCreateWithImageInRect(self.imageRef, CGRectMake(pTex2D.left, pTex2D.top, pTex2D.width, pTex2D.height));

            //!!! It's crashing here !!!
            CFDataRef cgSubImgDataRef = CGDataProviderCopyData(CGImageGetDataProvider(subImgRef));

            CGImageRelease(subImgRef);
            ...
        }];

        // Adding parts loading on a low-priority thread. Is it all right ????
        [partsLoading setThreadPriority:0.0];
        [queue addOperation:partsLoading];
    }
I finally found out my problem...
I have read the Quartz 2D docs and it seems that we should not use CGDataProviderCreateWithData and CGImageCreateWithJPEGDataProvider anymore. I guess their usage is not thread safe.
As suggested by the docs, I now use the CGImageSource API like this:
self.imageSrcRef = CGImageSourceCreateWithData((__bridge CFDataRef)imgData, NULL);
// get imagePropertiesDictionary
CFDictionaryRef imagePropertiesDictionary = CGImageSourceCopyPropertiesAtIndex(m_imageSrcRef,0, NULL);
self.imageRef = CGImageSourceCreateImageAtIndex(m_imageSrcRef, 0, imagePropertiesDictionary);

Drawing in another thread with CGImage / CGLayer

I have a custom UICollectionViewCell subclass where I draw with clipping, stroking and transparency. It works pretty well on the Simulator and iPhone 5, but on older devices there are noticeable performance problems.
So I want to move the time-consuming drawing to a background thread. Since the -drawRect method is always called on the main thread, I ended up saving the drawn content to a CGImage (the original question contained code using CGLayer, but that is sort of obsolete, as Matt Long pointed out).
Here is my implementation of drawRect method inside this class:
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    if (self.renderedSymbol != nil) {
        CGContextDrawImage(ctx, self.bounds, self.renderedSymbol);
    }
}
Rendering method that defines this renderedSymbol property:
- (void)renderCurrentSymbol {
    [self.queue addOperationWithBlock:^{
        // creating custom context to draw there (contexts are not thread safe)
        CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
        CGContextRef ctx = CGBitmapContextCreate(nil, self.bounds.size.width, self.bounds.size.height, 8, self.bounds.size.width * (CGColorSpaceGetNumberOfComponents(space) + 1), space, kCGImageAlphaPremultipliedLast);
        CGColorSpaceRelease(space);

        // custom drawing goes here using 'ctx' context

        // then saving context as CGImageRef to property that will be used in drawRect
        self.renderedSymbol = CGBitmapContextCreateImage(ctx);

        // asking main thread to update UI
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            [self setNeedsDisplayInRect:self.bounds];
        }];

        CGContextRelease(ctx);
    }];
}
This setup works perfectly on the main thread, but when I wrap it with NSOperationQueue or GCD, I'm getting lots of different "invalid context 0x0" errors. The app doesn't crash, but the drawing doesn't happen. I suppose there is a problem with releasing the custom-created CGContextRef, but I don't know what to do about it.
Here are my property declarations. (I tried using atomic versions, but that didn't help.)
@property (nonatomic) CGImageRef renderedSymbol;
@property (nonatomic, strong) NSOperationQueue *queue;
@property (nonatomic, strong) NSString *symbol; // used in custom drawing
Custom setters / getters for properties:
- (NSOperationQueue *)queue {
    if (!_queue) {
        _queue = [[NSOperationQueue alloc] init];
        _queue.name = @"Background Rendering";
    }
    return _queue;
}

- (void)setSymbol:(NSString *)symbol {
    _symbol = symbol;
    self.renderedSymbol = nil;
    [self setNeedsDisplayInRect:self.bounds];
}

- (CGImageRef)renderedSymbol {
    if (_renderedSymbol == nil) {
        [self renderCurrentSymbol];
    }
    return _renderedSymbol;
}
What can I do?
Did you notice the document on CGLayer you're referencing hasn't been updated since 2006? The assumption you've made that CGLayer is the right solution is incorrect. Apple has all but abandoned this technology and you probably should too: http://iosptl.com/posts/cglayer-no-longer-recommended/ Use Core Animation.
Issue solved by using the amazing third-party library by Mind Snacks — MSCachedAsyncViewDrawing.
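For anyone who would rather avoid a third-party dependency, the underlying idea can be sketched with plain UIKit: render into an image context on a background queue, then hand the finished image back to the main thread. The property and method names below are hypothetical, and this relies on UIKit's image-context functions being safe to call off the main thread (documented as of iOS 4).
// Sketch only: render the symbol into a UIImage off the main thread, then display it.
- (void)renderSymbolInBackground {
    CGSize size = self.bounds.size;                       // capture on the main thread
    [self.queue addOperationWithBlock:^{
        UIGraphicsBeginImageContextWithOptions(size, NO, 0);
        // ... UIKit drawing (UIBezierPath, string drawing, etc.) goes here ...
        UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        [[NSOperationQueue mainQueue] addOperationWithBlock:^{
            self.renderedImage = rendered;                // hypothetical UIImage property
            [self setNeedsDisplayInRect:self.bounds];
        }];
    }];
}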

Is there a way to make drawRect work right NOW?

If you are an advanced user of drawRect, you will know that of course drawRect will not actually run until "all processing is finished."
setNeedsDisplay flags a view as invalidated, and the OS basically waits until all processing is done before redrawing. This can be infuriating in the common situation where you have:
1. a view controller
2. that starts some function
3. which incrementally
4. creates a more and more complicated artwork and
5. at each step, you setNeedsDisplay (wrong!)
6. until all the work is done
Of course, when you do the above 1-6, all that happens is that drawRect is run once only after step 6.
Your goal is for the view to be refreshed at point 5. What to do?
If I understand your question correctly, there is a simple solution to this. During your long-running routine you need to tell the current run loop to process for a single iteration (or more) at certain points in your own processing, e.g. when you want to update the display. Any views with dirty update regions will have their drawRect: methods called when you run the run loop.
To tell the current runloop to process for one iteration (and then return to you...):
[[NSRunLoop currentRunLoop] runMode: NSDefaultRunLoopMode beforeDate: [NSDate date]];
Here's an example of an (inefficient) long running routine with a corresponding drawRect - each in the context of a custom UIView:
- (void)longRunningRoutine:(id)sender
{
    srand(time(NULL));

    CGFloat x = 0;
    CGFloat y = 0;

    [_path moveToPoint:CGPointMake(0, 0)];

    for (int j = 0; j < 1000; j++)
    {
        x = 0;
        y = (CGFloat)(rand() % (int)self.bounds.size.height);
        [_path addLineToPoint:CGPointMake(x, y)];

        y = 0;
        x = (CGFloat)(rand() % (int)self.bounds.size.width);
        [_path addLineToPoint:CGPointMake(x, y)];

        x = self.bounds.size.width;
        y = (CGFloat)(rand() % (int)self.bounds.size.height);
        [_path addLineToPoint:CGPointMake(x, y)];

        y = self.bounds.size.height;
        x = (CGFloat)(rand() % (int)self.bounds.size.width);
        [_path addLineToPoint:CGPointMake(x, y)];

        [self setNeedsDisplay];
        [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate date]];
    }

    [_path removeAllPoints];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef ctx = UIGraphicsGetCurrentContext();

    CGContextSetFillColorWithColor(ctx, [UIColor blueColor].CGColor);
    CGContextFillRect(ctx, rect);

    CGContextSetStrokeColorWithColor(ctx, [UIColor whiteColor].CGColor);
    [_path stroke];
}
And here is a fully working sample demonstrating this technique.
With some tweaking you can probably adjust this to make the rest of the UI (i.e. user-input) responsive as well.
Update (caveat for using this technique)
I just want to say that I agree with much of the feedback from others here saying this solution (calling runMode: to force a call to drawRect:) isn't necessarily a great idea. I've answered this question with what I feel is a factual "here's how" answer to the stated question, and I am not intending to promote this as "correct" architecture. Also, I'm not saying there might not be other (better?) ways to achieve the same effect - certainly there may be other approaches that I wasn't aware of.
Update (response to the Joe's sample code and performance question)
The performance slowdown you're seeing is the overhead of running the runloop on each iteration of your drawing code, which includes rendering the layer to the screen as well as all of the other processing the runloop does such as input gathering and processing.
One option might be to invoke the runloop less frequently.
Another option might be to optimize your drawing code. As it stands (and I don't know if this is your actual app, or just your sample...) there are a handful of things you could do to make it faster. The first thing I would do is move all the UIGraphicsGet/Save/Restore code outside the loop.
From an architectural standpoint, however, I would highly recommend considering some of the other approaches mentioned here. I see no reason why you can't structure your drawing to happen on a background thread (algorithm unchanged), and use a timer or other mechanism to signal the main thread to update its UI at some frequency until the drawing is complete. I think most of the folks who've participated in the discussion would agree that this would be the "correct" approach.
Updates to the user interface happen at the end of the current pass through the run loop. These updates are performed on the main thread, so anything that runs for a long time in the main thread (lengthy calculations, etc.) will prevent the interface updates from being started. Additionally, anything that runs for a while on the main thread will also cause your touch handling to be unresponsive.
This means that there is no way to "force" a UI refresh to occur from some other point in a process running on the main thread. The previous statement is not entirely correct, as Tom's answer shows. You can allow the run loop to come to completion in the middle of operations performed on the main thread. However, this still may reduce the responsiveness of your application.
In general, it is recommended that you move anything that takes a while to perform to a background thread so that the user interface can remain responsive. However, any updates you wish to perform to the UI need to be done back on the main thread.
Perhaps the easiest way to do this under Snow Leopard and iOS 4.0+ is to use blocks, like in the following rudimentary sample:
dispatch_queue_t queue = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0);
dispatch_queue_t main_queue = dispatch_get_main_queue();

dispatch_async(queue, ^{
    // Do some work
    dispatch_async(main_queue, ^{
        // Update the UI
    });
});
The Do some work part of the above could be a lengthy calculation, or an operation that loops over multiple values. In this example, the UI is only updated at the end of the operation, but if you wanted continuous progress tracking in your UI, you could place the dispatch to the main queue wherever you needed a UI update to be performed.
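For example, a progress-reporting variant of the same pattern might look like this sketch, where totalSteps and progressView are hypothetical stand-ins for your own work count and UI element:
// Sketch: dispatch a small UI update to the main queue after each chunk of work.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    NSUInteger totalSteps = 100;                        // hypothetical amount of work
    for (NSUInteger step = 0; step < totalSteps; step++) {
        // ... one slice of the lengthy calculation ...
        float fraction = (float)(step + 1) / totalSteps;
        dispatch_async(dispatch_get_main_queue(), ^{
            progressView.progress = fraction;           // hypothetical UIProgressView
        });
    }
});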
For older OS versions, you can break off a background thread manually or through an NSOperation. For manual background threading, you can use
[NSThread detachNewThreadSelector:@selector(doWork) toTarget:self withObject:nil];
or
[self performSelectorInBackground:@selector(doWork) withObject:nil];
and then to update the UI you can use
[self performSelectorOnMainThread:@selector(updateProgress) withObject:nil waitUntilDone:NO];
Note that I've found the NO argument in the previous method to be needed to get constant UI updates while dealing with a continuous progress bar.
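Putting those older-OS pieces together, a background work method driving a progress bar might be sketched like this (doWork, updateProgress: and progressView are hypothetical names):
// Sketch: background method started with detachNewThreadSelector: or performSelectorInBackground:.
- (void)doWork {
    @autoreleasepool {
        NSUInteger totalSteps = 100;                    // hypothetical amount of work
        for (NSUInteger step = 0; step < totalSteps; step++) {
            // ... one chunk of the lengthy calculation ...
            NSNumber *fraction = @((float)(step + 1) / totalSteps);
            [self performSelectorOnMainThread:@selector(updateProgress:)
                                   withObject:fraction
                                waitUntilDone:NO];
        }
    }
}

- (void)updateProgress:(NSNumber *)fraction {
    self.progressView.progress = fraction.floatValue;   // hypothetical UIProgressView property
}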
This sample application I created for my class illustrates how to use both NSOperations and queues for performing background work and then updating the UI when done. Also, my Molecules application uses background threads for processing new structures, with a status bar that is updated as this progresses. You can download the source code to see how I achieved this.
You can do this repeatedly in a loop and it'll work fine, no threads, no messing with the runloop, etc.
[CATransaction begin];
// modify view or views
[view setNeedsDisplay];
[CATransaction commit];
If there is an implicit transaction already in place prior to the loop you need to commit that with [CATransaction commit] before this will work.
In order to get drawRect called the soonest (which is not necessarily immediately, as the OS may still wait until, for instance, the next hardware display refresh, etc.), an app should idle its UI run loop as soon as possible, by exiting any and all methods in the UI thread, and for a non-zero amount of time.
You can either do this in the main thread by chopping any processing that takes more than an animation frame time into shorter chunks and scheduling continuing work only after a short delay (so drawRect might run in the gaps), or by doing the processing in a background thread, with a periodic call to performSelectorOnMainThread to do a setNeedsDisplay at some reasonable animation frame rate.
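The first of those two options, chopping the work into main-thread chunks, might be sketched like this (processChunk:, updateDrawingState: and chunkCount are hypothetical):
// Sketch: each chunk runs on the main thread, then yields so drawRect can run in the gap.
- (void)processChunk:(NSNumber *)index {
    NSUInteger i = index.unsignedIntegerValue;
    if (i >= self.chunkCount) { return; }               // all work done
    [self updateDrawingState:i];                        // one slice of the heavy work
    [self setNeedsDisplay];
    // schedule the next chunk; the run loop gets a chance to redraw in between
    [self performSelector:@selector(processChunk:) withObject:@(i + 1) afterDelay:0.0];
}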
A non-OpenGL method to update the display near immediately (which means at the very next hardware display refresh or three) is by swapping visible CALayer contents with an image or CGBitmap that you have drawn into. An app can do Quartz drawing into a Core Graphics bitmap at pretty much any time.
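A rough sketch of that bitmap-swapping idea, with hypothetical width, height and myView names, might be:
// Sketch: draw into a bitmap context off the main thread, then hand the CGImage to the layer.
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
    CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
    CGContextRef ctx = CGBitmapContextCreate(NULL, width, height, 8, 0, space,
                                             (CGBitmapInfo)kCGImageAlphaPremultipliedLast);
    CGColorSpaceRelease(space);

    // ... Quartz drawing into ctx goes here ...

    CGImageRef image = CGBitmapContextCreateImage(ctx);
    CGContextRelease(ctx);

    dispatch_async(dispatch_get_main_queue(), ^{
        myView.layer.contents = (__bridge id)image;     // swap the layer's contents to the new bitmap
        CGImageRelease(image);
    });
});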
New added answer:
Please see Brad Larson's comments below and Christopher Lloyd's comment on another answer here as the hint leading towards this solution.
[ CATransaction flush ];
will cause drawRect to be called on views on which a setNeedsDisplay request has been done, even if the flush is done from inside a method that is blocking the UI run loop.
Note that, when blocking the UI thread, a Core Animation flush is required to update changing CALayer contents as well. So, for animating graphic content to show progress, these may both end up being forms of the same thing.
New added note to new added answer above:
Do not flush faster than your drawRect or animation drawing can complete, as this might queue up flushes, causing weird animation effects.
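One way to respect that caveat is to throttle the flushes, for example only flushing when roughly a frame's worth of time has passed. A sketch, with a hypothetical long-running loop:
// Sketch: flush at most ~30 times per second while doing long-running work on the main thread.
CFAbsoluteTime lastFlush = CFAbsoluteTimeGetCurrent();
for (NSUInteger i = 0; i < iterations; i++) {           // hypothetical long-running loop
    // ... incremental drawing-state updates ...
    [self setNeedsDisplay];
    CFAbsoluteTime now = CFAbsoluteTimeGetCurrent();
    if (now - lastFlush > 1.0 / 30.0) {
        [CATransaction flush];
        lastFlush = now;
    }
}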
Without questioning the wisdom of this (which you ought to do), you can do:
[myView setNeedsDisplay];
[[myView layer] displayIfNeeded];
-setNeedsDisplay will mark the view as needing to be redrawn.
-displayIfNeeded will force the view's backing layer to redraw, but only if it has been marked as needing to be displayed.
I will emphasize, however, that your question is indicative of an architecture that could use some re-working. In all but exceptionally rare cases, you should never need to or want to force a view to redraw immediately. UIKit was not built with that use case in mind, and if it works, consider yourself lucky.
Have you tried doing the heavy processing on a secondary thread and calling back to the main thread to schedule view updates? NSOperationQueue makes this sort of thing pretty easy.
Sample code that takes an array of NSURLs as input and asynchronously downloads them all, notifying the main thread as each of them is finished and saved.
- (void)fetchImageWithURLs:(NSArray *)urlArray {
    [self.retriveAvatarQueue cancelAllOperations];
    self.retriveAvatarQueue = nil;

    NSOperationQueue *opQueue = [[NSOperationQueue alloc] init];

    for (NSUInteger i = 0; i < [urlArray count]; i++) {
        NSURL *url = [urlArray objectAtIndex:i];
        NSInvocation *inv = [NSInvocation invocationWithMethodSignature:[self methodSignatureForSelector:@selector(cacheImageWithIndex:andURL:)]];
        [inv setTarget:self];
        [inv setSelector:@selector(cacheImageWithIndex:andURL:)];
        [inv setArgument:&i atIndex:2];
        [inv setArgument:&url atIndex:3];

        NSInvocationOperation *invOp = [[NSInvocationOperation alloc] initWithInvocation:inv];
        [opQueue addOperation:invOp];
        [invOp release];
    }

    self.retriveAvatarQueue = opQueue;
    [opQueue release];
}

- (void)cacheImageWithIndex:(NSUInteger)index andURL:(NSURL *)url {
    NSData *imageData = [NSData dataWithContentsOfURL:url];
    NSFileManager *fileManager = [NSFileManager defaultManager];
    NSString *filePath = PATH_FOR_IMG_AT_INDEX(index);
    NSError *error = nil;

    // Save the file
    if (![fileManager createFileAtPath:filePath contents:imageData attributes:nil]) {
        DLog(@"Error saving file at %@", filePath);
    }

    // Notify the main thread that our file is saved.
    [self performSelectorOnMainThread:@selector(imageLoadedAtPath:) withObject:filePath waitUntilDone:NO];
}
I think the most complete answer comes from Jeffrey Sambell's blog post 'Asynchronous Operations in iOS with Grand Central Dispatch', and it worked for me!
It's basically the same solution as proposed by Brad above, but fully explained in terms of the OS X/iOS concurrency model.
The dispatch_get_current_queue function will return the current queue
from which the block is dispatched and the dispatch_get_main_queue
function will return the main queue where your UI is running.
The dispatch_get_main_queue function is very useful for updating the
iOS app’s UI as UIKit methods are not thread safe (with a few
exceptions) so any calls you make to update UI elements must always be
done from the main queue.
A typical GCD call would look something like this:
// Doing something on the main thread

dispatch_queue_t myQueue = dispatch_queue_create("My Queue", NULL);
dispatch_async(myQueue, ^{
    // Perform long running process
    dispatch_async(dispatch_get_main_queue(), ^{
        // Update the UI
    });
});

// Continue doing other stuff on the
// main thread while process is running.
And here goes my working example (iOS 6+). It displays frames of a stored video using the AVAssetReader class:
//...prepare the AVAssetReader* asset_reader earlier and start reading frames now:
[asset_reader startReading];

dispatch_queue_t readerQueue = dispatch_queue_create("Reader Queue", NULL);
dispatch_async(readerQueue, ^{
    CMSampleBufferRef buffer;
    while ([asset_reader status] == AVAssetReaderStatusReading)
    {
        buffer = [asset_reader_output copyNextSampleBuffer];
        if (buffer != nil)
        {
            // The point is here: use the main queue for actual UI operations
            dispatch_async(dispatch_get_main_queue(), ^{
                // Update the UI using the AVCaptureVideoDataOutputSampleBufferDelegate-style function
                [self captureOutput:nil didOutputSampleBuffer:buffer fromConnection:nil];
                CFRelease(buffer);
            });
        }
    }
});
The first part of this sample may be found here in Damian's answer.
I'd like to offer a clean solution to the given problem.
I agree with other posters that in an ideal situation all the heavy lifting should be done in a background thread; however, there are times when this simply isn't possible because the time-consuming part requires lots of access to non-thread-safe methods, such as those offered by UIKit. In my case, initialising my UI is time consuming and there's nothing I can run in the background, so my best option is to update a progress bar during the init.
However, once we think in terms of the ideal GCD approach, the solution is actually simple. We do all the work in a background thread, dividing it into chunks that are called synchronously on the main thread. The run loop will be run for each chunk, updating the UI and any progress bars etc.
- (void)myInit
{
    // Start the work in a background thread.
    dispatch_async(dispatch_get_global_queue(0, 0), ^{

        // Back to the main thread for a chunk of code
        dispatch_sync(dispatch_get_main_queue(), ^{
            ...
            // Update progress bar
            self.progressIndicator.progress = ...;
        });

        // Next chunk
        dispatch_sync(dispatch_get_main_queue(), ^{
            ...
            // Update progress bar
            self.progressIndicator.progress = ...;
        });

        ...
    });
}
Of course, this is essentially the same as Brad's technique, but his answer doesn't quite address the issue at hand - that of running a lot of non thread safe code while updating the UI periodically.
Joe -- if you are willing to set it up so that your lengthy processing all happens inside of drawRect, you can make it work. I just wrote a test project. It works. See code below.
LengthyComputationTestAppDelegate.h:
#import <UIKit/UIKit.h>
@interface LengthyComputationTestAppDelegate : NSObject <UIApplicationDelegate> {
    UIWindow *window;
}

@property (nonatomic, retain) IBOutlet UIWindow *window;

@end
LengthyComputationTestAppDelegate.m:
#import "LengthyComputationTestAppDelegate.h"
#import "Incrementer.h"
#import "IncrementerProgressView.h"
#implementation LengthyComputationTestAppDelegate
#synthesize window;
#pragma mark -
#pragma mark Application lifecycle
- (BOOL)application:(UIApplication *)application didFinishLaunchingWithOptions:(NSDictionary *)launchOptions {
// Override point for customization after application launch.
IncrementerProgressView *ipv = [[IncrementerProgressView alloc]initWithFrame:self.window.bounds];
[self.window addSubview:ipv];
[ipv release];
[self.window makeKeyAndVisible];
return YES;
}
Incrementer.h:
#import <Foundation/Foundation.h>

// singleton object
@interface Incrementer : NSObject {
    NSUInteger theInteger_;
}

@property (nonatomic) NSUInteger theInteger;

+ (Incrementer *)sharedIncrementer;
- (NSUInteger)incrementForTimeInterval:(NSTimeInterval)timeInterval;
- (BOOL)finishedIncrementing;

@end
Incrementer.m:
#import "Incrementer.h"
#implementation Incrementer
#synthesize theInteger = theInteger_;
static Incrementer *inc = nil;
-(void) increment {
theInteger_++;
}
-(BOOL) finishedIncrementing {
return (theInteger_>=100000000);
}
-(NSUInteger) incrementForTimeInterval: (NSTimeInterval) timeInterval {
NSTimeInterval negativeTimeInterval = -1*timeInterval;
NSDate *startDate = [NSDate date];
while (!([self finishedIncrementing]) && [startDate timeIntervalSinceNow] > negativeTimeInterval)
[self increment];
return self.theInteger;
}
-(id) init {
if (self = [super init]) {
self.theInteger = 0;
}
return self;
}
#pragma mark --
#pragma mark singleton object methods
+ (Incrementer *) sharedIncrementer {
#synchronized(self) {
if (inc == nil) {
inc = [[Incrementer alloc]init];
}
}
return inc;
}
+ (id)allocWithZone:(NSZone *)zone {
#synchronized(self) {
if (inc == nil) {
inc = [super allocWithZone:zone];
return inc; // assignment and return on first allocation
}
}
return nil; // on subsequent allocation attempts return nil
}
- (id)copyWithZone:(NSZone *)zone
{
return self;
}
- (id)retain {
return self;
}
- (unsigned)retainCount {
return UINT_MAX; // denotes an object that cannot be released
}
- (void)release {
//do nothing
}
- (id)autorelease {
return self;
}
#end
IncrementerProgressView.m:
#import "IncrementerProgressView.h"
#implementation IncrementerProgressView
#synthesize progressLabel = progressLabel_;
#synthesize nextUpdateTimer = nextUpdateTimer_;
-(id) initWithFrame:(CGRect)frame {
if (self = [super initWithFrame: frame]) {
progressLabel_ = [[UILabel alloc]initWithFrame:CGRectMake(20, 40, 300, 30)];
progressLabel_.font = [UIFont systemFontOfSize:26];
progressLabel_.adjustsFontSizeToFitWidth = YES;
progressLabel_.textColor = [UIColor blackColor];
[self addSubview:progressLabel_];
}
return self;
}
-(void) drawRect:(CGRect)rect {
[self.nextUpdateTimer invalidate];
Incrementer *shared = [Incrementer sharedIncrementer];
NSUInteger progress = [shared incrementForTimeInterval: 0.1];
self.progressLabel.text = [NSString stringWithFormat:#"Increments performed: %d", progress];
if (![shared finishedIncrementing])
self.nextUpdateTimer = [NSTimer scheduledTimerWithTimeInterval:0. target:self selector:(#selector(setNeedsDisplay)) userInfo:nil repeats:NO];
}
- (void)dealloc {
[super dealloc];
}
#end
Regarding the original issue:
In a word: you can (A) do the large painting in the background and call to the foreground for UI updates, or (B), arguably controversially, use one of the four 'immediate' methods suggested that do not use a background process. To see what actually works, run the demo program; it has #defines for all five methods.
Alternately, per Tom Swift:
Tom Swift has explained the amazing idea of quite simply manipulating the run loop. Here's how you trigger the run loop:
[[NSRunLoop currentRunLoop] runMode: NSDefaultRunLoopMode beforeDate: [NSDate date]];
This is a truly amazing piece of engineering. Of course one should be extremely careful when manipulating the run loop and as many pointed out this approach is strictly for experts.
However, a bizarre problem arises ...
Even though a number of the methods work, they don't actually "work" because there is a bizarre progressive-slow-down artifact you will see clearly in the demo.
Scroll to the 'answer' I pasted in below, showing the console output - you can see how it progressively slows.
Here's the new SO question:
Mysterious "progressive slowing" problem in run loop / drawRect
Here is V2 of the demo app...
http://www.fileswap.com/dl/p8lU3gAi/stepwiseDrawingV2.zip.html
You will see it tests all five methods,
#ifdef TOMSWIFTMETHOD
    [self setNeedsDisplay];
    [[NSRunLoop currentRunLoop] runMode:NSDefaultRunLoopMode beforeDate:[NSDate date]];
#endif
#ifdef HOTPAW
    [self setNeedsDisplay];
    [CATransaction flush];
#endif
#ifdef LLOYDMETHOD
    [CATransaction begin];
    [self setNeedsDisplay];
    [CATransaction commit];
#endif
#ifdef DDLONG
    [self setNeedsDisplay];
    [[self layer] displayIfNeeded];
#endif
#ifdef BACKGROUNDMETHOD
    // here, the painting is being done in the bg, we have been
    // called here in the foreground to inval
    [self setNeedsDisplay];
#endif
You can see for yourself which methods work and which do not.
You can see the bizarre "progressive-slow-down". Why does it happen?
You can see that with the controversial TOMSWIFT method there is actually no problem at all with responsiveness; tap for a response at any time (but the bizarre "progressive-slow-down" problem remains).
So the overwhelming thing is this weird "progressive-slow-down": on each iteration, for unknown reasons, the time taken for a loop increases. Note that this applies both to doing it "properly" (the background approach) and to using one of the 'immediate' methods.
Practical solutions?
For anyone reading in the future, if you are actually unable to get this to work in production code because of the "mystery progressive slowdown", Felz and Void have each presented astounding solutions in the other specific question.
