I'm building a spectrograph and would like to know how I can improve the performance of my UIView-based code. I know that I can't update the user interface from a background thread, so I'm doing most of my processing with GCD. The issue I'm running into is that my interface still updates far too slowly.
With the code below, I'm taking 32 stacked 4x4 pixel UIViews and changing their background colors (see the green squares in the attached image). The operation produces visible lag in the rest of the user interface.
Is there a way I can "prepare" these colors on some kind of background thread and then ask the main thread to refresh the interface all at once?
//create a color intensity map used to color pixels
static dispatch_once_t onceToken;
dispatch_once(&onceToken, ^{
    colorMap = [[NSMutableDictionary alloc] initWithCapacity:128];
    for (int i = 0; i < 128; i++)
    {
        [colorMap setObject:[UIColor colorWithHue:0.2 saturation:1 brightness:i/128.0 alpha:1]
                     forKey:[NSNumber numberWithInt:i]];
    }
});
-(void)updateLayoutFromMainThread:(id)sender
{
    for (UIView *tempView in self.markerViews)
    {
        tempView.backgroundColor = [colorMap objectForKey:[NSNumber numberWithInt:arc4random() % 128]];
    }
}
//called from the background; would do heavy processing and Fourier transforms
-(void)updateLayout
{
    //update the interface from the main thread
    [self performSelectorOnMainThread:@selector(updateLayoutFromMainThread:) withObject:nil waitUntilDone:NO];
}
I ended up pre-calculating a dictionary of 256 colors and then asking the dictionary for the color based on the value that the circle is trying to display. Trying to allocate colors on the fly was the bottleneck.
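A minimal sketch of that pattern, assuming the `colorMap` and `self.markerViews` from the code above: do the heavy work and the dictionary lookups on a background queue, then hop to the main queue once and apply all the colors in a single pass.
- (void)updateLayout
{
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        // Heavy processing / FFT would happen here; it ends with one color per view.
        NSMutableArray *colors = [NSMutableArray arrayWithCapacity:self.markerViews.count];
        for (NSUInteger i = 0; i < self.markerViews.count; i++)
        {
            [colors addObject:[colorMap objectForKey:[NSNumber numberWithInt:arc4random() % 128]]];
        }
        // A single hop to the main thread to touch UIKit.
        dispatch_async(dispatch_get_main_queue(), ^{
            for (NSUInteger i = 0; i < self.markerViews.count; i++)
            {
                UIView *tempView = self.markerViews[i];
                tempView.backgroundColor = colors[i];
            }
        });
    });
}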
Yes, a couple of points.
While you can't manipulate UIViews from a background thread, you can instantiate views on a background thread before using them. I'm not sure that will help you much here, though. Beyond instantiation, UIViews are really just a metadata wrapper around CALayer objects, and they are optimised for flexibility rather than performance.
Your best bet is to draw to a layer object or an image object on a background thread (a slower process, because drawing uses the CPU as well as the GPU), pass the layer or image to the main thread, then draw the pre-rendered image into your view's layer (much faster, because it's a simple call that asks the graphics processor to blit the image directly into the UIView's backing store).
see this answer:
Render to bitmap then blit to screen
The code:
- (void)drawRect:(CGRect)rect {
    // `image` is a CGImageRef that was pre-rendered elsewhere
    // (e.g. on a background thread) and stored for reuse.
    CGContextRef context = UIGraphicsGetCurrentContext();
    CGContextDrawImage(context, rect, image);
}
executes far faster than if you were to execute other drawing operations, such as drawing bezier curves, in the same method.
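A rough sketch of that pipeline (the `renderView` property, the queue choice, and the elided drawing are assumptions for illustration, not from the original answer):
// Sketch: render to an image off the main thread, then blit on the main thread.
- (void)redrawAsync
{
    CGSize size = self.renderView.bounds.size; // capture on the main thread
    dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0), ^{
        UIGraphicsBeginImageContextWithOptions(size, YES, 0);
        // ... expensive Core Graphics drawing into the current context ...
        UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
        UIGraphicsEndImageContext();

        dispatch_async(dispatch_get_main_queue(), ^{
            // Handing the CGImage to layer.contents is a cheap blit for the GPU.
            self.renderView.layer.contents = (id)rendered.CGImage;
        });
    });
}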
Related
I have (multiple) UIViews with layers of type CAEAGLLayer, and am able to call [EAGLContext presentRenderbuffer:] on renderbuffers attached to these layers, on a secondary thread, without any kind of graphical glitches.
I would have expected to see at least some tearing, since other UI with which these UIViews are composited is updated on the main thread.
Does CAEAGLLayer (I have kEAGLDrawablePropertyRetainedBacking set to NO) do some double-buffering behind the scenes?
I just want to understand why it is that this works...
Example:
BView is a UIView subclass that owns a framebuffer with renderbuffer storage assigned to its OpenGL ES layer, in a shared EAGLContext:
@implementation BView

-(id) initWithFrame:(CGRect)frame context:(EAGLContext*)context
{
    self = [super initWithFrame:frame];

    // Configure layer
    CAEAGLLayer* eaglLayer = (CAEAGLLayer*)self.layer;
    eaglLayer.opaque = YES;
    eaglLayer.drawableProperties = @{ kEAGLDrawablePropertyRetainedBacking : [NSNumber numberWithBool:NO],
                                      kEAGLDrawablePropertyColorFormat : kEAGLColorFormatSRGBA8 };

    // Create framebuffer with renderbuffer attached to layer
    // (FrameBuffer and RenderBuffer are GLuint ivars)
    [EAGLContext setCurrentContext:context];
    glGenFramebuffers( 1, &FrameBuffer );
    glBindFramebuffer( GL_FRAMEBUFFER, FrameBuffer );
    glGenRenderbuffers( 1, &RenderBuffer );
    glBindRenderbuffer( GL_RENDERBUFFER, RenderBuffer );
    [context renderbufferStorage:GL_RENDERBUFFER fromDrawable:(id<EAGLDrawable>)self.layer];
    glFramebufferRenderbuffer( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, RenderBuffer );

    return self;
}

+(Class) layerClass
{
    return [CAEAGLLayer class];
}

@end
A UIViewController adds a BView instance on the main thread at init time:
BView* view = [[BView alloc] initWithFrame:(CGRect){ 0.0, 0.0, 75.0, 75.0 } context:Context];
[self.view addSubview:view];
On a secondary thread, render to the framebuffer in the BView and present it; in this case it's in a callback from a video AVCaptureDevice, called regularly:
-(void) captureOutput:(AVCaptureOutput*)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection*)connection
{
    [EAGLContext setCurrentContext:bPipe->Context.GlContext];

    // Render into framebuffer ...

    // Present renderbuffer
    glBindRenderbuffer( GL_RENDERBUFFER, BViewsRenderBuffer );
    [Context presentRenderbuffer:GL_RENDERBUFFER];
}
It used to not work: there were several issues with updating the view if the buffer was presented on any thread but the main one. This seems to have been working for some time now, but you implement it at your own risk; later iOS versions may have issues with it again, just as some older ones probably still do (not that you need to support very old OS versions anyway).
Apple has always been a bit closed about how things work internally, but we can guess a few things. Since iOS seems to be the only platform that exposes your main buffer as an FBO (framebuffer object), I would expect that the actual main framebuffer is inaccessible to developers and that your main FBO is redrawn into it when you present the renderbuffer. The last time I checked, presenting the renderbuffer blocks the calling thread and appears to be capped at the screen refresh rate (60 FPS in most cases), which implies there is still some locking mechanism. More testing would be needed, but I would expect there is some sort of pool of buffers waiting to be redrawn into the main buffer, where only one buffer with a given id can be in the pool at a time, otherwise the calling thread is blocked. The first call to present the renderbuffer would then never block, but each subsequent call would block if the previous buffer has not yet been redrawn.
If this is true, then yes, some form of double buffering is mandatory at some point, since you may immediately continue drawing into your buffer. Because the renderbuffer keeps the same id across frames, it probably isn't swapped (as far as I know), but it could be redrawn/copied into another buffer (most likely a texture), which can be done on the fly at any time. Under this scheme, when you first present the buffer, it is copied into a texture, which is then locked; when the screen refreshes, the texture is consumed and unlocked. So if that texture is still locked, your present call blocks the thread; otherwise it continues smoothly. It's hard to call this double buffering, exactly: there are two buffers, but they still cooperate through a locking mechanism.
I hope this helps you understand why it works; it is much the same procedure you would use when loading large data structures on a separate shared context running on a separate thread.
Still, most of this is unfortunately just guesswork.
In Core Graphics, I am hoping to be able to draw updates to a UIView without having to redraw the entire image each time. The initial image is drawn from a CGImageRef:
CGImageRef image = CGBitmapContextCreateImage(ctxImage);
CGContextDrawImage(context, _screenRect, image);
Initially I was adding the new sections to an off-screen context containing the 'full image' and then creating a new CGImageRef from it before re-drawing:
CGContextDrawImage(tmp_Context,targetRect,_section);
CGImageRef image = CGBitmapContextCreateImage(tmp_Context);
CGContextDrawImage(context, _screenRect, image);
The problem with this is that on every frame update I need to redraw the entire image to the screen rather than somehow superimposing just the new sections. Avoiding that full redraw would yield a large performance increase, but I'm not sure how to go about it. Possibly with CGLayer?
Edit: I have been trying to rapidly update the view in a for loop as I receive small sections. The problem is that drawRect isn't getting called to do any updates, maybe because the for loop is going too fast and keeps pre-empting the previous calls?
drawLayer = CGLayerCreateWithContext(ctxImage, targetRect.size, nil);
layerContext = CGLayerGetContext(drawLayer);
CGContextDrawImage(layerContext,targetRect,ipegSection);
CGContextDrawLayerAtPoint(currentScreen, targetRect.origin, drawLayer);
// Update the UI
[targetView setNeedsDisplay];
Can't you just do:
CGContextDrawImage(context, targetRect, _section);
That way you draw only your section into the current context, rather than drawing it into another context and then redrawing the screen entirely.
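If you pair that with partial invalidation, only the changed region ever gets redrawn. A sketch under the question's assumptions (`_section` as a CGImageRef ivar and `_sectionRect` as a CGRect ivar, mirroring the names used above):
// Sketch: invalidate only the changed region, then draw only that region.
- (void)appendSection:(CGImageRef)section inRect:(CGRect)targetRect
{
    if (_section) CGImageRelease(_section);
    _section = CGImageRetain(section);
    _sectionRect = targetRect;

    // Only this rect is marked dirty; the layer's backing store keeps
    // the previously drawn pixels outside it.
    [self setNeedsDisplayInRect:targetRect];
}

- (void)drawRect:(CGRect)rect
{
    CGContextRef context = UIGraphicsGetCurrentContext();
    // Redraw just the new section; drawRect is handed (a union of) the dirty rects.
    CGContextDrawImage(context, _sectionRect, _section);
}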
I'm drawing a view using renderInContext to create a screenshot for storage in core-data.
I'm using this 'standard' code...
UIGraphicsBeginImageContextWithOptions( myview.bounds.size, YES, 0 );
CGContextRef ctx = UIGraphicsGetCurrentContext();
[myview.layer renderInContext:ctx];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
Originally, I was doing this on the main thread, which worked fine. However, as the view and its subviews became more complex (possibly hundreds of subviews with their own draw routines), the UI became too slow, so I moved the rendering onto a background thread.
This works, except that the background color of the view 'myview' comes out black, which isn't what I've set it to: white.
Experimenting, I've noticed that if I pause my background thread for a second or two, the rendering completes with the background in the required color. This is kind-of-OK, but as the view becomes more complex the pause needs to be longer to capture the correct image, and it isn't really right to rely on a time delay that I have to keep increasing as the view grows more complex.
Has anyone got any suggestions for how to resolve this?
For info purposes, I've managed to fix this.
When I needed to render in the background, I was recreating my view and its subviews programmatically, but the main view was not immediately calling drawRect when I called setNeedsDisplay (of course!). So my background thread sometimes ran before the main view had rendered itself.
By forcing the view to render itself immediately (with the code below), I could 'synchronise' everything and get the correct thumbnails.
CALayer *layer = self.layer;
[layer setNeedsDisplay];   // mark the layer as dirty
[layer displayIfNeeded];   // force it to render synchronously, right now
Hope this helps anyone else.
Alternative:
UIGraphicsBeginImageContextWithOptions(myview.bounds.size, YES, 0);
[myview drawViewHierarchyInRect:myview.bounds afterScreenUpdates:YES];
UIImage *img = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
I've got a simple UIView class that draws some text in its drawRect routine:
[mString drawInRect:theFrameRect withFont:theFont];
That looks OK at regular resolution, but when zoomed, it's fuzzy:
[image removed, not enough posts]
So, I added some tiling:
CATiledLayer *theLayer = (CATiledLayer *) self.layer;
theLayer.levelsOfDetailBias = 8;
theLayer.levelsOfDetail = 8;
theLayer.tileSize = CGSizeMake(1024,1024);
(plus the requisite layerClass routine, shown below)
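For reference, the requisite override just returns the tiled layer class:
// Back the view with a CATiledLayer instead of a plain CALayer.
+ (Class)layerClass
{
    return [CATiledLayer class];
}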
But now the text draws twice when zoomed, whenever the size of the frame is larger than the tile size:
[image removed, not enough posts]
I'm not clear on the solution for this. Drawing the text is an atomic operation. I could figure out how to calculate which portion of the text to draw based on the rect that's passed in... but is that really the way to go? Older code examples use drawLayer:inContext:, but that seems to have been obviated by iOS 5, and it is clearly more cumbersome than a straight drawRect: call.
I am completely new to implementing a custom drawRect method (and to Core Graphics), but am doing so to improve the scrolling performance of my UITableView. Please do let me know if I am doing anything stupid.
In my cell, I have a UIImage, and over the bottom part of it I would like to print the image's caption. In order for the caption text to show up clearly regardless of the image, I would like a black rectangle with ~75% opacity on top of the UIImage and below the caption text.
I tried the following:
[self.picture drawAtPoint:point];
[[UIColor colorWithRed:0.0 green:0.0 blue:0.0 alpha:0.75] setFill];
UIRectFill(rect);
but the resulting fill actually eats into the UIImage (excuse my poor description, sorry), and the part showing through the slightly transparent fill is the background of my UITableView...
I guess I could make another image for the rectangle and draw it on top of self.picture, but I am wondering whether there is an easier way to achieve this with UIRectFill instead...
As mentioned, I am completely new to Core Graphics, so any hints would be much appreciated. Thanks in advance!
Also, I have a second question... the dimensions (in pixels) of the downloaded image are twice those of the rect (in points) it will fit into, to account for retina displays. However, it currently draws beyond that rect, even on an iPhone 4 device... How can I fix that (including for pre-iPhone 4 devices)?
I don't do much custom drawRect stuff, so I'll defer that portion of the question to someone else, but tableview performance issues are usually solved much more easily by moving the expensive calculations onto a background queue and then asynchronously updating the cell from the main queue when that background operation is done. Thus, something like:
First, define an operation queue property for the tableview:
@property (nonatomic, strong) NSOperationQueue *queue;
Then in viewDidLoad, initialize this:
self.queue = [[NSOperationQueue alloc] init];
self.queue.maxConcurrentOperationCount = 4;
And then in cellForRowAtIndexPath, you could then:
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath
{
    static NSString *CellIdentifier = @"MyCellIdentifier";
    UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:CellIdentifier];

    // Do the quick, computationally inexpensive stuff first, here.
    // Examples might include setting the labels, adding/setting various controls,
    // using any images that you might already have cached, clearing any of the
    // image stuff you might be recalculating in the background queue in case you're
    // dealing with a dequeued cell, etc.

    // Now send the slower stuff to the background queue.
    [self.queue addOperationWithBlock:^{

        // Do the slower stuff (like complex image processing) here.
        // If you're doing caching, update the cache here, too.

        // When done with the slow stuff, send the UI update back
        // to the main queue...
        [[NSOperationQueue mainQueue] addOperationWithBlock:^{

            // See if the cell is still visible, and if so ...
            UITableViewCell *cell = [tableView cellForRowAtIndexPath:indexPath];
            if (cell)
            {
                // ... now update the UI back on the main queue.
            }
        }];
    }];

    return cell;
}
You can optimize this further by caching the results of your computationally expensive work in something like an NSCache (and perhaps to Documents or elsewhere as well), so you can control how often that complex work has to be done and really tighten up the UI.
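A minimal sketch of that caching idea (the `imageCache` property and `expensiveImageProcessingForKey:` helper are hypothetical, not part of the original answer):
// Sketch: cache-first lookup for the background block above.
// `self.imageCache` is an assumed NSCache property, created alongside `queue`.
- (UIImage *)processedImageForKey:(NSString *)key
{
    UIImage *processed = [self.imageCache objectForKey:key];
    if (processed == nil)
    {
        processed = [self expensiveImageProcessingForKey:key]; // assumed slow helper
        if (processed)
        {
            [self.imageCache setObject:processed forKey:key];
        }
    }
    return processed;
}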
And, by the way, when you do that, you can just put your UILabel (with its backgroundColor set to that black UIColor with 0.75 alpha) on top of the UIImageView, and iOS takes care of it for you. As easy as it gets.
On the final question about image resolution, you can either:
use the view's contentScaleFactor to figure out whether you're dealing with retina or not, and resize the thumbnail image accordingly (see the sketch below); or
just use an imageview contentMode of UIViewContentModeScaleAspectFill, which will make sure that your thumbnail images are rendered correctly regardless... if you're using small thumbnail images (even 2x images), the performance is generally fine.
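A sketch of the first option, redrawing the downloaded image into a point-sized context (the method name is made up for illustration; passing 0 as the scale picks up the device's screen scale, so it works for retina and non-retina alike):
// Sketch: resize a downloaded 2x image so it fits `targetSize` in points.
- (UIImage *)thumbnailFromImage:(UIImage *)image targetSize:(CGSize)targetSize
{
    UIGraphicsBeginImageContextWithOptions(targetSize, NO, 0);
    [image drawInRect:CGRectMake(0, 0, targetSize.width, targetSize.height)];
    UIImage *thumbnail = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return thumbnail;
}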
This is the right way to do it, using the kCGBlendModeNormal option, per another Stack Overflow question:
[[UIColor colorWithRed:0.0 green:0.0 blue:0.0 alpha:0.75] setFill];
UIRectFillUsingBlendMode(rect, kCGBlendModeNormal);
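Putting that together with the drawing code from the question, the drawRect: might look like this sketch (`point`, `captionRect`, `self.caption`, and the font are assumed placeholders, not from the original answer):
// Sketch: image first, then a 75%-black strip blended over it, then the caption.
- (void)drawRect:(CGRect)rect
{
    [self.picture drawAtPoint:point];

    // kCGBlendModeNormal composites the fill over the image
    // instead of replacing the pixels underneath it.
    [[UIColor colorWithRed:0.0 green:0.0 blue:0.0 alpha:0.75] setFill];
    UIRectFillUsingBlendMode(captionRect, kCGBlendModeNormal);

    [self.caption drawInRect:captionRect withFont:[UIFont boldSystemFontOfSize:14]];
}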